Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Add Api Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md | See an example of a [validation-error response](#example-of-a-validation-error-r ## Before sending the token (preview) > [!IMPORTANT]-> API connectors used in this step are in preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> API connectors used in this step are in preview. For more information about previews, see [Product Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). An API connector at this step is invoked when a token is about to be issued during sign-ins and sign-ups. An API connector for this step can be used to enrich the token with claim values from external sources. |
active-directory-b2c | Force Password Reset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md | |
active-directory-b2c | Manage Custom Policies Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md | |
active-directory-b2c | Openid Connect Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md | The technical profile also returns claims that aren't returned by the identity p | MarkAsFailureOnStatusCode5xx | No | Indicates whether a request to an external service should be marked as a failure if the HTTP status code is in the 5xx range. The default is `false`. | | DiscoverMetadataByTokenIssuer | No | Indicates whether the OIDC metadata should be discovered by using the issuer in the JWT token. If you need to build the metadata endpoint URL based on the issuer, set this to `true`.| | IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |-|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), and `client_secret_basic` (public preview), `private_key_jwt` (public preview). For more information, see [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). | +|token_endpoint_auth_method| No | Specifies how Azure AD B2C sends the authentication header to the token endpoint. Possible values: `client_secret_post` (default), `client_secret_basic` (public preview), and `private_key_jwt`. For more information, see the [OpenID Connect client authentication section](https://openid.net/specs/openid-connect-core-1_0.html#ClientAuthentication). | |token_signing_algorithm| No | Specifies the signing algorithm to use when `token_endpoint_auth_method` is set to `private_key_jwt`. 
Possible values: `RS256` (default) or `RS512`.| | SingleLogoutEnabled | No | Indicates whether during sign-in the technical profile attempts to sign out from federated identity providers. For more information, see [Azure AD B2C session sign-out](./session-behavior.md#sign-out). Possible values: `true` (default), or `false`. | |ReadBodyClaimsOnIdpRedirect| No| Set to `true` to read claims from response body on identity provider redirect. This metadata is used with [Apple ID](identity-provider-apple-id.md), where claims return in the response payload.| Examples: - [Add Microsoft Account (MSA) as an identity provider using custom policies](identity-provider-microsoft-account.md) - [Sign in by using Azure AD accounts](identity-provider-azure-ad-single-tenant.md) - [Allow users to sign in to a multi-tenant Azure AD identity provider using custom policies](identity-provider-azure-ad-multi-tenant.md)+ |
active-directory-b2c | Page Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md | Azure AD B2C page layout uses the following versions of the [jQuery library](htt ## Self-asserted page (selfasserted) +**2.1.26** ++- Replaced the `Keypress` event with `KeyDown` and avoided the `Asterisk` for non-required fields in classic mode. ++**2.1.25** ++- Fixed content security policy (CSP) violation and removed the additional request header X-Aspnetmvc-Version. ++- Introduced Captcha mechanism for Self-asserted and Unified SSP Flows (_Beta-version-Internal use only_). ++**2.1.24** ++- Fixed accessibility bugs. ++- Fixed MFA-related issue and IE11 compatibility issues. ++**2.1.23** ++- Fixed accessibility bugs. ++- Reduced `min-width` value for UI viewport for default template. ++**2.1.22** ++- Fixed accessibility bugs. ++- Added logic to adopt QR Code Image generated from backend library. ++**2.1.21** ++- Additional sanitization of script tags to avoid XSS attacks. + **2.1.20**-- Fixed an XSS issue on input from textbox+- Fixed Enter event trigger on MFA. +- CSS changes render page text/control in a vertical manner for small screens **2.1.19**-- Fixed accessibility bugs-- Handle Undefined Error message for existing user sign up-- Move Password Mismatch Error to Inline instead of Page Level+- Fixed accessibility bugs. +- Handled Undefined Error message for existing user sign-up. +- Moved Password mismatch error to inline instead of page level. - Accessibility changes related to High Contrast button display and anchor focus improvements **2.1.18** Azure AD B2C page layout uses the following versions of the [jQuery library](htt - Enforce Validation Error Update on control change and enable continue on email verified - Added additional field to error code to validation failure response + **2.1.16** - Fixed "Claims for verification control have not been verified" bug while verifying code. 
- Hide error message on validation succeeds and send code to verify Azure AD B2C page layout uses the following versions of the [jQuery library](htt **2.1.10** - Correcting the tab index-- Fixing WCAG 2.1 accessibility and screen reader issues +- Fixed WCAG 2.1 accessibility and screen reader issues **2.1.9** Azure AD B2C page layout uses the following versions of the [jQuery library](htt > [!TIP] > If you localize your page to support multiple locales, or languages in a user flow, the [localization IDs](localization-string-ids.md) article provides the list of localization IDs that you can use for the page version you select. +**2.1.14** ++- Replaced the `Keypress` event with `KeyDown`. ++**2.1.13** ++- Fixed content security policy (CSP) violation and removed the additional request header X-Aspnetmvc-Version ++- Introduced Captcha mechanism for Self-asserted and Unified SSP Flows (_Beta-version-Internal use only_) ++**2.1.12** ++- Removed `ReplaceAll` function for IE11 compatibility. ++**2.1.11** ++- Fixed accessibility bugs. ++**2.1.10** ++- Added additional sanitization of script tags to avoid XSS attacks. + **2.1.9**-- Fix accessibility bugs++- Fixed accessibility bugs. + - Accessibility changes related to High Contrast button display and anchor focus improvements- + **2.1.8** - Add descriptive error message and fixed forgotPassword link! Azure AD B2C page layout uses the following versions of the [jQuery library](htt ## MFA page (multifactor) +**1.2.12** ++- Replaced the `KeyPress` event with `KeyDown`. ++**1.2.11** ++- Removed `ReplaceAll` function for IE11 compatibility. ++**1.2.10** ++- Fixed accessibility bugs. ++**1.2.9** ++- Fixed `Enter` event trigger on MFA. ++- CSS changes render page text/control in a vertical manner for small screens ++- Fixed Multifactor tab navigation bug. ++**1.2.8** ++- Passed the response status for MFA verification with error for backend to further triage. ++**1.2.7** ++- Fixed accessibility issue on label for retries code. 
++- Fixed issue caused by incompatibility of default parameter on IE 11. ++- Set up `H1` heading and enabled it by default. ++- Updated HandlebarJS version to 4.7.7. ++**1.2.6** ++- Corrected the `autocomplete` value on verification code field from false to off. ++- Fixed a few XSS encoding issues. + **1.2.5** - Fixed a language encoding issue that was causing the request to fail. Azure AD B2C page layout uses the following versions of the [jQuery library](htt ## Exception Page (globalexception) +**1.2.5** ++- Removed `ReplaceAll` function for IE11 compatibility. ++**1.2.4** ++- Fixed accessibility bugs. ++**1.2.3** ++- Updated HandlebarJS version to 4.7.7. ++**1.2.2** ++- Set up `H1` heading and enabled it by default. + **1.2.1**+ - Updated jQuery version to 3.5.1. - Updated HandlebarJS version to 4.7.6. Azure AD B2C page layout uses the following versions of the [jQuery library](htt ## Other pages (ProviderSelection, ClaimsConsent, UnifiedSSD) +**1.2.4** ++- Removed `ReplaceAll` function for IE11 compatibility. ++**1.2.3** ++- Fixed accessibility bugs. ++**1.2.2** ++- Updated HandlebarJS version to 4.7.7. + **1.2.1**+ - Updated jQuery version to 3.5.1. - Updated HandlebarJS version to 4.7.6. Azure AD B2C page layout uses the following versions of the [jQuery library](htt ## Next steps For details on how to customize the user interface of your applications in custom policies, see [Customize the user interface of your application using a custom policy](customize-ui-with-html.md).++ |
active-directory-b2c | Secure Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md | The following XML snippet is an example of a RESTful technical profile configure ## OAuth2 bearer authentication - Bearer token authentication is defined in [OAuth2.0 Authorization Framework: Bearer Token Usage (RFC 6750)](https://www.rfc-editor.org/rfc/rfc6750.txt). In bearer token authentication, Azure AD B2C sends an HTTP request with a token in the authorization header. ```http A bearer token is an opaque string. It can be a JWT access token or any string t - **Bearer token**. To be able to send the bearer token in the RESTful technical profile, your policy needs to first acquire the bearer token and then use it in the RESTful technical profile. - **Static bearer token**. Use this approach when your REST API issues a long-term access token. To use a static bearer token, create a policy key and make a reference from the RESTful technical profile to your policy key. - ## Using OAuth2 Bearer The following steps demonstrate how to use client credentials to obtain a bearer token and pass it into the Authorization header of the REST API calls. Add the validation technical profile reference to the sign-up technical profile, ++ For example:- ```XML - <ValidationTechnicalProfiles> - .... - <ValidationTechnicalProfile ReferenceId="REST-AcquireAccessToken" /> - .... - </ValidationTechnicalProfiles> - ``` - +```XML +<ValidationTechnicalProfiles> + .... + <ValidationTechnicalProfile ReferenceId="REST-AcquireAccessToken" /> + .... +</ValidationTechnicalProfiles> +``` ::: zone-end To configure a REST API technical profile with API key authentication, create th 1. For **Key usage**, select **Encryption**. 1. Select **Create**. - ### Configure your REST API technical profile to use API key authentication After creating the necessary key, configure your REST API technical profile metadata to reference the credentials. 
The following XML snippet is an example of a RESTful technical profile configure ::: zone pivot="b2c-custom-policy" - Learn more about the [Restful technical profile](restful-technical-profile.md) element in the custom policy reference. ::: zone-end+ |
active-directory-b2c | Tenant Management Directory Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md | The response from the API call looks similar to the following JSON: { "directorySizeQuota": { "used": 211802,- "total": 300000 + "total": 50000000 } } ] If your tenant usage is higher than 80%, you can remove inactive users or reques ## Request increase directory quota size -You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md) +You can request to increase the quota size by [contacting support](find-help-open-support-ticket.md) |
active-directory-domain-services | Alert Service Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/alert-service-principal.md | ms.assetid: f168870c-b43a-4dd6-a13f-5cfadc5edf2c + Last updated 01/29/2023 - # Known issues: Service principal alerts in Azure Active Directory Domain Services |
active-directory-domain-services | Create Forest Trust Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-forest-trust-powershell.md | For more conceptual information about forest types in Azure AD DS, see [How do f [Install-Script]: /powershell/module/powershellget/install-script <!-- EXTERNAL LINKS -->-[powershell-gallery]: https://www.powershellgallery.com/ +[powershell-gallery]: https://www.powershellgallery.com/ |
active-directory-domain-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/policy-reference.md | Title: Built-in policy definitions for Azure Active Directory Domain Services description: Lists Azure Policy built-in policy definitions for Azure Active Directory Domain Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
active-directory-domain-services | Powershell Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md | |
active-directory-domain-services | Powershell Scoped Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md | |
active-directory-domain-services | Secure Your Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md | |
active-directory-domain-services | Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/synchronization.md | ms.assetid: 57cbf436-fc1d-4bab-b991-7d25b6e987ef + Last updated 04/03/2023 - # How objects and credentials are synchronized in an Azure Active Directory Domain Services managed domain |
active-directory-domain-services | Template Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md | |
active-directory-domain-services | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md | ms.assetid: 4bc8c604-f57c-4f28-9dac-8b9164a0cf0b + Last updated 01/29/2023 - # Common errors and troubleshooting steps for Azure Active Directory Domain Services |
active-directory-domain-services | Tutorial Create Instance Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance-advanced.md | To see this managed domain in action, create and join a virtual machine to the d [availability-zones]: ../reliability/availability-zones-overview.md [concepts-sku]: administration-concepts.md#azure-ad-ds-skus -<!-- EXTERNAL LINKS --> +<!-- EXTERNAL LINKS --> |
active-directory-domain-services | Tutorial Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tutorial-create-instance.md | Before you domain-join VMs and deploy applications that use the managed domain, [concepts-sku]: administration-concepts.md#azure-ad-ds-skus <!-- EXTERNAL LINKS -->-[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix +[naming-prefix]: /windows-server/identity/ad-ds/plan/selecting-the-forest-root-domain#selecting-a-prefix |
active-directory | Customize Application Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md | Applications and systems that support customization of the attribute list includ > Editing the list of supported attributes is only recommended for administrators who have customized the schema of their applications and systems, and have first-hand knowledge of how their custom attributes have been defined or if a source attribute isn't automatically displayed in the Azure portal UI. This sometimes requires familiarity with the APIs and developer tools provided by an application or system. The ability to edit the list of supported attributes is locked down by default, but customers can enable the capability by navigating to the following URL: https://portal.azure.com/?Microsoft_AAD_Connect_Provisioning_forceSchemaEditorEnabled=true . You can then navigate to your application to view the [attribute list](#editing-the-list-of-supported-attributes). > [!NOTE]-> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. +> When a directory extension attribute in Azure AD doesn't show up automatically in your attribute mapping drop-down, you can manually add it to the "Azure AD attribute list". When manually adding Azure AD directory extension attributes to your provisioning app, note that directory extension attribute names are case-sensitive. 
For example: If you have a directory extension attribute named `extension_53c9e2c0exxxxxxxxxxxxxxxx_acmeCostCenter`, make sure you enter it in the same format as defined in the directory. Provisioning multi-valued directory extension attributes is not supported. When you're editing the list of supported attributes, the following properties are provided: |
active-directory | Inbound Provisioning Api Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-concepts.md | -> API-driven inbound provisioning is currently in public preview and is governed by [Preview Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> API-driven inbound provisioning is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Introduction |
active-directory | Inbound Provisioning Api Configure App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-configure-app.md | -> API-driven inbound provisioning is currently in public preview and is governed by [Preview Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> API-driven inbound provisioning is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). This feature is available only when you configure the following Enterprise Gallery apps: * API-driven inbound user provisioning to Azure AD If you're configuring inbound user provisioning to on-premises Active Directory, ## Create your API-driven provisioning app -1. Log in to the [Microsoft Entra portal](<https://entra.microsoft.com>). +1. Log in to the [Microsoft Entra admin center](<https://entra.microsoft.com>). 2. Browse to **Azure Active Directory -> Applications -> Enterprise applications**. 3. Click on **New application** to create a new provisioning application. [![Screenshot of Entra Admin Center.](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png)](media/inbound-provisioning-api-configure-app/provisioning-entra-admin-center.png#lightbox) |
active-directory | Inbound Provisioning Api Curl Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-curl-tutorial.md | -1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials. +1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials. 1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**. 1. Under all applications, use the search filter text box to find and open your API-driven provisioning application. 1. Open the Provisioning blade. The landing page displays the status of the last run. |
active-directory | Inbound Provisioning Api Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-custom-attributes.md | You have configured an API-driven provisioning app. Your provisioning app is succ In this step, we'll add the two attributes "HireDate" and "JobCode" that are not part of the standard SCIM schema to the provisioning app and use them in the provisioning data flow. -1. Log in to Microsoft Entra portal with application administrator role. +1. Log in to Microsoft Entra admin center with application administrator role. 1. Go to **Enterprise applications** and open your API-driven provisioning app. 1. Open the **Provisioning** blade. 1. Click on the **Edit Provisioning** button. In this step, we'll add the two attributes "HireDate" and "JobCode" that are not 1. **Save** your changes > [!NOTE]-> If you'd like to add only a few additional attributes to the provisioning app, use Microsoft Entra Portal to extend the schema. If you'd like to add more custom attributes (let's say 20+ attributes), then we recommend using the [`UpdateSchema` mode of the CSV2SCIM PowerShell script](inbound-provisioning-api-powershell.md#extending-provisioning-job-schema) which automates the above manual process. +> If you'd like to add only a few additional attributes to the provisioning app, use Microsoft Entra admin center to extend the schema. If you'd like to add more custom attributes (let's say 20+ attributes), then we recommend using the [`UpdateSchema` mode of the CSV2SCIM PowerShell script](inbound-provisioning-api-powershell.md#extending-provisioning-job-schema) which automates the above manual process. ## Step 2 - Map the custom attributes |
active-directory | Inbound Provisioning Api Grant Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-grant-access.md | Depending on how your API client authenticates with Azure AD, you can select bet ## Configure a service principal This configuration registers an app in Azure AD that represents the external API client and grants it permission to invoke the inbound provisioning API. The service principal client id and client secret can be used in the OAuth client credentials grant flow. -1. Log in to Microsoft Entra portal (https://entra.microsoft.com) with global administrator or application administrator login credentials. +1. Log in to Microsoft Entra admin center (https://entra.microsoft.com) with global administrator or application administrator login credentials. 1. Browse to **Azure Active Directory** -> **Applications** -> **App registrations**. 1. Click on the option **New registration**. 1. Provide an app name, select the default options, and click on **Register**. This section describes how you can assign the necessary permissions to a managed ## Next steps - [Quick start using cURL](inbound-provisioning-api-curl-tutorial.md) - [Quick start using Postman](inbound-provisioning-api-postman.md)-- [Quick start using Postman](inbound-provisioning-api-graph-explorer.md)+- [Quick start using Graph Explorer](inbound-provisioning-api-graph-explorer.md) - [Frequently asked questions about API-driven inbound provisioning](inbound-provisioning-api-faqs.md) |
active-directory | Inbound Provisioning Api Graph Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-graph-explorer.md | This tutorial describes how you can quickly test [API-driven inbound provisionin ## Verify processing of bulk request payload -You can verify the processing either from the Microsoft Entra portal or using Graph Explorer. +You can verify the processing either from the Microsoft Entra admin center or using Graph Explorer. -### Verify processing from Microsoft Entra portal -1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials. +### Verify processing from Microsoft Entra admin center +1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials. 1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**. 1. Under all applications, use the search filter text box to find and open your API-driven provisioning application. 1. Open the Provisioning blade. The landing page displays the status of the last run. |
active-directory | Inbound Provisioning Api Postman | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-postman.md | In this step, you'll configure the Postman app and invoke the API using the conf If the API invocation is successful, you see the message `202 Accepted.` Under Headers, the **Location** attribute points to the provisioning logs API endpoint. ## Verify processing of bulk request payload-You can verify the processing either from the Microsoft Entra portal or using Postman. +You can verify the processing either from the Microsoft Entra admin center or using Postman. -### Verify processing from Microsoft Entra portal -1. Log in to [Microsoft Entra portal](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials. +### Verify processing from Microsoft Entra admin center +1. Log in to [Microsoft Entra admin center](https://entra.microsoft.com) with *global administrator* or *application administrator* login credentials. 1. Browse to **Azure Active Directory -> Applications -> Enterprise applications**. 1. Under all applications, use the search filter text box to find and open your API-driven provisioning application. 1. Open the Provisioning blade. The landing page displays the status of the last run. |
active-directory | Inbound Provisioning Api Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/inbound-provisioning-api-powershell.md | To illustrate the procedure, let's use the CSV file `Samples/csv-with-2-records. This section explains how to send the generated bulk request payload to your inbound provisioning API endpoint. -1. Log in to your Entra portal as *Application Administrator* or *Global Administrator*. +1. Log in to your Microsoft Entra admin center as *Application Administrator* or *Global Administrator*. 1. Copy the `ServicePrincipalId` associated with your provisioning app from **Provisioning App** > **Properties** > **Object ID**. :::image type="content" border="true" source="./media/inbound-provisioning-api-powershell/object-id.png" alt-text="Screenshot of the Object ID." lightbox="./media/inbound-provisioning-api-powershell/object-id.png"::: This section explains how to send the generated bulk request payload to your inb $ThumbPrint = $ClientCertificate.ThumbPrint ``` The generated certificate is stored **Current User\Personal\Certificates**. You can view it using the **Control Panel** -> **Manage user certificates** option. -1. To associate this certificate with a valid service principal, log in to your Entra portal as *Application Administrator*. +1. To associate this certificate with a valid service principal, log in to your Microsoft Entra admin center as *Application Administrator*. 1. Open [the service principal you configured](inbound-provisioning-api-grant-access.md#configure-a-service-principal) under **App Registrations**. 1. Copy the **Object ID** from the **Overview** blade. Use the value to replace the string `<AppObjectId>`. Copy the **Application (client) Id**. We will use it later and it is referenced as `<AppClientId>`. 1. Run the following command to upload your certificate to the registered service principal. 
PS > CSV2SCIM.ps1 -Path <path-to-csv-file> > [!NOTE] > The `AttributeMapping` and `ValidateAttributeMapping` command-line parameters refer to the mapping of CSV column attributes to the standard SCIM schema elements. -It doesn't refer to the attribute mappings that you perform in the Entra portal provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes. +It doesn't refer to the attribute mappings that you perform in the Microsoft Entra admin center provisioning app between source SCIM schema elements and target Azure AD/on-premises AD attributes. | Parameter | Description | Processing remarks | |-|-|--| |
active-directory | On Premises Sap Connector Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-sap-connector-configure.md | Title: Azure AD Provisioning to SAP ERP Central Component (SAP ECC) 7.0 -description: This document describes how to configure Azure AD to provision users into SAP ECC 7. + Title: Azure AD Provisioning into SAP ERP Central Component (SAP ECC, formerly SAP R/3) with NetWeaver AS ABAP 7.0 or later. +description: This document describes how to configure Azure AD to provision users into SAP ERP Central Component (SAP ECC, formerly SAP R/3) with NetWeaver AS ABAP 7.0 or later. -# Configuring Azure AD to provision users into SAP ECC 7.0 -The following documentation provides configuration and tutorial information demonstrating how to provision users from Azure AD into SAP ERP Central Component (SAP ECC) 7.0. If you are using other versions such as SAP R/3, you can still use the guides provided in the [download center](https://www.microsoft.com/download/details.aspx?id=51495) as a reference to build your own template and configure provisioning. +# Configuring Azure AD to provision users into SAP ECC with NetWeaver AS ABAP 7.0 or later +The following documentation provides configuration and tutorial information demonstrating how to provision users from Azure AD into SAP ERP Central Component (SAP ECC, formerly SAP R/3) with NetWeaver 7.0 or later. If you are using other versions such as SAP R/3, you can still use the guides provided in the [download center](https://www.microsoft.com/download/details.aspx?id=51495) as a reference to build your own template and configure provisioning. [!INCLUDE [app-provisioning-sap.md](../../../includes/app-provisioning-sap.md)] |
active-directory | User Provisioning Sync Attributes For Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md | |
active-directory | User Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md | In Azure Active Directory (Azure AD), the term *app provisioning* refers to auto Azure AD application provisioning refers to automatically creating user identities and roles in the applications that users need access to. In addition to creating user identities, automatic provisioning includes the maintenance and removal of user identities as status or roles change. Common scenarios include provisioning an Azure AD user into SaaS applications like [Dropbox](../../active-directory/saas-apps/dropboxforbusiness-provisioning-tutorial.md), [Salesforce](../../active-directory/saas-apps/salesforce-provisioning-tutorial.md), [ServiceNow](../../active-directory/saas-apps/servicenow-provisioning-tutorial.md), and many more. -Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. Your application must support [SCIM](https://aka.ms/scimoverview). Or, you must build a SCIM gateway to connect to your legacy application. If so, you can use the Azure AD Provisioning agent to [directly connect](./on-premises-scim-provisioning.md) with your application and automate provisioning and deprovisioning. If you have legacy applications that don't support SCIM and rely on an [LDAP](./on-premises-ldap-connector-configure.md) user store or a [SQL](./tutorial-ecma-sql-connector.md) database, Azure AD can support these applications as well. --App provisioning lets you: +Azure AD also supports provisioning users into applications hosted on-premises or in a virtual machine, without having to open up any firewalls. The table below provides a mapping of protocols to connectors supported. 
++|Protocol |Connector| +|--|--| +| SCIM | [SCIM - SaaS](use-scim-to-provision-users-and-groups.md) <br />[SCIM - On-prem / Private network](./on-premises-scim-provisioning.md) | +| LDAP | [LDAP](./on-premises-ldap-connector-configure.md)| +| SQL | [SQL](./tutorial-ecma-sql-connector.md) | +| REST | [Web Services](./on-premises-web-services-connector.md)| +| SOAP | [Web Services](./on-premises-web-services-connector.md)| +| Flat-file| [PowerShell](./on-premises-powershell-connector.md) | +| Custom | [Custom ECMA connectors](./on-premises-custom-connector.md) <br /> [Connectors and gateways built by partners](./partner-driven-integrations.md)| - **Automate provisioning**: Automatically create new accounts in the right systems for new people when they join your team or organization. - **Automate deprovisioning**: Automatically deactivate accounts in the right systems when people leave the team or organization. |
active-directory | Application Proxy Configure Cookie Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md | |
active-directory | Application Proxy Configure Custom Home Page | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md | |
active-directory | Application Proxy Ping Access Publishing Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md | Azure Active Directory (Azure AD) Application Proxy has partnered with PingAcces With PingAccess for Azure AD, you can give users access and single sign-on (SSO) to applications that use headers for authentication. Application Proxy treats these applications like any other, using Azure AD to authenticate access and then passing traffic through the connector service. PingAccess sits in front of the applications and translates the access token from Azure AD into a header. The application then receives the authentication in the format it can read. -Your users wonΓÇÖt notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so theyΓÇÖll still balance loads automatically. +Your users won't notice anything different when they sign in to use your corporate applications. They can still work from anywhere on any device. The Application Proxy connectors direct remote traffic to all apps without regard to their authentication type, so they'll still balance loads automatically. ## How do I get access? For more information, see [Azure Active Directory editions](../fundamentals/what ## Publish your application in Azure -This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If youΓÇÖve already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section. 
+This article is for people to publish an application with this scenario for the first time. Besides detailing the publishing steps, it guides you in getting started with both Application Proxy and PingAccess. If you've already configured both services but want a refresher on the publishing steps, skip to the [Add your application to Azure AD with Application Proxy](#add-your-application-to-azure-ad-with-application-proxy) section. > [!NOTE] > Since this scenario is a partnership between Azure AD and PingAccess, some of the instructions exist on the Ping Identity site. To publish your own on-premises application: > [!NOTE] > For a more detailed walkthrough of this step, see [Add an on-premises app to Azure AD](../app-proxy/application-proxy-add-on-premises-application.md#add-an-on-premises-app-to-azure-ad). - 1. **Internal URL**: Normally you provide the URL that takes you to the appΓÇÖs sign-in page when youΓÇÖre on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess. + 1. **Internal URL**: Normally you provide the URL that takes you to the app's sign-in page when you're on the corporate network. For this scenario, the connector needs to treat the PingAccess proxy as the front page of the application. Use this format: `https://<host name of your PingAccess server>:<port>`. The port is 3000 by default, but you can configure it in PingAccess. > [!WARNING] > For this type of single sign-on, the internal URL must use `https` and can't use `http`. Also, there is a constraint when configuring an application that no two apps should have the same internal URL as this allows App Proxy to maintain distinction between applications. To publish your own on-premises application: 1. **Translate URL in Headers**: Choose **No**. 
> [!NOTE]- > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener youΓÇÖve configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners). + > If this is your first application, use port 3000 to start and come back to update this setting if you change your PingAccess configuration. For subsequent applications, the port will need to match the Listener you've configured in PingAccess. Learn more about [listeners in PingAccess](https://docs.pingidentity.com/access/sources/dita/topic?category=pingaccess&Releasestatus_ce=Current&resourceid=pa_assigning_key_pairs_to_https_listeners). 1. Select **Add**. The overview page for the new application appears. In addition to the external URL, an authorize endpoint of Azure Active Directory Finally, set up your on-premises application so that users have read access and other applications have read/write access: -1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the APIs for Windows Azure Active Directory. +1. From the **App registrations** sidebar for your application, select **API permissions** > **Add a permission** > **Microsoft APIs** > **Microsoft Graph**. The **Request API permissions** page for **Microsoft Graph** appears, which contains the permissions for Microsoft Graph. ![Shows the Request API permissions page](./media/application-proxy-configure-single-sign-on-with-ping-access/required-permissions.png) |
active-directory | Powershell Assign Group To App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md | |
active-directory | Powershell Assign User To App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md | |
active-directory | Powershell Display Users Group Of App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md | |
active-directory | Powershell Get All App Proxy Apps Basic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md | |
active-directory | Powershell Get All App Proxy Apps By Connector Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md | |
active-directory | Powershell Get All App Proxy Apps Extended | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md | |
active-directory | Powershell Get All App Proxy Apps With Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md | |
active-directory | Powershell Get All Connectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md | |
active-directory | Powershell Get All Custom Domain No Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md | |
active-directory | Powershell Get All Custom Domains And Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md | |
active-directory | Powershell Get All Default Domain Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md | |
active-directory | Powershell Get All Wildcard Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md | |
active-directory | Powershell Get Custom Domain Identical Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md | |
active-directory | Powershell Get Custom Domain Replace Cert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md | |
active-directory | Powershell Move All Apps To Connector Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md | |
active-directory | Architecture Icons | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/architecture-icons.md | + + Title: Microsoft Entra architecture icons +description: Learn about the official collection of Microsoft Entra icons that you can use in architectural diagrams, training materials, or documentation. +++++ Last updated : 08/15/2023++++# Customer intent: As a new or existing customer, I want to learn how I can use the official Microsoft Entra icons in architectural diagrams, training materials, or documentation. +++# Microsoft Entra architecture icons ++Helping our customers design and architect new solutions is core to the Microsoft Entra mission. Architecture diagrams can help communicate design decisions and the relationships between components of a given workload. This article provides information about the official collection of Microsoft Entra icons that you can use in architectural diagrams, training materials, or documentation. ++## General guidelines ++### Do's ++- Use the icon to illustrate how products can work together. +- In diagrams, we recommend including the product name somewhere close to the icon. ++### Don'ts ++- Don't crop, flip, or rotate icons. +- Don't distort or change the icon shape in any way. +- Don't use Microsoft product icons to represent your product or service. +- Don't use Microsoft product icons in marketing communications. ++## Icon updates ++| Month | Change description | +|-|--| +| August 2023 | Added a downloadable package that contains the Microsoft Entra architecture icons, branding playbook (which contains guidelines about the Microsoft Security visual identity), and terms of use. | ++## Icon terms ++Microsoft permits the use of these icons in architectural diagrams, training materials, or documentation. You may copy, distribute, and display the icons only for the permitted use unless granted explicit permission by Microsoft. Microsoft reserves all other rights. 
++ > [!div class="button"] + > [I agree to the above terms. Download icons.](https://download.microsoft.com/download/a/4/2/a4289cad-4eaf-4580-87fd-ce999a601516/Microsoft-Entra-architecture-icons.zip?wt.mc_id=microsoftentraicons_downloadmicrosoftentraicons_content_cnl_csasci) ++## More icon sets from Microsoft ++- [Azure architecture icons](/azure/architecture/icons) +- [Microsoft 365 architecture icons and templates](/microsoft-365/solutions/architecture-icons-templates) +- [Dynamics 365 icons](/dynamics365/get-started/icons) +- [Microsoft Power Platform icons](/power-platform/guidance/icons) |
active-directory | Govern Service Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/govern-service-accounts.md | |
active-directory | Multi Tenant Common Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-common-considerations.md | Additionally, while you can use the following Conditional Access conditions, be - **Sign-in risk and user risk.** User behavior in their home tenant determines, in part, the sign-in risk and user risk. The home tenant stores the data and risk score. If resource tenant policies block an external user, a resource tenant admin might not be able to enable access. [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md) explains how Identity Protection detects compromised credentials for Azure AD users. - **Locations.** The named location definitions in the resource tenant determine the scope of the policy. The scope of the policy doesn't evaluate trusted locations managed in the home tenant. If your organization wants to share trusted locations across tenants, define the locations in each tenant where you define the resources and Conditional Access policies. -## Other access control considerations +## Securing your multi-tenant environment +Review the [security checklist](/azure/security/fundamentals/steps-secure-identity) and [best practices](/azure/security/fundamentals/operational-best-practices) for guidance on securing your tenant. Ensure these best practices are followed and review them with any tenants that you collaborate closely with. +### Conditional access The following are considerations for configuring access control. - Define [access control policies](../external-identities/authentication-conditional-access.md) to control access to resources. - Design Conditional Access policies with external users in mind. 
- Create policies specifically for external users.-- If your organization is using the [**all users** dynamic group](../external-identities/use-dynamic-groups.md) condition in your existing Conditional Access policy, this policy affects external users because they are in scope of **all users**. - Create dedicated Conditional Access policies for external accounts. -### Require user assignment +### Monitoring your multi-tenant environment +- Monitor for changes to cross-tenant access policies using the [audit logs UI](../reports-monitoring/concept-audit-logs.md), [API](/graph/api/resources/azure-ad-auditlog-overview), or [Azure Monitor integration](../reports-monitoring/tutorial-configure-log-analytics-workspace.md) (for proactive alerts). The audit events use the categories "CrossTenantAccessSettings" and "CrossTenantIdentitySyncSettings." By monitoring for audit events under these categories, you can identify any cross-tenant access policy changes in your tenant and take action. When creating alerts in Azure Monitor, you can create a query such as the one below to identify any cross-tenant access policy changes. ++``` +AuditLogs +| where Category contains "CrossTenant" +``` ++- Monitor application access in your tenant using the [cross-tenant access activity](../reports-monitoring/workbook-cross-tenant-access-activity.md) dashboard. This allows you to see who is accessing resources in your tenant and where those users are coming from. +++### Dynamic groups ++If your organization is using the [**all users** dynamic group](../external-identities/use-dynamic-groups.md) condition in your existing Conditional Access policy, this policy affects external users because they are in scope of **all users**. ++### Require user assignment for applications If an application has the **User assignment required?** property set to **No**, external users can access the application. 
Application admins must understand access control impacts, especially if the application contains sensitive information. [Restrict your Azure AD app to a set of users in an Azure AD tenant](../develop/howto-restrict-your-app-to-a-set-of-users.md) explains how registered applications in an Azure Active Directory (Azure AD) tenant are, by default, available to all users of the tenant who successfully authenticate. +### Privileged Identity Management +Minimize persistent administrator access by enabling [privileged identity management](/azure/security/fundamentals/steps-secure-identity#implement-privilege-access-management). ++### Restricted Management Units +When you're using security groups to control who is in scope for cross-tenant synchronization, you will want to limit who can make changes to the security group. Minimize the number of owners of the security groups assigned to the cross-tenant synchronization job and include the groups in a [restricted management unit](../roles/admin-units-restricted-management.md). This will limit the number of people that can add or remove group members and provision accounts across tenants. ++## Other access control considerations + ### Terms and conditions [Azure AD terms of use](../conditional-access/terms-of-use.md) provides a simple method that organizations can use to present information to end users. You can use terms of use to require external users to approve terms of use before accessing your resources. |
active-directory | Multi Tenant User Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/multi-tenant-user-management-scenarios.md | |
active-directory | Recoverability Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/recoverability-overview.md | Create a process of predefined communications to make others aware of the issue Document the state of your tenant and its objects regularly. Then if a hard delete or misconfiguration occurs, you have a roadmap to recovery. The following tools can help you document your current state: - [Microsoft Graph APIs](/graph/overview) can be used to export the current state of many Azure AD configurations.-- [Azure AD Exporter](https://github.com/microsoft/azureadexporter) is a tool you can use to export your configuration settings.+- [Entra Exporter](https://github.com/microsoft/entraexporter) is a tool you can use to export your configuration settings. - [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) is a module of the PowerShell Desired State Configuration framework. You can use it to export configurations for reference and application of the prior state of many settings. - [Conditional Access APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) can be used to manage your Conditional Access policies as code. Microsoft Graph APIs are highly customizable based on your organizational needs. *Securely store these configuration exports with access provided to a limited number of admins. -The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provide most of the documentation you need: +The [Entra Exporter](https://github.com/microsoft/entraexporter) can provide most of the documentation you need: - Verify that you've implemented the desired configuration. - Use the exporter to capture current configurations. The [Azure AD Exporter](https://github.com/microsoft/azureadexporter) can provid - Store the output in a secure location with limited access. 
> [!NOTE]-> Settings in the legacy multifactor authentication portal for Application Proxy and federation settings might not be exported with the Azure AD Exporter, or with the Microsoft Graph API. +> Settings in the legacy multifactor authentication portal for Application Proxy and federation settings might not be exported with the Entra Exporter, or with the Microsoft Graph API. The [Microsoft 365 Desired State Configuration](https://github.com/microsoft/Microsoft365DSC/wiki/What-is-Microsoft365DSC) module uses Microsoft Graph and PowerShell to retrieve the state of many of the configurations in Azure AD. This information can be used as reference information or, by using PowerShell Desired State Configuration scripting, to reapply a known good state. Use [Conditional Access Graph APIs](https://github.com/Azure-Samples/azure-ad-conditional-access-apis) to manage policies like code. Automate approvals to promote policies from preproduction environments, backup and restore, monitor change, and plan ahead for emergencies. |
active-directory | Resilient External Processes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/resilient-external-processes.md | Identity experience framework (IEF) policies allow you to call an external syste - If the data that is necessary for authentication is relatively static and small, and has no other business reason to be externalized from the directory, then consider having it in the directory. -- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and cripple your application. For example, using CAPTCHA in your sign in, sign up flow can help.+- Remove API calls from the pre-authenticated path whenever possible. If you can't, then you must place strict protections for Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks in front of your APIs. Attackers can load the sign-in page and try to flood your API with DoS attacks and disable your application. For example, using CAPTCHA in your sign in, sign up flow can help. - Use [API connectors of built-in sign-up user flow](../../active-directory-b2c/api-connectors-overview.md) wherever possible to integrate with web APIs either After federating with an identity provider during sign-up or before creating the user. Since the user flows are already extensively tested, it's likely that you don't have to perform user flow-level functional, performance, or scale testing. You still need to test your applications for functionality, performance, and scale. |
active-directory | Service Accounts Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-managed-identities.md | |
active-directory | Service Accounts Principal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/architecture/service-accounts-principal.md | |
active-directory | Certificate Based Authentication Federation Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-android.md | description: Learn about the supported scenarios and the requirements for config + Last updated 09/30/2022 |
active-directory | Certificate Based Authentication Federation Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-get-started.md | description: Learn how to configure certificate-based authentication with federa + Last updated 05/04/2022 |
active-directory | Certificate Based Authentication Federation Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/certificate-based-authentication-federation-ios.md | description: Learn about the supported scenarios and the requirements for config + Last updated 09/30/2022 |
active-directory | Concept Authentication Authenticator App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-authenticator-app.md | To get started with passwordless sign-in, see [Enable passwordless sign-in with The Authenticator app can help prevent unauthorized access to accounts and stop fraudulent transactions by pushing a notification to your smartphone or tablet. Users view the notification, and if it's legitimate, select **Verify**. Otherwise, they can select **Deny**. -![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png) +> [!NOTE] +> Starting in August 2023, sign-ins from unfamiliar locations no longer generate notifications. Similar to how unfamiliar locations work in [Smart lockout](howto-password-smart-lockout.md), a location becomes "familiar" during the first 14 days of use, or the first 10 sign-ins. If the location is unfamiliar, or if the relevant Google or Apple service responsible for push notifications isn't available, users won't see their notification as usual. In that case, they should open Microsoft Authenticator, or Authenticator Lite in a relevant companion app like Outlook, refresh by either pulling down or hitting **Refresh**, and approve the request. -In some rare instances where the relevant Google or Apple service responsible for push notifications is down, users may not receive their push notifications. In these cases users should manually navigate to the Microsoft Authenticator app (or relevant companion app like Outlook), refresh by either pulling down or hitting the refresh button, and approve the request. 
+![Screenshot of example web browser prompt for Authenticator app notification to complete sign-in process.](media/tutorial-enable-azure-mfa/tutorial-enable-azure-mfa-browser-prompt.png) -> [!NOTE] -> If your organization has staff working in or traveling to China, the *Notification through mobile app* method on Android devices doesn't work in that country/region as Google play services(including push notifications) are blocked in the region. However iOS notification do work. For Android devices ,alternate authentication methods should be made available for those users. +In China, the *Notification through mobile app* method on Android devices doesn't work because Google Play services (including push notifications) are blocked in the region. However, iOS notifications do work. For Android devices, alternate authentication methods should be made available for those users. ## Verification code from mobile app |
active-directory | Concept Authentication Default Enablement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-default-enablement.md | The following table lists each setting that can be set to Microsoft managed and | [Application name in Microsoft Authenticator notifications](how-to-mfa-additional-context.md) | Disabled | | [System-preferred MFA](concept-system-preferred-multifactor-authentication.md) | Enabled | | [Authenticator Lite](how-to-mfa-authenticator-lite.md) | Enabled | +| [Report suspicious activity](howto-mfa-mfasettings.md#report-suspicious-activity) | Disabled | As threat vectors change, Azure AD may announce default protection for a **Microsoft managed** setting in [release notes](../fundamentals/whats-new.md) and on commonly read forums like [Tech Community](https://techcommunity.microsoft.com/). For example, see our blog post [It's Time to Hang Up on Phone Transports for Authentication](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752) for more information about the need to move away from using SMS and voice calls, which led to default enablement for the registration campaign to help users to set up Authenticator for modern authentication. |
active-directory | Concept Authentication Oath Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md | OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow. -OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse). :::image type="content" border="true" source="./media/concept-authentication-methods/oath-tokens.png" alt-text="Screenshot of OATH token management." lightbox="./media/concept-authentication-methods/oath-tokens.png"::: |
active-directory | Concept Authentication Strengths | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-strengths.md | An authentication strength Conditional Access policy works together with [MFA tr ## Limitations -- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength doesn't restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as FIDO2 security key before they can continue.+- **Conditional Access policies are only evaluated after the initial authentication** - As a result, authentication strength doesn't restrict a user's initial authentication. Suppose you are using the built-in phishing-resistant MFA strength. A user can still type in their password, but they will be required to use a phishing-resistant method such as FIDO2 security key before they can continue. - **Require multifactor authentication and Require authentication strength can't be used together in the same Conditional Access policy** - These two Conditional Access grant controls can't be used together because the built-in authentication strength **Multifactor authentication** is equivalent to the **Require multifactor authentication** grant control. An authentication strength Conditional Access policy works together with [MFA tr - **Windows Hello for Business** – If the user signed in with Windows Hello for Business as their primary authentication method, it can be used to satisfy an authentication strength requirement that includes Windows Hello for Business. But if the user signed in with another method like password as their primary authentication method, and the authentication strength requires Windows Hello for Business, they get prompted to sign in with Windows Hello for Business. 
++## Known issues ++The following known issues are currently being addressed: ++- **Sign-in frequency** - If both sign-in frequency and authentication strength requirements apply to a sign-in, and the user has previously signed in using a method that meets the authentication strength requirements, the sign-in frequency requirement doesn't apply. [Sign-in frequency](concepts-azure-multi-factor-authentication-prompts-session-lifetime.md) allows you to set the time interval for re-authentication of users based on their credentials, but it isn't fully integrated with authentication strength yet. It works independently and doesn't currently impact the actual sign-in procedure. Therefore, you may notice that some sign-ins using expired credentials don't prompt re-authentication and the sign-in process proceeds successfully. ++- **FIDO2 security key Advanced options** - Advanced options aren't supported for external users with a home tenant that is located in a different Microsoft cloud than the resource tenant. + ## FAQ ### Should I use authentication strength or the Authentication methods policy? Authentication strength is based on the Authentication methods policy. The Authe For example, the administrator of Contoso wants to allow their users to use Microsoft Authenticator with either push notifications or passwordless authentication mode. The administrator goes to the Microsoft Authenticator settings in the Authentication method policy, scopes the policy for the relevant users and sets the **Authentication mode** to **Any**. -Then for ContosoΓÇÖs most sensitive resource, the administrator wants to restrict the access to only passwordless authentication methods. 
The administrator creates a new Conditional Access policy, using the built-in **Passwordless MFA strength**. As a result, users in Contoso can access most of the resources in the tenant using password + push notification from the Microsoft Authenticator OR only using Microsoft Authenticator (phone sign-in). However, when the users in the tenant access the sensitive application, they must use Microsoft Authenticator (phone sign-in). |
active-directory | Concept Certificate Based Authentication Certificateuserids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-certificateuserids.md | |
active-directory | Concept Mfa Regional Opt In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-mfa-regional-opt-in.md | For Voice verification, the following region codes require an opt-in. | 236 | Central African Republic | | 237 | Cameroon | | 238 | Cabo Verde |-| 239 | Sao Tome and Principe | +| 239 | São Tomé and Príncipe | | 240 | Equatorial Guinea | | 241 | Gabon | | 242 | Congo | |
active-directory | Concept Password Ban Bad Combined Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-password-ban-bad-combined-policy.md | description: Learn about the combined password policy and check for weak passwor + Last updated 04/02/2023 |
active-directory | Concept Resilient Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md | |
active-directory | Concept Sspr Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md | |
active-directory | Concepts Azure Multi Factor Authentication Prompts Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md | description: Learn about the recommended configuration for reauthentication prom + Previously updated : 03/28/2023 Last updated : 08/22/2023 Azure Active Directory (Azure AD) has multiple settings that determine how often The Azure AD default configuration for user sign-in frequency is a rolling window of 90 days. Asking users for credentials often seems like a sensible thing to do, but it can backfire. If users are trained to enter their credentials without thinking, they can unintentionally supply them to a malicious credential prompt. -It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, an incompliant device, or an account disable operation. You can also explicitly [revoke users' sessions using PowerShell](/powershell/module/azuread/revoke-azureaduserallrefreshtoken). +It might sound alarming to not ask for a user to sign back in, though any violation of IT policies revokes the session. Some examples include a password change, a noncompliant device, or an account disable operation. You can also explicitly [revoke users' sessions by using Microsoft Graph PowerShell](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession). This article details recommended configurations and how different settings work and interact with each other. To optimize the frequency of authentication prompts for your users, you can conf ### Evaluate session lifetime policies -Without any session lifetime settings, there are no persistent cookies in the browser session. Every time a user closes and open the browser, they get a prompt for reauthentication. 
In Office clients, the default time period is a rolling window of 90 days. With this default Office configuration, if the user has reset their password or there has been inactivity of over 90 days, the user is required to reauthenticate with all required factors (first and second factor). +Without any session lifetime settings, there are no persistent cookies in the browser session. Every time a user closes and opens the browser, they get a prompt for reauthentication. In Office clients, the default time period is a rolling window of 90 days. With this default Office configuration, if the user has reset their password or there has been inactivity of over 90 days, the user is required to reauthenticate with all required factors (first and second factor). A user might see multiple MFA prompts on a device that doesn't have an identity in Azure AD. Multiple prompts result when each application has its own OAuth Refresh Token that isn't shared with other client apps. In this scenario, MFA prompts multiple times as each application requests an OAuth Refresh Token to be validated with MFA. This setting allows configuration of lifetime for token issued by Azure Active D Now that you understand how different settings work and the recommended configuration, it's time to check your tenants. You can start by looking at the sign-in logs to understand which session lifetime policies were applied during sign-in. -Under each sign-in log, go to the **Authentication Details** tab and explore **Session Lifetime Policies Applied**. For more information, see [Authentication details](../reports-monitoring/concept-sign-ins.md#authentication-details). +Under each sign-in log, go to the **Authentication Details** tab and explore **Session Lifetime Policies Applied**. For more information, see [Authentication details](../reports-monitoring/concept-sign-in-log-activity-details.md#authentication-details). 
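The rolling 90-day window described above can be sketched in a few lines. This is an illustrative model only, under assumed names (`needs_reauthentication`, `ROLLING_WINDOW` are hypothetical); real Azure AD token evaluation involves many more signals:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch of the default Office client behavior described above:
# reauthenticate after a password reset or more than 90 days of inactivity.
ROLLING_WINDOW = timedelta(days=90)

def needs_reauthentication(last_activity: datetime,
                           password_reset: bool,
                           now: datetime) -> bool:
    """Return True if all required factors must be re-presented."""
    return password_reset or (now - last_activity) > ROLLING_WINDOW

now = datetime(2023, 9, 1, tzinfo=timezone.utc)
print(needs_reauthentication(now - timedelta(days=30), False, now))   # recently active
print(needs_reauthentication(now - timedelta(days=120), False, now))  # inactive over 90 days
```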
![Screenshot of authentication details.](./media/concepts-azure-multi-factor-authentication-prompts-session-lifetime/details.png) |
active-directory | Fido2 Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/fido2-compatibility.md | The following tables show which transports are supported for each platform. Supp |||--|--| | Edge | ❌ | ❌ | ❌ | | Chrome | ✅ | ❌ | ❌ |-| Firefox | ❌ | ❌ | ❌ | +| Firefox | ✅ | ❌ | ❌ | ### iOS |
active-directory | How To Authentication Find Coverage Gaps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-authentication-find-coverage-gaps.md | There are different ways to check if your admins are covered by an MFA policy. ![Screenshot of the sign-in log.](./media/how-to-authentication-find-coverage-gaps/auth-requirement.png) - Click **Authentication details** for [details about the MFA requirements](../reports-monitoring/concept-sign-ins.md#authentication-details). + When viewing the details of a specific sign-in, select the **Authentication details** tab for details about the MFA requirements. For more information, see [Sign-in log activity details](../reports-monitoring/concept-sign-in-log-activity-details.md). ![Screenshot of the authentication activity details.](./media/how-to-authentication-find-coverage-gaps/details.png) |
active-directory | How To Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md | |
active-directory | How To Mfa Authenticator Lite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-authenticator-lite.md | Microsoft Authenticator Lite is another surface for Azure Active Directory (Azur Users receive a notification in Outlook mobile to approve or deny sign-in, or they can copy a TOTP to use during sign-in. >[!NOTE]->This is an important security enhancement for users authenticating via telecom transports. On June 26, the Microsoft managed value of this feature changed from ΓÇÿdisabledΓÇÖ to ΓÇÿenabledΓÇÖ. If you no longer wish for this feature to be enabled, move the state from 'default' toΓÇÿdisabledΓÇÖ or set users to include and exclude groups. +>These are important security enhancements for users authenticating via telecom transports: +>- On June 26, the Microsoft managed value of this feature changed from 'disabled' to 'enabled' in the Authentication methods policy. If you no longer wish for this feature to be enabled, move the state from 'default' to 'disabled' or scope it to only a group of users. +>- Starting September 18, Authenticator Lite will be enabled as part of the *Notification through mobile app* verification option in the per-user MFA policy. If you don't want this feature enabled, you can disable it in the Authentication methods policy following the steps below. ## Prerequisites -- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for some users or groups by using the modern Authentication methods policy. You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API. Organizations with an active MFA server or that have not started migration from per-user MFA are not eligible for this feature.+- Your organization needs to enable Microsoft Authenticator (second factor) push notifications for all users or select groups. 
We recommend enabling Microsoft Authenticator by using the modern [Authentication methods policy](concept-authentication-methods-manage.md#authentication-methods-policy). You can edit the Authentication methods policy by using the Azure portal or Microsoft Graph API. Organizations with an active MFA server are not eligible for this feature. >[!TIP] >We recommend that you also enable [system-preferred multifactor authentication (MFA)](concept-system-preferred-multifactor-authentication.md) when you enable Authenticator Lite. With system-preferred MFA enabled, users try to sign in with Authenticator Lite before they try less secure telephony methods like SMS or voice call. Users receive a notification in Outlook mobile to approve or deny sign-in, or th ## Enable Authenticator Lite -By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings). On June 26, the Microsoft managed value of this feature changed from ΓÇÿdisabledΓÇÖ to ΓÇÿenabledΓÇÖ +By default, Authenticator Lite is [Microsoft managed](concept-authentication-default-enablement.md#microsoft-managed-settings) in the Authentication methods policy. On June 26, the Microsoft managed value of this feature changed from 'disabled' to 'enabled'. Authenticator Lite is also included as part of the *Notification through mobile app* verification option in the per-user MFA policy. ### Disabling Authenticator Lite in Azure portal UX To disable Authenticator Lite in the Azure portal, complete the following steps: 1. In the Azure portal, click Azure Active Directory > Security > Authentication methods > Microsoft Authenticator. In the Entra admin center, on the sidebar select Azure Active Directory > Protect & Secure > Authentication methods > Microsoft Authenticator. - 2. On the Enable and Target tab, click Yes and All users to enable the Authenticator policy for everyone or add selected users and groups. 
Set the Authentication mode for these users/groups to Any or Push. + 2. On the Enable and Target tab, click Enable and All users to enable the Authenticator policy for everyone or add select groups. Set the Authentication mode for these users/groups to Any or Push. - Only users who are enabled for Microsoft Authenticator here can be enabled to use Authenticator Lite for sign-in, or excluded from it. Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator downloaded on the same device Outlook is downloaded on will not be prompted to register for Authenticator Lite in Outlook. Android users utilizing a personal and work profile on their device may be prompted to register if Authenticator is present on a different profile from the Outlook application. +Users who aren't enabled for Microsoft Authenticator can't see the feature. Users who have Microsoft Authenticator downloaded on the same device Outlook is downloaded on will not be prompted to register for Authenticator Lite in Outlook. Android users utilizing a personal and work profile on their device may be prompted to register if Authenticator is present on a different profile from the Outlook application. -<img width="1112" alt="Entra portal Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png"> +<img width="1112" alt="Microsoft Entra admin center Authenticator settings" src="https://user-images.githubusercontent.com/108090297/228603771-52c5933c-f95e-4f19-82db-eda2ba640b94.png"> 3. On the Configure tab, for **Microsoft Authenticator on companion applications**, change Status to Disabled, and click Save. 
<img width="664" alt="Authenticator Lite configuration settings" src="https://user-images.githubusercontent.com/108090297/228603364-53f2581f-a4e0-42ee-8016-79b23e5eff6c.png"> +>[!NOTE] +> If your organization still manages authentication methods in the per-user MFA policy, you'll need to disable *Notification through mobile app* as a verification option there in addition to the steps above. We recommend doing this only after you've enabled Microsoft Authenticator in the Authentication methods policy. You can continue to manage the remainder of your authentication methods in the per-user MFA policy while Microsoft Authenticator is managed in the modern Authentication methods policy. However, we recommend [migrating](how-to-authentication-methods-manage.md) management of all authentication methods to the modern Authentication methods policy. The ability to manage authentication methods in the per-user MFA policy will be retired September 30, 2024. + ### Enable Authenticator Lite via Graph APIs | Property | Type | Description |
active-directory | How To Mfa Server Migration Utility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md | Take a look at our video for an overview of the MFA Server Migration Utility and ## Limitations and requirements -- The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You donΓÇÖt have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.+- The MFA Server Migration Utility requires a new build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically. - The MFA Server Migration Utility copies the data from the database file onto the user objects in Azure AD. During migration, users can be targeted for Azure AD MFA for testing purposes using [Staged Rollout](../hybrid/connect/how-to-connect-staged-rollout.md). Staged migration lets you test without making any changes to your domain federation settings. Once migrations are complete, you must finalize your migration by making changes to your domain federation settings. - AD FS running Windows Server 2016 or higher is required to provide MFA authentication on any AD FS relying parties, not including Azure AD and Office 365. - Review your AD FS access control policies and make sure none requires MFA to be performed on-premises as part of the authentication process. 
A few important points: During the previous phases, you can remove users from the Staged Rollout folders to take them out of scope of Azure AD MFA and route them back to your on-premises Azure MFA server for all MFA requests originating from Azure AD. -**Phase 3** requires moving all clients that authenticate to the on-premises MFA Server (VPNs, password managers, and so on) to Azure AD federation via SAML/OAUTH. If modern authentication standards arenΓÇÖt supported, you're required to stand up NPS server(s) with the Azure AD MFA extension installed. Once dependencies are migrated, users should no longer use the User portal on the MFA Server, but rather should manage their authentication methods in Azure AD ([aka.ms/mfasetup](https://aka.ms/mfasetup)). Once users begin managing their authentication data in Azure AD, those methods won't be synced back to MFA Server. If you roll back to the on-premises MFA Server after users have made changes to their Authentication Methods in Azure AD, those changes will be lost. After user migrations are complete, change the [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) domain federation setting. The change tells Azure AD to no longer perform MFA on-premises and to perform _all_ MFA requests with Azure AD MFA, regardless of group membership. +**Phase 3** requires moving all clients that authenticate to the on-premises MFA Server (VPNs, password managers, and so on) to Azure AD federation via SAML/OAUTH. If modern authentication standards aren't supported, you're required to stand up NPS server(s) with the Azure AD MFA extension installed. Once dependencies are migrated, users should no longer use the User portal on the MFA Server, but rather should manage their authentication methods in Azure AD ([aka.ms/mfasetup](https://aka.ms/mfasetup)). 
Once users begin managing their authentication data in Azure AD, those methods won't be synced back to MFA Server. If you roll back to the on-premises MFA Server after users have made changes to their Authentication Methods in Azure AD, those changes will be lost. After user migrations are complete, change the [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) domain federation setting. The change tells Azure AD to no longer perform MFA on-premises and to perform _all_ MFA requests with Azure AD MFA, regardless of group membership. The following sections explain the migration steps in more detail. Open MFA Server, click **Company Settings**: |OATH Token tab|Not applicable; Azure AD MFA uses a default message for OATH tokens| |Reports|[Azure AD Authentication Methods Activity reports](howto-authentication-methods-activity.md)| -<sup>*</sup>When a PIN is used to provide proof-of-presence functionality, the functional equivalent is provided above. PINs that arenΓÇÖt cryptographically tied to a device don't sufficiently protect against scenarios where a device has been compromised. To protect against these scenarios, including [SIM swap attacks](https://wikipedia.org/wiki/SIM_swap_scam), move users to more secure methods according to Microsoft authentication methods [best practices](concept-authentication-methods.md). +<sup>*</sup>When a PIN is used to provide proof-of-presence functionality, the functional equivalent is provided above. PINs that aren't cryptographically tied to a device don't sufficiently protect against scenarios where a device has been compromised. To protect against these scenarios, including [SIM swap attacks](https://wikipedia.org/wiki/SIM_swap_scam), move users to more secure methods according to Microsoft authentication methods [best practices](concept-authentication-methods.md). 
<sup>**</sup>The default SMS MFA experience in Azure AD MFA sends users a code, which they're required to enter in the login window as part of authentication. The requirement to roundtrip the SMS code provides proof-of-presence functionality. Open MFA Server, click **User Portal**: |Use OATH token for fallback|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)| |Session Timeout|| |**Security Questions tab** |Security questions in MFA Server were used to gain access to the User portal. Azure AD MFA only supports security questions for self-service password reset. See [security questions documentation](concept-authentication-security-questions.md).|-|**Passed Sessions tab**|All authentication method registration flows are managed by Azure AD and donΓÇÖt require configuration| +|**Passed Sessions tab**|All authentication method registration flows are managed by Azure AD and don't require configuration| |**Trusted IPs**|[Azure AD trusted IPs](howto-mfa-mfasettings.md#trusted-ips)| Any MFA methods available in MFA Server must be enabled in Azure AD MFA by using [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings). Users can't try their newly migrated MFA methods unless they're enabled. #### Authentication services Azure MFA Server can provide MFA functionality for third-party solutions that use RADIUS or LDAP by acting as an authentication proxy. To discover RADIUS or LDAP dependencies, click **RADIUS Authentication** and **LDAP Authentication** options in MFA Server. For each of these dependencies, determine if these third parties support modern authentication. If so, consider federation directly with Azure AD. -For RADIUS deployments that canΓÇÖt be upgraded, youΓÇÖll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md). +For RADIUS deployments that can't be upgraded, you'll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md). 
-For LDAP deployments that canΓÇÖt be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](../architecture/auth-ldap.md). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md). +For LDAP deployments that can't be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](../architecture/auth-ldap.md). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md). -If you enabled the [MFA Server Authentication provider in AD FS 2.0](./howto-mfaserver-adfs-windows-server.md#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, youΓÇÖll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies. +If you enabled the [MFA Server Authentication provider in AD FS 2.0](./howto-mfaserver-adfs-windows-server.md#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, you'll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies. 
### Backup Azure AD MFA Server datafile Make a backup of the MFA Server data file located at %programfiles%\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata (default location) on your primary MFA Server. Make sure you have a copy of the installer for your currently installed version in case you need to roll back. If you no longer have a copy, contact Customer Support Services. The **Settings** option allows you to change the settings for the migration proc - User Match – Allows you to specify a different on-premises Active Directory attribute for matching Azure AD UPN instead of the default match to userPrincipalName: - The migration utility tries direct matching to UPN before using the on-premises Active Directory attribute. - If no match is found, it calls a Windows API to find the Azure AD UPN and get the SID, which it uses to search the MFA Server user list. - - If the Windows API doesnΓÇÖt find the user or the SID isnΓÇÖt found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list. + - If the Windows API doesn't find the user or the SID isn't found in the MFA Server, then it will use the configured Active Directory attribute to find the user in the on-premises Active Directory, and then use the SID to search the MFA Server user list. - Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval defined. - Synchronization server – Allows the MFA Server Migration Sync service to run on a secondary MFA Server rather than only run on the primary. 
To configure the Migration Sync service to run on a secondary server, the `Configure-MultiFactorAuthMigrationUtility.ps1` script must be run on the server to register a certificate with the MFA Server Migration Utility app registration. The certificate is used to authenticate to Microsoft Graph. The manual process steps are: 1. To begin the migration process for a user or selection of multiple users, press and hold the Ctrl key while selecting each of the user(s) you wish to migrate. 1. After you select the desired users, click **Migrate Users** > **Selected users** > **OK**. 1. To migrate all users in the group, click **Migrate Users** > **All users in AAD group** > **OK**.-1. You can migrate users even if they are unchanged. By default, the utility is set to **Only migrate users that have changed**. Click **Migrate all users** to re-migrate previously migrated users that are unchanged. Migrating unchanged users can be useful during testing if an administrator needs to reset a userΓÇÖs Azure MFA settings and wants to re-migrate them. +1. You can migrate users even if they are unchanged. By default, the utility is set to **Only migrate users that have changed**. Click **Migrate all users** to re-migrate previously migrated users that are unchanged. Migrating unchanged users can be useful during testing if an administrator needs to reset a user's Azure MFA settings and wants to re-migrate them. :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/migrate-users.png" alt-text="Screenshot of Migrate users dialog."::: The following table lists the sync logic for the various methods. 
|**Mobile App**|Maximum of five devices will be migrated or only four if the user also has a hardware OATH token.<br>If there are multiple devices with the same name, only migrate the most recent one.<br>Devices will be ordered from newest to oldest.<br>If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If there's no match on OATH Token Secret Key, match on Device Token<br>-- If found, create a Software OATH Token for the MFA Server device to allow OATH Token method to work. Notifications will still work using the existing Azure AD MFA device.<br>-- If not found, create a new device.<br>If adding a new device will exceed the five-device limit, the device will be skipped. | |**OATH Token**|If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If not found, add a new Hardware OATH Token device.<br>If adding a new device will exceed the five-device limit, the OATH token will be skipped.| -MFA Methods will be updated based on what was migrated and the default method will be set. MFA Server will track the last migration timestamp and only migrate the user again if the userΓÇÖs MFA settings change or an admin modifies what to migrate in the **Settings** dialog. +MFA Methods will be updated based on what was migrated and the default method will be set. MFA Server will track the last migration timestamp and only migrate the user again if the user's MFA settings change or an admin modifies what to migrate in the **Settings** dialog. During testing, we recommend doing a manual migration first, and test to ensure a given number of users behave as expected. Once testing is successful, turn on automatic synchronization for the Azure AD group you wish to migrate. As you add users to this group, their information will be automatically synchronized to Azure AD. MFA Server Migration Utility targets one Azure AD group, however that group can encompass both users and nested groups of users. 
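The device-selection rules in the sync-logic table above (most recent device per name, newest to oldest, five-device cap, or four when a hardware OATH token is present) can be sketched as follows. This is an illustrative sketch only; the function name and tuple shape are hypothetical, not the utility's actual implementation:

```python
from datetime import datetime

def select_devices(devices, has_hardware_oath_token):
    """devices: list of (name, registered_at) tuples.

    Applies the table's rules: keep only the most recent device per name,
    order newest to oldest, and cap at five devices (four if the user
    also has a hardware OATH token).
    """
    # Keep only the most recent registration for each device name.
    latest = {}
    for name, registered_at in devices:
        if name not in latest or registered_at > latest[name]:
            latest[name] = registered_at
    # Order newest to oldest and apply the device cap.
    limit = 4 if has_hardware_oath_token else 5
    ordered = sorted(latest.items(), key=lambda item: item[1], reverse=True)
    return [name for name, _ in ordered[:limit]]

devices = [(f"phone-{i}", datetime(2023, 1, i + 1)) for i in range(6)]
print(select_devices(devices, has_hardware_oath_token=False))  # five newest names
```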
Once complete, a confirmation will inform you of the tasks completed: As mentioned in the confirmation message, it can take several minutes for the migrated data to appear on user objects within Azure AD. Users can view their migrated methods by navigating to [aka.ms/mfasetup](https://aka.ms/mfasetup). +#### View migration details ++You can use Audit logs or Log Analytics to view details of MFA Server to Azure MFA user migrations. ++##### Use Audit logs +To access the Audit logs in the Azure portal to view details of MFA Server to Azure MFA user migrations, follow these steps: ++1. Click **Azure Active Directory** > **Audit logs**. To filter the logs, click **Add filters**. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/add-filter.png" alt-text="Screenshot of how to add filters."::: ++1. Select **Initiated by (actor)** and click **Apply**. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/actor.png" alt-text="Screenshot of Initiated by Actor option."::: ++1. Type _Azure MFA Management_ and click **Apply**. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/apply-actor.png" alt-text="Screenshot of MFA management option."::: ++1. This filter displays only MFA Server Migration Utility logs. To view details for a user migration, click a row, and then choose the **Modified Properties** tab. This tab shows changes to registered MFA methods and phone numbers. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/changes.png" alt-text="Screenshot of user migration details."::: ++ The following table lists the authentication method for each code. ++ | Code | Method | + |:--|:| + | 0 | Voice mobile | + | 2 | Voice office | + | 3 | Voice alternate mobile | + | 5 | SMS | + | 6 | Microsoft Authenticator push notification | + | 7 | Hardware or software token OTP | ++1. 
If any user devices were migrated, there is a separate log entry. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/migrated-device.png" alt-text="Screenshot of a migrated device."::: +++##### Use Log Analytics ++The details of MFA Server to Azure MFA user migrations can also be queried using Log Analytics. + +```kusto +AuditLogs +| where ActivityDateTime > ago(7d) +| extend InitiatedBy = tostring(InitiatedBy["app"]["displayName"]) +| where InitiatedBy == "Azure MFA Management" +| extend UserObjectId = tostring(TargetResources[0]["id"]) +| extend Upn = tostring(TargetResources[0]["userPrincipalName"]) +| extend ModifiedProperties = TargetResources[0]["modifiedProperties"] +| project ActivityDateTime, InitiatedBy, UserObjectId, Upn, ModifiedProperties +| order by ActivityDateTime asc +``` ++This screenshot shows changes for user migration: +++This screenshot shows changes for device migration: +++Log Analytics can also be used to summarize user migration activity. ++```kusto +AuditLogs +| where ActivityDateTime > ago(7d) +| extend InitiatedBy = tostring(InitiatedBy["app"]["displayName"]) +| where InitiatedBy == "Azure MFA Management" +| extend UserObjectId = tostring(TargetResources[0]["id"]) +| summarize UsersMigrated = dcount(UserObjectId) by InitiatedBy, bin(ActivityDateTime, 1d) +``` ++ ### Validate and test Once you've successfully migrated user data, you can validate the end-user experience using Staged Rollout before making the global tenant change. The following process will allow you to target specific Azure AD group(s) for Staged Rollout for MFA. Staged Rollout tells Azure AD to perform MFA by using Azure AD MFA for users in the targeted groups, rather than sending them on-premises to perform MFA. You can validate and test. We recommend using the Azure portal, but if you prefer, you can also use Microsoft Graph. Once you've successfully migrated user data, you can validate the end-user exper 1. 
Are users able to authenticate successfully using Hardware OATH tokens? ### Educate users-Ensure users know what to expect when they're moved to Azure MFA, including new authentication flows. You may also wish to instruct users to use the Azure AD Combined Registration portal ([aka.ms/mfasetup](https://aka.ms/mfasetup)) to manage their authentication methods rather than the User portal once migrations are complete. Any changes made to authentication methods in Azure AD won't propagate back to your on-premises environment. In a situation where you had to roll back to MFA Server, any changes users have made in Azure AD wonΓÇÖt be available in the MFA Server User portal. +Ensure users know what to expect when they're moved to Azure MFA, including new authentication flows. You may also wish to instruct users to use the Azure AD Combined Registration portal ([aka.ms/mfasetup](https://aka.ms/mfasetup)) to manage their authentication methods rather than the User portal once migrations are complete. Any changes made to authentication methods in Azure AD won't propagate back to your on-premises environment. In a situation where you had to roll back to MFA Server, any changes users have made in Azure AD won't be available in the MFA Server User portal. -If you use third-party solutions that depend on Azure MFA Server for authentication (see [Authentication services](#authentication-services)), youΓÇÖll want users to continue to make changes to their MFA methods in the User portal. These changes will be synced to Azure AD automatically. Once you've migrated these third party solutions, you can move users to the Azure AD combined registration page. +If you use third-party solutions that depend on Azure MFA Server for authentication (see [Authentication services](#authentication-services)), you'll want users to continue to make changes to their MFA methods in the User portal. These changes will be synced to Azure AD automatically. 
Once you've migrated these third-party solutions, you can move users to the Azure AD combined registration page. ### Complete user migration Repeat migration steps found in [Migrate user data](#migrate-user-data) and [Validate and test](#validate-and-test) sections until all user data is migrated. Repeat migration steps found in [Migrate user data](#migrate-user-data) and [Val Using the data points you collected in [Authentication services](#authentication-services), begin carrying out the various migrations necessary. Once this is completed, consider having users manage their authentication methods in the combined registration portal, rather than in the User portal on MFA server. ### Update domain federation settings-Once you've completed user migrations, and moved all of your [Authentication services](#authentication-services) off of MFA Server, itΓÇÖs time to update your domain federation settings. After the update, Azure AD no longer sends MFA request to your on-premises federation server. +Once you've completed user migrations, and moved all of your [Authentication services](#authentication-services) off of MFA Server, it's time to update your domain federation settings. After the update, Azure AD no longer sends MFA requests to your on-premises federation server. To configure Azure AD to ignore MFA requests to your on-premises federation server, install the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-&preserve-view=true) and set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `rejectMfaByFederatedIdp`, as shown in the following example. Content-Type: application/json } ``` 
+Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect. >[!NOTE] >The update of the domain federation setting can take up to 24 hours to take effect. |
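The `federatedIdpMfaBehavior` change described above is a Microsoft Graph PATCH against the domain's `internalDomainFederation` configuration. A minimal sketch of building that request in Python follows; the domain name, federation configuration ID, and the `build_federation_mfa_patch` helper are illustrative placeholders, not part of any SDK, and the sketch only constructs the request rather than sending it (a real call also needs an OAuth bearer token).

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_federation_mfa_patch(domain_id: str, federation_config_id: str) -> dict:
    """Build (but do not send) the Graph PATCH request that tells Azure AD
    to reject MFA performed by the federated IdP. IDs are placeholders."""
    return {
        "method": "PATCH",
        "url": f"{GRAPH_BASE}/domains/{domain_id}/federationConfiguration/{federation_config_id}",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"federatedIdpMfaBehavior": "rejectMfaByFederatedIdp"}),
    }

# Illustrative usage with placeholder identifiers:
req = build_federation_mfa_patch("contoso.com", "6601d14b-0000-0000-0000-9b5ddda18ecc")
```

You could hand the resulting pieces to any HTTP client; the key point is the one-property JSON body and the per-domain federation configuration URL.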
active-directory | How To Migrate Mfa Server To Azure Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md | description: Step-by-step guidance to migrate from MFA Server on-premises to Azu + Last updated 01/29/2023 |
active-directory | How To Migrate Mfa Server To Mfa With Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-mfa-with-federation.md | Title: Migrate to Azure AD MFA with federations description: Step-by-step guidance to move from MFA Server on-premises to Azure AD MFA with federation + Last updated 05/23/2023 |
active-directory | Howto Authentication Passwordless Phone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-phone.md | description: Enable passwordless sign-in to Azure AD using Microsoft Authenticat + Last updated 05/16/2023 |
active-directory | Howto Authentication Use Email Signin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-use-email-signin.md | description: Learn how to enable users to sign in to Azure Active Directory with + Last updated 06/01/2023 -> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> Sign-in to Azure AD with email as an alternate login ID is a public preview feature of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse). Many organizations want to let users sign in to Azure Active Directory (Azure AD) using the same credentials as their on-premises directory environment. With this approach, known as hybrid authentication, users only need to remember one set of credentials. |
active-directory | Howto Mfa Getstarted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-getstarted.md | Title: Deployment considerations for Azure AD Multi-Factor Authentication description: Learn about deployment considerations and strategy for successful implementation of Azure AD Multi-Factor Authentication + Last updated 03/06/2023 |
active-directory | Howto Mfa Mfasettings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-mfasettings.md | To unblock a user, complete the following steps: Users who report an MFA prompt as suspicious are set to **High User Risk**. Administrators can use risk-based policies to limit access for these users, or enable self-service password reset (SSPR) for users to remediate problems on their own. If you previously used the **Fraud Alert** automatic blocking feature and don't have an Azure AD P2 license for risk-based policies, you can use risk detection events to identify and disable impacted users and automatically prevent their sign-in. For more information about using risk-based policies, see [Risk-based access policies](../identity-protection/concept-identity-protection-policies.md). -To enable **Report suspicious activity** from the Authentication Methods Settings: +To enable **Report suspicious activity** from the Authentication methods **Settings**: 1. In the Azure portal, click **Azure Active Directory** > **Security** > **Authentication Methods** > **Settings**. -1. Set **Report suspicious activity** to **Enabled**. +1. Set **Report suspicious activity** to **Enabled**. The feature remains disabled if you choose **Microsoft managed**. For more information about Microsoft managed values, see [Protecting authentication methods in Azure Active Directory](concept-authentication-default-enablement.md). 1. Select **All users** or a specific group. +1. Select a **Reporting code**. +1. Click **Save**. ++>[!NOTE] +>If you enable **Report suspicious activity** and specify a custom voice reporting value while the tenant still has **Fraud Alert** enabled in parallel with a custom voice reporting number configured, the **Report suspicious activity** value will be used instead of **Fraud Alert**. 
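The portal steps above (enable **Report suspicious activity**, pick a target, set a reporting code) can also be expressed as an update to the Graph authentication methods policy. The sketch below only builds the request payload; the helper name is illustrative, the property shape follows the Graph `reportSuspiciousActivitySettings` documentation as I understand it, and the group ID is a placeholder — verify against the current Graph reference before use.

```python
import json

def build_report_suspicious_activity_patch(target_id: str = "all_users",
                                           voice_code: int = 0) -> dict:
    """Sketch of the authenticationMethodsPolicy PATCH that enables
    Report suspicious activity for a target group (placeholder IDs)."""
    body = {
        "reportSuspiciousActivitySettings": {
            "state": "enabled",                 # "default" leaves it Microsoft managed
            "includeTarget": {"targetType": "group", "id": target_id},
            "voiceReportingCode": voice_code,   # the Reporting code chosen in the portal
        }
    }
    return {
        "method": "PATCH",
        "url": "https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

Passing `"all_users"` as the target mirrors the **All users** choice in the portal; a group object ID scopes the setting to that group.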
### View suspicious activity events OATH TOTP hardware tokens typically come with a secret key, or seed, pre-program Programmable OATH TOTP hardware tokens that can be reseeded can also be set up with Azure AD in the software token setup flow. -OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms). +OATH hardware tokens are supported as part of a public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://aka.ms/EntraPreviewsTermsOfUse). ![Screenshot that shows the OATH tokens section.](media/concept-authentication-methods/mfa-server-oath-tokens-azure-ad.png) The following table lists more numbers for different countries. | Sri Lanka | +94 117750440 | | Sweden | +46 701924176 | | Taiwan | +886 277515260 |-| Turkey | +90 8505404893 | +| Türkiye | +90 8505404893 | | Ukraine | +380 443332393 | | United Arab Emirates | +971 44015046 | | Vietnam | +84 2039990161 | |
active-directory | Howto Mfa Nps Extension Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md | If you encounter errors with the NPS extension for Azure AD Multi-Factor Authent | **REQUEST_FORMAT_ERROR** <br> Radius Request missing mandatory Radius userName\Identifier attribute.Verify that NPS is receiving RADIUS requests | This error usually reflects an installation issue. The NPS extension must be installed in NPS servers that can receive RADIUS requests. NPS servers that are installed as dependencies for services like RDG and RRAS don't receive radius requests. NPS Extension does not work when installed over such installations and errors out since it cannot read the details from the authentication request. | | **REQUEST_MISSING_CODE** | Make sure that the password encryption protocol between the NPS and NAS servers supports the secondary authentication method that you're using. **PAP** supports all the authentication methods of Azure AD MFA in the cloud: phone call, one-way text message, mobile app notification, and mobile app verification code. **CHAPV2** and **EAP** support phone call and mobile app notification. | | **USERNAME_CANONICALIZATION_ERROR** | Verify that the user is present in your on-premises Active Directory instance, and that the NPS Service has permissions to access the directory. If you are using cross-forest trusts, [contact support](#contact-microsoft-support) for further help. |+| **Challenge requested in Authentication Ext for User** | Organizations using a RADIUS protocol other than PAP will observe user VPN authorization failing with these events appearing in the AuthZOptCh event log of the NPS Extension server. You can configure the NPS Server to support PAP. If PAP is not an option, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to Approve/Deny push notifications. 
For further help, please check [Number matching using NPS Extension](how-to-mfa-number-match.md#nps-extension). | ### Alternate login ID errors |
active-directory | Howto Mfa Nps Extension Rdg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-rdg.md | description: Integrate your Remote Desktop Gateway infrastructure with Azure AD + Last updated 01/29/2023 |
active-directory | Howto Mfa Nps Extension Vpn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-vpn.md | description: Integrate your VPN infrastructure with Azure AD MFA by using the Ne + Last updated 01/29/2023 |
active-directory | Howto Mfa Nps Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension.md | |
active-directory | Howto Mfa Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-reporting.md | |
active-directory | Howto Mfa Userstates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md | |
active-directory | Howto Password Smart Lockout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-smart-lockout.md | Based on your organizational requirements, you can customize the Azure AD smart To check or modify the smart lockout values for your organization, complete the following steps: -1. Sign in to the [Entra portal](https://entra.microsoft.com/#home). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/#home). 1. Search for and select *Azure Active Directory*, then select **Security** > **Authentication methods** > **Password protection**. 1. Set the **Lockout threshold**, based on how many failed sign-ins are allowed on an account before its first lockout. |
active-directory | Howto Registration Mfa Sspr Combined Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md | description: Troubleshoot Azure AD Multi-Factor Authentication and self-service + Last updated 01/29/2023 |
active-directory | Howto Sspr Authenticationdata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md | |
active-directory | V1 Permissions Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/v1-permissions-consent.md | |
active-directory | Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md | This article answers frequently asked questions (FAQs) about Microsoft Entra Per Microsoft Entra Permissions Management (Permissions Management) is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle. - ## What are the prerequisites to use Permissions Management? Permissions Management supports data collection from AWS, GCP, and/or Microsoft Azure. For data collection and analysis, customers are required to have an Azure Active Directory (Azure AD) account to use Permissions Management. Permissions Management currently supports the three major public clouds: Amazon Permissions Management currently doesn't support hybrid environments. -## What types of identities are supported by Permissions Management? +## What types of identities does Permissions Management support? Permissions Management supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions). The Permissions Creep Index (PCI) is a quantitative measure of risk associated w ## How can customers use Permissions Management to delete unused or excessive permissions? -Permissions Management allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. 
The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size permissions of that identity to only the permissions that are being used for day-to-day operations. All unused and other risky permissions can be automatically removed. +Permissions Management allows users to right-size excessive permissions and automate least privilege policy enforcement with just a few clicks. The solution continuously analyzes historical permission usage data for each identity and gives customers the ability to right-size the permissions of that identity to permissions that are only being used for day-to-day operations. All unused and other risky permissions can be automatically removed. ## How can customers grant permissions on-demand with Permissions Management? No, Permissions Management doesn't have access to sensitive personal data. You can read our [blog](https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/bg-p/Identity) and visit our [web page](https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-permissions-management). You can also get in touch with your Microsoft point of contact to schedule a demo. -## What is the data destruction/decommission process? +## What is the data destruction/decommission process? ++If a customer initiates a free Permissions Management 45-day trial and does not convert to a paid license within 45 days of the trial expiration, all collected data is deleted within 30 days of the trial expiration date. ++If a customer decides to discontinue licensing the service, all previously collected data is deleted within 30 days of license termination. ++Customers can also remove, export or modify specific data if a Global Administrator using the Permissions Management service files an official Data Subject Request. 
To file a request: -If a customer initiates a free Permissions Management 45-day trial, but does not follow up and convert to a paid license within 45 days of the free trial expiration, we will delete all collected data on or just before 45 days. +If you're an enterprise customer, you can contact your Microsoft representative, account team, or tenant admin to file a high-priority IcM support ticket requesting a Data Subject Request. Do not include details or any personally identifiable information in the IcM request. We'll reach out to you for these details only after an IcM is filed. -If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 45 days of license termination. +If you're a self-service customer (you set up a trial or paid license in the Microsoft 365 admin center) you can contact the Permissions Management privacy team by selecting your profile drop-down menu, then **Account Settings** in Permissions Management. Follow the instructions to make a Data Subject Access Request. -We also have the ability to remove, export or modify specific data should the Global Administrator using the Entra Permissions Management service file an official Data Subject Request. This can be initiated by opening a ticket in the Azure portal [New support request - Microsoft Entra admin center](https://entra.microsoft.com/#blade/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical), or alternately contacting your local Microsoft representative. +Learn more about [Azure Data Subject Requests](https://go.microsoft.com/fwlink/?linkid=2245178). ## Do I require a license to use Entra Permissions Management? |
active-directory | Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md | This option detects all AWS accounts that are accessible through OIDC role acces On the **Data Collectors** dashboard, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.** - You have now completed onboarding AWS, and Permissions Management has started collecting and processing your data. + The status column in your Permissions Management UI shows you which step of data collection you're at: + + - **Pending**: Permissions Management has not started detecting or onboarding yet. + - **Discovering**: Permissions Management is detecting the authorization systems. + - **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding. + - **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management. ### 7. View the data |
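The four data-collection states listed above (they appear identically in the AWS, Azure, and GCP onboarding flows) always advance in the same order. A small, purely illustrative Python sketch of that progression — the enum and helper are my own modeling, not part of any Permissions Management API:

```python
from enum import Enum

class OnboardingStatus(Enum):
    """Data-collection states shown in the Permissions Management UI."""
    PENDING = "Pending"          # detection/onboarding not started
    DISCOVERING = "Discovering"  # detecting the authorization systems
    IN_PROGRESS = "In progress"  # detection done, onboarding running
    ONBOARDED = "Onboarded"      # all detected systems onboarded

ORDER = list(OnboardingStatus)  # Enum preserves definition order

def next_status(current: OnboardingStatus) -> OnboardingStatus:
    """Advance one step; Onboarded is the terminal state."""
    i = ORDER.index(current)
    return ORDER[min(i + 1, len(ORDER) - 1)]
```

For example, a collector that is `DISCOVERING` moves next to `IN_PROGRESS`, and once `ONBOARDED` it stays there.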
active-directory | Onboard Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md | To view status of onboarding after saving the configuration: ### 2. Review and save. -- In **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.+1. In **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**. The following message appears: **Successfully Created Configuration.** On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.** - You have now completed onboarding Azure, and Permissions Management has started collecting and processing your data. + The status column in your Permissions Management UI shows you which step of data collection you're at: + + - **Pending**: Permissions Management has not started detecting or onboarding yet. + - **Discovering**: Permissions Management is detecting the authorization systems. + - **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding. + - **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management. ### 3. View the data. -- To view the data, select the **Authorization Systems** tab.+1. To view the data, select the **Authorization Systems** tab. The **Status** column in the table displays **Collecting Data.** |
active-directory | Onboard Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md | The required commands to run in Google Cloud Shell are listed in the Manage Auth ### 3. Review and save. -- In the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**.+1. In the **Permissions Management Onboarding – Summary** page, review the information you've added, and then select **Verify Now & Save**. The following message appears: **Successfully Created Configuration**. On the **Data Collectors** tab, the **Recently Uploaded On** column displays **Collecting**. The **Recently Transformed On** column displays **Processing.**-- You've completed onboarding GCP, and Permissions Management has started collecting and processing your data. + + The status column in your Permissions Management UI shows you which step of data collection you're at: + + - **Pending**: Permissions Management has not started detecting or onboarding yet. + - **Discovering**: Permissions Management is detecting the authorization systems. + - **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding. + - **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management. ### 4. View the data. -- To view the data, select the **Authorization Systems** tab.+1. To view the data, select the **Authorization Systems** tab. The **Status** column in the table displays **Collecting Data.** |
active-directory | Permissions Management Quickstart Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-quickstart-guide.md | + + Title: Microsoft Entra Permissions Management Quickstart Guide +description: Quickstart guide - How to quickly onboard your Microsoft Entra Permissions Management product +# CustomerIntent: As a security administrator, I want to successfully onboard Permissions Management so that I can enable identity security in my cloud environment as efficiently as possible. +++++++ Last updated : 08/24/2023++++# Quickstart guide to Microsoft Entra Permissions Management ++Welcome to the Quickstart Guide for Microsoft Entra Permissions Management. ++Permissions Management is a Cloud Infrastructure Entitlement Management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. These identities include over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management helps your organization effectively secure and manage cloud permissions by detecting, automatically right-sizing, and continuously monitoring unused and excessive permissions. ++With this quickstart guide, you'll set up your multicloud environment(s), configure data collection, and enable permissions access to ensure your cloud identities are managed and secure. ++## Prerequisites ++Before you begin, you need access to these tools for the onboarding process: ++- Access to a local BASH shell with the Azure CLI or Azure Cloud Shell using BASH environment (Azure CLI is included). +- Access to AWS, Azure, and GCP consoles. 
+- A user with *Global Administrator* or *Permissions Management Administrator* role assignments is required to create a new app registration in the Entra ID tenant for AWS and GCP onboarding. +++## Step 1: Set up Permissions Management ++To enable Permissions Management, you must have a Microsoft Entra ID tenant (for example, the Entra admin center). +- If you have an Azure account, you automatically have an Entra admin center tenant. +- If you don't already have one, create a free account at [entra.microsoft.com.](https://entra.microsoft.com) ++If the above points are met, continue with: ++[Enable Microsoft Entra Permissions Management in your organization](onboard-enable-tenant.md) ++Ensure you're a *Global Administrator* or *Permissions Management Administrator*. Learn more about [Permissions Management roles and permissions](product-roles-permissions.md). ++ +## Step 2: Onboard your multicloud environment ++So far, you've: ++1. Been assigned the *Permissions Management Administrator* role in your Entra admin center tenant. +2. Purchased licenses or activated your 45-day free trial for Permissions Management. +3. Successfully launched Permissions Management. ++Now, you're going to learn about the role and settings of the Controller and Data collection modes in Permissions Management. ++### Set the controller +The controller gives you the choice to determine the level of access you grant to users in Permissions Management. ++- Enabling the controller during onboarding grants Permissions Management admin access, or read and write access, so users can right-size permissions and remediate directly through Permissions Management (instead of going to the AWS, Azure, or GCP consoles). ++- Disabling the controller during onboarding, or never enabling it, grants a Permissions Management user read-only access to your environment(s). ++> [!NOTE] +> If you don't enable the controller during onboarding, you have the option to enable it after onboarding is complete. 
To set the controller in Permissions Management after onboarding, see [Enable or disable the controller after onboarding](onboard-enable-controller-after-onboarding.md). +> For AWS environments, once you have enabled the controller, you *cannot* disable it. ++To set the controller settings during onboarding: +1. Select **Enable** to give read and write access to Permissions Management. +2. Select **Disable** to give read-only access to Permissions Management. ++### Configure data collection ++There are three modes to choose from in order to collect data in Permissions Management. ++- **Automatic (recommended)** +Permissions Management automatically discovers, onboards, and monitors all current and future subscriptions. ++- **Manual** +Manually enter individual subscriptions for Permissions Management to discover, onboard, and monitor. You can enter up to 100 subscriptions per data collection. ++- **Select** +Permissions Management automatically discovers all current subscriptions. Once discovered, you select which subscriptions to onboard and monitor. ++> [!NOTE] +> To use **Automatic** or **Select** modes, the controller must be enabled while configuring data collection. ++To configure data collection: +1. In Permissions Management, navigate to the data collectors page. +2. Select a cloud environment: AWS, Azure, or GCP. +3. Click **Create configuration**. ++### Onboard Amazon Web Services (AWS) +Since Permissions Management is hosted on Microsoft Entra, there are more steps to take to onboard your AWS environment. ++To connect AWS to Permissions Management, you must create an Entra ID application in the Entra admin center tenant where Permissions Management is enabled. This Entra ID application is used to set up an OIDC connection to your AWS environment. 
++*OpenID Connect (OIDC) is an interoperable authentication protocol based on the OAuth 2.0 family of specifications.* ++### Prerequisites ++A user must have *Global Administrator* or *Permissions Management Administrator* role assignments to create a new app registration in Entra ID. ++Account IDs and roles for: +- AWS OIDC account: An AWS member account designated by you to create and host the OIDC connection through an OIDC IdP +- AWS Logging account (optional but recommended) +- AWS Management account (optional but recommended) +- AWS member accounts monitored and managed by Permissions Management (for manual mode) ++To use **Automatic** or **Select** data collection modes, you must connect your AWS Management account. ++During this step, you can enable the controller by entering the name of the S3 bucket with AWS CloudTrail activity logs (found on AWS Trails). ++To onboard your AWS environment and configure data collection, see [Onboard an Amazon Web Services (AWS) account](onboard-aws.md). ++### Onboard Microsoft Azure +When you enabled Permissions Management in the Entra ID tenant, an enterprise application for CIEM was created. To onboard your Azure environment, you grant permissions to this application for Permissions management. ++1. In the Entra ID tenant where Permissions management is enabled, locate the **Cloud Infrastructure Entitlement Management (CIEM)** enterprise application. ++2. Assign the *Reader* role to the CIEM application to allow Permissions management to read the Entra subscriptions in your environment. ++### Prerequisites +- A user with ```Microsoft.Authorization/roleAssignments/write``` permissions at the subscription or management group scope. ++- To use **Automatic** or **Select** data collection modes, you must assign the *Reader* role at the Management group scope. ++- To enable the controller, you must assign the *User Access Administrator* role to the CIEM application. 
++To onboard your Azure environment and configure data collection, see [Onboard a Microsoft Azure subscription](onboard-azure.md). +++### Onboard Google Cloud Platform (GCP) +Because Permissions Management is hosted on Microsoft Azure, there are additional steps to take to onboard your GCP environment. ++To connect GCP to Permissions Management, you must create an Entra admin center application in the Entra ID tenant where Permissions Management is enabled. This Entra admin center application is used to set up an OIDC connection to your GCP environment. ++*OpenID Connect (OIDC) is an interoperable authentication protocol based on the OAuth 2.0 family of specifications.* ++ +### Prerequisites +A user with the ability to create a new app registration in Entra (needed to facilitate the OIDC connection) is required for AWS and GCP onboarding. + +ID details for: +- GCP OIDC project: a GCP project designated by you to create and host the OIDC connection through an OIDC IdP. + - Project number and project ID +- GCP OIDC Workload identity + - Pool ID, pool provider ID +- GCP OIDC service account + - G-suite IdP Secret name and G-suite IdP user email (optional) + - IDs for the GCP projects you wish to onboard (optional, for manual mode) ++Assign the *Viewer* and *Security Reviewer* roles to the GCP service account at the organization, folder, or project levels to grant Permissions Management read access to your GCP environment. ++During this step, you have the option to **Enable** controller mode by assigning the *Role Administrator* and *Security Administrator* roles to the GCP service account at the organization, folder, or project levels. ++> [!NOTE] +> The Permissions Management default scope is at the project level. ++To onboard your GCP environment and configure data collection, see [Onboard a GCP project](onboard-gcp.md). ++## Summary ++Congratulations! You have finished configuring data collection for your environment(s), and the data collection process has begun. 
++The status column in your Permissions Management UI shows which step of the data collection process you're in. ++ +- **Pending**: Permissions Management has not started detecting or onboarding yet. +- **Discovering**: Permissions Management is detecting the authorization systems. +- **In progress**: Permissions Management has finished detecting the authorization systems and is onboarding. +- **Onboarded**: Data collection is complete, and all detected authorization systems are onboarded to Permissions Management. ++> [!NOTE] +> Data collection might take time depending on the number of authorization systems you've onboarded. While the data collection process continues, you can begin setting up [users and groups in Permissions Management](how-to-add-remove-user-to-group.md). ++## Next steps ++- [Enable or disable the controller after onboarding](onboard-enable-controller-after-onboarding.md) +- [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md) +- [Create folders to organize your authorization systems](how-to-create-folders.md) ++References: +- [Permissions Management Glossary](multi-cloud-glossary.md) +- [Permissions Management FAQs](faqs.md) |
active-directory | Product Roles Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-roles-permissions.md | + + Title: Microsoft Entra Permissions Management roles and permissions +description: Review roles and the level of permissions assigned in Microsoft Entra Permissions Management. +# customerintent: As a cloud administrator, I want to understand Permissions Management role assignments, so that I can effectively assign the correct permissions to users. +++++++ Last updated : 08/24/2023+++++# Microsoft Entra Permissions Management roles and permissions levels ++In Microsoft Azure and Microsoft Entra Permissions Management, role assignments grant users permissions to monitor and take action in multicloud environments. ++- **Global Administrator**: Manages all aspects of Microsoft Entra ID and Microsoft services that use Microsoft Entra identities. +- **Billing Administrator**: Performs common billing-related tasks like updating payment information. +- **Permissions Management Administrator**: Manages all aspects of Entra Permissions Management. ++See [Microsoft Entra ID built-in roles](product-privileged-role-insights.md) to learn more. ++## Enabling Permissions Management +- To activate a trial or purchase a license, you must have *Global Administrator* or *Billing Administrator* permissions. ++## Onboarding your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) environments ++- To configure data collection, you must have *Permissions Management Administrator* or *Global Administrator* permissions. +- A user with *Global Administrator* or *Permissions Management Administrator* role assignments is required for AWS and GCP onboarding. 
++## Notes on permissions and roles in Permissions Management ++- Users can have the following permissions: + - Admin for all authorization system types + - Admin for selected authorization system types + - Fine-grained permissions for all or selected authorization system types +- If a user isn't an admin, they're assigned Microsoft Entra ID security group-based, fine-grained permissions for all or selected authorization system types: + - Viewers: View the specified AWS accounts, Azure subscriptions, and GCP projects. + - Controllers: Modify Cloud Infrastructure Entitlement Management (CIEM) properties and use the Remediation dashboard. + - Approvers: Approve permission requests. + - Requestors: Request permissions in the specified AWS accounts, Azure subscriptions, and GCP projects. ++## Permissions Management actions and required roles ++Remediation +- To view the **Remediation** tab, you must have *Viewer*, *Controller*, or *Approver* permissions. +- To make changes in the **Remediation** tab, you must have *Controller* or *Approver* permissions. ++Autopilot +- To view and make changes in the **Autopilot** tab, you must be a *Permissions Management Administrator*. ++Alert +- Any user (admin, nonadmin) can create an alert. +- Only the user who creates the alert can edit, rename, deactivate, or delete the alert. ++Manage users or groups +- Only the owner of a group can add or remove a user from the group. +- Managing users and groups is only done in the Entra Admin Center. +++## Next steps ++For information about managing roles, policies, and permission requests in your organization, see [View roles/policies and requests for permission in the Remediation dashboard](ui-remediation.md). |
active-directory | Block Legacy Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/block-legacy-authentication.md | The following messaging protocols support legacy authentication: - Universal Outlook - Used by the Mail and Calendar app for Windows 10. - Other clients - Other protocols identified as utilizing legacy authentication. -For more information about these authentication protocols and services, see [Sign-in activity reports in the Azure portal](../reports-monitoring/concept-sign-ins.md#filter-sign-in-activities). +For more information about these authentication protocols and services, see [Sign-in log activity details](../reports-monitoring/concept-sign-in-log-activity-details.md). ### Identify legacy authentication use Before you can block legacy authentication in your directory, you need to first #### Sign-in log indicators -1. Navigate to the **Azure portal** > **Azure Active Directory** > **Sign-in logs**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Add the **Client App** column if it isn't shown by clicking on **Columns** > **Client App**. 1. Select **Add filters** > **Client App** > choose all of the legacy authentication protocols and select **Apply**. 1. If you've activated the [new sign-in activity reports preview](../reports-monitoring/concept-all-sign-ins.md), repeat the above steps also on the **User sign-ins (non-interactive)** tab. |
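The sign-in log steps above can also be approximated outside the portal by querying the sign-in logs through Microsoft Graph and filtering on `clientAppUsed`. This is a hedged sketch: the two protocol names in the filter are examples only, and the request is echoed for review rather than executed.

```shell
# Sketch: find legacy-authentication sign-ins via Microsoft Graph.
# The protocol names below are example legacy clients, not an exhaustive list.
FILTER="clientAppUsed eq 'IMAP4' or clientAppUsed eq 'Exchange ActiveSync'"
URL="https://graph.microsoft.com/v1.0/auditLogs/signIns?\$filter=$FILTER"

# Print the request for review; run it with `az rest` after `az login`
# (the caller needs AuditLog.Read.All).
echo az rest --method GET --url "$URL"
```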
active-directory | Concept Condition Filters For Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-condition-filters-for-devices.md | There are multiple scenarios that organizations can now enable using filter for ## Create a Conditional Access policy -Filter for devices is an option when creating a Conditional Access policy in the Azure portal or using the Microsoft Graph API. +Filter for devices is an optional control when creating a Conditional Access policy. The following steps will help create two Conditional Access policies to support the first scenario under [Common scenarios](#common-scenarios). Policy 1: All users with the directory role of Global Administrator, accessing the Microsoft Azure Management cloud app, and for Access controls, Grant access, but require multifactor authentication and require device to be marked as compliant. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Concept Conditional Access Cloud Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md | description: What are cloud apps, actions, and authentication context in an Azur + Previously updated : 06/27/2023 Last updated : 08/25/2023 -Target resources (formerly Cloud apps, actions, and authentication context) are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, actions, or authentication context. +Target resources (formerly Cloud apps, actions, and authentication context) are key signals in a Conditional Access policy. Conditional Access policies allow administrators to assign controls to specific applications, services, actions, or authentication context. -- Administrators can choose from the list of applications that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../app-proxy/what-is-application-proxy.md).+- Administrators can choose from the list of applications or services that include built-in Microsoft applications and any [Azure AD integrated applications](../manage-apps/what-is-application-management.md) including gallery, non-gallery, and applications published through [Application Proxy](../app-proxy/what-is-application-proxy.md). - Administrators may choose to define policy not based on a cloud application but on a [user action](#user-actions) like **Register security information** or **Register or join devices**, allowing Conditional Access to enforce controls around those actions. - Administrators can target [traffic forwarding profiles](#traffic-forwarding-profiles) from Global Secure Access for enhanced functionality. 
- Administrators can use [authentication context](#authentication-context) to provide an extra layer of security in applications. -![Define a Conditional Access policy and specify cloud apps](./media/concept-conditional-access-cloud-apps/conditional-access-cloud-apps-or-actions.png) ## Microsoft cloud applications Targeting this group of applications helps to avoid issues that may arise becaus Administrators can exclude the entire Office 365 suite or specific Office 365 cloud apps from the Conditional Access policy. -The following key applications are affected by the Office 365 cloud app: --- Exchange Online-- Microsoft 365 Search Service-- Microsoft Forms-- Microsoft Planner (ProjectWorkManagement)-- Microsoft Stream-- Microsoft Teams-- Microsoft To-Do-- Microsoft Flow-- Microsoft Office 365 Portal-- Microsoft Office client application-- Microsoft To-Do WebApp-- Microsoft Whiteboard Services-- Office Delve-- Office Online-- OneDrive-- Power Apps-- Power Automate-- Security & compliance portal-- SharePoint Online-- Skype for Business Online-- Skype and Teams Tenant Admin API-- Sway-- Yammer- A complete list of all services included can be found in the article [Apps included in Conditional Access Office 365 app suite](reference-office-365-application-contents.md). ### Microsoft Azure Management Because the policy is applied to the Azure management portal and API, services, - Azure Data Factory portal - Azure Event Hubs - Azure Service Bus -- [Azure SQL Database](/azure/azure-sql/database/conditional-access-configure)+- Azure SQL Database - SQL Managed Instance - Azure Synapse - Visual Studio subscriptions administrator portal -- [Microsoft IoT Central](https://apps.azureiotcentral.com/)+- Microsoft IoT Central > [!NOTE] > The Microsoft Azure Management application applies to [Azure PowerShell](/powershell/azure/what-is-azure-powershell), which calls the [Azure Resource Manager API](../../azure-resource-manager/management/overview.md). 
It does not apply to [Azure AD PowerShell](/powershell/azure/active-directory/overview), which calls the [Microsoft Graph API](/graph/overview). For more information on how to set up a sample policy for Microsoft Azure Manage When a Conditional Access policy targets the Microsoft Admin Portals cloud app, the policy is enforced for tokens issued to application IDs of the following Microsoft administrative portals: -- Microsoft 365 Admin Center-- Exchange admin center - Azure portal+- Exchange admin center +- Microsoft 365 admin center +- Microsoft 365 Defender portal - Microsoft Entra admin center-- Security and Microsoft Purview compliance portal+- Microsoft Intune admin center +- Microsoft Purview compliance portal -Other Microsoft admin portals will be added over time. +We're continually adding more administrative portals to the list. > [!IMPORTANT]-> Microsoft Admin Poratls (preview) is not currently supported in Government clouds. +> Microsoft Admin Portals (preview) is not currently supported in Government clouds. > [!NOTE] > The Microsoft Admin Portals app applies to interactive sign-ins to the listed admin portals only. Sign-ins to the underlying resources or services like Microsoft Graph or Azure Resource Manager APIs are not covered by this application. Those resources are protected by the [Microsoft Azure Management](#microsoft-azure-management) app. This enables customers to move along the MFA adoption journey for admins without impacting automation that relies on APIs and PowerShell. When you are ready, Microsoft recommends using a [policy requiring administrators perform MFA always](howto-conditional-access-policy-admin-mfa.md) for comprehensive protection. For example, an organization may keep files in SharePoint sites like the lunch m ### Configure authentication contexts -Authentication contexts are managed in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Authentication context**. 
+Authentication contexts are managed under **Azure Active Directory** > **Security** > **Conditional Access** > **Authentication context**. -![Manage authentication context in the Azure portal](./media/concept-conditional-access-cloud-apps/conditional-access-authentication-context-get-started.png) -Create new authentication context definitions by selecting **New authentication context** in the Azure portal. Organizations are limited to a total of 25 authentication context definitions. Configure the following attributes: +Create new authentication context definitions by selecting **New authentication context**. Organizations are limited to a total of 25 authentication context definitions. Configure the following attributes: - **Display name** is the name that is used to identify the authentication context in Azure AD and across applications that consume authentication contexts. We recommend names that can be used across resources, like "trusted devices", to reduce the number of authentication contexts needed. Having a reduced set limits the number of redirects and provides a better end-user experience. - **Description** provides more information about the authentication context. It's used by Azure AD administrators and those applying authentication contexts to resources. Create new authentication context definitions by selecting **New authentication Administrators can select published authentication contexts in their Conditional Access policies under **Assignments** > **Cloud apps or actions** and selecting **Authentication context** from the **Select what this policy applies to** menu. #### Delete an authentication context |
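The authentication context definitions described above can also be listed through Microsoft Graph. This is a sketch under the assumption that you query the `authenticationContextClassReferences` collection; the request is echoed for review rather than executed.

```shell
# Sketch: list the tenant's authentication context definitions
# (the article notes a limit of 25 definitions per tenant).
URL="https://graph.microsoft.com/v1.0/identity/conditionalAccess/authenticationContextClassReferences"

# Print the request for review; run it with `az rest` after `az login`
# (the caller needs Policy.Read.All).
echo az rest --method GET --url "$URL"
```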
active-directory | Concept Conditional Access Conditions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-conditions.md | This setting has an effect on access attempts made from the following mobile app | Outlook mobile app | Exchange Online | Android, iOS | | Power BI app | Power BI service | Windows 10, Windows 8.1, Windows 7, Android, and iOS | | Skype for Business | Exchange Online| Android, iOS |-| Visual Studio Team Services app | Visual Studio Team Services | Windows 10, Windows 8.1, Windows 7, iOS, and Android | +| Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) app | Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) | Windows 10, Windows 8.1, Windows 7, iOS, and Android | ### Exchange ActiveSync clients |
active-directory | Concept Conditional Access Policy Common | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-policy-common.md | Policies in this category provide new ways to protect against compromise. -Find these templates in the **[Microsoft Entra admin center](https://entra.microsoft.com)** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category. +Find these templates in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Conditional Access** > **Create new policy from templates**. Select **Show more** to see all policy templates in each category. :::image type="content" source="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png" alt-text="Screenshot that shows how to create a Conditional Access policy from a preconfigured template in the Microsoft Entra admin center." lightbox="media/concept-conditional-access-policy-common/create-policy-from-template-identity.png"::: > [!IMPORTANT]-> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policy once they are created. Simply navigate to **Microsoft Entra admin center** > **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Policies**, select the policy to open the editor and modify the excluded users and groups to select accounts you want to exclude. +> Conditional Access template policies will exclude only the user creating the policy from the template. If your organization needs to [exclude other accounts](../roles/security-emergency-access.md), you will be able to modify the policy once they are created. 
You can find these policies in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Conditional Access** > **Policies**. Select a policy to open the editor and modify the excluded users and groups to select accounts you want to exclude. By default, each policy is created in [report-only mode](concept-conditional-access-report-only.md), we recommended organizations test and monitor usage, to ensure intended result, before turning on each policy. |
active-directory | Concept Conditional Access Session | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-session.md | For more information, see the article [Configure authentication session manageme - **Disable** only works when **All cloud apps** are selected, no conditions are selected, and **Disable** is selected under **Session** > **Customize continuous access evaluation** in a Conditional Access policy. You can choose to disable all users or specific users and groups. ## Disable resilience defaults |
active-directory | Concept Conditional Access Users Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-conditional-access-users-groups.md | By default the policy provides an option to exclude the current user from the po ![Warning, don't lock yourself out!](./media/concept-conditional-access-users-groups/conditional-access-users-and-groups-lockout-warning.png) -If you do find yourself locked out, see [What to do if you're locked out of the Azure portal?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out-of-the-azure-portal) +If you do find yourself locked out, see [What to do if you're locked out?](troubleshoot-conditional-access.md#what-to-do-if-youre-locked-out) ### External partner access |
active-directory | Concept Continuous Access Evaluation Strict Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-strict-enforcement.md | Repeat steps 2 and 3 with expanding groups of users until Strictly Enforce Locat Administrators can investigate the Sign-in logs to find cases with **IP address (seen by resource)**. -1. Sign in to the **Azure portal** as at least a Global Reader. -1. Browse to **Azure Active Directory** > **Sign-ins**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Find events to review by adding filters and columns to filter out unnecessary information. 1. Add the **IP address (seen by resource)** column and filter out any blank items to narrow the scope. The **IP address (seen by resource)** is blank when that IP seen by Azure AD matches the IP address seen by the resource. |
active-directory | Concept Continuous Access Evaluation Workload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation-workload.md | Last updated 07/22/2022 -+ When a client's access to a resource is blocked due to CAE being triggered, th The following steps detail how an admin can verify sign-in activity in the sign-in logs: -1. Sign into the Azure portal as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs** > **Service Principal Sign-ins**. You can use filters to ease the debugging process. 1. Select an entry to see activity details. The **Continuous access evaluation** field indicates whether a CAE token was issued in a particular sign-in attempt. ## Next steps The following steps detail how an admin can verify sign-in activity in the sign- - [Register an application with Azure AD and create a service principal](../develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal) - [How to use Continuous Access Evaluation enabled APIs in your applications](../develop/app-resilience-continuous-access-evaluation.md) - [Sample application using continuous access evaluation](https://github.com/Azure-Samples/ms-identity-dotnetcore-daemon-graph-cae)+- [Securing workload identities with Azure AD Identity Protection](../identity-protection/concept-workload-identity-risk.md) - [What is continuous access evaluation?](../conditional-access/concept-continuous-access-evaluation.md) |
active-directory | Concept Continuous Access Evaluation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md | The CAE setting has been moved under the Conditional Access blade. New CAE cu #### Migration -Customers who have configured CAE settings under Security before have to migrate settings to a new Conditional Access policy. Use the steps that follow to migrate your CAE settings to a Conditional Access policy. ---1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Continuous access evaluation**. -1. You have the option to **Migrate** your policy. This action is the only one that you have access to at this point. -1. Browse to **Conditional Access** and you find a new policy named **Conditional Access policy created from CAE settings** with your settings configured. Administrators can choose to customize this policy or create their own to replace it. +Customers who have configured CAE settings under Security before have to migrate settings to a new Conditional Access policy. The following table describes the migration experience of each customer group based on previously configured CAE settings. Changes made to Conditional Access policies and group membership made by adminis When Conditional Access policy or group membership changes need to be applied to certain users immediately, you have two options. 
- Run the [Revoke-MgUserSignInSession PowerShell command](/powershell/module/microsoft.graph.users.actions/revoke-mgusersigninsession) to revoke all refresh tokens of a specified user.-- Select "Revoke Session" on the user profile page in the Azure portal to revoke the user's session to ensure that the updated policies are applied immediately.+- Select "Revoke Session" on the user profile page to revoke the user's session to ensure that the updated policies are applied immediately. ### IP address variation and networks with IP address shared or unknown egress IPs |
active-directory | Concept Filter For Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md | Application filters are a new feature for Conditional Access that allows organiz In this document, you create a custom attribute set, assign a custom security attribute to your application, and create a Conditional Access policy to secure the application. > [!IMPORTANT]-> Filter for applications is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> Filter for applications is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Assign roles Custom security attributes are security sensitive and can only be managed by del 1. Assign the appropriate role to the users who will manage or report on these attributes at the directory scope. - For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). + For detailed steps, see [Assign Azure roles](../../role-based-access-control/role-assignments-portal.md). ## Create custom security attributes Follow the instructions in the article, [Add or deactivate custom security attri :::image type="content" source="media/concept-filter-for-applications/edit-filter-for-applications.png" alt-text="A screenshot showing a Conditional Access policy with the edit filter window showing an attribute of require MFA." lightbox="media/concept-filter-for-applications/edit-filter-for-applications.png"::: -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. 
+1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. Set up a sample application that demonstrates how a job or a Windows service ca When you don't have a service principal listed in your tenant, it can't be targeted. The Office 365 suite is an example of one such service principal. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Enterprise applications**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications**. 1. Select the service principal you want to apply a custom security attribute to. 1. Under **Manage** > **Custom security attributes (preview)**, select **Add assignment**. 1. Under **Attribute set**, select **ConditionalAccessTest**. |
active-directory | Concept Token Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-token-protection.md | Token protection (sometimes referred to as token binding in the industry) attemp Token protection creates a cryptographically secure tie between the token and the device (client secret) it's issued to. Without the client secret, the bound token is useless. When a user registers a Windows 10 or newer device in Azure AD, their primary identity is [bound to the device](../devices/concept-primary-refresh-token.md#how-is-the-prt-protected). What this means: A policy can ensure that only bound sign-in session (or refresh) tokens, otherwise known as Primary Refresh Tokens (PRTs), are used by applications when requesting access to a resource. > [!IMPORTANT]-> Token protection is currently in public preview. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). -+> Token protection is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). With this preview, we're giving you the ability to create a Conditional Access policy to require token protection for sign-in tokens (refresh tokens) for specific services. We support token protection for sign-in tokens in Conditional Access for desktop applications accessing Exchange Online and SharePoint Online on Windows devices. > [!IMPORTANT] Users who perform specialized roles like those described in [Privileged access s The steps that follow help create a Conditional Access policy to require token protection for Exchange Online and SharePoint Online on Windows devices. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. 
Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. Monitoring Conditional Access enforcement of token protection before and after e Use Azure AD sign-in log to verify the outcome of a token protection enforcement policy in report only mode or in enabled mode. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Sign-in logs**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Select a specific request to determine if the policy is applied or not. 1. Go to the **Conditional Access** or **Report-Only** pane depending on its state and select the name of your policy requiring token protection. 1. Under **Session Controls** check to see if the policy requirements were satisfied or not. |
active-directory | How To App Protection Policy Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-app-protection-policy-windows.md | The following policy is put in to [Report-only mode](howto-conditional-access-in The following steps help create a Conditional Access policy requiring an app protection policy when using a Windows device. The app protection policy must also be configured and assigned to your users in Microsoft Intune. For more information about how to create the app protection policy, see the article [Preview: App protection policy settings for Windows](/mem/intune/apps/app-protection-policy-settings-windows). -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | How To Policy Mfa Admin Portals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-mfa-admin-portals.md | Microsoft recommends securing access to any Microsoft admin portals like Microso ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | How To Policy Phish Resistant Admin Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/how-to-policy-phish-resistant-admin-mfa.md | Organizations can choose to include or exclude roles as they see fit. ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-apis.md | description: Using the Azure AD Conditional Access APIs and PowerShell to manage + Last updated 09/10/2020 |
active-directory | Howto Conditional Access Insights Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-insights-reporting.md | If you haven't integrated Azure AD logs with Azure Monitor logs, you need to tak To access the insights and reporting workbook: -1. Sign in to the **Azure portal**. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Insights and reporting**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Insights and reporting**. ### Get started: Select parameters You can also investigate the sign-ins of a specific user by searching for sign-i To configure a Conditional Access policy in report-only mode: -1. Sign into the **Azure portal** as a Conditional Access Administrator, security administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select an existing policy or create a new policy. 1. Under **Enable policy** set the toggle to **Report-only** mode. 1. Select **Save** To configure a Conditional Access policy in report-only mode: ### Why are queries failing due to a permissions error? -In order to access the workbook, you need the proper Azure AD permissions and Log Analytics workspace permissions. To test whether you have the proper workspace permissions by running a sample log analytics query: +In order to access the workbook, you need the proper permissions in Azure AD and Log Analytics. 
To test whether you have the proper workspace permissions by running a sample log analytics query: -1. Sign in to the **Azure portal**. -1. Browse to **Azure Active Directory** > **Log Analytics**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Log Analytics**. 1. Type `SigninLogs` into the query box and select **Run**. 1. If the query doesn't return any results, your workspace may not have been configured correctly. |
active-directory | Howto Conditional Access Policy Admin Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-admin-mfa.md | Organizations can choose to include or exclude roles as they see fit. The following steps will help create a Conditional Access policy to require those assigned administrative roles to perform multifactor authentication. Some organizations may be ready to move to stronger authentication methods for their administrators. These organizations may choose to implement a policy like the one described in the article [Require phishing-resistant multifactor authentication for administrators](how-to-policy-phish-resistant-admin-mfa.md). -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy All Users Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-all-users-mfa.md | Organizations that use [Subscription Activation](/windows/deployment/windows-10- The following steps help create a Conditional Access policy to require all users do multifactor authentication. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Authentication Strength External | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-authentication-strength-external.md | The authentication methods that external users can use to satisfy MFA requiremen Determine if one of the built-in authentication strengths will work for your scenario or if you'll need to create a custom authentication strength. -1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator. -1. Browse to **Azure Active Directory** > **Security** > **Authentication methods** > **Authentication strengths**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Authentication methods** > **Authentication strengths**. 1. Review the built-in authentication strengths to see if one of them meets your requirements. 1. If you want to enforce a different set of authentication methods, [create a custom authentication strength](https://aka.ms/b2b-auth-strengths). Determine if one of the built-in authentication strengths will work for your sce Use the following steps to create a Conditional Access policy that applies an authentication strength to external users. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. 
Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Azure Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md | The following steps will help create a Conditional Access policy to require user > [!CAUTION] > Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Block Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-access.md | -> Misconfiguration of a block policy can lead to organizations being locked out of the Azure portal. +> Misconfiguration of a block policy can lead to organizations being locked out. Policies like these can have unintended side effects. Proper testing and validation are vital before enabling. Administrators should utilize tools such as [Conditional Access report-only mode](concept-conditional-access-report-only.md) and [the What If tool in Conditional Access](what-if-tool.md) when making changes. The following steps will help create Conditional Access policies to block access The first policy blocks access to all apps except for Microsoft 365 applications if not on a trusted location. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Block Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md | Organizations can choose to deploy this policy using the steps outlined below or The following steps will help create a Conditional Access policy to block legacy authentication requests. This policy is put in to [Report-only mode](howto-conditional-access-insights-reporting.md) to start so administrators can determine the impact they'll have on existing users. When administrators are comfortable that the policy applies as they intend, they can switch to **On** or stage the deployment by adding specific groups and excluding others. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Compliant Device Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device-admin.md | Organizations can choose to include or exclude roles as they see fit. The following steps will help create a Conditional Access policy to require multifactor authentication, devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Compliant Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-compliant-device.md | Requiring a hybrid Azure AD joined device is dependent on your devices already b The following steps will help create a Conditional Access policy to require multifactor authentication, devices accessing resources be marked as compliant with your organization's Intune compliance policies, or be hybrid Azure AD joined. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-location.md | With the location condition in Conditional Access, you can control access to you ## Define locations -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access** > **Named locations**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Named locations**. 1. Choose the type of location to create. 1. **Countries location** or **IP ranges location**. 1. Give your location a name. More information about the location condition in Conditional Access can be found ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Registration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-registration.md | Organizations can choose to deploy this policy using the steps outlined below or The following policy applies to the selected users, who attempt to register using the combined registration experience. The policy requires users to be in a trusted network location, do multifactor authentication or use Temporary Access Pass credentials. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. In Name, Enter a Name for this policy. For example, **Combined Security Info Registration with TAP**. 1. Under **Assignments**, select **Users or workload identities**. Organizations may choose to require other grant controls with or in place of **R For [guest users](../external-identities/what-is-b2b.md) who need to register for multifactor authentication in your directory you may choose to block registration from outside of [trusted network locations](concept-conditional-access-conditions.md#locations) using the following guide. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. In Name, Enter a Name for this policy. For example, **Combined Security Info Registration on Trusted Networks**. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Risk User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk-user.md | Organizations can choose to deploy this policy using the steps outlined below or ## Enable with Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Policy Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-policy-risk.md | Organizations can choose to deploy this policy using the steps outlined below or ## Enable with Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Conditional Access Session Lifetime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-conditional-access-session-lifetime.md | description: Customize Azure AD authentication session configuration including u + Last updated 07/18/2023 To make sure that your policy works as expected, the recommended best practice i ### Policy 1: Sign-in frequency control -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Choose all required conditions for customer's environment, including the target cloud apps. To make sure that your policy works as expected, the recommended best practice i ### Policy 2: Persistent browser session -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. 
We recommend that organizations create a meaningful standard for the names of their policies. 1. Choose all required conditions. To make sure that your policy works as expected, the recommended best practice i 1. Select **Persistent browser session**. > [!NOTE]- > Persistent Browser Session configuration in Azure AD Conditional Access overrides the "Stay signed in?" setting in the company branding pane in the Azure portal for the same user if you have configured both policies. + > Persistent Browser Session configuration in Azure AD Conditional Access overrides the "Stay signed in?" setting in the company branding pane for the same user if you have configured both policies. 1. Select a value from dropdown. 1. Save your policy. ### Policy 3: Sign-in frequency control every time risky user -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Continuous Access Evaluation Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-continuous-access-evaluation-troubleshoot.md | Administrators can monitor and troubleshoot sign in events where [continuous acc Administrators can monitor user sign-ins where continuous access evaluation (CAE) is applied. This information is found in the Azure AD sign-in logs: -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Sign-in logs**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Apply the **Is CAE Token** filter. [ ![Screenshot showing how to add a filter to the Sign-ins log to see where CAE is being applied or not.](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png) ](./media/howto-continuous-access-evaluation-troubleshoot/sign-ins-log-apply-filter.png#lightbox) The continuous access evaluation insights workbook allows administrators to view Log Analytics integration must be completed before workbooks are displayed. For more information about how to stream Azure AD sign-in logs to a Log Analytics workspace, see the article [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md). -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Workbooks**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Workbooks**. 1. Under **Public Templates**, search for **Continuous access evaluation insights**. The **Continuous access evaluation insights** workbook contains the following table: Admins can view records filtered by time range and application. Admins can compa To unblock users, administrators can add specific IP addresses to a trusted named location. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Named locations**. Here you can create or update trusted IP locations. > [!NOTE] > Before adding an IP address as a trusted named location, confirm that the IP address does in fact belong to the intended organization. |
active-directory | Howto Policy App Enforced Restriction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-app-enforced-restriction.md | Block or limit access to SharePoint, OneDrive, and Exchange content from unmanag ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Policy Approved App Or App Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-approved-app-or-app-protection.md | The following steps will help create a Conditional Access policy requiring an ap Organizations can choose to deploy this policy using the steps outlined below or using the [Conditional Access templates](concept-conditional-access-policy-common.md#conditional-access-templates). -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. After administrators confirm the settings using [report-only mode](howto-conditi This policy will block all Exchange ActiveSync clients using basic authentication from connecting to Exchange Online. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. 
Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Policy Guest Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-guest-mfa.md | Require guest users perform multifactor authentication when accessing your organ ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Policy Persistent Browser Session | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-persistent-browser-session.md | Protect user access on unmanaged devices by preventing browser sessions from rem ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Howto Policy Unknown Unsupported Device | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/howto-policy-unknown-unsupported-device.md | Users will be blocked from accessing company resources when the device type is u ## Create a Conditional Access policy -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Location Condition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/location-condition.md | The location found using the public IP address a client provides to Azure Active ## Named locations -Locations exist in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions. +Locations exist under **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. These named network locations may include locations like an organization's headquarters network ranges, VPN network ranges, or ranges that you wish to block. Named locations are defined by IPv4 and IPv6 address ranges or by countries/regions. > [!VIDEO https://www.youtube.com/embed/P80SffTIThY] To define a named location by IPv4/IPv6 address ranges, you need to provide: - One or more IP ranges. - Optionally **Mark as trusted location**. -![New IP locations in the Azure portal](./media/location-condition/new-trusted-location.png) +![New IP locations](./media/location-condition/new-trusted-location.png) Named locations defined by IPv4/IPv6 address ranges are subject to the following limitations: To define a named location by country/region, you need to provide: - Add one or more countries/regions. - Optionally choose to **Include unknown countries/regions**. -![Country as a location in the Azure portal](./media/location-condition/new-named-location-country-region.png) +![Country as a location](./media/location-condition/new-named-location-country-region.png) If you select **Determine location by IP address**, the system collects the IP address of the device the user is signing into. 
When a user signs in, Azure AD resolves the user's IPv4 or [IPv6](/troubleshoot/azure/active-directory/azure-ad-ipv6-support) address (starting April 3, 2023) to a country or region, and the mapping updates periodically. Organizations can use named locations defined by countries/regions to block traffic from countries/regions where they don't do business. Some IP addresses don't map to a specific country or region. To capture these IP ## Define locations 1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator.-1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Named locations**. +1. Browse to **Protection** > **Conditional Access** > **Named locations**. 1. Choose **New location**. 1. Give your location a name. 1. Choose **IP ranges** if you know the specific externally accessible IPv4 address ranges that make up that location or **Countries/Regions**. |
active-directory | Migrate Approved Client App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/migrate-approved-client-app.md | The following steps make an existing Conditional Access policy require an approv Organizations can choose to update their policies using the following steps. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select a policy that uses the approved client app grant. 1. Under **Access controls** > **Grant**, select **Grant access**. 1. Select **Require approved client app** and **Require app protection policy** The following steps help create a Conditional Access policy requiring an approve Organizations can choose to deploy this policy using the following steps. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/overview.md | Title: What is Conditional Access in Azure Active Directory? -description: Learn how Conditional Access is at the heart of the new identity-driven control plane. +description: Conditional Access is the Zero Trust policy engine at the heart of the new identity-driven control plane. Previously updated : 06/20/2023 Last updated : 08/24/2023 -Microsoft is providing Conditional Access templates to organizations in report-only mode starting in January of 2023. We may add more policies as new threats emerge. - The modern security perimeter extends beyond an organization's network perimeter to include user and device identity. Organizations now use identity-driven signals as part of their access control decisions. -> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4MwZs] +> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4MwZs] Azure AD Conditional Access brings signals together, to make decisions, and enforce organizational policies. Conditional Access is Microsoft's [Zero Trust policy engine](/security/zero-trust/deploy/identity) taking signals from various sources into account when enforcing policy decisions. :::image type="content" source="media/overview/conditional-access-signal-decision-enforcement.png" alt-text="Diagram showing concept of Conditional Access signals plus decision to enforce organizational policy."::: -Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to do multifactor authentication to access it. +Conditional Access policies at their simplest are if-then statements; **if** a user wants to access a resource, **then** they must complete an action. 
For example: If a user wants to access an application or service like Microsoft 365, then they must perform multifactor authentication to gain access. Administrators are faced with two primary goals: These signals include: - Users with devices of specific platforms or marked with a specific state can be used when enforcing Conditional Access policies. - Use filters for devices to target policies to specific devices like privileged access workstations. - Application- - Users attempting to access specific applications can trigger different Conditional Access policies. + - Users attempting to access specific applications can trigger different Conditional Access policies. - Real-time and calculated risk detection- - Signals integration with [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify and remediate risky users and sign-in behavior. + - Signals integration with [Microsoft Entra ID Protection](../identity-protection/overview-identity-protection.md) allows Conditional Access policies to identify and remediate risky users and sign-in behavior. - [Microsoft Defender for Cloud Apps](/defender-cloud-apps/what-is-defender-for-cloud-apps) - Enables user application access and sessions to be monitored and controlled in real time. This integration increases visibility and control over access to and activities done within your cloud environment. 
Many organizations have [common access concerns that Conditional Access policies - Requiring multifactor authentication for users with administrative roles - Requiring multifactor authentication for Azure management tasks - Blocking sign-ins for users attempting to use legacy authentication protocols-- Requiring trusted locations for Azure AD Multifactor Authentication registration+- Requiring trusted locations for security information registration - Blocking or granting access from specific locations - Blocking risky sign-in behaviors - Requiring organization-managed devices for specific applications Administrators can create policies from scratch or start from a template policy ## Administrator experience -Administrators with the [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) role can manage policies in Azure AD. +Administrators with the [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator) role can manage policies. -Conditional Access is found in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access**. +Conditional Access is found in the [Microsoft Entra admin center](https://entra.microsoft.com) under **Protection** > **Conditional Access**. - The **Overview** page provides a summary of policy state, users, devices, and applications as well as general and security alerts with suggestions. - The **Coverage** page provides a synopsis of applications with and without Conditional Access policy coverage over the last seven days. Conditional Access is found in the Azure portal under **Azure Active Directory** Customers with [Microsoft 365 Business Premium licenses](/office365/servicedescriptions/office-365-service-descriptions-technet-library) also have access to Conditional Access features. 
-Risk-based policies require access to [Identity Protection](../identity-protection/overview-identity-protection.md), which is an Azure AD P2 feature. +Risk-based policies require access to [Identity Protection](../identity-protection/overview-identity-protection.md), which requires P2 licenses. Other products and features that may interact with Conditional Access policies require appropriate licensing for those products and features. |
active-directory | Plan Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/plan-conditional-access.md | Taking into account our learnings in the use of Conditional Access and supportin **Ensure that every app has at least one Conditional Access policy applied**. From a security perspective it's better to create a policy that encompasses **All cloud apps**, and then exclude applications that you don't want the policy to apply to. This practice ensures you don't need to update Conditional Access policies every time you onboard a new application. > [!TIP]-> Be very careful in using block and all apps in a single policy. This could lock admins out of the Azure portal, and exclusions cannot be configured for important endpoints such as Microsoft Graph. +> Be very careful when using block and all apps in a single policy. This could lock admins out, and exclusions cannot be configured for important endpoints such as Microsoft Graph. ### Minimize the number of Conditional Access policies |
active-directory | Policy Migration Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/policy-migration-mfa.md | -This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies in the Azure portal](policy-migration.md) before you start migrating your classic policies. +This article shows how to migrate a classic policy that requires **multifactor authentication** for a cloud app. Although it isn't a prerequisite, we recommend that you read [Migrate classic policies](policy-migration.md) before you start migrating your classic policies. ![Classic policy details requiring MFA for Salesforce app](./media/policy-migration/33.png) The migration process consists of the following steps: ## Open a classic policy -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Navigate to **Azure Active Directory** > **Security** > **Conditional Access**. +1. Browse to **Protection** > **Conditional Access**. 1. Select **Classic policies**. The migration process consists of the following steps: 1. In the list of classic policies, select the policy you wish to migrate. Document the configuration settings so that you can re-create it with a new Conditional Access policy. -For examples of common policies and their configuration in the Azure portal, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md). +For examples of common policies and their configuration, see the article [Common Conditional Access policies](concept-conditional-access-policy-common.md). ## Disable the classic policy |
active-directory | Require Tou | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/require-tou.md | In this quickstart, you'll configure a Conditional Access policy in Azure Active To complete the scenario in this quickstart, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability. You can sign up for a trial in the Azure portal.+- Azure AD Premium P1 or P2 - Azure AD Conditional Access is an Azure AD Premium capability. - A test account to sign in with - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user). ## Sign-in without terms of use - The goal of this step is to get an impression of the sign-in experience without a Conditional Access policy. -1. Sign in to the [Azure portal](https://portal.azure.com) as your test user. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as your test user. 1. Sign out. ## Create your terms of use This section provides you with the steps to create a sample ToU. When you create 1. In Microsoft Word, create a new document. 1. Type **My terms of use**, and then save the document on your computer as **mytou.pdf**.-1. Sign in to the [Azure portal](https://portal.azure.com) as a Conditional Access Administrator, Security Administrator, or a Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 
- :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use shown in the Azure portal highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png"::: + :::image type="content" source="media/require-tou/terms-of-use-azure-ad-conditional-access.png" alt-text="Screenshot of terms of use highlighting the new terms button." lightbox="media/require-tou/terms-of-use-azure-ad-conditional-access.png"::: 1. In the menu on the top, select **New terms**. - :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy in the Azure portal." lightbox="media/require-tou/new-terms-of-use-creation.png"::: + :::image type="content" source="media/require-tou/new-terms-of-use-creation.png" alt-text="Screenshot that shows creating a new terms of use policy." lightbox="media/require-tou/new-terms-of-use-creation.png"::: 1. In the **Name** textbox, type **My TOU**. 1. Upload your terms of use PDF file. |
active-directory | Resilience Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/resilience-defaults.md | You can configure Conditional Access resilience defaults from the Azure portal, ### Azure portal -1. Navigate to the **Azure portal** > **Security** > **Conditional Access** +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Create a new policy or select an existing policy 1. Open the Session control settings 1. Select Disable resilience defaults to disable the setting for this policy. Sign-ins in scope of the policy will be blocked during an Azure AD outage |
active-directory | Terms Of Use | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/terms-of-use.md | Azure AD terms of use policies use the PDF format to present content. The PDF fi Once you've completed your terms of use policy document, use the following procedure to add it. -1. Sign in to the **Azure portal** as a Conditional Access Administrator or Security Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. Select **New terms**. ![New term of use pane to specify your terms of use settings](./media/terms-of-use/new-tou.png) -1. In the **Name** box, enter a name for the terms of use policy used in the Azure portal. +1. In the **Name** box, enter a name for the terms of use policy. 1. For **Terms of use document**, browse to your finalized terms of use policy PDF and select it. 1. Select the language for your terms of use policy document. The language option allows you to upload multiple terms of use policies, each with a different language. The version of the terms of use policy that an end user sees is based on their browser preferences. 1. In the **Display name** box, enter a title that users see when they sign in. Once you've completed your terms of use policy document, use the following proce The Terms of use blade shows a count of the users who have accepted and declined. These counts and who accepted/declined are stored for the life of the terms of use policy. -1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. ![Terms of use blade listing the number of users who have accepted and declined](./media/terms-of-use/view-tou.png) If you want to view more activity, Azure AD terms of use policies include audit To get started with Azure AD audit logs, use the following procedure: -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. Select a terms of use policy. 1. Select **View audit logs**. 1. On the Azure AD audit logs screen, you can filter the information using the provided lists to target specific audit log information. Users can review and see the terms of use policies that they've accepted by usin You can edit some details of terms of use policies, but you can't modify an existing document. The following procedure describes how to edit the details. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. 
Select the terms of use policy you want to edit. 1. Select **Edit terms**. 1. In the Edit terms of use pane, you can change the following options: You can edit some details of terms of use policies, but you can't modify an exis ## Update the version or pdf of an existing terms of use -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit terms**. 1. For the language that you would like to update to a new version, select **Update** under the action column You can edit some details of terms of use policies, but you can't modify an exis ## View previous versions of a ToU -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy for which you want to view a version history. 1. Select **Languages and version history** 1. Select **See previous versions.** You can edit some details of terms of use policies, but you can't modify an exis ## See who has accepted each version -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. 
Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. To see who has currently accepted the ToU, select the number under the **Accepted** column for the ToU you want. 1. By default, the next page will show you the current state of each user's acceptance of the ToU 1. If you would like to see the previous consent events, you can select **All** from the **Current State** drop-down. Now you can see each user's events in detail for each version and what happened. You can edit some details of terms of use policies, but you can't modify an exis The following procedure describes how to add a ToU language. -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access** > **Terms of use**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to edit. 1. Select **Edit Terms** 1. Select **Add language** at the bottom of the page. If a user is using a browser that isn't supported, they're asked to use a differen You can delete old terms of use policies using the following procedure. -1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access** > **Terms of use**. 1. Select the terms of use policy you want to remove. 1. Select **Delete terms**. 1. In the message that appears asking if you want to continue, select **Yes**. |
active-directory | Troubleshoot Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-conditional-access.md | Organizations should avoid the following configurations: **For all users, all cloud apps:** - **Block access** - This configuration blocks your entire organization.-- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back into the Azure portal to change the policy.+- **Require device to be marked as compliant** - For users that haven't enrolled their devices yet, this policy blocks all access including access to the Intune portal. If you're an administrator without an enrolled device, this policy blocks you from getting back in to change the policy. - **Require Hybrid Azure AD domain joined device** - This policy also has the potential to block access for all users in your organization if they don't have a hybrid Azure AD joined device. - **Require app protection policy** - This policy also has the potential to block access for all users in your organization if you don't have an Intune policy. If you're an administrator without a client application that has an Intune app protection policy, this policy blocks you from getting back into portals such as Intune and Azure. More information about the problem can be found by clicking **More Details** in To find out which Conditional Access policy or policies applied and why, do the following. -1. Sign in to the **Azure portal** as a Global Administrator, Security Administrator, or Global Reader. -1. Browse to **Azure Active Directory** > **Sign-ins**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Find the event for the sign-in to review. Add or remove filters and columns to filter out unnecessary information. 1. Add filters to narrow the scope: 1. **Correlation ID** when you have a specific event to investigate. To determine the service dependency, check the sign-ins log for the application :::image type="content" source="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png" alt-text="Screenshot that shows an example sign-in log showing an Application calling a Resource. This scenario is also known as a service dependency." lightbox="media/troubleshoot-conditional-access/service-dependency-example-sign-in.png"::: -## What to do if you're locked out of the Azure portal? +## What to do if you're locked out? -If you're locked out of the Azure portal due to an incorrect setting in a Conditional Access policy: +If you're locked out due to an incorrect setting in a Conditional Access policy: -- Check if there are other administrators in your organization that aren't blocked yet. An administrator with access to the Azure portal can disable the policy that is impacting your sign-in. +- Check if there are other administrators in your organization who aren't blocked yet. An administrator with access can disable the policy that is impacting your sign-in. - If none of the administrators in your organization can update the policy, submit a support request. Microsoft support can review and, upon confirmation, update the Conditional Access policies that are preventing access. 
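As a hedged sketch (not part of the original article), the sign-in events filtered above can also be retrieved through the Microsoft Graph sign-in logs API; the correlation ID below is a placeholder:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter=correlationId eq '00000000-0000-0000-0000-000000000000'
```

Each returned sign-in event carries an `appliedConditionalAccessPolicies` collection, which shows the Conditional Access policies that were evaluated and their results.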
## Next steps - [Use the What If tool to troubleshoot Conditional Access policies](what-if-tool.md)-- [Sign-in activity reports in the Azure portal](../reports-monitoring/concept-sign-ins.md)+- [Sign-in activity reports](../reports-monitoring/concept-sign-ins.md) - [Troubleshooting Conditional Access using the What If tool](troubleshoot-conditional-access-what-if.md) |
active-directory | Troubleshoot Policy Changes Audit Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/troubleshoot-policy-changes-audit-log.md | Find these options in the **Azure portal** > **Azure Active Directory**, **Diagn ## Use the audit log -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Audit logs**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Identity** > **Monitoring & health** > **Audit logs**. 1. Select the **Date** range you want to query. 1. From the **Service** filter, select **Conditional Access** and select the **Apply** button. |
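The portal filtering above has a scripted equivalent. As a sketch (not from the original article; assuming Graph access with audit log read permissions), the same Conditional Access audit events can be queried through Microsoft Graph, where filtering on `loggedByService` mirrors the **Service** filter in the portal:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$filter=loggedByService eq 'Conditional Access'
```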
active-directory | What If Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/what-if-tool.md | When the evaluation has finished, the tool generates a report of the affected po ## Running the tool -You can find the **What If** tool in the Azure portal under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**. +You can find the **What If** tool under **Azure Active Directory** > **Security** > **Conditional Access** > **What If**. Before you can run the What If tool, you must provide the conditions you want to evaluate. Before you can run the What If tool, you must provide the conditions you want to The only required condition is selecting a user or workload identity. All other conditions are optional. For a definition of these conditions, see the article [Building a Conditional Access policy](concept-conditional-access-policies.md). ## Evaluation |
active-directory | Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/workload-identity.md | -Conditional Access policies have historically applied only to users when they access apps and services like SharePoint online or the Azure portal. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities. +Conditional Access policies have historically applied only to users when they access apps and services like SharePoint Online. We're now extending support for Conditional Access policies to be applied to service principals owned by the organization. We call this capability Conditional Access for workload identities. A [workload identity](../workload-identities/workload-identities-overview.md) is an identity that allows an application or service principal access to resources, sometimes in the context of a user. These workload identities differ from traditional user accounts as they: Conditional Access for workload identities enables blocking service principals f Create a location based Conditional Access policy that applies to service principals. -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. 
Under **Assignments**, select **Users or workload identities**. Create a risk-based Conditional Access policy that applies to service principals :::image type="content" source="media/workload-identity/conditional-access-workload-identity-risk-policy.png" alt-text="Creating a Conditional Access policy with a workload identity and risk as a condition." lightbox="media/workload-identity/conditional-access-workload-identity-risk-policy.png"::: -1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). -1. Browse to **Microsoft Entra ID (Azure AD)** > **Protection** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **Create new policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. If you wish to roll back this feature, you can delete or disable any created pol The sign-in logs are used to review how policy is enforced for service principals or the expected effects of policy when using report-only mode. -1. Browse to **Azure Active Directory** > **Sign-in logs** > **Service principal sign-ins**. +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs** > **Service principal sign-ins**. 1. Select a log entry and choose the **Conditional Access** tab to view evaluation information. 
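The portal steps above can also be scripted. A minimal sketch (not from the original article; all IDs are placeholders) that creates a location-based policy targeting a service principal through the Microsoft Graph Conditional Access API, starting in report-only mode:

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-type: application/json

{
    "displayName": "Block service principal from untrusted locations",
    "state": "enabledForReportingButNotEnforced",
    "conditions": {
        "applications": {
            "includeApplications": ["All"]
        },
        "clientApplications": {
            "includeServicePrincipals": ["{servicePrincipal_ObjectId}"]
        },
        "locations": {
            "includeLocations": ["All"],
            "excludeLocations": ["{trustedNamedLocation_Id}"]
        }
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["block"]
    }
}
```

Switching `state` to `enabled` enforces the block once report-only review looks correct.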
Failure reason when Service Principal is blocked by Conditional Access: "Access has been blocked due to Conditional Access policies." To view results of a risk-based policy, refer to the **Report-only** tab of even You can get the objectID of the service principal from Azure AD Enterprise Applications. The Object ID in Azure AD App registrations can't be used. This identifier is the Object ID of the app registration, not of the service principal. -1. Browse to the **Azure portal** > **Azure Active Directory** > **Enterprise Applications**, find the application you registered. +1. Browse to **Identity** > **Applications** > **Enterprise Applications**, find the application you registered. 1. From the **Overview** tab, copy the **Object ID** of the application. This identifier is unique to the service principal, used by Conditional Access policy to find the calling app. ### Microsoft Graph |
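Complementing the portal steps, the service principal's object ID can also be looked up with Microsoft Graph (a sketch; the `{appId}` value is a placeholder for your application's client ID). The `id` property in the response is the service principal object ID that Conditional Access uses:

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals?$filter=appId eq '{appId}'
```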
active-directory | Api Find An Api How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/api-find-an-api-how-to.md | - Title: Find an API for a custom-developed app -description: How to configure the permissions you need to access a particular API in your custom-developed Azure AD application -------- Previously updated : 09/27/2021-----# How to find a specific API needed for a custom-developed application --Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration. --## Configuring a resource application to expose web APIs --When you expose your web API, the API will be displayed in the **Select an API** list when adding permissions to an app registration. To add access scopes, follow the steps outlined in [Configure an application to expose web APIs](quickstart-configure-app-expose-web-apis.md). --## Configuring a client application to access web APIs --When you add permissions to your app registration, you can **add API access** to exposed web APIs. To access web APIs, follow the steps outlined in [Configure a client application to access web APIs](quickstart-configure-app-access-web-apis.md). --## Next steps --- [Understanding the Azure Active Directory application manifest](./reference-app-manifest.md) |
active-directory | App Objects And Service Principals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md | |
active-directory | Authentication Flows App Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md | Title: Microsoft identity platform authentication flows & app scenarios + Title: Microsoft identity platform app types and authentication flows description: Learn about application scenarios for the Microsoft identity platform, including authenticating identities, acquiring tokens, and calling protected APIs. Previously updated : 05/05/2022 Last updated : 08/11/2023 -#Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform. +# Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform. -# Authentication flows and application scenarios +# Microsoft identity platform app types and authentication flows The Microsoft identity platform supports authentication for different kinds of modern application architectures. All of the architectures are based on the industry-standard protocols [OAuth 2.0 and OpenID Connect](./v2-protocols.md). By using the [authentication libraries for the Microsoft identity platform](reference-v2-libraries.md), applications authenticate identities and acquire tokens to access protected APIs. This article describes authentication flows and the application scenarios that t ## Application categories -Tokens can be acquired from several types of applications, including: +[Security tokens](./security-tokens.md) can be acquired from several types of applications, including: - Web apps - Mobile apps The following sections describe the categories of applications. 
Authentication scenarios involve two activities: -- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](reference-v2-libraries.md), developed and supported by Microsoft.+- **Acquiring security tokens for a protected web API**: We recommend that you use the [Microsoft Authentication Library (MSAL)](msal-overview.md), developed and supported by Microsoft. - **Protecting a web API or a web app**: One challenge of protecting these resources is validating the security token. On some platforms, Microsoft offers [middleware libraries](reference-v2-libraries.md). ### With users or without users The available authentication flows differ depending on the sign-in audience. Som For more information, see [Supported account types](v2-supported-account-types.md#account-type-support-in-authentication-flows). -## Application scenarios +## Application types The Microsoft identity platform supports authentication for these app architectures: For a desktop app to call a web API that signs in users, use the interactive tok There's another possibility for Windows-hosted applications on computers joined either to a Windows domain or to Azure Active Directory (Azure AD). These applications can silently acquire a token by using [integrated Windows authentication](https://aka.ms/msal-net-iwa). -Applications running on a device without a browser can still call an API on behalf of a user. To authenticate, the user must sign in on another device that has a web browser. This scenario requires that you use the [device code flow](https://aka.ms/msal-net-device-code-flow). +Applications running on a device without a browser can still call an API on behalf of a user. To authenticate, the user must sign in on another device that has a web browser. This scenario requires that you use the [device code flow](v2-oauth2-device-code.md). 
![Device code flow](media/scenarios/device-code-flow-app.svg) Similar to a desktop app, a mobile app calls the interactive token-acquisition m MSAL iOS and MSAL Android use the system web browser by default. However, you can direct them to use the embedded web view instead. There are specificities that depend on the mobile platform: Universal Windows Platform (UWP), iOS, or Android. -Some scenarios, like those that involve Conditional Access related to a device ID or a device enrollment, require a broker to be installed on the device. Examples of brokers are Microsoft Company Portal on Android and Microsoft Authenticator on Android and iOS. MSAL can now interact with brokers. For more information about brokers, see [Leveraging brokers on Android and iOS](https://github.com/AzureAD/azure-activedirectory-library-for-dotnet/wiki/leveraging-brokers-on-Android-and-iOS). +Some scenarios, like those that involve Conditional Access related to a device ID or a device enrollment, require a broker to be installed on the device. Examples of brokers are Microsoft Company Portal on Android and Microsoft Authenticator on Android and iOS. MSAL can now interact with brokers. For more information about brokers, see [Leveraging brokers on Android and iOS](msal-net-use-brokers-with-xamarin-apps.md). For more information, see [Mobile app that calls web APIs](scenario-mobile-overview.md). |
active-directory | Authentication Protocols | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-protocols.md | - Title: Microsoft identity platform authentication protocols -description: An overview of the authentication protocols supported by the Microsoft identity platform -------- Previously updated : 09/27/2021-------# Microsoft identity platform authentication protocols --The Microsoft identity platform supports several of the most widely used authentication and authorization protocols. The topics in this section describe the supported protocols and their implementation in Microsoft identity platform. The topics include a review of supported claim types, an introduction to the use of federation metadata, detailed OAuth 2.0 and SAML 2.0 protocol reference documentation, and a troubleshooting section. --## Authentication protocols articles and reference --* [Important Information About Signing Key Rollover in Microsoft identity platform](./signing-key-rollover.md) - Learn about Microsoft identity platform's signing key rollover cadence, changes you can make to update the key automatically, and discussion for how to update the most common application scenarios. -* [Supported Token and Claim Types](id-tokens.md) - Learn about the claims in the tokens that the Microsoft identity platform issues. -* [OAuth 2.0 in Microsoft identity platform](v2-oauth2-auth-code-flow.md) - Learn about the implementation of OAuth 2.0 in Microsoft identity platform. -* [OpenID Connect 1.0](v2-protocols-oidc.md) - Learn how to use OAuth 2.0, an authorization protocol, for authentication. -* [Service to Service Calls with Client Credentials](v2-oauth2-client-creds-grant-flow.md) - Learn how to use OAuth 2.0 client credentials grant flow for service to service calls. -* [Service to Service Calls with On-Behalf-Of Flow](v2-oauth2-on-behalf-of-flow.md) - Learn how to use OAuth 2.0 On-Behalf-Of flow for service to service calls. 
-* [SAML Protocol Reference](./saml-protocol-reference.md) - Learn about the Single Sign-On and Single Sign-out SAML profiles of Microsoft identity platform. --## See also --* [Microsoft identity platform overview](v2-overview.md) -* [Active Directory Code Samples](sample-v2-code.md) |
active-directory | Configure App Multi Instancing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-app-multi-instancing.md | The IDP initiated SSO feature exposes the following settings for each applicatio ### Configure IDP initiated SSO +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications**. 1. Open any SSO enabled enterprise app and navigate to the SAML single sign-on blade. 1. Select **Edit** on the **User Attributes & Claims** panel. 1. Select **Edit** to open the advanced options blade. |
active-directory | Consent Framework Links | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/consent-framework-links.md | - Title: How application consent works -description: Learn more about how the Azure AD consent framework works to see how you can use it when developing applications on Azure AD --------- Previously updated : 09/27/2021-----# How application consent works --This article is to help you learn more about how the Azure AD consent framework works so you can develop applications more effectively. --## Recommended documents --- Get a general understanding of [how consent allows a resource owner to govern an application's access to resources](./developer-glossary.md#consent).-- Get a step-by-step overview of [how the Azure AD consent framework implements consent](./quickstart-register-app.md).-- For more depth, learn [how a multi-tenant application can use the consent framework](./howto-convert-app-to-be-multi-tenant.md) to implement "user" and "admin" consent, supporting more advanced multi-tier application patterns.-- For more depth, learn [how consent is supported at the OAuth 2.0 protocol layer during the authorization code grant flow.](v2-oauth2-auth-code-flow.md#request-an-authorization-code)--## Next steps -[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html) |
active-directory | Custom Extension Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/custom-extension-get-started.md | -This article describes how to configure and setup a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token. +This article describes how to configure and set up a custom claims provider with the [token issuance start event](custom-claims-provider-overview.md#token-issuance-start-event-listener) type. This event is triggered right before the token is issued, and allows you to call a REST API to add claims to the token. This how-to guide demonstrates the token issuance start event with a REST API running in Azure Functions and a sample OpenID Connect application. Before you start, take a look at the following video, which demonstrates how to configure an Azure AD custom claims provider with a Function App: In this step, you configure a custom authentication extension, which will be use # [Microsoft Graph](#tab/microsoft-graph) -Create an Application Registration to authenticate your custom authentication extension to your Azure Function. +Register an application to authenticate your custom authentication extension to your Azure Function. -1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/applications` -1. Select **Request Body** and paste the following JSON: +1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. 
The account must have the privileges to create and manage an application registration in the tenant. +2. Run the following request. - ```json + # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/v1.0/applications + Content-type: application/json + {- "displayName": "authenticationeventsAPI" + "displayName": "authenticationeventsAPI" } ``` -1. Select **Run Query** to submit the request. --1. Copy the **Application ID** value (*appId*) from the response. You need this value later, which is referred to as the `{authenticationeventsAPI_AppId}`. Also get the object ID of the app (*ID*), which is referred to as `{authenticationeventsAPI_ObjectId}` from the response. + # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/csharp/v1/tutorial-application-basics-create-app-csharp-snippets.md)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/go/v1/tutorial-application-basics-create-app-go-snippets.md)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/jav)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/javascript/v1/tutorial-application-basics-create-app-javascript-snippets.md)] + + # [PHP](#tab/php) + Snippet not available. + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/powershell/v1/tutorial-application-basics-create-app-powershell-snippets.md)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/includes/snippets/python/v1/tutorial-application-basics-create-app-python-snippets.md)] + + -Create a service principal in the tenant for the authenticationeventsAPI app registration: +3. From the response, record the value of **id** and **appId** of the newly created app registration. These values will be referenced in this article as `{authenticationeventsAPI_ObjectId}` and `{authenticationeventsAPI_AppId}` respectively. -1. 
Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals` -1. Select **Request Body** and paste the following JSON: +Create a service principal in the tenant for the authenticationeventsAPI app registration. - ```json - { - "appId": "{authenticationeventsAPI_AppId}" - } - ``` +Still in Graph Explorer, run the following request. Replace `{authenticationeventsAPI_AppId}` with the value of **appId** that you recorded from the previous step. -1. Select **Run Query** to submit the request. +```http +POST https://graph.microsoft.com/v1.0/servicePrincipals +Content-type: application/json + +{ + "appId": "{authenticationeventsAPI_AppId}" +} +``` ### Set the App ID URI, access token version, and required resource access Update the newly created application to set the application ID URI value, the access token version, and the required resource access. -1. Set the HTTP method to **PATCH**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId}` -1. Select **Request Body** and paste the following JSON: +In Graph Explorer, run the following request. + - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier. + - Set the `{authenticationeventsAPI_AppId}` value with the **appId** that you recorded earlier. + - An example value is `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as you'll use it later in this article in place of `{functionApp_IdentifierUri}`. - Set the application ID URI value in the *identifierUris* property. Replace `{Function_Url_Hostname}` with the hostname of the `{Function_Url}` you recorded earlier. - - Set the `{authenticationeventsAPI_AppId}` value with the App ID generated from the app registration created in the previous step. 
- - An example value would be `api://authenticationeventsAPI.azurewebsites.net/f4a70782-3191-45b4-b7e5-dd415885dd80`. Take note of this value as it is used in following steps and is referenced as `{functionApp_IdentifierUri}`. - - ```json +```http +PATCH https://graph.microsoft.com/v1.0/applications/{authenticationeventsAPI_ObjectId} +Content-type: application/json ++{ +"identifierUris": [ + "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}" +], +"api": { + "requestedAccessTokenVersion": 2, + "acceptMappedClaims": null, + "knownClientApplications": [], + "oauth2PermissionScopes": [], + "preAuthorizedApplications": [] +}, +"requiredResourceAccess": [ {- "identifierUris": [ - "api://{Function_Url_Hostname}/{authenticationeventsAPI_AppId}" - ], - "api": { - "requestedAccessTokenVersion": 2, - "acceptMappedClaims": null, - "knownClientApplications": [], - "oauth2PermissionScopes": [], - "preAuthorizedApplications": [] - }, - "requiredResourceAccess": [ + "resourceAppId": "00000003-0000-0000-c000-000000000000", + "resourceAccess": [ {- "resourceAppId": "00000003-0000-0000-c000-000000000000", - "resourceAccess": [ - { - "id": "214e810f-fda8-4fd7-a475-29461495eb00", - "type": "Role" - } - ] + "id": "214e810f-fda8-4fd7-a475-29461495eb00", + "type": "Role" } ] }- ``` --1. Select **Run Query** to submit the request. +] +} +``` ### Register a custom authentication extension -Next, you register the custom authentication extension. You register the custom authentication extension by associating it with the App Registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`. +Next, register the custom authentication extension by associating it with the app registration for the Azure Function, and your Azure Function endpoint `{Function_Url}`. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/beta/identity/customAuthenticationExtensions` -1. 
Select **Request Body** and paste the following JSON: +1. In Graph Explorer, run the following request. Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step. + - You'll need the *CustomAuthenticationExtension.ReadWrite.All* delegated permission. - Replace `{Function_Url}` with the hostname of your Azure Function app. Replace `{functionApp_IdentifierUri}` with the identifierUri used in the previous step. + # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/beta/identity/customAuthenticationExtensions + Content-type: application/json - ```json { "@odata.type": "#microsoft.graph.onTokenIssuanceStartCustomExtension", "displayName": "onTokenIssuanceStartCustomExtension", Next, you register the custom authentication extension. You register the custom ] } ```+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] -1. Select **Run Query** to submit the request. + -Record the ID value of the created custom claims provider object. The ID is needed in a later step and is referred to as the `{customExtensionObjectId}`. +1. Record the **id** value of the created custom claims provider object. You'll use the value later in this tutorial in place of `{customExtensionObjectId}`. 
### 2.2 Grant admin consent -After your custom authentication extension is created, you'll be taken to the **Overview** tab of the new custom authentication extension. +After your custom authentication extension is created, open the **Overview** tab of the new custom authentication extension. From the **Overview** page, select the **Grant permission** button to give admin consent to the registered app, which allows the custom authentication extension to authenticate to your API. The custom authentication extension uses `client_credentials` to authenticate to the Azure Function App using the `Receive custom authentication extension HTTP requests` permission. The following screenshot shows how to register the *My Test application*. ### 3.1 Get the application ID -In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. +In your app registration, under **Overview**, copy the **Application (client) ID**. The app ID is referred to as the `{App_to_enrich_ID}` in later steps. In Microsoft Graph, it's referenced by the **appId** property. :::image type="content" border="false" source="media/custom-extension-get-started/get-the-test-application-id.png" alt-text="Screenshot that shows how to copy the application ID."::: Next, assign the attributes from the custom claims provider, which should be iss # [Microsoft Graph](#tab/microsoft-graph) -First create an event listener to trigger a custom authentication extension using the token issuance start event: --1. Sign in to the [Microsoft Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/beta/identity/authenticationEventListeners` -1. 
Select **Request Body** and paste the following JSON: +First create an event listener to trigger a custom authentication extension for the *My Test application* using the token issuance start event. - Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier. +1. Sign in to [Graph Explorer](https://aka.ms/ge) using an account whose home tenant is the tenant you wish to manage your custom authentication extension in. +1. Run the following request. Replace `{App_to_enrich_ID}` with the app ID of *My Test application* recorded earlier. Replace `{customExtensionObjectId}` with the custom authentication extension ID recorded earlier. + - You'll need the *EventListener.ReadWrite.All* delegated permission. - ```json + # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/beta/identity/authenticationEventListeners + Content-type: application/json + { "@odata.type": "#microsoft.graph.onTokenIssuanceStartListener", "conditions": { First create an event listener to trigger a custom authentication extension usin } ``` -1. Select **Run Query** to submit the request. 
+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/bet)] + + + -Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider: +Next, create the claims mapping policy, which describes which claims can be issued to an application from a custom claims provider. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/policies/claimsmappingpolicies` -1. Select **Request Body** and paste the following JSON: +1. Still in Graph Explorer, run the following request. You'll need the *Policy.ReadWrite.ApplicationConfiguration* delegated permission. 
+++ # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies + Content-type: application/json - ```json { "definition": [ "{\"ClaimsMappingPolicy\":{\"Version\":1,\"IncludeBasicClaimSet\":\"true\",\"ClaimsSchema\":[{\"Source\":\"CustomClaimsProvider\",\"ID\":\"DateOfBirth\",\"JwtClaimType\":\"dob\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CustomRoles\",\"JwtClaimType\":\"my_roles\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"CorrelationId\",\"JwtClaimType\":\"correlationId\"},{\"Source\":\"CustomClaimsProvider\",\"ID\":\"ApiVersion\",\"JwtClaimType\":\"apiVersion \"},{\"Value\":\"tokenaug_V2\",\"JwtClaimType\":\"policy_version\"}]}}" Next, create the claims mapping policy, which describes which claims can be issu "isOrganizationDefault": false } ```+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-claimsmappingpolicies-csharp-snippets.md)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-claimsmappingpolicies-go-snippets.md)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-claimsmappingpolicies-javascript-snippets.md)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-claimsmappingpolicies-php-snippets.md)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-claimsmappingpolicies-powershell-snippets.md)] + + # [Python](#tab/python) + [!INCLUDE 
[sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-claimsmappingpolicies-python-snippets.md)] + + -1. Record the `ID` generated in the response, later it's referred to as `{claims_mapping_policy_ID}`. -1. Select **Run Query** to submit the request. +2. Record the `ID` generated in the response; it's referred to later as `{claims_mapping_policy_ID}`. -Get the `servicePrincipal` objectId: +Get the service principal object ID: -1. Set the HTTP method to **GET**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}')/claimsMappingPolicies/$ref`. Replace `{App_to_enrich_ID}` with *My Test Application* App ID. -1. Record the `id` value, later it's referred to as `{test_App_Service_Principal_ObjectId}`. +1. Run the following request in Graph Explorer. Replace `{App_to_enrich_ID}` with the **appId** of *My Test Application*. -Assign the claims mapping policy to the `servicePrincipal` of *My Test Application*: + ```http + GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='{App_to_enrich_ID}') + ``` ++Record the value of **id**; it's referred to later as `{test_App_Service_Principal_ObjectId}`. -1. Set the HTTP method to **POST**. -1. Paste the URL: `https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref` -1. Select **Request Body** and paste the following JSON: +Assign the claims mapping policy to the service principal of *My Test Application*. ++1. Run the following request in Graph Explorer. You'll need the *Policy.ReadWrite.ApplicationConfiguration* and *Application.ReadWrite.All* delegated permissions. ++ # [HTTP](#tab/http) + ```http + POST https://graph.microsoft.com/v1.0/servicePrincipals/{test_App_Service_Principal_ObjectId}/claimsMappingPolicies/$ref + Content-type: application/json - ```json { "@odata.id": "https://graph.microsoft.com/v1.0/policies/claimsMappingPolicies/{claims_mapping_policy_ID}" } ``` -1. Select **Run Query** to submit the request. 
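The `definition` property of a claims mapping policy is an array containing the policy document as an escaped JSON *string*, which is easy to get wrong by hand. A minimal sketch (not an official sample; the display name and the trimmed claims schema are illustrative) of building that request body programmatically:

```python
import json

# Inner ClaimsMappingPolicy document, mirroring the request body above
# (schema trimmed to a few claims for brevity).
policy_doc = {
    "ClaimsMappingPolicy": {
        "Version": 1,
        "IncludeBasicClaimSet": "true",
        "ClaimsSchema": [
            {"Source": "CustomClaimsProvider", "ID": "DateOfBirth", "JwtClaimType": "dob"},
            {"Source": "CustomClaimsProvider", "ID": "CustomRoles", "JwtClaimType": "my_roles"},
            {"Value": "tokenaug_V2", "JwtClaimType": "policy_version"},
        ],
    }
}

# "definition" is an array of strings, so the inner document is serialized
# twice: once into the string element, then again as part of the outer body.
request_body = {
    "definition": [json.dumps(policy_doc)],
    "displayName": "Token augmentation policy",  # hypothetical display name
    "isOrganizationDefault": False,
}

print(json.dumps(request_body, indent=2))
```

Letting `json.dumps` produce the inner string avoids the manual backslash-escaping shown in the raw request body.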
+ # [C#](#tab/csharp) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/csharp/create-claimsmappingpolicy-from-serviceprincipal-csharp-snippets.md)] + + # [Go](#tab/go) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/go/create-claimsmappingpolicy-from-serviceprincipal-go-snippets.md)] + + # [Java](#tab/java) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/jav)] + + # [JavaScript](#tab/javascript) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/javascript/create-claimsmappingpolicy-from-serviceprincipal-javascript-snippets.md)] + + # [PHP](#tab/php) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/php/create-claimsmappingpolicy-from-serviceprincipal-php-snippets.md)] + + # [PowerShell](#tab/powershell) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/powershell/create-claimsmappingpolicy-from-serviceprincipal-powershell-snippets.md)] + + # [Python](#tab/python) + [!INCLUDE [sample-code](~/microsoft-graph/api-reference/v1.0/includes/snippets/python/create-claimsmappingpolicy-from-serviceprincipal-python-snippets.md)] + + |
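The get-then-assign sequence above reduces to building one URL and one `@odata.id` body. A small helper sketch (function and variable names are hypothetical; actually sending the request still requires an access token with the listed permissions):

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def build_policy_assignment(sp_object_id: str, policy_id: str):
    """Return the (url, body) pair for the claimsMappingPolicies/$ref POST."""
    url = f"{GRAPH}/servicePrincipals/{sp_object_id}/claimsMappingPolicies/$ref"
    body = {"@odata.id": f"{GRAPH}/policies/claimsMappingPolicies/{policy_id}"}
    return url, body

# Placeholder IDs stand in for the values recorded in the earlier steps.
url, body = build_policy_assignment(
    "00000000-0000-0000-0000-000000000001",  # {test_App_Service_Principal_ObjectId}
    "00000000-0000-0000-0000-000000000002",  # {claims_mapping_policy_ID}
)
print(url)
```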
active-directory | Delegated And App Perms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/delegated-and-app-perms.md | - Title: Differences between delegated and app permissions -description: Learn about delegated and application permissions, how they are used by clients and exposed by resources for applications you are developing with Azure AD --------- Previously updated : 11/10/2022-----# How to recognize differences between delegated and application permissions --## Recommended documents --- Learn more about how client applications use [delegated and application permission requests](developer-glossary.md#permissions) to access resources.-- Learn about [delegated and application permissions](permissions-consent-overview.md).-- See step-by-step instructions on how to [configure a client application's permission requests](quickstart-configure-app-access-web-apis.md)-- For more depth, learn how resource applications expose [scopes](developer-glossary.md#scopes) and [application roles](developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal. --## Next steps -[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html) |
active-directory | Enterprise App Role Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/enterprise-app-role-management.md | You can customize the role claim in the access token that is received after an a Use the following steps to locate the enterprise application: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. In the left pane, select **Azure Active Directory**. -1. Select **Enterprise applications**, and then select **All applications**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. 1. Enter the name of the existing application in the search box, and then select the application from the search results. 1. After the application is selected, copy the object ID from the overview pane. - :::image type="content" source="media/enterprise-app-role-management/record-objectid.png" alt-text="Screenshot that shows how to locate and record the object identifier for the application."::: - ## Add roles Use the Microsoft Graph Explorer to add roles to an enterprise application. Use the Microsoft Graph Explorer to add roles to an enterprise application. Update the attributes to define the role claim that is included in the token. -1. Locate the application in the Azure portal, and then select **Single sign-on** in the left menu. +1. Locate the application in the Microsoft Entra admin center, and then select **Single sign-on** in the left menu. 1. In the **Attributes & Claims** section, select **Edit**. 1. Select **Add new claim**. 1. In the **Name** box, type the attribute name. This example uses **Role Name** as the claim name. Update the attributes to define the role claim that is included in the token. 1. 
From the **Source attribute** list, select **user.assignedroles**. 1. Select **Save**. The new **Role Name** attribute should now appear in the **Attributes & Claims** section. The claim should now be included in the access token when signing into the application. - :::image type="content" source="media/enterprise-app-role-management/attributes-summary.png" alt-text="Screenshot that shows a display of the list of attributes and claims defined for the application."::: - ## Assign roles After the service principal is patched with more roles, you can assign users to the respective roles. -1. In the Azure portal, locate the application to which the role was added. +1. Locate the application to which the role was added in the Microsoft Entra admin center. 1. Select **Users and groups** in the left menu and then select the user that you want to assign the new role. 1. Select **Edit assignment** at the top of the pane to change the role. 1. Select **None Selected**, select the role from the list, and then select **Select**. 1. Select **Assign** to assign the role to the user. - :::image type="content" source="media/enterprise-app-role-management/assign-role.png" alt-text="Screenshot that shows how to assign a role to a user of an application."::: - ## Update roles To update an existing role, perform the following steps: |
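Once a role claim such as **Role Name** is configured and assigned, the application can confirm it by decoding the access token payload. A debugging-only sketch (no signature verification; the `rolename` claim and toy token are illustrative, not actual Azure AD output):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying the signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Toy unsigned token carrying a hypothetical role claim.
token = ".".join([_b64url(b'{"alg":"none"}'),
                  _b64url(json.dumps({"rolename": "Writer"}).encode()),
                  ""])
print(jwt_payload(token))  # {'rolename': 'Writer'}
```

In production, always validate the token's signature and claims with a proper library before trusting the role value.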
active-directory | How Applications Are Added | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-applications-are-added.md | |
active-directory | Howto Configure App Instance Property Locks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-configure-app-instance-property-locks.md | -# How to configure app instance property lock for your applications (Preview) +# How to configure app instance property lock for your applications Application instance lock is a feature in Azure Active Directory (Azure AD) that allows sensitive properties of a multi-tenant application object to be locked for modification after the application is provisioned in another tenant. This feature provides application developers with the ability to lock certain properties if the application doesn't support scenarios that require configuring those properties. The following property usage scenarios are considered as sensitive: - Credentials (`keyCredentials`, `passwordCredentials`) where usage type is `Verify`. In this scenario, your application supports an OIDC client credentials flow. - `TokenEncryptionKeyId` which specifies the keyId of a public key from the keyCredentials collection. When configured, Azure AD encrypts all the tokens it emits by using the key to which this property points. The application code that receives the encrypted token must use the matching private key to decrypt the token before it can be used for the signed-in user. +> [!NOTE] +> App instance lock is enabled by default for all new applications created using the Microsoft Entra admin center. + ## Configure an app instance lock [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] |
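As a rough illustration of the lock described above, a multi-tenant app's developer could flag each sensitive property on the application object. The shape below is a hedged sketch only; the `servicePrincipalLockConfiguration` property name and its fields are assumptions to verify against the Microsoft Graph application resource reference:

```python
import json

# Hypothetical Graph PATCH body enabling the lock on the sensitive
# property categories listed in the article (names are assumptions).
lock_config = {
    "servicePrincipalLockConfiguration": {
        "isEnabled": True,
        "credentialsWithUsageVerify": True,  # keyCredentials/passwordCredentials, usage Verify
        "credentialsWithUsageSign": True,    # credentials with usage Sign
        "tokenEncryptionKeyId": True,        # lock the token encryption key pointer
    }
}

print(json.dumps(lock_config, indent=2))
```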
active-directory | Howto Create Self Signed Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-self-signed-certificate.md | To customize the start and expiry date and other properties of the certificate, Use the certificate you create using this method to authenticate from an application running from your machine. For example, authenticate from Windows PowerShell. -In an elevated PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate. +In a PowerShell prompt, run the following command and leave the PowerShell console session open. Replace `{certificateName}` with the name that you wish to give to your certificate. ```powershell $certname = "{certificateName}" ## Replace {certificateName} |
active-directory | Identity Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/identity-videos.md | ___ <!-- IMAGES -->-[auth-fund-01-img]: ./media/identity-videos/aad-auth-fund-01.jpg -[auth-fund-02-img]: ./media/identity-videos/aad-auth-fund-02.jpg -[auth-fund-03-img]: ./media/identity-videos/aad-auth-fund-03.jpg -[auth-fund-04-img]: ./media/identity-videos/aad-auth-fund-04.jpg -[auth-fund-05-img]: ./media/identity-videos/aad-auth-fund-05.jpg -[auth-fund-06-img]: ./media/identity-videos/aad-auth-fund-06.jpg +[auth-fund-01-img]: ./media/identity-videos/auth-fund-01.jpg +[auth-fund-02-img]: ./media/identity-videos/auth-fund-02.jpg +[auth-fund-03-img]: ./media/identity-videos/auth-fund-03.jpg +[auth-fund-04-img]: ./media/identity-videos/auth-fund-04.jpg +[auth-fund-05-img]: ./media/identity-videos/auth-fund-05.jpg +[auth-fund-06-img]: ./media/identity-videos/auth-fund-06.jpg <!-- VIDEOS --> [auth-fund-01-vid]: https://www.youtube.com/watch?v=fbSVgC8nGz4&list=PLLasX02E8BPD5vC2XHS_oHaMVmaeHHPLy&index=1 |
active-directory | Jwt Claims Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/jwt-claims-customization.md | These JSON Web tokens (JWT) used by OIDC and OAuth applications contain pieces o [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -To view or edit the claims issued in the JWT to the application, open the application in Azure portal. Then select **Single sign-on** blade in the left-hand menu and open the **Attributes & Claims** section. +To view or edit the claims issued in the JWT to the application: +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. +1. Select the application, select **Single sign-on** in the left-hand menu, and then select **Edit** in the **Attributes & Claims** section. An application may need claims customization for various reasons. For example, when an application requires a different set of claim URIs or claim values. Using the **Attributes & Claims** section, you can add or remove a claim for your application. You can also create a custom claim that is specific for an application based on the use case. The following steps describe how to assign a constant value: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. In the **Attributes & Claims** section, Select **Edit** to edit the claims. -1. Select the required claim that you want to modify. +1. Select the claim that you want to modify. 1. Enter the constant value without quotes in the **Source attribute** as per your organization, and then select **Save**. - The Attributes overview displays the constant value. - ## Special claims transformations You can use the following special claims transformations functions. 
To apply a transformation to a user attribute: 1. **Treat source as multivalued** indicates whether the transform is applied to all values or just the first. By default, the transformations are applied only to the first element in a multi-value claim. When you check this box, they're applied to all values. This checkbox is only enabled for multi-valued attributes. For example, `user.proxyaddresses`. 1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. Then, make the string upper case. - :::image type="content" source="./media/jwt-claims-customization/sso-saml-multiple-claims-transformation.png" alt-text="Screenshot of claims transformation."::: - You can use the following functions to transform claims. | Function | Description | You can use the following functions to transform claims. | **ToLowercase()** | Converts the characters of the selected attribute into lowercase characters. | | **ToUppercase()** | Converts the characters of the selected attribute into uppercase characters. | | **Contains()** | Outputs an attribute or constant if the input matches the specified value. Otherwise, you can specify another output if there's no match. <br/>For example, if you want to emit a claim where the value is the user's email address if it contains the domain `@contoso.com`, otherwise you want to output the user principal name. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.email<br/>*Value*: "@contoso.com"<br/>Parameter 2 (output): user.email<br/>Parameter 3 (output if there's no match): user.userprincipalname |-| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. 
Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with "000", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 | -| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with "US", otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 | +| **EndWith()** | Outputs an attribute or constant if the input ends with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the employee ID ends with `000`, otherwise you want to output an extension attribute. To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.employeeid<br/>*Value*: "000"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 | +| **StartWith()** | Outputs an attribute or constant if the input starts with the specified value. Otherwise, you can specify another output if there's no match.<br/>For example, if you want to emit a claim where the value is the user's employee ID if the country/region starts with `US`, otherwise you want to output an extension attribute. 
To perform this function, you configure the following values:<br/>*Parameter 1(input)*: user.country<br/>*Value*: "US"<br/>Parameter 2 (output): user.employeeid<br/>Parameter 3 (output if there's no match): user.extensionattribute1 | | **Extract() - After matching** | Returns the substring after it matches the specified value.<br/>For example, if the input's value is `Finance_BSimon`, the matching value is `Finance_`, then the claim's output is `BSimon`. | | **Extract() - Before matching** | Returns the substring until it matches the specified value.<br/>For example, if the input's value is `BSimon_US`, the matching value is `_US`, then the claim's output is `BSimon`. | | **Extract() - Between matching** | Returns the substring between the first and second matching values.<br/>For example, if the input's value is `Finance_BSimon_US`, the first matching value is `Finance_`, the second matching value is `_US`, then the claim's output is `BSimon`. | For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because the type is **All guests**, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because the type is **AAD guests**, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta. - As another example, consider when Britta Simon tries to sign in using the following configuration. Azure AD first evaluates all conditions with source `Attribute`. The source for the claim is `user.mail` when Britta's user type is **AAD guests**. Next, Azure AD evaluates the transformations. Because Britta is a guest, `user.extensionattribute1` is the new source for the claim. 
Because Britta is in **AAD guests**, `user.othermail` is the new source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta. - As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. The claim falls back to `user.extensionattribute1` ignoring the condition entry in both cases. ## Security considerations-Applications that receive tokens rely on claim values that are authoritatively issued by Azure AD and can't be tampered with. When you modify the token contents through claims customization, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the customization to protect themselves from customizations created by malicious actors. This can be done in one the following ways: +Applications that receive tokens rely on claim values that can't be tampered with. When you modify the token contents through claims customization, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified to protect themselves from customizations created by malicious actors. Protect from inappropriate customizations in one of the following ways: - [Configure a custom signing key](#configure-a-custom-signing-key) - [Update the application manifest to accept mapped claims](#update-the-application-manifest). Applications that receive tokens rely on claim values that are authoritatively i Without this, Azure AD returns an [AADSTS50146 error code](./reference-error-codes.md#aadsts-error-codes). ## Configure a custom signing key-For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. when setting up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app is using the Azure global sign-in key, which can't be used for customizing claims in tokens. 
To get custom claims in tokens, create a custom sign-in key from a certificate and add it to service principal. For testing purposes, you can use a self-signed certificate. After configuring the custom signing key, your application code needs to validate the token signing key. +For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. When setting up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app is using the Azure global signing key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom signing key from a certificate and add it to the service principal. For testing purposes, you can use a self-signed certificate. After you configure the custom signing key, your application code needs to validate the token signing key. Add the following information to the service principal: Extract the private and public key base-64 encoded from the PFX file export of your certificate. Make sure that the `keyId` for the `keyCredential` used for "Sign" matches the `keyId` of the `passwordCredential`. You can generate the `customkeyIdentifier` by getting the hash of the cert's thumbprint. ## Request-The following example shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is "Sign". For the public key, the property usage is "Verify". +The following example shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is `Sign`. For the public key, the property usage is `Verify`. 
``` PATCH https://graph.microsoft.com/v1.0/servicePrincipals/f47a6776-bca7-4f2e-bc6c-eec59d058e3e Authorization: Bearer {token} ``` ## Configure a custom signing key using PowerShell-Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key). +Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After you configure the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key). -To run this script you need: +To run this script, you need: - The object ID of your application's service principal, found in the Overview blade of your application's entry in Enterprise Applications in the Azure portal. - An app registration to sign in a user and get an access token to call Microsoft Graph. Get the application (client) ID of this app in the Overview blade of the application's entry in App registrations in the Azure portal. 
The app registration should have the following configuration: https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration ``` ## Update the application manifest-For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication?view=graph-rest-1.0&preserve-view=true#properties), this allows an application to use claims mapping without specifying a custom signing key. +For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication?view=graph-rest-1.0&preserve-view=true#properties), setting the property allows an application to use claims mapping without specifying a custom signing key. >[!WARNING] >Do not set the acceptMappedClaims property to true for multi-tenant apps, which can allow malicious actors to create claims-mapping policies for your app. |
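Earlier, the article notes that `customkeyIdentifier` can be generated by hashing the certificate's thumbprint. Common samples base64-encode the certificate's SHA-1 hash (its raw thumbprint bytes); treat the sketch below as one plausible reading rather than a required format:

```python
import base64
import hashlib

def custom_key_identifier(cert_der: bytes) -> str:
    """Base64 of the certificate's SHA-1 hash (the raw thumbprint bytes)."""
    return base64.b64encode(hashlib.sha1(cert_der).digest()).decode()

# Toy bytes stand in for a real DER-encoded certificate export.
print(custom_key_identifier(b"not-a-real-certificate"))
```

The same value must be used for both the `keyCredential` and the matching `passwordCredential` entries in the PATCH request.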
active-directory | Mark App As Publisher Verified | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md | Title: Mark an app as publisher verified -description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Microsoft Partner Network (MPN) account that has completed the verification process and has associated this MPN account with that application registration. +description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Cloud Partner Program (CPP) account that has completed the verification process and has associated this CPP account with that application registration. -When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Microsoft Partner Network (MPN) account and has associated this MPN account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process. +When an app registration has a verified publisher, it means that the publisher of the app has [verified](/partner-center/verification-responses) their identity using their Cloud Partner Program (CPP) account and has associated this CPP account with their app registration. This article describes how to complete the [publisher verification](publisher-verification-overview.md) process. 
## Quickstart-If you are already enrolled in the Microsoft Partner Network (MPN) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away: +If you are already enrolled in the [Cloud Partner Program (CPP)](/partner-center/intro-to-cloud-partner-program-membership) and have met the [pre-requisites](publisher-verification-overview.md#requirements), you can get started right away: 1. Sign into the [App Registration portal](https://aka.ms/PublisherVerificationPreview) using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) 1. Choose an app and click **Branding & properties**. -1. Click **Add MPN ID to verify publisher** and review the listed requirements. +1. Click **Add Partner One ID to verify publisher** and review the listed requirements. -1. Enter your MPN ID and click **Verify and save**. +1. Enter your Partner One ID and click **Verify and save**. For more details on specific benefits, requirements, and frequently asked questions see the [overview](publisher-verification-overview.md). ## Mark your app as publisher verified Make sure you meet the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified. -1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the MPN Account in Partner Center. +1. Sign in using [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) to an organizational (Azure AD) account authorized to make changes to the app you want to mark as Publisher Verified and on the CPP Account in Partner Center. - The Azure AD user must have one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator. 
- - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD). + - The user in Partner Center must have the following [roles](/partner-center/permissions-overview): CPP Admin, Accounts Admin, or a Global Administrator (a shared role mastered in Azure AD). 1. Navigate to the **App registrations** blade: Make sure you meet the [pre-requisites](publisher-verification-overview.md#requi 1. Ensure the app's [publisher domain](howto-configure-publisher-domain.md) is set. -1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your MPN account. +1. Ensure that either the publisher domain or a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) on the tenant matches the domain of the email address used during the verification process for your CPP account. -1. Click **Add MPN ID to verify publisher** near the bottom of the page. +1. Click **Add Partner One ID to verify publisher** near the bottom of the page. -1. Enter the **MPN ID** for: +1. Enter the **Partner One ID** for: - - A valid Microsoft Partner Network account that has completed the verification process. + - A valid Cloud Partner Program account that has completed the verification process. - The Partner global account (PGA) for your organization. |
active-directory | Msal Android Single Sign On | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md | If the application uses a `WebView` strategy without integrating Microsoft Authe If the application uses MSAL with a broker like Microsoft Authenticator or Intune Company Portal, then users can have SSO experience across applications if they have an active sign-in with one of the apps. +> [!NOTE] +> MSAL with broker utilizes WebViews instead of Custom Tabs. As a result, the Single Sign-On (SSO) state is not extended to other apps that use Custom Tabs. + ### WebView To use the in-app WebView, put the following line in the app configuration JSON that is passed to MSAL: |
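The in-app WebView choice referenced above is controlled through the MSAL configuration JSON passed to the library. As a hedged sketch (the `authorization_user_agent` field follows the MSAL Android configuration schema; the client ID and redirect URI values are placeholders, not from this article):

```json
{
  "client_id": "00000000-0000-0000-0000-000000000000",
  "redirect_uri": "msauth://com.example.app/placeholder-signature",
  "authorization_user_agent": "WEBVIEW"
}
```

Setting `authorization_user_agent` to `WEBVIEW` keeps sign-in inside the app's embedded WebView, which is why SSO state isn't shared with apps that rely on Custom Tabs.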
active-directory | Msal Client Application Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-client-application-configuration.md | The authority you specify in your code needs to be consistent with the **Support The authority can be: - An Azure AD cloud authority.-- An Azure AD B2C authority. See [B2C specifics](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/AAD-B2C-specifics).-- An Active Directory Federation Services (AD FS) authority. See [AD FS support](https://aka.ms/msal-net-adfs-support).+- An Azure AD B2C authority. See [B2C specifics](msal-net-b2c-considerations.md). +- An Active Directory Federation Services (AD FS) authority. See [AD FS support](msal-net-adfs-support.md). Azure AD cloud authorities have two parts: You can override the redirect URI by using the `RedirectUri` property (for examp - `RedirectUriOnAndroid` = "msauth-5a434691-ccb2-4fd1-b97b-b64bcfbc03fc://com.microsoft.identity.client.sample"; - `RedirectUriOnIos` = $"msauth.{Bundle.ID}://auth"; -For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/wiki/Leveraging-the-broker-on-iOS). +For more iOS details, see [Migrate iOS applications that use Microsoft Authenticator from ADAL.NET to MSAL.NET](msal-net-migration-ios-broker.md) and [Leveraging the broker on iOS](msal-net-use-brokers-with-xamarin-apps.md). For more Android details, see [Brokered auth in Android](msal-android-single-sign-on.md). ### Redirect URI for confidential client apps |
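The two-part structure of an Azure AD cloud authority (cloud instance plus tenant) can be illustrated with a small sketch. This is illustrative only — `buildAuthority` is a hypothetical helper, and the client ID is a placeholder, not an API from this article:

```javascript
// Compose an Azure AD authority URL from its two parts:
// the cloud instance and the tenant (a GUID, a domain name, or "common").
function buildAuthority(instance, tenant) {
  // Strip a trailing slash from the instance so the join is unambiguous.
  return `${instance.replace(/\/$/, "")}/${tenant}`;
}

// An MSAL-style configuration object using the composed authority.
const msalConfig = {
  auth: {
    clientId: "11111111-1111-1111-1111-111111111111", // placeholder
    authority: buildAuthority("https://login.microsoftonline.com", "common"),
  },
};

console.log(msalConfig.auth.authority);
// https://login.microsoftonline.com/common
```

The authority you compose this way must still match the **Supported account types** selected for the app registration.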
active-directory | Msal Error Handling Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-js.md | The following error types are available: - `AuthError`: Base error class for the MSAL.js library, also used for unexpected errors. -- `ClientAuthError`: Error class, which denotes an issue with Client authentication. Most errors that come from the library will be ClientAuthErrors. These errors result from things like calling a login method when login is already in progress, the user cancels the login, and so on.+- `ClientAuthError`: Error class which denotes an issue with Client authentication. Most errors that come from the library are ClientAuthErrors. These errors result from things like calling a login method when login is already in progress, the user cancels the login, and so on. - `ClientConfigurationError`: Error class, extends `ClientAuthError` thrown before requests are made when the given user config parameters are malformed or missing. -- `ServerError`: Error class, represents the error strings sent by the authentication server. These may be errors such as invalid request formats or parameters, or any other errors that prevent the server from authenticating or authorizing the user.+- `ServerError`: Error class, represents the error strings sent by the authentication server. These errors may be invalid request formats or parameters, or any other errors that prevent the server from authenticating or authorizing the user. - `InteractionRequiredAuthError`: Error class, extends `ServerError` to represent server errors, which require an interactive call. This error is thrown by `acquireTokenSilent` if the user is required to interact with the server to provide credentials or consent for authentication/authorization. Error codes include `"interaction_required"`, `"login_required"`, and `"consent_required"`. 
myMSALObj.handleRedirectPromise() myMSALObj.acquireTokenRedirect(request); ``` -The methods for pop-up experience (`loginPopup`, `acquireTokenPopup`) return promises, so you can use the promise pattern (.then and .catch) to handle them as shown: +The methods for pop-up experience (`loginPopup`, `acquireTokenPopup`) return promises, so you can use the promise pattern (`.then` and `.catch`) to handle them as shown: ```javascript myMSALObj.acquireTokenPopup(request).then( When calling an API requiring Conditional Access, you can receive a claims chall See [How to use Continuous Access Evaluation enabled APIs in your applications](./app-resilience-continuous-access-evaluation.md) for more detail. +### Using other frameworks ++Toolkits like Tauri aren't supported for production single-page applications (SPAs) registered with the identity platform. SPAs only support URLs that start with `https` for production apps and `http://localhost` for local development. Prefixes like `tauri://localhost` cannot be used for browser apps. Such prefixes can be used only by mobile or web apps, because those apps have a confidential component that browser apps lack. + [!INCLUDE [Active directory error handling retries](./includes/error-handling-and-tips/error-handling-retries.md)] ## Next steps |
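The MSAL.js error classes described earlier form an inheritance chain, which lends itself to `instanceof`-based dispatch. A minimal sketch with stand-in classes (not the real MSAL.js exports) — check the most specific class first, since every error is also an instance of its base classes:

```javascript
// Stand-in classes mirroring the MSAL.js error hierarchy.
class AuthError extends Error {}
class ClientAuthError extends AuthError {}
class ClientConfigurationError extends ClientAuthError {}
class ServerError extends AuthError {}
class InteractionRequiredAuthError extends ServerError {}

// Decide how to react to a caught error, most specific class first.
function classify(err) {
  if (err instanceof InteractionRequiredAuthError) return "interact"; // e.g. fall back to an interactive call
  if (err instanceof ServerError) return "server";
  if (err instanceof ClientConfigurationError) return "config";
  if (err instanceof ClientAuthError) return "client";
  if (err instanceof AuthError) return "unexpected";
  return "unknown";
}

console.log(classify(new InteractionRequiredAuthError("interaction_required")));
// interact
```

A `.catch` handler for `acquireTokenSilent` can use this dispatch to decide when to retry interactively.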
active-directory | Msal Ios Shared Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-ios-shared-devices.md | These Microsoft applications support Azure AD's shared device mode: - [Microsoft Teams](/microsoftteams/platform/) (in Public Preview) > [!IMPORTANT]-> Public preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> Public preview is provided without a service-level agreement and isn't recommended for production workloads. Some features might be unsupported or have constrained capabilities. For more information, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Next steps |
active-directory | Msal Net Use Brokers With Xamarin Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-use-brokers-with-xamarin-apps.md | Add the redirect URI to the app's registration in the [Azure portal](https://por **To generate the redirect URI:** -1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. -1. Select **Azure Active Directory** > **App registrations** > your registered app +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). +1. Browse to **Identity** > **Applications** > **Application registrations**. +1. Search for and select the application. 1. Select **Authentication** > **Add a platform** > **iOS / macOS** 1. Enter your bundle ID, and then select **Configure**. |
active-directory | Msal Net User Gets Consent For Multiple Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-user-gets-consent-for-multiple-resources.md | -The Microsoft identity platform does not allow you to get a token for several resources at once. When using the Microsoft Authentication Library for .NET (MSAL.NET), the scopes parameter in the acquire token method should only contain scopes for a single resource. However, you can pre-consent to several resources upfront by specifying additional scopes using the `.WithExtraScopeToConsent` builder method. +The Microsoft identity platform does not allow you to get a token for several resources at once. When using the Microsoft Authentication Library for .NET (MSAL.NET), the *scopes* parameter in the acquire token method should only contain scopes for a single resource. However, you can pre-consent to several resources upfront by specifying additional scopes using the `.WithExtraScopesToConsent` builder method. > [!NOTE] > Getting consent for several resources works for Microsoft identity platform, but not for Azure AD B2C. Azure AD B2C supports only admin consent, not user consent. 
For example, if you have two resources that have two scopes each: - https:\//mytenant.onmicrosoft.com/customerapi (with two scopes `customer.read` and `customer.write`) - https:\//mytenant.onmicrosoft.com/vendorapi (with two scopes `vendor.read` and `vendor.write`) -You should use the `.WithExtraScopeToConsent` modifier which has the *extraScopesToConsent* parameter as shown in the following example: +You should use the `.WithExtraScopesToConsent` method, which has the *extraScopesToConsent* parameter as shown in the following example: ```csharp string[] scopesForCustomerApi = new string[] string[] scopesForVendorApi = new string[] var accounts = await app.GetAccountsAsync(); var result = await app.AcquireTokenInteractive(scopesForCustomerApi) .WithAccount(accounts.FirstOrDefault())- .WithExtraScopeToConsent(scopesForVendorApi) + .WithExtraScopesToConsent(scopesForVendorApi) .ExecuteAsync(); ``` -This will get you an access token for the first web API. Then, to access the second web API you can silently acquire the token from the token cache: +`AcquireTokenInteractive` will return an access token for the first web API. Along with that access token, a refresh token will also be retrieved from Azure AD and cached. Then, to access the second web API, you can silently acquire the token using `AcquireTokenSilent`. MSAL will use the cached refresh token to retrieve from Azure AD the access token for the second web API. ```csharp-AcquireTokenSilent(scopesForVendorApi, accounts.FirstOrDefault()).ExecuteAsync(); +var result = await app.AcquireTokenSilent(scopesForVendorApi, accounts.FirstOrDefault()).ExecuteAsync(); ``` |
active-directory | Optional Claims | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/optional-claims.md | -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Choose the application for which you want to configure optional claims based on your scenario and desired outcome. 1. Under **Manage**, select **Token configuration**. - The UI option **Token configuration** blade isn't available for apps registered in an Azure AD B2C tenant, which can be configured by modifying the application manifest. For more information, see [Add claims and customize user input using custom policies in Azure Active Directory B2C](../../active-directory-b2c/configure-user-input.md) This section covers the configuration options under optional claims for changing Complete the following steps to configure groups optional claims using the Azure portal: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. After you've authenticated, choose your tenant by selecting it from the top-right corner of the page. -1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations**. -1. Select the application you want to configure optional claims for in the list. +1. Select the application for which you want to configure optional claims. 1. Under **Manage**, select **Token configuration**. 1. Select **Add groups claim**. 1. 
Select the group types to return (**Security groups**, or **Directory roles**, **All groups**, and/or **Groups assigned to the application**): Complete the following steps to configure groups optional claims using the Azure Complete the following steps to configure groups optional claims through the application manifest: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. After you've authenticated, choose your Azure AD tenant by selecting it from the top-right corner of the page. -1. Search for and select **Azure Active Directory**. -1. Select the application you want to configure optional claims for in the list. +1. Select the application for which you want to configure optional claims. 1. Under **Manage**, select **Manifest**. 1. Add the following entry using the manifest editor: Complete the following steps to configure groups optional claims through the app Multiple token types can be listed: - - idToken for the OIDC ID token - - accessToken for the OAuth access token - - Saml2Token for SAML tokens. + - `idToken` for the OIDC ID token + - `accessToken` for the OAuth access token + - `Saml2Token` for SAML tokens. - The Saml2Token type applies to both SAML1.1 and SAML2.0 format tokens. + The `Saml2Token` type applies to both SAML1.1 and SAML2.0 format tokens. For each relevant token type, modify the groups claim to use the `optionalClaims` section in the manifest. The `optionalClaims` schema is as follows: In the following example, the Azure portal and manifest are used to add optional Configure claims in the Azure portal: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. After you've authenticated, choose your tenant by selecting it from the top-right corner of the page. -1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations**. -1. Find the application you want to configure optional claims for in the list and select it. +1. 
Select the application for which you want to configure optional claims. 1. Under **Manage**, select **Token configuration**. 1. Select **Add optional claim**, select the **ID** token type, select **upn** from the list of claims, and then select **Add**. 1. Select **Add optional claim**, select the **Access** token type, select **auth_time** from the list of claims, then select **Add**. Configure claims in the Azure portal: Configure claims in the manifest: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. After you've authenticated, choose your tenant by selecting it from the top-right corner of the page. -1. Search for and select **Azure Active Directory**. -1. Find the application you want to configure optional claims for in the list and select it. +1. Select the application for which you want to configure optional claims. 1. Under **Manage**, select **Manifest** to open the inline manifest editor. 1. You can directly edit the manifest using this editor. The manifest follows the schema for the [Application entity](./reference-app-manifest.md), and automatically formats the manifest once saved. New elements are added to the `optionalClaims` property. |
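The **upn** and **auth_time** claims configured through the portal above land in the manifest's `optionalClaims` property. For reference, a fragment combining the token types listed earlier might look like the following — this is a sketch whose shape follows the application manifest schema, so verify the property names against that schema before use:

```json
"optionalClaims": {
    "idToken": [
        { "name": "upn", "source": null, "essential": false, "additionalProperties": [] }
    ],
    "accessToken": [
        { "name": "auth_time", "source": null, "essential": false, "additionalProperties": [] }
    ],
    "saml2Token": [
        { "name": "groups", "source": null, "essential": false, "additionalProperties": [] }
    ]
}
```

Each array holds the optional claims emitted in that token type, so a claim can be requested for the ID token without also appearing in access or SAML tokens.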
active-directory | Permissions Consent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/permissions-consent-overview.md | Depending on the permissions they require, some applications might require an ad Preauthorization allows a resource application owner to grant permissions without requiring users to see a consent prompt for the same set of permissions that have been preauthorized. This way, an application that has been preauthorized won't ask users to consent to permissions. Resource owners can preauthorize client apps in the Azure portal or by using PowerShell and APIs, like Microsoft Graph. -## Next steps +## See also - [Delegated access scenario](delegated-access-primer.md) - [User and admin consent overview](../manage-apps/user-admin-consent-overview.md) - [OpenID connect scopes](scopes-oidc.md)+-- [Making your application multi-tenant](./howto-convert-app-to-be-multi-tenant.md) +- [AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html) |
active-directory | Perms For Given Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/perms-for-given-api.md | - Title: Select permissions for a given API -description: Learn about how permissions requests work for client and resource applications for applications you are developing --------- Previously updated : 11/10/2022-----# How to select permissions for a given API --## Recommended documents --- Learn more about how client applications use [delegated and application permission requests](./developer-glossary.md#permissions) to access resources.-- Learn about [scopes and permissions in the Microsoft identity platform](scopes-oidc.md)-- See step-by-step instructions on how to [configure a client application's permission requests](./quickstart-configure-app-access-web-apis.md)-- For more depth, learn how resource applications expose [scopes](./developer-glossary.md#scopes) and [application roles](./developer-glossary.md#roles) to client applications, which manifest as delegated and application permissions respectively in the Azure portal.--## Next steps --[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html) |
active-directory | Publisher Verification Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md | -When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (MCPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration. +When an app has a verified publisher, this means that the organization that publishes the app has been verified as authentic by Microsoft. Verifying an app includes using a Microsoft Cloud Partner Program (CPP), formerly known as Microsoft Partner Network (MPN), account that's been [verified](/partner-center/verification-responses) and associating the verified PartnerID with an app registration. When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages: Publisher verification for an app has the following benefits: App developers must meet a few requirements to complete the publisher verification process. Many Microsoft partners will have already satisfied these requirements. -- The developer must have an MPN ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. 
The MPN account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization.+- The developer must have a Partner One ID for a valid [Microsoft Cloud Partner Program](https://partner.microsoft.com/membership) account that has completed the [verification](/partner-center/verification-responses) process. The CPP account must be the [partner global account (PGA)](/partner-center/account-structure#the-top-level-is-the-partner-global-account-pga) for the developer's organization. > [!NOTE]- > The MPN account you use for publisher verification can't be your partner location MPN ID. Currently, location MPN IDs aren't supported for the publisher verification process. + > The CPP account you use for publisher verification can't be your partner location Partner One ID. Currently, location Partner One IDs aren't supported for the publisher verification process. - The app that's to be publisher verified must be registered by using an Azure AD work or school account. Apps that are registered by using a Microsoft account can't be publisher verified. -- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the MPN PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account).+- The Azure AD tenant where the app is registered must be associated with the PGA. If the tenant where the app is registered isn't the primary tenant associated with the PGA, complete the steps to [set up the CPP PGA as a multitenant account and associate the Azure AD tenant](/partner-center/multi-tenant-account#add-an-azure-ad-tenant-to-your-account). - The app must be registered in an Azure AD tenant and have a [publisher domain](howto-configure-publisher-domain.md) set. 
The feature isn't supported in Azure AD B2C tenants. -- The domain of the email address that's used during MPN account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**NOTE**__: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified) +- The domain of the email address that's used during CPP account verification must either match the publisher domain that's set for the app or be a DNS-verified [custom domain](../fundamentals/add-custom-domain.md) that's added to the Azure AD tenant. (**Note**: the app's publisher domain can't be *.onmicrosoft.com to be publisher verified) -- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the MPN account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center.+- The user who initiates verification must be authorized to make changes both to the app registration in Azure AD and to the CPP account in Partner Center. The user who initiates the verification must have one of the required roles in both Azure AD and Partner Center. - In Azure AD, this user must be a member of one of the following [roles](../roles/permissions-reference.md): Application Admin, Cloud Application Admin, or Global Administrator. - - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): MPN Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD). + - In Partner Center, this user must have one of the following [roles](/partner-center/permissions-overview): CPP Partner Admin, Account Admin, or Global Administrator (a shared role that's mastered in Azure AD). 
- The user who initiates verification must sign in by using [Azure AD multifactor authentication](../authentication/howto-mfa-getstarted.md). |
active-directory | Quickstart Configure App Access Web Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-access-web-apis.md | By specifying a web API's scopes in your client app's registration, the client a [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] +Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration. + In the first scenario, you grant a client app access to your own web API, both of which you should have registered as part of the prerequisites. If you don't yet have both a client app and a web API registered, complete the steps in the two [Prerequisites](#prerequisites) articles. This diagram shows how the two app registrations relate to one another. In this section, you add permissions to the client app's registration. |
active-directory | Quickstart Configure App Expose Web Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-configure-app-expose-web-apis.md | In this quickstart, you'll register a web API with the Microsoft identity platfo ## Register the web API +Access to APIs requires configuration of access scopes and roles. If you want to expose your resource application web APIs to client applications, configure access scopes and roles for the API. If you want a client application to access a web API, configure permissions to access the API in the app registration. + To provide scoped access to the resources in your web API, you first need to register the API with the Microsoft identity platform. Perform the steps in the **Register an application** section of [Quickstart: Register an app with the Microsoft identity platform](quickstart-register-app.md). |
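A scope exposed by the web API ultimately appears in its app manifest. As a hedged sketch of one delegated scope entry — property names follow the Microsoft Graph application manifest format, and the GUID, `value`, and display strings are placeholders, not from this article:

```json
"api": {
    "oauth2PermissionScopes": [
        {
            "id": "22222222-2222-2222-2222-222222222222",
            "value": "Employees.Read.All",
            "type": "Admin",
            "isEnabled": true,
            "adminConsentDisplayName": "Read all employee records",
            "adminConsentDescription": "Allows the app to read employee records on behalf of the signed-in user."
        }
    ]
}
```

Client applications then request this scope by its full URI, for example `api://{clientId}/Employees.Read.All`, and the `type` value determines whether users can consent on their own or an admin must consent.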
active-directory | Quickstart Daemon App Java Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-daemon-app-java-acquire-token.md | You have two options to start your quickstart application: Express (Option 1 bel ### Option 1: Register and auto configure your app and then download your code sample -1. Go to the [Azure portal - App registrations](https://portal.azure.com/?Microsoft_AAD_RegisteredApps=true#blade/Microsoft_AAD_RegisteredApps/applicationsListBlade/quickStartType/JavaDaemonQuickstartPage/sourceType/docs) quickstart experience. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). +1. Browse to **Identity** > **Applications** > **Application registrations**. +1. Select **New registration**. 1. Enter a name for your application and select **Register**. 1. Follow the instructions to download and automatically configure your new application with just one click. You have two options to start your quickstart application: Express (Option 1 bel To register your application and add the app's registration information to your solution manually, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.-1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations** > **New registration**. +1. Browse to **Identity** > **Applications** > **Application registrations**. +1. 
Select **New registration**. 1. Enter a **Name** for your application, for example `Daemon-console`. Users of your app might see this name, and you can change it later. 1. Select **Register**. 1. Under **Manage**, select **Certificates & secrets**. |
active-directory | Reference V2 Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-v2-libraries.md | For more information about the Microsoft Authentication Library, see the [Overvi <!--Reference-style links --> [AAD-App-Model-V2-Overview]: v2-overview.md [Microsoft-SDL]: https://www.microsoft.com/securityengineering/sdl/-[preview-tos]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ +[preview-tos]: https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all |
active-directory | Registration Config How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-how-to.md | - Title: Get the endpoints for an Azure AD app registration -description: How to find the authentication endpoints for a custom application you're developing or registering with Azure AD. --------- Previously updated : 11/09/2022-----# How to discover endpoints --You can find the authentication endpoints for your application in the [Azure portal](https://portal.azure.com). --1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. -1. Select **Azure Active Directory**. -1. Under **Manage**, select **App registrations**, and then select **Endpoints** in the top menu. -- The **Endpoints** page is displayed, showing the authentication endpoints for your tenant. - - Use the endpoint that matches the authentication protocol you're using in conjunction with the **Application (client) ID** to craft the authentication request specific to your application. --**National clouds** (for example Azure AD China, Germany, and US Government) have their own app registration portal and Azure AD authentication endpoints. Learn more in the [National clouds overview](authentication-national-cloud.md). --## Next steps --For more information about endpoints in the different Azure environments, see the [National clouds overview](authentication-national-cloud.md). |
active-directory | Registration Config Specific Application Property How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-specific-application-property-how-to.md | - Title: Azure portal registration fields for custom-developed apps -description: Guidance for registering a custom developed application with Azure AD --------- Previously updated : 09/27/2021-----# Azure portal registration fields for custom-developed apps --This article gives you a brief description of all the available fields in the application registration form in the [Azure portal](https://portal.azure.com). --## Register a new application --- To register a new application, sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.--- From the left navigation pane, click **Azure Active Directory.**--- Choose **App registrations** and click **Add**.--- This open up the application registration form.--## Fields in the application registration form --| Field | Description | -||| -| Name | The name of the application. It should have a minimum of four characters. | -| Supported account types| Select which accounts you would like your application to support: accounts in this organizational directory only, accounts in any organizational directory, or accounts in any organizational directory and personal Microsoft accounts. | -| Redirect URI (optional) | Select the type of app you're building, **Web** or **Public client (mobile & desktop)**, and then enter the redirect URI (or reply URL) for your application. For web applications, provide the base URL of your app. For example, http://localhost:31544 might be the URL for a web app running on your local machine. Users would use this URL to sign in to a web client application. For public client applications, provide the URI used by Azure AD to return token responses. Enter a value specific to your application, such as myapp://auth. 
To see specific examples for web applications or native applications, check out our [quickstarts](./index.yml).| --Once you have filled the above fields, the application is registered in the Azure portal, and you are redirected to the application overview page. The settings pages in the left pane under **Manage** have more fields for you to customize your application. The tables below describe all the fields. You would only see a subset of these fields, depending on whether you created a web application or a public client application. --### Overview --| Field | Description | -|--|--| -| Application ID | When you register an application, Azure AD assigns your application an Application ID. The application ID can be used to uniquely identify your application in authentication requests to Azure AD, as well as to access resources like the Graph API. | -| App ID URI | This should be a unique URI, usually of the form **https://<tenant\_name>/<application\_name>.** This is used during the authorization grant flow, as a unique identifier to specify the resource that the token should be issued for. It also becomes the 'aud' claim in the issued access token. | --### Branding --| Field | Description | -|--|--| -| Upload new logo | You can use this to upload a logo for your application. The logo must be in .bmp, .jpg or .png format, and the file size should be less than 100 KB. The dimensions for the image should be 215x215 pixels, with central image dimensions of 94x94 pixels.| -| Home page URL | This is the sign-on URL specified during application registration.| --### Authentication --| Field | Description | -|--|--| -| Front-channel logout URL | This is the single sign-out logout URL. Azure AD sends a logout request to this URL when the user clears their session with Azure AD using any other registered application.| -| Supported account types | This switch specifies whether the application can be used by multiple tenants. 
Typically, this means that external organizations can use your application by registering it in their tenant and granting access to their organization's data.| -| Redirect URLs | The redirect, or reply, URLs are the endpoints where Azure AD returns any tokens that your application requests. For native applications, this is where the user is sent after successful authorization. Azure AD checks that the redirect URI your application supplies in the OAuth 2.0 request matches one of the registered values in the portal.| --### Certificates and secrets --| Field | Description | -|--|--| -| Client secrets | You can create client secrets, or keys, to programmatically access web APIs secured by Azure AD without any user interaction. From the **New client secret** page, enter a key description and the expiration date and save to generate the key. Make sure to save it somewhere secure, as you won't be able to access it later. | --## Next steps --[Managing Applications with Azure Active Directory](../manage-apps/what-is-application-management.md) |
active-directory | Registration Config Sso How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/registration-config-sso-how-to.md | - Title: Configure application single sign-on -description: How to configure single sign-on for a custom application you are developing and registering with Azure AD. --------- Previously updated : 07/15/2019-----# How to configure single sign-on for an application --Federated single sign-on (SSO) is enabled automatically when your app federates through Azure AD for OpenID Connect, SAML 2.0, or WS-Fed. If your end users are having to sign in despite already having an existing session with Azure AD, it's likely your app is misconfigured. --* If you're using Microsoft Authentication Library (MSAL), make sure you have **PromptBehavior** set to **Auto** rather than **Always**. --* If you're building a mobile app, you may need additional configurations to enable brokered or non-brokered SSO. --For Android, see [Enabling Cross App SSO in Android](msal-android-single-sign-on.md). --For iOS, see [Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md). --## Next steps --[Azure AD SSO](../manage-apps/what-is-single-sign-on.md)<br> --[Enabling Cross App SSO in Android](msal-android-single-sign-on.md)<br> --[Enabling Cross App SSO in iOS](single-sign-on-macos-ios.md)<br> --[Integrating Apps to AzureAD](./quickstart-register-app.md)<br> --[Permissions and consent in the Microsoft identity platform](./permissions-consent-overview.md)<br> --[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html) |
active-directory | Reply Url | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reply-url.md | This table shows the maximum number of redirect URIs you can add to an app regis | Microsoft work or school accounts in any organization's Azure Active Directory (Azure AD) tenant | 256 | `signInAudience` field in the application manifest is set to either *AzureADMyOrg* or *AzureADMultipleOrgs* | | Personal Microsoft accounts and work and school accounts | 100 | `signInAudience` field in the application manifest is set to *AzureADandPersonalMicrosoftAccount* | -The maximum number of redirect URIS can't be raised for [security reasons](#restrictions-on-wildcards-in-redirect-uris). If your scenario requires more redirect URIs than the maximum limit allowed, consider the following [state parameter approach](#use-a-state-parameter) as the solution. +The maximum number of redirect URIs can't be raised for [security reasons](#restrictions-on-wildcards-in-redirect-uris). If your scenario requires more redirect URIs than the maximum limit allowed, consider the following [state parameter approach](#use-a-state-parameter) as the solution. ## Maximum URI length |
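The state parameter approach the reply-url article points to typically means registering a single redirect URI and encoding the real post-login destination in the OAuth `state` value, signed so it can't be tampered with and validated against an allow-list on return. A hypothetical sketch under those assumptions (the signing key and host list are illustrative, not from the article):

```python
import base64, hashlib, hmac, json, secrets

SECRET = b"replace-with-app-signing-key"   # hypothetical key; store securely in practice
ALLOWED_HOSTS = {"app1.contoso.com", "app2.contoso.com"}   # illustrative allow-list

def make_state(destination: str) -> str:
    """Pack the post-login destination into the OAuth state parameter, HMAC-signed."""
    payload = json.dumps({"dest": destination, "nonce": secrets.token_hex(8)}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def read_state(state: str) -> str:
    """Verify the signature, then return the destination only if its host is allow-listed."""
    encoded, sig = state.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(encoded)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("state parameter was tampered with")
    dest = json.loads(payload)["dest"]
    if dest.split("/")[2] not in ALLOWED_HOSTS:
        raise ValueError("destination host not allowed")
    return dest
```

The allow-list check matters: without it, an open-redirect vulnerability replaces the redirect URI validation that Azure AD would otherwise perform.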
active-directory | Saml Claims Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/saml-claims-customization.md | By default, the Microsoft identity platform issues a SAML token to an applicatio ## View or edit claims -To view or edit the claims issued in the SAML token to the application, open the application in Azure portal. Then open the **Attributes & Claims** section. -+To view or edit the claims issued in the SAML token to the application: +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **All applications**. +1. Select the application, select **Single sign-on** in the left-hand menu, and then select **Edit** in the **Attributes & Claims** section. You might need to edit the claims issued in the SAML token for the following reasons: To edit the name identifier value claim: 1. Open the **Name identifier value** page. 1. Select the attribute or transformation that you want to apply to the attribute. Optionally, you can specify the format that you want the `nameID` claim to have. - :::image type="content" source="./media/saml-claims-customization/saml-sso-manage-user-claims.png" alt-text="Screenshot of editing the nameID (name identifier) value in the Azure portal."::: - ### NameID format If the SAML request contains the element `NameIDPolicy` with a specific format, then the Microsoft identity platform honors the format in the request. For more information about identifier values, see the table that lists the valid Any constant (static) value can be assigned to any claim. Use the following steps to assign a constant value: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. In the **User Attributes & Claims** section, select **Edit** to edit the claims. -1. 
Select the required claim that you want to modify. -1. Enter the constant value without quotes in the **Source attribute** as per your organization and select **Save**. -- :::image type="content" source="./media/saml-claims-customization/organization-attribute.png" alt-text="Screenshot of the organization Attributes & Claims section in the Azure portal."::: --1. The constant value is displayed as shown in the following image. -- :::image type="content" source="./media/saml-claims-customization/edit-attributes-claims.png" alt-text="Screenshot of editing in the Attributes & Claims section in the Azure portal."::: +1. On the **Attributes & Claims** blade, select the required claim that you want to modify. +1. Enter the constant value without quotes in the **Source attribute** as per your organization and select **Save**. The constant value is displayed. ### Directory Schema extensions (Preview) You can also configure directory schema extension attributes as non-conditional/conditional attributes. Use the following steps to configure the single or multi-valued directory schema extension attribute as a claim: -1. Sign in to the [Azure portal](https://portal.azure.com). --1. In the **User Attributes & Claims** section, select **Edit** to edit the claims. -1. Select **Add new claim** or edit an existing claim. -- :::image type="content" source="./media/saml-claims-customization/mv-extension-1.jpg" alt-text="Screenshot of the MultiValue extension configuration section in the Azure portal."::: -+1. On the **Attributes & Claims** blade, select **Add new claim** or edit an existing claim. 1. Select source application from application picker where extension property is defined. - :::image type="content" source="./media/saml-claims-customization/mv-extension-2.jpg" alt-text="Screenshot of the source application selection in MultiValue extension configuration section in the Azure portal."::: - 1. Select **Add** to add the selection to the claims. 1. 
Click **Save** to commit the changes. You can use the following special claims transformation functions. To add application-specific claims: -1. In **User Attributes & Claims**, select **Add new claim** to open the **Manage user claims** page. +1. On the **Attributes & Claims** blade, select **Add new claim** to open the **Manage user claims** page. 1. Enter the **name** of the claims. The value doesn't strictly need to follow a URI pattern, per the SAML spec. If you need a URI pattern, you can put that in the **Namespace** field. 1. Select the **Source** where the claim is going to retrieve its value. You can select a user attribute from the source attribute dropdown or apply a transformation to the user attribute before emitting it as a claim. To apply a transformation to a user attribute: 1. In **Manage claim**, select *Transformation* as the claim source to open the **Manage transformation** page. 1. Select the function from the transformation dropdown. Depending on the function selected, provide parameters and a constant value to evaluate in the transformation. 1. Select the source of the attribute by clicking on the appropriate radio button. Directory schema extension source is in preview currently.-- :::image type="content" source="./media/saml-claims-customization/mv-extension-4.png" alt-text="Screenshot of claims transformation."::: - 1. Select the attribute name from the dropdown.- 1. **Treat source as multivalued** is a checkbox indicating whether the transform should be applied to all values or just the first. By default, transformations are only applied to the first element in a multi-value claim; checking this box ensures it's applied to all. This checkbox is only enabled for multi-valued attributes, for example `user.proxyaddresses`.- 1. To apply multiple transformations, select **Add transformation**. You can apply a maximum of two transformations to a claim. For example, you could first extract the email prefix of the `user.mail`. 
Then, make the string upper case. For example, Britta Simon is a guest user in the Contoso tenant. Britta belongs First, the Microsoft identity platform verifies whether Britta's user type is **All guests**. Because the type is **All guests**, the Microsoft identity platform assigns the source for the claim to `user.extensionattribute1`. Second, the Microsoft identity platform verifies whether Britta's user type is **AAD guests**. Because the type is **AAD guests**, the Microsoft identity platform assigns the source for the claim to `user.mail`. Finally, the claim is emitted with a value of `user.mail` for Britta. - As another example, consider when Britta Simon tries to sign in and the following configuration is used. All conditions are first evaluated with the source of `Attribute`. Because Britta's user type is **AAD guests**, `user.mail` is assigned as the source for the claim. Next, the transformations are evaluated. Because Britta is a guest, `user.extensionattribute1` is now the new source for the claim. Because Britta is in **AAD guests**, `user.othermail` is now the source for this claim. Finally, the claim is emitted with a value of `user.othermail` for Britta. - As a final example, consider what happens if Britta has no `user.othermail` configured or it's empty. In both cases the condition entry is ignored, and the claim falls back to `user.extensionattribute1` instead. ## Advanced SAML claims options |
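The Britta Simon walkthroughs above all follow one rule: conditions are evaluated top to bottom, and the last matching condition determines the claim source. A toy sketch of that evaluation order (the type-matching table is an illustrative simplification, not the platform's implementation, and it ignores the empty-value fallback described in the final example):

```python
def resolve_claim_source(user_type, conditions):
    """Evaluate conditions in order; the last condition matching the user's type wins.

    conditions is a list of (condition_user_type, claim_source) pairs.
    """
    # Simplified assumption: an "AAD guests" user also satisfies the broader "All guests".
    matches = {"AAD guests": {"AAD guests", "All guests"}}
    source = None
    for condition_type, claim_source in conditions:
        if condition_type in matches.get(user_type, {user_type}):
            source = claim_source   # later matches override earlier ones
    return source

# Britta is an AAD guest: "All guests" matches first, then "AAD guests" overrides it.
conditions = [("All guests", "user.extensionattribute1"), ("AAD guests", "user.mail")]
print(resolve_claim_source("AAD guests", conditions))  # -> user.mail
```

This mirrors why the first walkthrough ends with `user.mail`: both conditions match Britta, and the more specific one is listed later.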
active-directory | Scenario Web App Call Api Acquire Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-call-api-acquire-token.md | These advanced steps are covered in chapter 3 of the [3-WebApp-multi-APIs](https The code for ASP.NET is similar to the code shown for ASP.NET Core: -- A controller action, protected by an [Authorize] attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller. (ASP.NET uses `HttpContext.User`.)-*Microsoft.Identity.Web* adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token. +- A controller action, protected by an `[Authorize]` attribute, extracts the tenant ID and user ID of the `ClaimsPrincipal` member of the controller (ASP.NET uses `HttpContext.User`). This ensures that only authenticated users can use the app. +**Microsoft.Identity.Web** adds extension methods to the Controller that provide convenience services to call Microsoft Graph or a downstream web API, or to get an authorization header, or even a token. The methods used to call an API directly are explained in detail in [A web app that calls web APIs: Call an API](scenario-web-app-call-api-call-api.md). With these helper methods, you don't need to manually acquire a token. -If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use *Microsoft.Identity.Web* to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK. 
+If, however, you do want to manually acquire a token or build an authorization header, the following code shows how to use Microsoft.Identity.Web to do so in a controller. It calls an API (Microsoft Graph) using the REST API instead of the Microsoft Graph SDK. To get an authorization header, you get an `IAuthorizationHeaderProvider` service from the controller using an extension method `GetAuthorizationHeaderProvider`. To get an authorization header to call an API on behalf of the user, use `CreateAuthorizationHeaderForUserAsync`. To get an authorization header to call a downstream API on behalf of the application itself, in a daemon scenario, use `CreateAuthorizationHeaderForAppAsync`. -The controller methods are protected by an `[Authorize]` attribute that ensures only authenticated users can use the web app. -- The following snippet shows the action of the `HomeController`, which gets an authorization header to call Microsoft Graph as a REST API: - ```csharp [Authorize] public class HomeController : Controller public class HomeController : Controller # [Java](#tab/java) -In the Java sample, the code that calls an API is in the getUsersFromGraph method in [AuthPageController.java#L62](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthPageController.java#L62). +In the Java sample, the code that calls an API is in the `getUsersFromGraph` method in [AuthPageController.java#L62](https://github.com/Azure-Samples/ms-identity-java-webapp/blob/d55ee4ac0ce2c43378f2c99fd6e6856d41bdf144/src/main/java/com/microsoft/azure/msalwebsample/AuthPageController.java#L62). The method attempts to call `getAuthResultBySilentFlow`. If the user needs to consent to more scopes, the code processes the `MsalInteractionRequiredException` object to challenge the user. 
public ModelAndView getUserFromGraph(HttpServletRequest httpRequest, HttpServlet # [Node.js](#tab/nodejs) -In the Node.js sample, the code that acquires a token is in the *acquireToken* method of the **AuthProvider** class. +In the Node.js sample, the code that acquires a token is in the `acquireToken` method of the `AuthProvider` class. :::code language="js" source="~/ms-identity-node/App/auth/AuthProvider.js" range="79-121"::: This access token is then used to handle requests to the `/profile` endpoint: # [Python](#tab/python) -In the Python sample, the code that calls the API is in `app.py`. +In the Python sample, the code that calls the API is in *app.py*. The code attempts to get a token from the token cache. If it can't get a token, it redirects the user to the sign-in route. Otherwise, it can proceed to call the API. Move on to the next article in this scenario, Move on to the next article in this scenario, [Call a web API](scenario-web-app-call-api-call-api.md?tabs=python). -+ |
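Across the ASP.NET, Java, Node.js, and Python samples above, the token-acquisition pattern is the same: try the token cache silently first, and only send the user through interactive sign-in when that fails (MSAL Python exposes the silent attempt as `acquire_token_silent`). A library-free sketch of that pattern, not the samples' actual code:

```python
import time

class TokenCache:
    """Minimal stand-in for a per-user token cache, for illustration only."""
    def __init__(self):
        self._tokens = {}  # account -> (access_token, expiry timestamp)

    def put(self, account, token, lifetime_seconds):
        self._tokens[account] = (token, time.time() + lifetime_seconds)

    def get(self, account):
        entry = self._tokens.get(account)
        if entry and entry[1] > time.time():
            return entry[0]   # cached token still valid
        return None

def handle_request(cache, account):
    """Return a cached token, or signal that the user must be sent to sign-in."""
    token = cache.get(account)
    if token is None:
        return ("redirect", "/login")   # no usable token: interactive sign-in needed
    return ("call_api", token)
```

In the real samples, the cache lookup, token refresh, and interactive challenge are all handled by the auth library; this only shows the control flow the surrounding prose describes.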
active-directory | Setup Multi Tenant App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/setup-multi-tenant-app.md | - Title: Configure a new multi-tenant application -description: Learn how to configure an application as multi-tenant, and how multi-tenant applications work --------- Previously updated : 11/10/2022-----# How to configure a new multi-tenant application --Here is a list of recommended topics to learn more about multi-tenant applications: --- Get a general understanding of [what it means to be a multi-tenant application](./developer-glossary.md#multi-tenant-application)-- Learn about [tenancy in Azure Active Directory](single-and-multi-tenant-apps.md)-- Get a general understanding of [how to configure an application to be multi-tenant](./howto-convert-app-to-be-multi-tenant.md)-- Get a step-by-step overview of [how the Azure AD consent framework is used to implement consent](./quickstart-register-app.md), which is required for multi-tenant applications-- For more depth, learn [how a multi-tenant application is configured and coded end-to-end](./howto-convert-app-to-be-multi-tenant.md), including how to register, use the "common" endpoint, implement "user" and "admin" consent, how to implement more advanced multi-tier scenarios--## Next steps -[AzureAD Microsoft Q&A](/answers/topics/azure-active-directory.html) |
active-directory | Single Sign On Saml Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-sign-on-saml-protocol.md | Title: Azure single sign-on SAML protocol + Title: Single sign-on SAML protocol description: This article describes the single sign-on (SSO) SAML protocol in Azure Active Directory documentationcenter: .net To request a user authentication, cloud services send an `AuthnRequest` element | Parameter | Type | Description | | | | |-| ID | Required | Azure AD uses this attribute to populate the `InResponseTo` attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, `id6c1c178c166d486687be4aaf5e482730` is a valid ID. | -| Version | Required | This parameter should be set to **2.0**. | -| IssueInstant | Required | This is a DateTime string with a UTC value and [round-trip format ("o")](/dotnet/standard/base-types/standard-date-and-time-format-strings). Azure AD expects a DateTime value of this type, but doesn't evaluate or use the value. | -| AssertionConsumerServiceURL | Optional | If provided, this parameter must match the `RedirectUri` of the cloud service in Azure AD. | -| ForceAuthn | Optional | This is a boolean value. If true, it means that the user will be forced to re-authenticate, even if they have a valid session with Azure AD. | -| IsPassive | Optional | This is a boolean value that specifies whether Azure AD should authenticate the user silently, without user interaction, using the session cookie if one exists. If this is true, Azure AD will attempt to authenticate the user using the session cookie. | --All other `AuthnRequest` attributes, such as Consent, Destination, AssertionConsumerServiceIndex, AttributeConsumerServiceIndex, and ProviderName are **ignored**. 
+| `ID` | Required | Azure AD uses this attribute to populate the `InResponseTo` attribute of the returned response. ID must not begin with a number, so a common strategy is to prepend a string like "ID" to the string representation of a GUID. For example, `id6c1c178c166d486687be4aaf5e482730` is a valid ID. | +| `Version` | Required | This parameter should be set to `2.0`. | +| `IssueInstant` | Required | This is a DateTime string with a UTC value and [round-trip format ("o")](/dotnet/standard/base-types/standard-date-and-time-format-strings). Azure AD expects a DateTime value of this type, but doesn't evaluate or use the value. | +| `AssertionConsumerServiceURL` | Optional | If provided, this parameter must match the `RedirectUri` of the cloud service in Azure AD. | +| `ForceAuthn` | Optional | This is a boolean value. If true, it means that the user will be forced to re-authenticate, even if they have a valid session with Azure AD. | +| `IsPassive` | Optional | This is a boolean value that specifies whether Azure AD should authenticate the user silently, without user interaction, using the session cookie if one exists. If this is true, Azure AD will attempt to authenticate the user using the session cookie. | ++All other `AuthnRequest` attributes, such as `Consent`, `Destination`, `AssertionConsumerServiceIndex`, `AttributeConsumerServiceIndex`, and `ProviderName` are **ignored**. Azure AD also ignores the `Conditions` element in `AuthnRequest`. |
active-directory | Supported Accounts Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/supported-accounts-validation.md | See the following table for the validation differences of various properties for | Certificates (`keyCredentials`) | Symmetric signing key | Symmetric signing key | Encryption and asymmetric signing key | | Client secrets (`passwordCredentials`) | No limit\* | No limit\* | If liveSDK is enabled: Maximum of two client secrets | | Redirect URIs (`replyURLs`) | See [Redirect URI/reply URL restrictions and limitations](reply-url.md) for more info. | | |-| API permissions (`requiredResourceAccess`) | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | No more than 50 APIs (resource apps) from the same tenant as the application, no more than 10 APIs from other tenants, and no more than 400 permissions total across all APIs. | Maximum of 50 resources per application and 30 permissions per resource (for example, Microsoft Graph). Total limit of 200 per application (resources x permissions). | +| API permissions (`requiredResourceAccess`) | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 400 permissions total across all APIs. | No more than 50 total APIs (resource apps), with no more than 10 APIs from other tenants. No more than 200 permissions total across all APIs. Maximum of 30 permissions per resource (for example, Microsoft Graph). 
| | Scopes defined by this API (`oauth2Permissions`) | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 120 characters <br><br> No limit\* on the number of scopes defined | Maximum scope name length of 40 characters <br><br> Maximum of 100 scopes defined | | Authorized client applications (`preAuthorizedApplications`) | No limit\* | No limit\* | Total maximum of 500 <br><br> Maximum of 100 client apps defined <br><br> Maximum of 30 scopes defined per client | | appRoles | Supported <br> No limit\* | Supported <br> No limit\* | Not supported | |
active-directory | Test Setup Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/test-setup-environment.md | You can [manually create a tenant](quickstart-create-new-tenant.md), which will For convenience, you may want to invite yourself and other members of your development team to be guest users in the tenant. This will create separate guest objects in the test tenant, but means you only have to manage one set of credentials for your corporate account and your test account. -1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. -2. Go to **Users**. -3. Click on **New guest user** and invite your work account email address. -4. Repeat for other members of the development and/or testing team for your application. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Invite external user** and invite your work account email address. +1. Repeat for other members of the development and/or testing team for your application. You can also create test users in your test tenant. If you used one of the Microsoft 365 sample packs, you may already have some test users in your tenant. If not, you should be able to create some yourself as the tenant administrator. -1. Sign in to the [Azure portal](https://portal.azure.com), then select on **Azure Active Directory**. -2. Go to **Users**. -3. Click **New user** and create some new test users in your directory. +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user** and create some new test users in your directory. 
### Get an Azure AD subscription (optional) Replicating Conditional Access policies ensures you don't encounter unexpected b Viewing your production tenant Conditional Access policies may need to be performed by a company administrator. -1. Sign in to the [Azure portal](https://portal.azure.com) using your production tenant account. 1. Go to **Azure Active Directory** > **Enterprise applications** > **Conditional Access**. 1. View the list of policies in your tenant. Click the first one. 1. Navigate to **Cloud apps or actions**. Viewing your production tenant Conditional Access policies may need to be perfor In a new tab or browser session, sign in to the [Azure portal](https://portal.azure.com) to access your test tenant. -1. Go to **Azure Active Directory** > **Enterprise applications** > **Conditional Access**. -1. Click on **New policy** +1. Browse to **Identity** > **Applications** > **Enterprise applications** > **Conditional Access**. +1. Select **Create new policy** 1. Copy the settings from the production tenant policy, identified through the previous steps. #### Permission grant policies Replicating permission grant policies ensures you don't encounter unexpected prompts for admin consent when moving to production. -1. Sign in to the [Azure portal](https://portal.azure.com) using your production tenant account. -1. Click on **Azure Active Directory**. -1. Go to **Enterprise applications**. -1. From your production tenant, go to **Azure Active Directory** > **Enterprise applications** > **Consent and permissions** > **User consent** settings. Copy the settings there to your test tenant. +Browse to **Identity** > **Applications** > **Enterprise applications** > **Consent and permissions** > **User consent** settings. Copy the settings there to your test tenant. #### Token lifetime policies You'll need to create an app registration to use in your test environment. 
This You'll need to create some test users with associated test data to use while testing your scenarios. This step might need to be performed by an admin. -1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. -2. Go to **Users**. -3. Select **New user** and create some new test users in your directory. +1. Browse to **Identity** > **Users** > **All users**. +1. Select **New user** > **Create new user** and create some new test users in your directory. ### Add the test users to a group (optional) For convenience, you can assign all these users to a group, which makes other assignment operations easier. -1. Sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. -2. Go to **Groups**. -3. Click **New group**. -4. Select either **Security** or **Microsoft 365** for group type. -5. Name your group. -6. Add the test users created in the previous step. +1. Browse to **Identity** > **Groups** > **All groups**. +1. Select **New group**. +1. Select either **Security** or **Microsoft 365** for group type. +1. Name your group. +1. Add the test users created in the previous step. ### Restrict your test application to specific users |
active-directory | Troubleshoot Publisher Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md | -2. Review the instructions to [mark an app as publisher verified](mark-app-as-publisher-verified.md) and ensure all steps have been performed successfully. --3. Review the list of [common issues](#common-issues). --4. Reproduce the request using [Graph Explorer](#making-microsoft-graph-api-calls) to gather more info and rule out any issues in the UI. +1. Review the instructions to [mark an app as publisher verified](mark-app-as-publisher-verified.md) and ensure all steps have been performed successfully. +1. Review the list of [common issues](#common-issues). +1. Reproduce the request using [Graph Explorer](#making-microsoft-graph-api-calls) to gather more info and rule out any issues in the UI. ## Common Issues Below are some common issues that may occur during the process. -- **I donΓÇÖt know my Microsoft Partner Network ID (MPN ID) or I donΓÇÖt know who the primary contact for the account is.** - 1. Navigate to the [MPN enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new). - 2. Sign in with a user account in the org's primary Azure AD tenant. - 3. If an MPN account already exists, this is recognized and you are added to the account. - 4. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the MPN ID and primary account contact will be listed. +- **I donΓÇÖt know my Cloud Partner Program ID (Partner One ID) or I donΓÇÖt know who the primary contact for the account is.** + 1. Navigate to the [Cloud Partner Program enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new). + 1. Sign in with a user account in the org's primary Azure AD tenant. + 1. 
If a Cloud Partner Program account already exists, this is recognized and you are added to the account. + 1. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the Partner One ID and primary account contact will be listed. - **I don't know who my Azure AD Global Administrator (also known as company admin or tenant admin) is, how do I find them? What about the Application Administrator or Cloud Application Administrator?**- 1. Sign in to the [Azure portal](https://portal.azure.com) using a user account in your organization's primary tenant. - 1. Browse to **Azure Active Directory** > [Roles and administrators](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators). - 3. Select the desired admin role. - 4. The list of users assigned that role will be displayed. + 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). + 1. Browse to **Identity** > **Roles & admins** > **Roles & admins**. + 1. Select the desired admin role. + 1. The list of users assigned that role will be displayed. -- **I don't know who the admin(s) for my MPN account are**- Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles. +- **I don't know who the admin(s) for my CPP account are** + Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and filter the user list to see what users are in various admin roles. -- **I am getting an error saying that my MPN ID is invalid or that I do not have access to it.**+- **I am getting an error saying that my Partner One ID is invalid or that I do not have access to it.** Follow the [remediation guidance](#mpnaccountnotfoundornoaccess). 
- **When I sign in to the Azure portal, I do not see any apps registered. Why?** Response 204 No Content ``` > [!NOTE]-> *verifiedPublisherID* is your MPN ID. +> *verifiedPublisherID* is your Partner One ID. ### Unset Verified Publisher The following is a list of the potential error codes you may receive, either whe ### MPNAccountNotFoundOrNoAccess -The MPN ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid Partner One ID and try again. -Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Partner Center- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the MPN account, or an invalid MPN ID. +Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Partner Center- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the CPP account, or an invalid Partner One ID. **Remediation Steps** 1. Go to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that: - - The MPN ID is correct. - - There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success". -2. 
Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing in with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account. -3. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions). + - The Partner One ID is correct. + - There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success". +1. Go to the [CPP tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing in with a user account from is on the list of associated tenants. To add another tenant, follow the [multi-tenant-account instructions](/partner-center/multi-tenant-account). All Global Admins of any tenant you add will be granted Global Administrator privileges on your Partner Center account. +1. Go to the [CPP User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Administrator, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions for [creating user accounts and setting permissions](/partner-center/create-user-accounts-and-set-permissions). ### MPNGlobalAccountNotFound -The MPN ID you provided (`MPNID`) isn't valid. 
Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused when an MPN ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details. +Most commonly caused when a Partner One ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details. **Remediation Steps**-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab -2. Use the Partner ID with type PartnerGlobal +1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**. +1. Use the Partner ID with type PartnerGlobal. ### MPNAccountInvalid -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused by the wrong MPN ID being provided. +Most commonly caused by the wrong Partner One ID being provided. **Remediation Steps**-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab -2. Use the Partner ID with type PartnerGlobal +1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**. +1. Use the Partner ID with type PartnerGlobal. 
### MPNAccountNotVetted -The MPN ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again. +The Partner One ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again. -Most commonly caused by when the MPN account hasn't completed the [verification](/partner-center/verification-responses) process. +Most commonly caused when the CPP account hasn't completed the [verification](/partner-center/verification-responses) process. **Remediation Steps** 1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that there are no errors or **pending actions** shown, and that the verification status under Legal business profile and Partner info both say **authorized** or **success**.-2. If not, view pending action items in Partner Center and troubleshoot with [here](/partner-center/verification-responses) +1. If not, view pending action items in Partner Center and troubleshoot using the guidance [here](/partner-center/verification-responses). ### NoPublisherIdOnAssociatedMPNAccount -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused by the wrong MPN ID being provided. +Most commonly caused by the wrong Partner One ID being provided. **Remediation Steps**-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab -2. Use the Partner ID with type PartnerGlobal +1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**. +1. Use the Partner ID with type PartnerGlobal. 
### MPNIdDoesNotMatchAssociatedMPNAccount -The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again. +The Partner One ID you provided (`MPNID`) isn't valid. Provide a valid Partner One ID and try again. -Most commonly caused by the wrong MPN ID being provided. +Most commonly caused by the wrong Partner One ID being provided. **Remediation Steps**-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab -2. Use the Partner ID with type PartnerGlobal +1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**. +1. Use the Partner ID with type PartnerGlobal. ### ApplicationNotFound -The target application (`AppId`) can't be found. Provide a valid application ID and try again. +The target application (`AppId`) can't be found. Provide a valid application ID and try again. Most commonly caused when verification is being performed via Graph API, and the ID of the application provided is incorrect. **Remediation Steps**-1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application) -2. Log in to [Azure Active Directory](https://aad.portal.azure.com/) with a user account in your organization's primary tenant > Azure Active Directory > App Registrations blade -3. Find your app's registration to view the Object ID +1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. 
Browse to **Identity** > **Applications** > **Application registrations**. +1. Find your app's registration to view the Object ID. ### ApplicationObjectisInvalid The target application's object ID is invalid. Please provide a valid ID and try Most commonly caused when the verification is being performed via Graph API, and the ID of the application provided does not exist. **Remediation Steps**-1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application) -2. Log in to [Azure Active Directory](https://aad.portal.azure.com/) with a user account in your organization's primary tenant > Azure Active Directory > App Registrations blade -3. Find your app's registration to view the Object ID +1. The Object ID of the application must be provided, not the AppId/ClientId. See **id** on the list of application properties [here](/graph/api/resources/application). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Application registrations**. +1. Find your app's registration to view the Object ID. ### B2CTenantNotAllowed The target application (`AppId`) must have a Publisher Domain set. Set a Publish Occurs when a [Publisher Domain](howto-configure-publisher-domain.md) isn't configured on the app. **Remediation Steps**-1. Follow the directions [here](./howto-configure-publisher-domain.md#set-a-publisher-domain-in-the-azure-portal) to set a Publisher Domain +Follow the directions [here](./howto-configure-publisher-domain.md#set-a-publisher-domain-in-the-azure-portal) to set a Publisher Domain. ### PublisherDomainMismatch See [requirements](publisher-verification-overview.md) for a list of allowed dom **Remediation Steps** 1. 
Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile), and view the email listed as Primary Contact-2. The domain used to perform email verification in Partner Center is the portion after the "@" in the Primary Contact's email -3. Log in to [Azure Active Directory](https://aad.portal.azure.com/) > Azure Active Directory > App Registrations blade > (`Your App`) > Branding and Properties -4. Select **Update Publisher Domain** and follow the instructions to **Verify a New Domain**. -5. Add the domain used to perform email verification in Partner Center as a New Domain +1. The domain used to perform email verification in Partner Center is the portion after the "@" in the Primary Contact's email. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Application registrations** > **Branding and Properties**. +1. Select **Update Publisher Domain** and follow the instructions to **Verify a New Domain**. +1. Add the domain used to perform email verification in Partner Center as a New Domain. ### NotAuthorizedToVerifyPublisher You aren't authorized to set the verified publisher property on application (<`AppId`). -Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Azure AD- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. +Most commonly caused by the signed-in user not being a member of the proper role for the CPP account in Azure AD- see [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. **Remediation Steps**-1. 
Sign in to the [Azure AD Portal](https://aad.portal.azure.com) using a user account in your organization's primary tenant. -2. Navigate to [Role Management](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators). -3. Select the desired admin role and click "Add Assignment" if you have sufficient permissions. -4. If you do not have sufficient permissions, contact an admin role for assistance +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Roles & admins** > **Roles & admins**. +1. Select the desired admin role and select **Add Assignment** if you have sufficient permissions. +1. If you do not have sufficient permissions, contact someone in an admin role for assistance. ### MPNIdWasNotProvided -The MPN ID wasn't provided in the request body or the request content type wasn't "application/json". +The Partner One ID wasn't provided in the request body or the request content type wasn't "application/json". -Most commonly caused when the verification is being performed via Graph API, and the MPN ID wasn't provided in the request. +Most commonly caused when the verification is being performed via Graph API, and the Partner One ID wasn't provided in the request. **Remediation Steps**-1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > Identifiers blade > Microsoft Cloud Partners Program Tab -2. Use the Partner ID with type PartnerGlobal in the request +1. Navigate to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) > **Identifiers blade** > **Microsoft Cloud Partners Program Tab**. +1. Use the Partner ID with type PartnerGlobal in the request. 
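For the Graph-based verification call itself, the two requirements called out for `MPNIdWasNotProvided` — a JSON content type and the Partner One ID in the request body — can be sketched as follows. This is an illustrative sketch: the application object ID and Partner One ID are placeholders, and sending the request still requires a valid access token.

```python
import json

def build_set_verified_publisher_request(app_object_id, partner_one_id):
    """Assemble the setVerifiedPublisher call: the application *object* ID
    (not the AppId/ClientId) goes in the URL, the Partner One ID goes in
    the JSON body, and the content type must be application/json."""
    return {
        "method": "POST",
        "url": (f"https://graph.microsoft.com/v1.0/applications/"
                f"{app_object_id}/setVerifiedPublisher"),
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"verifiedPublisherId": partner_one_id}),
    }

req = build_set_verified_publisher_request(
    "00000000-0000-0000-0000-000000000000",  # placeholder object ID
    "1234567",                               # placeholder Partner One ID
)
```

A successful call returns `204 No Content`, matching the response shown earlier in this article.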
### MSANotSupported The error message displayed will be: "Due to a configuration change made by your **Remediation Steps** 1. Ensure [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) is enabled and **required** for the user you're signing in with and for this scenario-2. Retry Publisher Verification +1. Retry Publisher Verification ### UserUnableToAddPublisher If you've reviewed all of the previous information and are still receiving an er - ObjectId of target application - AppId of target application - TenantId where app is registered-- MPN ID+- Partner One ID - REST request being made - Error code and message being returned |
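A recurring stumbling block in the errors above is passing the AppId/ClientId where the application's object ID is expected. Microsoft Graph can address an application registration by its AppId, which makes it easy to look up the object ID before retrying or filing a support request. A minimal sketch (the GUID below is the sample client ID used elsewhere in these docs, standing in for your own):

```python
def application_by_app_id_url(app_id):
    """Build the Graph URL that addresses an application by its AppId
    (client ID). The returned object's `id` property is the object ID
    that setVerifiedPublisher and support requests expect."""
    return f"https://graph.microsoft.com/v1.0/applications(appId='{app_id}')"

url = application_by_app_id_url("6731de76-14a6-49ae-97bc-6eba6914391e")
```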
active-directory | Tutorial V2 Windows Uwp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-v2-windows-uwp.md | private async Task DisplayMessageAsync(string message) Now, register your application: -1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.-1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations** > **New registration**. +1. Browse to **Identity** > **Applications** > **Application registrations**. +1. Select **New registration**. 1. Enter a **Name** for your application, for example `UWP-App-calling-MSGraph`. Users of your app might see this name, and you can change it later. 1. Under **Supported account types**, select **Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)**. 1. Select **Register**. Now, register your application: Configure authentication for your application: -1. Back in the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>, under **Manage**, select **Authentication** > **Add a platform**, and then select **Mobile and desktop applications**. +1. In the Microsoft Entra admin center, select **Authentication** > **Add a platform**, and then select **Mobile and desktop applications**. 1. In the **Redirect URIs** section, enter `https://login.microsoftonline.com/common/oauth2/nativeclient`. 1. Select **Configure**. 
Configure API permissions for your application: -1. Under **Manage**, select **API permissions** > **Add a permission**. +1. Select **API permissions** > **Add a permission**. 1. Select **Microsoft Graph**. 1. Select **Delegated permissions**, search for *User.Read*, and verify that **User.Read** is selected. 1. If you made any changes, select **Add permissions** to save them. In the current sample, the `WithRedirectUri("https://login.microsoftonline.com/c You can then remove the line of code because it's required only once, to fetch the value. -3. In the app registration portal, add the returned value in **RedirectUri** in the **Authentication** pane. +3. In the Microsoft Entra admin center, add the returned value in **RedirectUri** in the **Authentication** pane. ## Test your code |
active-directory | V2 App Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md | -The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](./v2-protocols.md). This article describes the types of apps that you can build by using Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-scenarios). +The Microsoft identity platform supports authentication for various modern app architectures, all of them based on industry-standard protocols [OAuth 2.0 or OpenID Connect](./v2-protocols.md). This article describes the types of apps that you can build by using the Microsoft identity platform, regardless of your preferred language or platform. The information is designed to help you understand high-level scenarios before you start working with the code in the [application scenarios](authentication-flows-app-scenarios.md#application-types). ## The basics |
active-directory | V2 Oauth Ropc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth-ropc.md | Title: Sign in with resource owner password credentials grant + Title: Microsoft identity platform and OAuth 2.0 Resource Owner Password Credentials description: Support browser-less authentication flows using the resource owner password credential (ROPC) grant. The Microsoft identity platform supports the [OAuth 2.0 Resource Owner Password > [!WARNING] > Microsoft recommends you do _not_ use the ROPC flow. In most scenarios, more secure alternatives are available and recommended. This flow requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows aren't viable. - > [!IMPORTANT] > > * The Microsoft identity platform only supports the ROPC grant within Azure AD tenants, not personal accounts. This means that you must use a tenant-specific endpoint (`https://login.microsoftonline.com/{TenantId_or_Name}`) or the `organizations` endpoint. |
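As the note above says, ROPC only works against a tenant-specific endpoint or the `organizations` endpoint, never `common` or `consumers`. The token request can be sketched as below; the tenant, client ID, and credentials are placeholders, and, per the warning above, ROPC should be avoided whenever a more secure flow is viable.

```python
from urllib.parse import urlencode

def build_ropc_token_request(tenant, client_id, username, password):
    """Build the ROPC token request: grant_type=password, sent as a
    form-encoded body to a tenant-specific v2.0 token endpoint."""
    if tenant in ("common", "consumers"):
        raise ValueError("ROPC requires a tenant ID/name or 'organizations'")
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "grant_type": "password",
        "scope": "user.read openid profile offline_access",
        "username": username,
        "password": password,
    })
    return url, body

url, body = build_ropc_token_request(
    "contoso.onmicrosoft.com",               # placeholder tenant
    "6731de76-14a6-49ae-97bc-6eba6914391e",  # placeholder client ID
    "user@contoso.onmicrosoft.com",          # placeholder credentials
    "placeholder-password",
)
```

The guard clause mirrors the endpoint restriction in the note: a request sent to `common` or `consumers` will be rejected by the platform anyway.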
active-directory | V2 Oauth2 Implicit Grant Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-implicit-grant-flow.md | Title: OAuth 2.0 implicit grant flow - The Microsoft identity platform + Title: Microsoft identity platform and OAuth 2.0 implicit grant flow description: Secure single-page apps using Microsoft identity platform implicit flow. -# Microsoft identity platform and implicit grant flow +# Microsoft identity platform and OAuth 2.0 implicit grant flow The Microsoft identity platform supports the OAuth 2.0 implicit grant flow as described in the [OAuth 2.0 Specification](https://tools.ietf.org/html/rfc6749#section-4.2). The defining characteristic of the implicit grant is that tokens (ID tokens or access tokens) are returned directly from the /authorize endpoint instead of the /token endpoint. This is often used as part of the [authorization code flow](v2-oauth2-auth-code-flow.md), in what is called the "hybrid flow" - retrieving the ID token on the /authorize request along with an authorization code. The following diagram shows what the entire implicit sign-in flow looks like and To initially sign the user into your app, you can send an [OpenID Connect](v2-protocols-oidc.md) authentication request and get an `id_token` from the Microsoft identity platform. > [!IMPORTANT]-> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned: `The provided value for the input parameter 'response_type' is not allowed for this client. 
Expected value is 'code'` +> To successfully request an ID token and/or an access token, the app registration in the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page must have the corresponding implicit grant flow enabled, by selecting **ID tokens** and **access tokens** in the **Implicit grant and hybrid flows** section. If it's not enabled, an `unsupported_response` error will be returned: +> +> `The provided value for the input parameter 'response_type' is not allowed for this client. Expected value is 'code'` ``` // Line breaks for legibility only client_id=6731de76-14a6-49ae-97bc-6eba6914391e | | | | | `tenant` | required |The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](./v2-protocols.md#endpoints).Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.| | `client_id` | required | The Application (client) ID that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |-| `response_type` | required |Must include `id_token` for OpenID Connect sign-in. It may also include the response_type `token`. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the authorize endpoint. If you use the `token` response_type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, user.read on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This id_token+code response is sometimes called the hybrid flow. 
| -| `redirect_uri` | recommended |The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be URL-encoded. | -| `scope` | required |A space-separated list of [scopes](./permissions-consent-overview.md). For OpenID Connect (id_tokens), it must include the scope `openid`, which translates to the "Sign you in" permission in the consent UI. Optionally you may also want to include the `email` and `profile` scopes for gaining access to additional user data. You may also include other scopes in this request for requesting consent to various resources, if an access token is requested. | +| `response_type` | required | Must include `id_token` for OpenID Connect sign-in. It may also include the response type `token`. Using `token` here will allow your app to receive an access token immediately from the authorize endpoint without having to make a second request to the authorize endpoint. If you use the `token` response_type, the `scope` parameter must contain a scope indicating which resource to issue the token for (for example, `user.read` on Microsoft Graph). It can also contain `code` in place of `token` to provide an authorization code, for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). This `id_token`+`code` response is sometimes called the hybrid flow. | +| `redirect_uri` | recommended |The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. | +| `scope` | required |A space-separated list of [scopes](./permissions-consent-overview.md). For OpenID Connect (`id_tokens`), it must include the scope `openid`, which translates to the "Sign you in" permission in the consent UI. 
Optionally you may also want to include the `email` and `profile` scopes for gaining access to additional user data. You may also include other scopes in this request for requesting consent to various resources, if an access token is requested. | | `response_mode` | optional |Specifies the method that should be used to send the resulting token back to your app. Defaults to query for just an access token, but fragment if the request includes an id_token. | | `state` | recommended |A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The state is also used to encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |-| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. | -| `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are 'login', 'none', 'select_account', and 'consent'. `prompt=login` will force the user to enter their credentials on that request, negating single-sign on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. 
`prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. | +| `nonce` | required |A value included in the request, generated by the app, that will be included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. Only required when an id_token is requested. | +| `prompt` | optional |Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `select_account`, and `consent`. `prompt=login` will force the user to enter their credentials on that request, negating single-sign on. `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via SSO, the Microsoft identity platform will return an error. `prompt=select_account` sends the user to an account picker where all of the accounts remembered in the session will appear. `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app. | | `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](./optional-claims.md) from an earlier sign-in. | | `domain_hint` | optional |If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience. This parameter is commonly used for Line of Business apps that operate in a single tenant, where they'll provide a domain name within a given tenant, forwarding the user to the federation provider for that tenant. 
This hint prevents guests from signing in to this application, and limits the use of cloud credentials like FIDO. | code=0.AgAAktYV-sfpYESnQynylW_UKZmH-C9y_G1A | | | | `code` | Included if `response_type` includes `code`. It's an authorization code suitable for use in the [authorization code flow](v2-oauth2-auth-code-flow.md). | | `access_token` |Included if `response_type` includes `token`. The access token that the app requested. The access token shouldn't be decoded or otherwise inspected; it should be treated as an opaque string. |-| `token_type` |Included if `response_type` includes `token`. Will always be `Bearer`. | +| `token_type` |Included if `response_type` includes `token`. This will always be `Bearer`. | | `expires_in`|Included if `response_type` includes `token`. Indicates the number of seconds the token is valid, for caching purposes. | | `scope` |Included if `response_type` includes `token`. Indicates the scope(s) for which the access_token will be valid. May not include all the requested scopes if they weren't applicable to the user. For example, Azure AD-only scopes requested when logging in using a personal account. |-| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_token`. | +| `id_token` | A signed JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about ID tokens, see the [`id_token` reference](id-tokens.md).
<br> **Note:** Only provided if `openid` scope was requested and `response_type` included `id_token`. | | `state` |If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. | [!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)] For details on the query parameters in the URL, see [send the sign in request](# > [!TIP] > Try copying and pasting the request below into a browser tab! (Don't forget to replace the `login_hint` value with the correct value for your user) >->`https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username}` +> ``` +> https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=token&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&scope=https%3A%2F%2Fgraph.microsoft.com%2Fuser.read&response_mode=fragment&state=12345&nonce=678910&prompt=none&login_hint={your-username} +> ``` > > Note that this will work even in browsers without third-party cookie support, since you're entering this directly into a browser bar as opposed to opening it within an iframe. access_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q.. | Parameter | Description | | | | | `access_token` |Included if `response_type` includes `token`. The access token that the app requested, in this case for the Microsoft Graph. The access token shouldn't be decoded or otherwise inspected; it should be treated as an opaque string. |-| `token_type` | Will always be `Bearer`. | +| `token_type` | This will always be `Bearer`.
| | `expires_in` | Indicates the number of seconds the token is valid, for caching purposes. |-| `scope` | Indicates the scope(s) for which the access_token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). | +| `scope` | Indicates the scope(s) for which the access token will be valid. May not include all of the scopes requested, if they weren't applicable to the user (in the case of Azure AD-only scopes being requested when a personal account is used to log in). | | `id_token` | A signed JSON Web Token (JWT). Included if `response_type` includes `id_token`. The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about ID tokens, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. | | `state` |If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. | If you receive this error in the iframe request, the user must interactively sig ## Refreshing tokens -The implicit grant does not provide refresh tokens.
Both ID tokens and access tokens will expire after a short period of time, so your app must be prepared to refresh these tokens periodically. To refresh either type of token, you can perform the same hidden iframe request from above using the `prompt=none` parameter to control the identity platform's behavior. If you want to receive a new ID token, be sure to use `id_token` in the `response_type` and `scope=openid`, as well as a `nonce` parameter. In browsers that do not support third party cookies, this will result in an error indicating that no user is signed in. |
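The implicit-grant request and response handling described above (a random `state` for CSRF protection, a `nonce` for replay protection, `response_mode=fragment`, and verifying that `state` round-trips unchanged) can be sketched in Python. This is a minimal illustration only: the client ID and redirect URI reuse the placeholder values from the doc's own example request, and the simulated reply is fabricated for demonstration. A real app must also validate the `nonce` claim inside any returned ID token.

```python
import secrets
import urllib.parse

AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

def build_implicit_request(client_id, redirect_uri, scope):
    """Build an implicit-grant authorize URL with fresh state and nonce values."""
    state = secrets.token_urlsafe(16)   # anti-CSRF value, echoed back in the response
    nonce = secrets.token_urlsafe(16)   # replay protection, returned as an ID token claim
    params = {
        "client_id": client_id,
        "response_type": "id_token token",
        "redirect_uri": redirect_uri,
        "scope": scope,
        "response_mode": "fragment",
        "state": state,
        "nonce": nonce,
    }
    return AUTHORIZE_ENDPOINT + "?" + urllib.parse.urlencode(params), state

def parse_fragment_response(redirect_url, expected_state):
    """Parse the token response from the URL fragment and verify the state value."""
    fragment = urllib.parse.urlsplit(redirect_url).fragment
    response = dict(urllib.parse.parse_qsl(fragment))
    if response.get("state") != expected_state:
        raise ValueError("state mismatch: possible cross-site request forgery")
    return response

url, state = build_implicit_request(
    "6731de76-14a6-49ae-97bc-6eba6914391e",           # client_id from the doc's example
    "http://localhost/myapp/",
    "openid https://graph.microsoft.com/user.read",
)

# Simulated redirect back from the identity platform (fabricated values):
reply = (
    "http://localhost/myapp/"
    f"#access_token=opaque.token.value&token_type=Bearer&expires_in=3599&state={state}"
)
tokens = parse_fragment_response(reply, state)
print(tokens["token_type"])  # prints: Bearer
```

The same request shape, with `prompt=none` added, is what the hidden-iframe renewal described above sends; only the `state`/`nonce` values change per request.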
active-directory | Web Api Tutorial 01 Register App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-tutorial-01-register-app.md | In this tutorial: To complete registration, provide the application a name and specify the supported account types. Once registered, the application **Overview** page will display the identifiers needed in the application source code. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.-1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations > New registration**. +1. Browse to **Identity** > **Applications** > **App registrations**. +1. Select **New registration**. 1. Enter a **Name** for the application, such as *NewWebAPI1*. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. 1. Select **Register**. |
active-directory | Web App Tutorial 01 Register Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-tutorial-01-register-application.md | In this tutorial: To complete registration, provide the application a name and specify the supported account types. Once registered, the application **Overview** page will display the identifiers needed in the application source code. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Application Developer](../roles/permissions-reference.md#application-developer). 1. If access to multiple tenants is available, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.-1. Search for and select **Azure Active Directory**. -1. Under **Manage**, select **App registrations > New registration**. +1. Browse to **Identity** > **Applications** > **App registrations**. +1. Select **New registration**. 1. Enter a **Name** for the application, such as *NewWebApp1*. 1. For **Supported account types**, select **Accounts in this organizational directory only**. For information on different account types, select the **Help me choose** option. - The **Redirect URI (optional)** will be configured at a later stage. |
active-directory | Assign Local Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md | When you connect a Windows device with Azure AD using an Azure AD join, Azure AD - The Azure AD joined device local administrator role - The user performing the Azure AD join -By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to the global administrators, you can also enable users that have been *only* assigned the device administrator role to manage a device. +By adding Azure AD roles to the local administrators group, you can update the users that can manage a device anytime in Azure AD without modifying anything on the device. Azure AD also adds the Azure AD joined device local administrator role to the local administrators group to support the principle of least privilege (PoLP). In addition to users with the Global Administrator role, you can also enable users that have been *only* assigned the Azure AD Joined Device Local Administrator role to manage a device. 
-## Manage the global administrators role +## Manage the Global Administrator role -To view and update the membership of the Global Administrator role, see: +To view and update the membership of the [Global Administrator](/azure/active-directory/roles/permissions-reference#global-administrator) role, see: - [View all members of an administrator role in Azure Active Directory](../roles/manage-roles-portal.md) - [Assign a user to administrator roles in Azure Active Directory](../fundamentals/how-subscriptions-associated-directory.md) -## Manage the device administrator role +## Manage the Azure AD Joined Device Local Administrator role +You can manage the [Azure AD Joined Device Local Administrator](/azure/active-directory/roles/permissions-reference#azure-ad-joined-device-local-administrator) role from **Device settings**. -In the Azure portal, you can manage the device administrator role from **Device settings**. --1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. -1. Browse to **Azure Active Directory** > **Devices** > **Device settings**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator). +1. Browse to **Identity** > **Devices** > **All devices** > **Device settings**. 1. Select **Manage Additional local administrators on all Azure AD joined devices**. 1. Select **Add assignments** then choose the other administrators you want to add and select **Add**. -To modify the device administrator role, configure **Additional local administrators on all Azure AD joined devices**. +To modify the Azure AD Joined Device Local Administrator role, configure **Additional local administrators on all Azure AD joined devices**. > [!NOTE] > This option requires Azure AD Premium licenses. -Device administrators are assigned to all Azure AD joined devices. 
You can't scope device administrators to a specific set of devices. Updating the device administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed in, the privilege elevation takes place when *both* the below actions happen: +Azure AD Joined Device Local Administrators are assigned to all Azure AD joined devices. You can't scope this role to a specific set of devices. Updating the Azure AD Joined Device Local Administrator role doesn't necessarily have an immediate impact on the affected users. On devices where a user is already signed in, the privilege elevation takes place when *both* the below actions happen: - Up to 4 hours have passed for Azure AD to issue a new Primary Refresh Token with the appropriate privileges. - User signs out and signs back in, not lock/unlock, to refresh their profile. -Users won't be listed in the local administrator group; the permissions are received through the Primary Refresh Token. +Users aren't directly listed in the local administrator group; the permissions are received through the Primary Refresh Token. > [!NOTE] > The above actions are not applicable to users who have not signed in to the relevant device previously. In this case, the administrator privileges are applied immediately after their first sign-in to the device. ## Manage administrator privileges using Azure AD groups (preview) -Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you the granularity to configure distinct administrators for different groups of devices.
+Starting with Windows 10 version 20H2, you can use Azure AD groups to manage administrator privileges on Azure AD joined devices with the [Local Users and Groups](/windows/client-management/mdm/policy-csp-localusersandgroups) MDM policy. This policy allows you to assign individual users or Azure AD groups to the local administrators group on an Azure AD joined device, providing you with the granularity to configure distinct administrators for different groups of devices. Organizations can use Intune to manage these policies using [Custom OMA-URI Settings](/mem/intune/configuration/custom-settings-windows-10) or [Account protection policy](/mem/intune/protect/endpoint-security-account-protection-policy). A few considerations for using this policy: -- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID is defined by the property `securityIdentifier` in the API response.+- Adding Azure AD groups through the policy requires the group's SID that can be obtained by executing the [Microsoft Graph API for Groups](/graph/api/resources/group). The SID equates to the property `securityIdentifier` in the API response. - Administrator privileges using this policy are evaluated only for the following well-known groups on a Windows 10 or newer device - Administrators, Users, Guests, Power Users, Remote Desktop Users and Remote Management Users. 
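The first consideration above requires the group's SID, retrieved from the Microsoft Graph group resource via its `securityIdentifier` property. The sketch below is a hedged illustration of building that request and reading the property from a response body: the group ID and SID values are fabricated placeholders, and actually sending the request requires an access token with appropriate Graph permissions.

```python
import urllib.parse

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def group_sid_request(group_id):
    """Build the Graph request URL that selects a group's securityIdentifier."""
    query = urllib.parse.urlencode({"$select": "id,displayName,securityIdentifier"})
    return f"{GRAPH_BASE}/groups/{group_id}?{query}"

def extract_sid(group_resource):
    """Read the SID from a parsed Graph group response body."""
    return group_resource["securityIdentifier"]

url = group_sid_request("11111111-2222-3333-4444-555555555555")  # hypothetical group ID
print(url)

# Fabricated response shape, matching the property named in the article:
sample = {
    "id": "11111111-2222-3333-4444-555555555555",
    "displayName": "Device admins",
    "securityIdentifier": "S-1-12-1-1111111111-2222222222-3333333333-4444444444",
}
print(extract_sid(sample))
```

The returned SID string is what the Local Users and Groups policy payload expects when targeting an Azure AD group rather than an individual user.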
By default, Azure AD adds the user performing the Azure AD join to the administr - [Windows Autopilot](/windows/deployment/windows-autopilot/windows-10-autopilot) - Windows Autopilot provides you with an option to prevent primary user performing the join from becoming a local administrator by [creating an Autopilot profile](/intune/enrollment-autopilot#create-an-autopilot-deployment-profile).-- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an auto-created user. Users signing in after a device has been joined aren't added to the administrators group. +- [Bulk enrollment](/intune/windows-bulk-enroll) - An Azure AD join that is performed in the context of a bulk enrollment happens in the context of an autocreated user. Users signing in after a device has been joined aren't added to the administrators group. ## Manually elevate a user on a device Additionally, you can also add users using the command prompt: ## Considerations -- You can only assign role based groups to the device administrator role.-- Device administrators are assigned to all Azure AD Joined devices. They can't be scoped to a specific set of devices.+- You can only assign role based groups to the Azure AD Joined Device Local Administrator role. +- The Azure AD Joined Device Local Administrator role is assigned to all Azure AD Joined devices. This role can't be scoped to a specific set of devices. - Local administrator rights on Windows devices aren't applicable to [Azure AD B2B guest users](../external-identities/what-is-b2b.md).-- When you remove users from the device administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. 
This revocation, similar to the privilege elevation, could take up to 4 hours.+- When you remove users from the Azure AD Joined Device Local Administrator role, changes aren't instant. Users still have local administrator privilege on a device as long as they're signed in to it. The privilege is revoked during their next sign-in when a new primary refresh token is issued. This revocation, similar to the privilege elevation, could take up to 4 hours. ## Next steps -- To get an overview of how to manage device in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md).+- To get an overview of how to manage devices, see [managing devices using the Azure portal](manage-device-identities.md). - To learn more about device-based Conditional Access, see [Conditional Access: Require compliant or hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md). |
active-directory | Concept Primary Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-primary-refresh-token.md | The following diagrams illustrate the underlying details in issuing, renewing, a > [!NOTE] > In Azure AD joined devices, Azure AD PRT issuance (steps A-F) happens synchronously before the user can log on to Windows. In hybrid Azure AD joined devices, on-premises Active Directory is the primary authority. So, the user is able to log in to hybrid Azure AD joined Windows after they acquire a TGT, while the PRT issuance happens asynchronously. This scenario does not apply to Azure AD registered devices as logon does not use Azure AD credentials. +> [!NOTE] +> In a Hybrid Azure AD joined Windows environment, the issuance of the PRT occurs asynchronously. The issuance of the PRT may fail due to issues with the federation provider. This failure can result in sign-in issues when users try to access cloud resources. It is important to troubleshoot this scenario with the federation provider. + | Step | Description | | :: | | | A | User enters their password in the sign-in UI. LogonUI passes the credentials in an auth buffer to LSA, which in turn passes it internally to CloudAP. CloudAP forwards this request to the CloudAP plugin. | |
active-directory | Device Join Out Of Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-join-out-of-box.md | Your device may restart several times as part of the setup process. Your device :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-device-sign-in-info.png" alt-text="Screenshot of Windows 11 out-of-box experience showing the sign-in experience."::: 1. Continue to follow the prompts to set up your device. 1. Azure AD checks if an enrollment in mobile device management is required and starts the process.- 1. Windows registers the device in the organization's directory in Azure AD and enrolls it in mobile device management, if applicable. + 1. Windows registers the device in the organization's directory and enrolls it in mobile device management, if applicable. 1. If you sign in with a managed user account, Windows takes you to the desktop through the automatic sign-in process. Federated users are directed to the Windows sign-in screen to enter their credentials. :::image type="content" source="media/device-join-out-of-box/windows-11-first-run-experience-complete-automatic-sign-in-desktop.png" alt-text="Screenshot of Windows 11 at the desktop after first run experience Azure AD joined."::: To verify whether a device is joined to your Azure AD, review the **Access work ## Next steps -- For more information about managing devices in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md).+- For more information about managing devices, see [managing devices using the Azure portal](manage-device-identities.md). - [What is Microsoft Intune?](/mem/intune/fundamentals/what-is-intune) - [Overview of Windows Autopilot](/mem/autopilot/windows-autopilot) - [Passwordless authentication options for Azure Active Directory](../authentication/concept-authentication-passwordless.md) |
active-directory | Enterprise State Roaming Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-enable.md | Enterprise State Roaming provides users with a unified experience across their W ## To enable Enterprise State Roaming --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Devices** > **Enterprise State Roaming**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator). +1. Browse to **Identity** > **Devices** > **Overview** > **Enterprise State Roaming**. 1. Select **Users may sync settings and app data across devices**. For more information, see [how to configure device settings](./manage-device-identities.md). For a Windows 10 or newer device to use the Enterprise State Roaming service, the device must authenticate using an Azure AD identity. For devices that are joined to Azure AD, the user's primary sign-in identity is their Azure AD identity, so no other configuration is required. For devices that use on-premises Active Directory, the IT admin must [Configure hybrid Azure Active Directory joined devices](./hybrid-join-plan.md). The country/region value is set as part of the Azure AD directory creation proce Follow these steps to view a per-user device sync status report. -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a [Global Administrator](../roles/permissions-reference.md#global-administrator). +1. Browse to **Identity** > **Users** > **All users**. 1. Select the user, and then select **Devices**. 1. Select **View devices syncing settings and app data** to show sync status. 1. Devices syncing for the user are shown and can be downloaded. |
active-directory | Enterprise State Roaming Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/enterprise-state-roaming-troubleshooting.md | Enterprise State Roaming requires the device to be registered with Azure AD. Alt **Potential issue**: **WamDefaultSet** and **AzureAdJoined** both have “NO” in the field value, the device was domain-joined and registered with Azure AD, and the device doesn't sync. If it's showing this, the device may need to wait for policy to be applied or the authentication for the device failed when connecting to Azure AD. The user may have to wait a few hours for the policy to be applied. Other troubleshooting steps may include retrying autoregistration by signing out and back in, or launching the task in Task Scheduler. In some cases, running “*dsregcmd.exe /leave*” in an elevated command prompt window, rebooting, and trying registration again may help with this issue. -**Potential issue**: The field for **SettingsUrl** is empty and the device doesn't sync. The user may have last logged in to the device before Enterprise State Roaming was enabled in the Azure portal. Restart the device and have the user log in. Optionally, in the portal, try having the IT Admin navigate to **Azure Active Directory** > **Devices** > **Enterprise State Roaming** to disable and re-enable **Users may sync settings and app data across devices**. Once re-enabled, restart the device and have the user log in. If this doesn't resolve the issue, **SettingsUrl** may be empty if there's a bad device certificate. In this case, running “*dsregcmd.exe /leave*” in an elevated command prompt window, rebooting, and trying registration again may help with this issue. +**Potential issue**: The field for **SettingsUrl** is empty and the device doesn't sync. The user may have last logged in to the device before Enterprise State Roaming was enabled. Restart the device and have the user log in.
Optionally, in the portal, try having the IT Admin navigate to **Azure Active Directory** > **Devices** > **Enterprise State Roaming** to disable and re-enable **Users may sync settings and app data across devices**. Once re-enabled, restart the device and have the user log in. If this doesn't resolve the issue, **SettingsUrl** may be empty if there's a bad device certificate. In this case, running “*dsregcmd.exe /leave*” in an elevated command prompt window, rebooting, and trying registration again may help with this issue. ## Enterprise State Roaming and multifactor authentication |
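The troubleshooting steps above inspect fields of `dsregcmd /status` output (**WamDefaultSet**, **AzureAdJoined**, **SettingsUrl**). Those checks can be automated with a small parser; this is a sketch only — the sample output below is fabricated, the field layout is assumed to be the usual `Name : Value` lines, and the messages simply restate the guidance in the article.

```python
def parse_dsregcmd_status(text):
    """Parse 'Name : Value' lines from dsregcmd /status output into a dict."""
    fields = {}
    for line in text.splitlines():
        name, sep, value = line.partition(":")
        if sep:
            fields[name.strip()] = value.strip()
    return fields

def roaming_problems(fields):
    """Return a list of problems suggested by the troubleshooting guidance above."""
    problems = []
    if fields.get("WamDefaultSet") != "YES" and fields.get("AzureAdJoined") != "YES":
        problems.append("registration incomplete: wait for policy or retry registration")
    if not fields.get("SettingsUrl"):
        problems.append("SettingsUrl empty: re-enable roaming or re-register the device")
    return problems

# Fabricated sample of the relevant dsregcmd /status fields:
sample = """\
AzureAdJoined : YES
WamDefaultSet : YES
SettingsUrl :
"""
fields = parse_dsregcmd_status(sample)
print(roaming_problems(fields))
```

On a real device you would feed this the captured output of `dsregcmd /status` rather than a hard-coded string.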
active-directory | How To Hybrid Join Verify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/how-to-hybrid-join-verify.md | description: Verify configurations for hybrid Azure AD joined devices + Last updated 02/27/2023 For downlevel devices, see the article [Troubleshooting hybrid Azure Active Dire ## Using the Azure portal -1. Go to the devices page using a [direct link](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices). -2. Information on how to locate a device can be found in [How to manage device identities using the Azure portal](./manage-device-identities.md). -3. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle. -4. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator). +1. Browse to **Identity** > **Devices** > **All devices**. +1. If the **Registered** column says **Pending**, then hybrid Azure AD join hasn't completed. In federated environments, this state happens only if it failed to register and Azure AD Connect is configured to sync the devices. Wait for Azure AD Connect to complete a sync cycle. +1. If the **Registered** column contains a **date/time**, then hybrid Azure AD join has completed. ## Using PowerShell |
active-directory | Howto Manage Local Admin Passwords | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-manage-local-admin-passwords.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). Every Windows device comes with a built-in local administrator account that you must secure and protect to mitigate any Pass-the-Hash (PtH) and lateral traversal attacks. Many customers have been using our standalone, on-premises [Local Administrator Password Solution (LAPS)](https://www.microsoft.com/download/details.aspx?id=46899) product for local administrator password management of their domain joined Windows machines. With Azure AD support for Windows LAPS, we're providing a consistent experience for both Azure AD joined and hybrid Azure AD joined devices. Other than the built-in Azure AD roles of Cloud Device Administrator, Intune Adm To enable Windows LAPS with Azure AD, you must take actions in Azure AD and the devices you wish to manage. We recommend organizations [manage Windows LAPS using Microsoft Intune](/mem/intune/protect/windows-laps-policy). However, if your devices are Azure AD joined but you're not using Microsoft Intune or Microsoft Intune isn't supported (like for Windows Server 2019/2022), you can still deploy Windows LAPS for Azure AD manually. For more information, see the article [Configure Windows LAPS policy settings](/windows-server/identity/laps/laps-management-policy-settings). -1. Sign in to the **Azure portal** as a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator). -1. 
Browse to **Azure Active Directory** > **Devices** > **Device settings** +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Device Administrator](../roles/permissions-reference.md#cloud-device-administrator). +1. Browse to **Identity** > **Devices** > **Overview** > **Device settings** 1. Select **Yes** for the Enable Local Administrator Password Solution (LAPS) setting and select **Save**. You may also use the Microsoft Graph API [Update deviceRegistrationPolicy](/graph/api/deviceregistrationpolicy-update?view=graph-rest-beta&preserve-view=true). 1. Configure a client-side policy and set the **BackUpDirectory** to be Azure AD. |
active-directory | Howto Vm Sign In Azure Ad Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md | There are two ways to enable Azure AD login for your Linux VM: ### Azure portal - You can enable Azure AD login for any of the [supported Linux distributions](#supported-linux-distributions-and-azure-regions) by using the Azure portal. For example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD login: To configure role assignments for your Azure AD-enabled Linux VMs: | Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** | | Assign access to | User, group, service principal, or managed identity | - ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) + ![Screenshot that shows the page for adding a role assignment.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) After a few moments, the security principal is assigned the role at the selected scope. The application that appears in the Conditional Access policy is called *Azure L If the Azure Linux VM Sign-In application is missing from Conditional Access, make sure the application isn't in the tenant: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Enterprise applications**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications**. 1. Remove the filters to see all applications, and search for **Virtual Machine**. If you don't see Microsoft Azure Linux Virtual Machine Sign-In as a result, the service principal is missing from the tenant. 
Another way to verify it is via Graph PowerShell: |
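The Graph PowerShell check referenced above might look like the following sketch, assuming the Microsoft Graph PowerShell SDK is installed; the display-name filter is an assumption based on the application name given in the article:

```powershell
# Sketch: verify the sign-in app's service principal exists in the tenant.
Connect-MgGraph -Scopes "Application.Read.All"

# Search by display name; the exact name may carry a region/app suffix,
# so startswith is used rather than an exact match.
$sp = Get-MgServicePrincipal -Filter "startswith(displayName,'Microsoft Azure Linux Virtual Machine Sign-In')"

if ($sp) {
    Write-Host "Service principal found: $($sp.DisplayName) ($($sp.AppId))"
} else {
    Write-Host "Service principal is missing from the tenant."
}
```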
active-directory | Howto Vm Sign In Azure Ad Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md | There are two ways to enable Azure AD login for your Windows VM: - Azure Cloud Shell, when you're creating a Windows VM or using an existing Windows VM. > [!NOTE]-> If a device object with the same displayMame as the hostname of a VM where an extension is installed exists, the VM fails to join Azure AD with a hostname duplication error. Avoid duplication by [modifying the hostname](../../virtual-network/virtual-networks-viewing-and-modifying-hostnames.md#modify-a-hostname). +> If a device object with the same displayName as the hostname of a VM where an extension is installed exists, the VM fails to join Azure AD with a hostname duplication error. Avoid duplication by [modifying the hostname](../../virtual-network/virtual-networks-viewing-and-modifying-hostnames.md#modify-a-hostname). ### Azure portal - You can enable Azure AD login for VM images in Windows Server 2019 Datacenter or Windows 10 1809 and later. To create a Windows Server 2019 Datacenter VM in Azure with Azure AD login: To configure role assignments for your Azure AD-enabled Windows Server 2019 Data | Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** | | Assign access to | User, group, service principal, or managed identity | - ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) + ![Screenshot that shows the page for adding a role assignment.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) ### Azure Cloud Shell Exit code -2145648607 translates to `DSREG_AUTOJOIN_DISC_FAILED`. The extension - `curl https://pas.windows.net/ -D -` > [!NOTE]- > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. 
If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID** in the Azure portal. + > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID**. > > Attempts to connect to `enterpriseregistration.windows.net` might return 404 Not Found, which is expected behavior. Attempts to connect to `pas.windows.net` might prompt for PIN credentials or might return 404 Not Found. (You don't need to enter the PIN.) Either one is sufficient to verify that the URL is reachable. Share your feedback about this feature or report problems with using it on the [ If the Azure Windows VM Sign-In application is missing from Conditional Access, make sure that the application is in the tenant: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Enterprise applications**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Cloud Application Administrator](../roles/permissions-reference.md#cloud-application-administrator). +1. Browse to **Identity** > **Applications** > **Enterprise applications**. 1. Remove the filters to see all applications, and search for **VM**. If you don't see **Azure Windows VM Sign-In** as a result, the service principal is missing from the tenant. Another way to verify it is via Graph PowerShell: |
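Alongside the endpoint checks above, the device's join state can be inspected locally with `dsregcmd`, which ships with Windows:

```powershell
# Run from a session on the VM; /status requires no elevation.
dsregcmd /status

# In the "Device State" section of the output, "AzureAdJoined : YES"
# indicates the VM is joined, and "TenantId" should match the Azure AD
# tenant associated with the subscription.
```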
active-directory | Hybrid Join Manual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-manual.md | description: Learn how to manually configure hybrid Azure Active Directory join + Last updated 07/05/2022 The following script helps you with the creation of the issuance transform rules #### Remarks * This script appends the rules to the existing rules. Don't run the script twice, because the set of rules would be added twice. Make sure that no corresponding rules exist for these claims (under the corresponding conditions) before running the script again.-* If you have multiple verified domain names (as shown in the Azure portal or via the **Get-MsolDomain** cmdlet), set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule: +* If you have multiple verified domain names, set the value of **$multipleVerifiedDomainNames** in the script to **$true**. Also make sure that you remove any existing **issuerid** claim that might have been created by Azure AD Connect or via other means. Here's an example for this rule: ``` c:[Type == "http://schemas.xmlsoap.org/claims/UPN"] |
active-directory | Hybrid Join Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-join-plan.md | When you're using AD FS, you need to enable the following WS-Trust endpoints: > [!WARNING] > Both **adfs/services/trust/2005/windowstransport** and **adfs/services/trust/13/windowstransport** should be enabled as intranet facing endpoints only and must NOT be exposed as extranet facing endpoints through the Web Application Proxy. To learn more on how to disable WS-Trust Windows endpoints, see [Disable WS-Trust Windows endpoints on the proxy](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs#disable-ws-trust-windows-endpoints-on-the-proxy-ie-from-extranet). You can see what endpoints are enabled through the AD FS management console under **Service** > **Endpoints**. -Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-join-manual.md). +Beginning with version 1.1.819.0, Azure AD Connect provides you with a wizard to configure hybrid Azure AD join. The wizard enables you to significantly simplify the configuration process. If installing the required version of Azure AD Connect isn't an option for you, see [how to manually configure device registration](hybrid-join-manual.md). If contoso.com is registered as a confirmed custom domain, users can get a PRT even if their synchronized on-premises AD DS UPN suffix is in a subdomain like test.contoso.com. ## Review on-premises AD users UPN support for hybrid Azure AD join |
active-directory | Manage Device Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-device-identities.md | -[![Screenshot that shows the devices overview in the Azure portal.](./media/manage-device-identities/devices-azure-portal.png)](./media/manage-device-identities/devices-azure-portal.png#lightbox) +[![Screenshot that shows the devices overview.](./media/manage-device-identities/devices-azure-portal.png)](./media/manage-device-identities/devices-azure-portal.png#lightbox) You can access the devices overview by completing these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). 1. Go to **Azure Active Directory** > **Devices**. In the devices overview, you can view the number of total devices, stale devices, noncompliant devices, and unmanaged devices. You'll also find links to Intune, Conditional Access, BitLocker keys, and basic monitoring. From there, you can go to **All devices** to: - Review device-related audit logs. - Download devices. -[![Screenshot that shows the All devices view in the Azure portal.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox) +[![Screenshot that shows the All devices view.](./media/manage-device-identities/all-devices-azure-portal.png)](./media/manage-device-identities/all-devices-azure-portal.png#lightbox) > [!TIP] > - Hybrid Azure AD joined Windows 10 or newer devices don't have an owner. If you're looking for a device by owner and don't find it, search by the device ID. To view or copy BitLocker keys, you need to be the owner of the device or have o ## View and filter your devices (preview) - In this preview, you have the ability to infinitely scroll, reorder columns, and select all devices. 
You can filter the device list by these device attributes: - Enabled state In this preview, you have the ability to infinitely scroll, reorder columns, and To enable the preview in the **All devices** view: -1. Sign in to the [Azure portal](https://portal.azure.com). -2. Go to **Azure Active Directory** > **Devices** > **All devices**. -3. Select the **Preview features** button. -4. Turn on the toggle that says **Enhanced devices list experience**. Select **Apply**. -5. Refresh your browser. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). +1. Browse to **Identity** > **Devices** > **All devices**. +1. Select the **Preview features** button. +1. Turn on the toggle that says **Enhanced devices list experience**. Select **Apply**. +1. Refresh your browser. You can now experience the enhanced **All devices** view. The exported list includes these device identity attributes: If you want to manage device identities by using the Azure portal, the devices need to be either [registered or joined](overview.md) to Azure AD. As an administrator, you can control the process of registering and joining devices by configuring the following device settings. -You must be assigned one of the following roles to view device settings in the Azure portal: +You must be assigned one of the following roles to view device settings: - Global Administrator - Global Reader You must be assigned one of the following roles to view device settings in the A - Windows 365 Administrator - Directory Reviewer -You must be assigned one of the following roles to manage device settings in the Azure portal: +You must be assigned one of the following roles to manage device settings: - Global Administrator - Cloud Device Administrator |
active-directory | Manage Stale Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/manage-stale-devices.md | description: Learn how to remove stale devices from your database of registered + Last updated 09/27/2022 -#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can I can cleanup my device registration data. - +#Customer intent: As an IT admin, I want to understand how I can get rid of stale devices, so that I can clean up my device registration data. # How To: Manage stale devices in Azure AD If the delta between the existing value of the activity timestamp and the curren You have two options to retrieve the value of the activity timestamp: -- The **Activity** column on the [devices page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices) in the Azure portal+- The **Activity** column on the [devices page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/DevicesMenuBlade/Devices). - :::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot of a page in the Azure portal listing the name, owner, and other information on devices. One column lists the activity time stamp." border="false"::: + :::image type="content" source="./media/manage-stale-devices/01.png" alt-text="Screenshot listing the name, owner, and other information of devices. One column lists the activity time stamp." border="false"::: -- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet+- The [Get-AzureADDevice](/powershell/module/azuread/Get-AzureADDevice) cmdlet. :::image type="content" source="./media/manage-stale-devices/02.png" alt-text="Screenshot showing command-line output. One line is highlighted and lists a time stamp for the ApproximateLastLogonTimeStamp value."
border="false"::: Any authentication where a device is being used to authenticate to Azure AD are Devices managed with Intune can be retired or wiped. For more information, see the article [Remove devices by using wipe, retire, or manually unenrolling the device](/mem/intune/remote-actions/devices-wipe). -To get an overview of how to manage device in the Azure portal, see [managing devices using the Azure portal](manage-device-identities.md) +To get an overview of how to manage devices, see [managing devices using the Azure portal](manage-device-identities.md) |
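The activity-timestamp comparison described above can be scripted. A minimal sketch using the `Get-AzureADDevice` cmdlet the article references, with a hypothetical 90-day staleness window (adjust the cutoff to your own policy):

```powershell
# Sketch using the AzureAD module's Get-AzureADDevice, as referenced above.
Connect-AzureAD

# Devices inactive for more than 90 days are treated as stale here.
$cutoff = (Get-Date).AddDays(-90)

$staleDevices = Get-AzureADDevice -All $true |
    Where-Object { $_.ApproximateLastLogonTimeStamp -le $cutoff }

$staleDevices | Select-Object DisplayName, ApproximateLastLogonTimeStamp
```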
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/overview.md | Getting devices into Azure AD can be done in a self-service manner or a control - Learn more about [Azure AD registered devices](concept-device-registration.md) - Learn more about [Azure AD joined devices](concept-directory-join.md) - Learn more about [hybrid Azure AD joined devices](concept-hybrid-join.md)-- To get an overview of how to manage device identities in the Azure portal, see [Managing device identities using the Azure portal](manage-device-identities.md).+- To get an overview of how to manage device identities, see [Managing device identities using the Azure portal](manage-device-identities.md). - To learn more about device-based Conditional Access, see [Configure Azure Active Directory device-based Conditional Access policies](../conditional-access/concept-conditional-access-grant.md). |
active-directory | Troubleshoot Device Windows Joined | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-device-windows-joined.md | -1. Sign in to the **Azure portal**. -1. Browse to **Azure Active Directory** > **Devices** > **Diagnose and solve problems**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). +1. Browse to **Identity** > **Devices** > **All devices** > **Diagnose and solve problems**. 1. Select **Troubleshoot** under the **Windows 10+ related issue** troubleshooter. :::image type="content" source="media/troubleshoot-device-windows-joined/devices-troubleshoot-windows.png" alt-text="A screenshot showing the Windows troubleshooter located in the diagnose and solve pane of the Azure portal." lightbox="media/troubleshoot-device-windows-joined/devices-troubleshoot-windows.png"::: 1. Select **instructions** and follow the steps to download, run, and collect the required logs for the troubleshooter to analyze. |
active-directory | Troubleshoot Hybrid Join Windows Current | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-current.md | Use Event Viewer to look for the log entries that are logged by the Azure AD Clo | Error code | Reason | Resolution | | | | |-| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled in the Azure portal. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. | +| **AADSTS50155: Device authentication failed** | <li>Azure AD is unable to authenticate the device to issue a PRT.<li>Confirm that the device hasn't been deleted or disabled. For more information about this issue, see [Azure Active Directory device management FAQ](faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices). | Follow the instructions for this issue in [Azure Active Directory device management FAQ](faq.yml#i-disabled-or-deleted-my-device--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do) to re-register the device based on the device join type. | | **AADSTS50034: The user account `Account` does not exist in the `tenant id` directory** | Azure AD is unable to find the user account in the tenant. 
| <li>Ensure that the user is typing the correct UPN.<li>Ensure that the on-premises user account is being synced with Azure AD.<li>Event 1144 (Azure AD analytics logs) will contain the UPN provided. | | **AADSTS50126: Error validating credentials due to invalid username or password.** | <li>The username and password entered by the user in the Windows LoginUI are incorrect.<li>If the tenant has password hash sync enabled, the device is hybrid-joined, and the user just changed the password, it's likely that the new password hasn't synced with Azure AD. | To acquire a fresh PRT with the new credentials, wait for the Azure AD password sync to finish. | | | | |
active-directory | Troubleshoot Hybrid Join Windows Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-hybrid-join-windows-legacy.md | This article provides you with troubleshooting guidance on how to resolve potent - Hybrid Azure AD join for downlevel Windows devices works slightly differently than it does in Windows 10 or newer. Many customers don't realize that they need AD FS (for federated domains) or Seamless SSO configured (for managed domains). - Seamless SSO doesn't work in private browsing mode on Firefox and Microsoft Edge browsers. It also doesn't work on Internet Explorer if the browser is running in Enhanced Protected mode or if Enhanced Security Configuration is enabled.-- For customers with federated domains, if the Service Connection Point (SCP) was configured such that it points to the managed domain name (for example, contoso.onmicrosoft.com, instead of contoso.com), then Hybrid Azure AD Join for downlevel Windows devices won't work.+- For customers with federated domains, if the Service Connection Point (SCP) was configured such that it points to the managed domain name (for example, contoso.onmicrosoft.com, instead of contoso.com), then Hybrid Azure AD Join for downlevel Windows devices doesn't work. - The same physical device appears multiple times in Azure AD when multiple domain users sign in to the downlevel hybrid Azure AD joined devices. For example, if *jdoe* and *jharnett* sign in to a device, a separate registration (DeviceID) is created for each of them in the **USER** info tab. - You can also get multiple entries for a device on the user info tab because of a reinstallation of the operating system or a manual re-registration. - The initial registration / join of devices is configured to perform an attempt at either sign-in or lock / unlock. There could be a 5-minute delay triggered by a task scheduler task.
This command displays a dialog box that provides you with details about the join ## Step 2: Evaluate the hybrid Azure AD join status -If the device wasn't hybrid Azure AD joined, you can attempt to do hybrid Azure AD join by clicking on the "Join" button. If the attempt to do hybrid Azure AD join fails, the details about the failure will be shown. +If the device wasn't hybrid Azure AD joined, you can attempt to do hybrid Azure AD join by clicking on the "Join" button. If the attempt to do hybrid Azure AD join fails, the details about the failure are shown. **The most common issues are:** If the device wasn't hybrid Azure AD joined, you can attempt to do hybrid Azure - It could be that AD FS and Azure AD URLs are missing in IE's intranet zone on the client. - Network connectivity issues may be preventing **autoworkplace.exe** from reaching AD FS or the Azure AD URLs. - **Autoworkplace.exe** requires the client to have direct line of sight from the client to the organization's on-premises AD domain controller, which means that hybrid Azure AD join succeeds only when the client is connected to organization's intranet.- - If your organization uses Azure AD Seamless Single Sign-On, `https://autologon.microsoftazuread-sso.com` or `https://aadg.windows.net.nsatc.net` aren't present on the device's IE intranet settings. + - If your organization uses Azure AD Seamless Single Sign-On, `https://autologon.microsoftazuread-sso.com` isn't present on the device's IE intranet settings. + - The internet setting `Do not save encrypted pages to disk` is checked. - You aren't signed on as a domain user :::image type="content" source="./media/troubleshoot-hybrid-join-windows-legacy/03.png" alt-text="Screenshot of the Workplace Join for Windows dialog box. Text reports that an error occurred during account verification." border="false"::: |
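For the missing intranet-zone entry called out above, the following sketch adds the Seamless SSO URL to the local intranet zone for the current user. Production deployments normally push this through the Site to Zone Assignment List Group Policy rather than direct registry edits; the per-user zone-map layout shown is the standard one Windows uses for manual zone assignments:

```powershell
# Sketch: assign https://autologon.microsoftazuread-sso.com to the
# Local intranet zone (zone 1) for the current user.
$base = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains"
$key  = Join-Path $base "microsoftazuread-sso.com\autologon"

New-Item -Path $key -Force | Out-Null
# Value name = URL scheme, value data = zone number (1 = Local intranet)
Set-ItemProperty -Path $key -Name "https" -Value 1 -Type DWord
```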
active-directory | Troubleshoot Primary Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/troubleshoot-primary-refresh-token.md | You can find a full list and description of server error codes in [Azure AD auth - Azure AD can't authenticate the device to issue a PRT. -- The device might have been deleted or disabled in the Azure portal. (For more information, see [Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10/11 devices?](./faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices))+- The device might have been deleted or disabled. (For more information, see [Why do my users see an error message saying "Your organization has deleted the device" or "Your organization has disabled the device" on their Windows 10/11 devices?](./faq.yml#why-do-my-users-see-an-error-message-saying--your-organization-has-deleted-the-device--or--your-organization-has-disabled-the-device--on-their-windows-10-11-devices)) ##### Solution -Re-register the device based on the device join type. For instructions, see [I disabled or deleted my device in the Azure portal or by using Windows PowerShell. But the local state on the device says it's still registered. What should I do?](./faq.yml#i-disabled-or-deleted-my-device-in-the-azure-portal-or-by-using-windows-powershell--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do). +Re-register the device based on the device join type. For instructions, see [I disabled or deleted my device. But the local state on the device says it's still registered. What should I do?](./faq.yml#i-disabled-or-deleted-my-device--but-the-local-state-on-the-device-says-it-s-still-registered--what-should-i-do). </details> <details> |
active-directory | Directory Delete Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md | |
active-directory | Directory Self Service Signup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md | |
active-directory | Domains Admin Takeover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md | |
active-directory | Domains Verify Custom Subdomain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md | |
active-directory | Groups Assign Sensitivity Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md | |
active-directory | Groups Change Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md | |
active-directory | Groups Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md | |
active-directory | Groups Naming Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md | |
active-directory | Groups Restore Deleted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md | |
active-directory | Groups Self Service Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md | |
active-directory | Groups Settings Cmdlets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md | |
active-directory | Groups Settings V2 Cmdlets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md | Microsoft 365 groups are created and managed in the cloud. The writeback capabil For more details, please refer to documentation for the [Azure AD Connect sync service](../hybrid/connect/how-to-connect-syncservice-features.md). -Microsoft 365 group writeback is a public preview feature of Azure Active Directory (Azure AD) and is available with any paid Azure AD license plan. For some legal information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +Microsoft 365 group writeback is a public preview feature of Azure Active Directory (Azure AD) and is available with any paid Azure AD license plan. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Next steps |
active-directory | Licensing Group Advanced | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md | |
active-directory | Licensing Powershell Graph Examples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-powershell-graph-examples.md | The purpose of this script is to remove unnecessary direct licenses from users w ```powershell-Import-Module Microsoft.Graph +# Import the Microsoft.Graph.Users and Microsoft.Graph.Groups modules +Import-Module Microsoft.Graph.Users -Force +Import-Module Microsoft.Graph.Authentication -Force +Import-Module Microsoft.Graph.Users.Actions -Force +Import-Module Microsoft.Graph.Groups -Force -# Connect to the Microsoft Graph -Connect-MgGraph +Clear-Host -# Get the group to be processed -$groupId = "48ca647b-7e4d-41e5-aa66-40cab1e19101" --# Get the license to be removed - Office 365 E3 -$skuId = "contoso:ENTERPRISEPACK" --# Minimum set of service plans we know are inherited by this group -$expectedDisabledPlans = @("Exchange Online", "SharePoint Online", "Lync Online") --# Get the users in the group -$users = Get-MgUser -GroupObjectId $groupId --# For each user, get the license for the specified SKU -foreach ($user in $users) { - $license = GetUserLicense $user $skuId -- # If the user has the license assigned directly, continue to the next user - if (UserHasLicenseAssignedDirectly $user $skuId) { - continue - } -- # If the user is inheriting the license from the specified group, continue to the next user - if (UserHasLicenseAssignedFromThisGroup $user $skuId $groupId) { - continue - } +if ($null -eq (Get-MgContext)) { + Connect-MgGraph -Scopes "Directory.Read.All, User.Read.All, Group.Read.All, Organization.Read.All" -NoWelcome +} - # Get the list of disabled service plans for the SKU - $disabledPlans = GetDisabledPlansForSKU $skuId $expectedDisabledPlans +# Get all groups with licenses assigned +$groupsWithLicenses = Get-MgGroup -All -Property AssignedLicenses, DisplayName, Id | Where-Object { $_.assignedlicenses } | Select-Object DisplayName, Id -ExpandProperty 
AssignedLicenses | Select-Object DisplayName, Id, SkuId - # Get the list of unexpected enabled plans for the user - $extraPlans = GetUnexpectedEnabledPlansForUser $user $skuId $expectedDisabledPlans +$output = @() - # If there are any unexpected enabled plans, print them to the console - if ($extraPlans.Count -gt 0) { - Write-Warning "The user $user has the following unexpected enabled plans for the $skuId SKU: $extraPlans" +# Check if there is any group that has licenses assigned or not +if ($null -ne $groupsWithLicenses) { + # Loop through each group + foreach ($group in $groupsWithLicenses) { + # Get the group's licenses + $groupLicenses = $group.SkuId + + # Get the group's members + $groupMembers = Get-MgGroupMember -GroupId $group.Id -All ++ # Check if the group member list is empty or not + if ($groupMembers) { + # Loop through each member + foreach ($member in $groupMembers) { + # Check if the member is a user + if ($member.AdditionalProperties.'@odata.type' -eq '#microsoft.graph.user') { + # Get the user's direct licenses + Write-Host "Fetching license details for $($member.AdditionalProperties.displayName)" -ForegroundColor Yellow + + # Get User With Directly Assigned Licenses Only + $user = Get-MgUser -UserId $member.Id -Property AssignedLicenses, LicenseAssignmentStates, DisplayName | Select-Object DisplayName, AssignedLicenses -ExpandProperty LicenseAssignmentStates | Select-Object DisplayName, AssignedByGroup, State, Error, SkuId | Where-Object { $_.AssignedByGroup -eq $null } ++ $licensesToRemove = @() + if($user) + { + if ($user.count -ge 2) { + foreach ($u in $user) { + $userLicenses = $u.SkuId + $licensesToRemove += $userLicenses | Where-Object { $_ -in $groupLicenses } + } + } + else { + $userLicenses = $user.SkuId + $licensesToRemove = $userLicenses | Where-Object { $_ -in $groupLicenses } + } + } else { + Write-Host "No conflicting licenses found for the user $($member.AdditionalProperties.displayName)" -ForegroundColor Green + } + + + + # 
Remove the licenses from the user + if ($licensesToRemove) { + Write-Host "Removing the license $($licensesToRemove) from user $($member.AdditionalProperties.displayName) as inherited from group $($group.DisplayName)" -ForegroundColor Green + $result = Set-MgUserLicense -UserId $member.Id -AddLicenses @() -RemoveLicenses $licensesToRemove + $obj = [PSCustomObject]@{ + User = $result.DisplayName + Id = $result.Id + LicensesRemoved = $licensesToRemove + LicenseInheritedFromGroup = $group.DisplayName + GroupId = $group.Id + } ++ $output += $obj ++ } + else { + Write-Host "No action required for $($member.AdditionalProperties.displayName)" -ForegroundColor Green + } + + } + } + } + else { + Write-Host "The licensed group $($group.DisplayName) has no members, exiting now!!" -ForegroundColor Yellow + } + }+ + $output | Format-Table -AutoSize +} +else { + Write-Host "No groups found with licenses assigned." -ForegroundColor Cyan } ``` |
active-directory | Licensing Ps Examples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md | |
active-directory | Linkedin Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md | |
active-directory | Users Bulk Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md | |
active-directory | Users Custom Security Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-custom-security-attributes.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). [Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD), part of Microsoft Entra, are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign a custom security attribute to filter your employees or to help determine who gets access to resources. This article describes how to assign, update, list, or remove custom security attributes for Azure AD. |
active-directory | Users Restrict Guest Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md | |
active-directory | Users Revoke Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md | |
active-directory | Add Users Administrator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md | After you add a guest user to the directory, you can either send the guest user > [!IMPORTANT] > You should follow the steps in [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/properties-area.md) to add the URL of your organization's privacy statement. As part of the first time invitation redemption process, an invited user must consent to your privacy terms to continue. -The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users.md) article. |
active-directory | Authentication Conditional Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/authentication-conditional-access.md | description: Learn how to enforce multi-factor authentication policies for Azure + Last updated 04/17/2023 |
active-directory | B2b Quickstart Add Guest Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md | In this quickstart, you'll learn how to add a new guest user to your Azure AD di If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). Instructions for the legacy create user process can be found in the [Add or delete users](../fundamentals/add-users.md) article. |
active-directory | Bulk Invite Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md | Last updated 07/31/2023 ---# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user. + +# Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user. # Tutorial: Use PowerShell to bulk invite Azure AD B2B collaboration users |
active-directory | Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/code-samples.md | Last updated 04/06/2023 -+ # Customer intent: As a tenant administrator, I want to bulk-invite external users to an organization from email addresses that I've stored in a .csv file. |
active-directory | Cross Tenant Access Settings B2b Collaboration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md | With inbound settings, you select which external users and groups will be able t - In the menu next to the search box, choose either **user** or **group**. - Select **Add**. - ![Screenshot showing adding users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add.png) + > [!NOTE] + > You cannot target users or groups in inbound default settings. ++ ![Screenshot showing adding users and groups.](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add-new.png) 1. When you're done adding users and groups, select **Submit**. |
active-directory | Cross Tenant Access Settings B2b Direct Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md | With inbound settings, you select which external users and groups will be able t - In the menu next to the search box, choose either **user** or **group**. - Select **Add**. + > [!NOTE] + > You cannot target users or groups in inbound default settings. + ![Screenshot showing adding external users for inbound b2b direct connect](media/cross-tenant-access-settings-b2b-direct-connect/b2b-direct-connect-inbound-external-users-groups-add.png) 1. When you're done adding users and groups, select **Submit**. |
active-directory | How To Add Attributes To Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-add-attributes-to-token.md | You can specify which built-in or custom attributes you want to include as claim ## Add built-in or custom attributes to the token -1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**. -1. Select **Applications** > **App registrations**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select your application in the list to open the application's **Overview** page. :::image type="content" source="media/how-to-add-attributes-to-token/select-app.png" alt-text="Screenshot of the overview page of the app registration."::: You can specify which built-in or custom attributes you want to include as claim ### Update the application manifest to accept mapped claims -1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**. -1. Select **Applications** > **App registrations**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select your application in the list to open the application's **Overview** page. 1. In the left menu, under **Manage**, select **Manifest** to open the application manifest. 1. Find the **acceptMappedClaims** key and set its value to **true**. |
active-directory | How To Create Customer Tenant Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-create-customer-tenant-portal.md | In this article, you learn how to: ## Create a new customer tenant -1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/). -1. From the left menu, select **Azure Active Directory** > **Overview**. -1. On the overview page, select **Manage tenants** +1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/) as at least a [Contributor](/azure/role-based-access-control/built-in-roles#contributor). +1. Browse to **Identity** > **Overview** > **Manage tenants**. 1. Select **Create**. :::image type="content" source="media/how-to-create-customer-tenant-portal/create-tenant.png" alt-text="Screenshot of the create tenant option."::: |
active-directory | How To Customize Branding Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-branding-customers.md | The following image displays the neutral default branding of the customer tenant Before you customize any settings, the neutral default branding will appear in your sign-in and sign-up pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a [custom CSS](/azure/active-directory/fundamentals/reference-company-branding-css-template). -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.-1. In the search bar, type and select **Company branding**. -1. Under **Default sign-in** select **Edit**. +1. Browse to **Company Branding** > **Default sign-in** > **Edit**. :::image type="content" source="media/how-to-customize-branding-customers/company-branding-default-edit-button.png" alt-text="Screenshot of the company branding edit button."::: Your customer tenant name replaces the Microsoft banner logo in the neutral defa When no longer needed, you can remove the sign-in customization from your customer tenant via the Azure portal. -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). -1.If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. -1. 
In the search bar, type and select **Company branding**. -1. Under **Default sign-in experience**, select **Edit**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier. +1. Browse to **Company branding** > **Default sign-in experience** > **Edit**. 1. Remove the elements you no longer need. 1. Once finished select **Review + save**. 1. Wait a few minutes for the changes to take effect. |
active-directory | How To Customize Languages Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-customize-languages-customers.md | You can create a personalized sign-in experience for users who sign in using a s ## Add browser language under Company branding -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.-1. In the search bar, type and select **Company branding**. -1. Under **Browser language customizations**, select **Add browser language**. +1. Browse to **Company branding** > **Browser language customizations** > **Add browser language**. :::image type="content" source="media/how-to-customize-languages-customers/company-branding-add-browser-language.png" alt-text="Screenshot of the browser language customizations tab." lightbox="media/how-to-customize-languages-customers/company-branding-add-browser-language.png"::: The following languages are supported in the customer tenant: - Spanish (Spain) - Swedish (Sweden) - Thai (Thailand)- - Turkish (Turkey) + - Turkish (Türkiye) - Ukrainian (Ukraine) 6. Customize the elements on the **Basics**, **Layout**, **Header**, **Footer**, **Sign-in form**, and **Text** tabs. For detailed instructions, see [Customize the branding and end-user experience](how-to-customize-branding-customers.md). The following languages are supported in the customer tenant: Language customization in the customer tenant allows your user flow to accommodate different languages to suit your customer's needs. 
You can use languages to modify the strings displayed to your customers as part of the attribute collection process during sign-up. -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 2. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.-3. In the left menu, select **Azure Active Directory** > **External Identities**. -4. Select **User flows**. +3. Browse to **Identity** > **External Identities** > **User flows**. 5. Select the user flow that you want to enable for translations. 6. Select **Languages**. 7. On the **Languages** page for the user flow, select the language that you want to customize. |
active-directory | How To Define Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-define-custom-attributes.md | If your application relies on certain built-in or custom user attributes, you ca ## Create custom attributes -1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**. -1. Select **External Identities** > **Overview**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. Browse to **Identity** > **External Identities** > **Overview**. 1. Select **Custom user attributes**. The available user attributes are listed. 1. To add an attribute, select **Add**. In the **Add an attribute** pane, enter the following values: If your application relies on certain built-in or custom user attributes, you ca Follow these steps to add sign-up attributes to a user flow you've already created. (For a new user flow, see [Create a sign-up and sign-in user flow for customers](how-to-user-flow-sign-up-sign-in-customers.md).) -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. -1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**. +1. Browse to **Identity** > **External Identities** > **User flows**. 1. Select the user flow from the list. Follow these steps to add sign-up attributes to a user flow you've already creat You can choose the order in which the attributes are displayed on the sign-up page. -1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**. +1. Browse to **Identity** > **External Identities** > **User flows**. 1. From the list, select your user flow. |
active-directory | How To Enable Password Reset Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-enable-password-reset-customers.md | The following screenshots show the self-service password reset flow. From the app ## Enable self-service password reset for customers -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the customer tenant you created earlier.-1. In the navigation pane, select **Azure Active Directory**. -1. Select **External Identities** > **User flows**. +1. Browse to **Identity** > **External Identities** > **User flows**. 1. From the list of **User flows**, select the user flow for which you want to enable SSPR. 1. Make sure that the sign-up user flow registers **Email with password** as an authentication method under **Identity providers**. The following screenshots show the self-service password reset flow. From the app To enable self-service password reset, you need to enable the email one-time passcode (Email OTP) authentication method for all users in your tenant. To ensure that the Email OTP feature is enabled, follow the steps below: - 1. Select **Protect & secure** from the sidebar under **Azure Active Directory** and then **Authentication methods** > **Policies**. + 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). + + 1. Browse to **Identity** > **Protection** > **Authentication methods**. - 1. Under **Method** select **Email OTP (preview)**. + 1. Under **Policies** > **Method** select **Email OTP (preview)**. 
:::image type="content" source="media/how-to-enable-password-reset-customers/authentication-methods.png" alt-text="Screenshot that shows authentication methods."::: |
active-directory | How To Facebook Federation Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-facebook-federation-customers.md | If you don't already have a Facebook account, sign up at [https://www.facebook.c - `https://<tenant-name>.ciamlogin.com/<tenant-ID>/federation/oauth2` - `https://<tenant-name>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2` > [!NOTE]- > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**. + > To find your customer tenant ID, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). Browse to **Identity** > **Overview**. Then select the **Overview** tab and copy the **Tenant ID**. 1. Select **Save changes** at the bottom of the page. 1. At this point, only Facebook application owners can sign in. Because you registered the app, you can sign in with your Facebook account. To make your Facebook application available to your users, from the menu, select **Go live**. Follow all of the steps listed to complete all requirements. You'll likely need to complete the business verification to verify your identity as a business entity or organization. For more information, see [Meta App Development](https://developers.facebook.com/docs/development/release). If you don't already have a Facebook account, sign up at [https://www.facebook.c After you create the Facebook application, in this step you set the Facebook client ID and client secret in Azure AD. You can use the Azure portal or PowerShell to do so. To configure Facebook federation in the Microsoft Entra admin center, follow these steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as the global administrator of your customer tenant. -1. 
Go to **Azure Active Directory** > **External Identities** > **All identity providers**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. Browse to **Identity** > **External Identities** > **All identity providers**. 2. Select **+ Facebook**. <!-- ![Screenshot that shows how to add Facebook identity provider in Azure AD.](./media/sign-in-with-facebook/configure-facebook-idp.png)--> To configure Facebook federation by using PowerShell, follow these steps: At this point, the Facebook identity provider has been set up in your customer tenant, but it's not yet available in any of the sign-in pages. To add the Facebook identity provider to a user flow: -1. In your customer tenant, go to **Azure Active Directory** > **External Identities** > **User flows**. +1. Browse to **Identity** > **External Identities** > **User flows**. 1. Select the user flow where you want to add the Facebook identity provider. 1. Under Settings, select **Identity providers** 1. Under **Other Identity Providers**, select **Facebook**. At this point, the Facebook identity provider has been set up in your customer t ## Next steps - [Add Google as an identity provider](how-to-google-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md) |
active-directory | How To Google Federation Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-google-federation-customers.md | To enable sign-in for customers with a Google account, you need to create an app - `https://<tenant-ID>.ciamlogin.com/<tenant-ID>/federation/oauth2` - `https://<tenant-ID>.ciamlogin.com/<tenant-name>.onmicrosoft.com/federation/oauth2` > [!NOTE]- > To find your customer tenant ID, go to the [Microsoft Entra admin center](https://entra.microsoft.com). Under **Azure Active Directory**, select **Overview**. Then select the **Overview** tab and copy the **Tenant ID**. + > To find your customer tenant ID, sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). Browse to **Identity** > **Overview** and copy the **Tenant ID**. 2. Select **Create**. 3. Copy the values of **Client ID** and **Client secret**. You need both values to configure Google as an identity provider in your tenant. **Client secret** is an important security credential. To enable sign-in for customers with a Google account, you need to create an app After you create the Google application, in this step you set the Google client ID and client secret in Azure AD. You can use the Microsoft Entra admin center or PowerShell to do so. To configure Google federation in the Microsoft Entra admin center, follow these steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) as the global administrator of your customer tenant. -1. Go to **Azure Active Directory** > **External Identities** > **All identity providers**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com).  +1. Browse to **Identity** > **External Identities** > **All identity providers**. 2. Select **+ Google**. 
<!-- ![Screenshot that shows how to add Google identity provider in Azure AD.](./media/sign-in-with-google/configure-google-idp.png)--> To configure Google federation by using PowerShell, follow these steps: At this point, the Google identity provider has been set up in your Azure AD, but it's not yet available in any of the sign-in pages. To add the Google identity provider to a user flow: -1. In your customer tenant, go to **Azure Active Directory** > **External Identities** > **User flows**. +1. In your customer tenant, browse to **Identity** > **External Identities** > **User flows**. 1. Select the user flow where you want to add the Google identity provider. 1. Under Settings, select **Identity providers** 1. Under **Other Identity Providers**, select **Google**. At this point, the Google identity provider has been set up in your Azure AD, bu ## Next steps - [Add Facebook as an identity provider](how-to-facebook-federation-customers.md)-- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md)+- [Customize the branding for customer sign-in experiences](how-to-customize-branding-customers.md) |
active-directory | How To Identity Protection Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-identity-protection-customers.md | An administrator can choose to dismiss a user's risk in the Microsoft Entra admi 1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the Directories + subscriptions icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**. -1. Browse to **Azure Active Directory** > **Protect & secure** > **Security Center**. +1. Browse to **Identity** > **Protection** > **Security Center**. 1. Select **Identity Protection**. Administrators can then choose to return to the user's risk or sign-ins report t ### Navigating the risk detections report -1. In the [Microsoft Entra admin center](https://entra.microsoft.com), browse to **Azure Active Directory** > **Protect & secure** > **Security Center**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). + +1. Browse to **Identity** > **Protection** > **Security Center**. 1. Select **Identity Protection**. |
active-directory | How To Manage Admin Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-admin-accounts.md | In Azure Active Directory (Azure AD) for customers, a customer tenant represents To create a new admin account, follow these steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Select **New user** > **Create new user**. 1. Enter information for this admin: The admin is created and added to your customer tenant. It's preferable to have You can also invite a new guest user to manage your tenant. To invite an admin, follow these steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. 
Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Select **New user** > **Invite external user**. 1. On the **New user** page, enter information for the admin: An invitation email is sent to the user. The user needs to accept the invitation You can assign a role when you create a user or invite a guest user. You can add a role, change the role, or remove a role for a user: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. 
If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Select the user you want to change the roles for. Then select **Assigned roles**. 1. Select **Add assignments**, select the role to assign (for example, *Application administrator*), and then choose **Add**. You can assign a role when you create a user or invite a guest user. You can add If you need to remove a role assignment from a user, follow these steps: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Select the user you want to change the roles for. Then select **Assigned roles**. 1. Select the role you want to remove, for example *Application administrator*, and then select **Remove assignment**. 
If you need to remove a role assignment from a user, follow these steps: As part of an auditing process, you typically review which users are assigned to specific roles in your customer directory. Use the following steps to audit which users are currently assigned privileged roles. -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Roles & admins** > **Roles & admins**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Roles & admins** > **Roles & admins**. 2. Select a role, such as **Global administrator**. The **Assignments** page lists the users with that role. ## Delete an administrator account To delete an existing user, you must have a *Global administrator* role assignment. Global admins can delete any user, including other admins. *User administrators* can delete any non-admin user. -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. 
Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Select the user you want to delete. 1. Select **Delete**, and then **Yes** to confirm the deletion. |
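The portal steps in this row (inviting an external admin and sending the invitation email) also have a Microsoft Graph equivalent. A minimal sketch, assuming placeholder values for the email address and redirect URL — the request is only constructed here, not sent:

```javascript
// Sketch: build the Microsoft Graph request that corresponds to the portal's
// "Invite external user" step. All values are illustrative placeholders.
function buildInvitation(email, redirectUrl) {
  return {
    method: 'POST',
    url: 'https://graph.microsoft.com/v1.0/invitations',
    body: {
      invitedUserEmailAddress: email,
      inviteRedirectUrl: redirectUrl,
      sendInvitationMessage: true, // triggers the invitation email mentioned above
    },
  };
}

const req = buildInvitation('admin@contoso.com', 'https://entra.microsoft.com');
console.log(JSON.stringify(req.body, null, 2));
```

Sending the request requires an access token with `User.Invite.All`; the role assignment itself is a separate step, as in the portal flow above.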
active-directory | How To Manage Customer Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-manage-customer-accounts.md | To add or delete users, your account must be assigned the *User administrator* o ## Create a customer account -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Select **New user** > **Create new user**. 1. Select **Create a customer**. 1. Under **Identity**, select a **Sign in method** and enter the **Value**: As an administrator, you can reset a user's password, if the user forgets their To reset a customer's password: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. 
On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Users** > **All users**. 1. Search for and select the user that needs the reset, and then select **Reset Password**. 1. In the **Reset password** page, select **Reset password**. 1. Copy the password and give it to the user. The user will be required to change the password during the next sign-in process. ## Delete a customer account -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/) with Global Administrator or Privileged Role Administrator permissions. -1. Make sure you're using your customer tenant. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar. -1. On the **Portal settings | Directories + subscriptions** page, find your customer tenant in the **Directory name** list, and then select **Switch**. -1. Under **Azure Active Directory**, select **Users** > **All users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Global Administrator or Privileged Role Administrator permissions. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. 
Browse to **Identity** > **Users** > **All users**. 1. Search for and select the user to delete. 1. Select **Delete**, and then **Yes** to confirm the deletion. |
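The "Create a customer account" steps above roughly correspond to creating a user with a local-account identity through Microsoft Graph. A minimal sketch, assuming a placeholder tenant domain and password — nothing is sent:

```javascript
// Sketch: Graph user payload for a customer local account with the "Email"
// sign-in method. Tenant domain and password are illustrative placeholders.
function buildCustomerAccount(displayName, email, tenantDomain, initialPassword) {
  return {
    displayName,
    identities: [
      {
        signInType: 'emailAddress',   // matches the portal's email sign-in method
        issuer: tenantDomain,         // e.g. 'contoso.onmicrosoft.com'
        issuerAssignedId: email,
      },
    ],
    passwordProfile: {
      password: initialPassword,
      forceChangePasswordNextSignIn: true, // mirrors the reset-password behavior above
    },
  };
}
```

The payload would be POSTed to `/users`; as in the portal flow, the user is prompted to change the password at the next sign-in.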
active-directory | How To Multifactor Authentication Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-multifactor-authentication-customers.md | Create a Conditional Access policy in your customer tenant that prompts users fo 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Make sure you're using the directory that contains your Azure AD customer tenant: Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the toolbar and find your customer tenant in the list. If it's not the current directory, select **Switch**. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. -1. Browse to **Azure Active Directory** > **Protect & secure** > **Security Center**. +1. Browse to **Identity** > **Protection** > **Security Center**. 1. Select **Conditional Access** > **Policies**, and then select **New policy**. Create a Conditional Access policy in your customer tenant that prompts users fo Enable the email one-time passcode authentication method in your customer tenant for all users. -1. Sign in to your customer tenant in the [Microsoft Entra admin center](https://entra.microsoft.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Browse to **Azure Active Directory** > **Protect & secure** > **Authentication Methods**. +1. Browse to **Identity** > **Protection** > **Authentication methods**. 1. In the **Method** list, select **Email OTP**. |
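The Conditional Access policy described in this row can also be expressed as a Microsoft Graph payload. A hedged sketch — the application ID is a placeholder and the object is only constructed, not posted to `/identity/conditionalAccess/policies`:

```javascript
// Sketch: Conditional Access policy requiring MFA for all users of one app,
// mirroring the portal steps above. The app ID is an illustrative placeholder.
function buildMfaPolicy(appId) {
  return {
    displayName: 'Require MFA for customer users',
    state: 'enabled',
    conditions: {
      users: { includeUsers: ['All'] },
      applications: { includeApplications: [appId] },
    },
    // "OR" with a single control simply means: MFA is required.
    grantControls: { operator: 'OR', builtInControls: ['mfa'] },
  };
}
```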
active-directory | How To Register Ciam App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-register-ciam-app.md | Azure AD for customers supports authentication for Single-page apps (SPAs). The following steps show you how to register your SPA in the Microsoft Entra admin center: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - - 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. - - 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. --1. On the sidebar menu, select **Azure Active Directory**. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. -1. Select **Applications**, then select **App Registrations**. +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **+ New registration**. Azure AD for customers supports authentication for web apps. The following steps show you how to register your web app in the Microsoft Entra admin center: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - - 1. 
Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. - - 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. -1. On the sidebar menu, select **Azure Active Directory**. --1. Select **Applications**, then select **App Registrations**. +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **+ New registration**. If your web app needs to call an API, you must grant your web app API permission The following steps show you how to register your app in the Microsoft Entra admin center: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). --1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: - - 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. - - 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. On the sidebar menu, select **Azure Active Directory**. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. -1. 
Select **Applications**, then select **App Registrations**. +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **+ New registration**. |
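The SPA and web app registrations in this row differ mainly in which platform block holds the redirect URI. A rough sketch of the corresponding Microsoft Graph `application` payloads — the redirect URI is a placeholder, and `signInAudience` is an assumption that may need adjusting for your tenant:

```javascript
// Sketch: app registration payloads for the two platforms described above.
// signInAudience and the redirect URI are illustrative assumptions.
function buildAppRegistration(name, kind, redirectUri) {
  const base = { displayName: name, signInAudience: 'AzureADMyOrg' };
  if (kind === 'spa') {
    // SPA platform: redirect URIs live under the "spa" block (auth code + PKCE).
    return { ...base, spa: { redirectUris: [redirectUri] } };
  }
  // Web platform: redirect URIs live under the "web" block.
  return { ...base, web: { redirectUris: [redirectUri] } };
}
```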
active-directory | How To Single Page App Vanillajs Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-configure-authentication.md | - Title: Tutorial - Handle authentication flows in a vanilla JavaScript single-page app -description: Learn how to configure authentication for a vanilla JavaScript single-page app (SPA) with your Azure Active Directory (AD) for customers tenant. --------- Previously updated : 06/09/2023-#Customer intent: As a developer, I want to learn how to configure vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. ---# Tutorial: Handle authentication flows in a vanilla JavaScript single-page app --In the [previous article](./how-to-single-page-app-vanillajs-prepare-app.md), you created a vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This tutorial demonstrates how to configure the application to authenticate and authorize users to access protected resources. --In this tutorial; --> [!div class="checklist"] -> * Configure the settings for the application -> * Add code to *authRedirect.js* to handle the authentication flow -> * Add code to *authPopup.js* to handle the authentication flow --## Prerequisites --* Completion of the prerequisites and steps in [Prepare a single-page application for authentication](how-to-single-page-app-vanillajs-prepare-app.md). --## Edit the authentication configuration file --The application uses the [Implicit Grant Flow](../../develop/v2-oauth2-implicit-grant-flow.md) to authenticate users. The Implicit Grant Flow is a browser-based flow that doesn't require a back-end server. The flow redirects the user to the sign-in page, where the user signs in and consents to the permissions that are being requested by the application. 
The purpose of *authConfig.js* is to configure the authentication flow. --1. Open *public/authConfig.js* and add the following code snippet: -- ```javascript - /** - * Configuration object to be passed to MSAL instance on creation. - * For a full list of MSAL.js configuration parameters, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md - */ - const msalConfig = { - auth: { - clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply. - authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain - redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/ - navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response. - }, - cache: { - cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO. - storeAuthStateInCookie: false, // set this to true if you have to support IE - }, - system: { - loggerOptions: { - loggerCallback: (level, message, containsPii) => { - if (containsPii) { - return; - } - switch (level) { - case msal.LogLevel.Error: - console.error(message); - return; - case msal.LogLevel.Info: - console.info(message); - return; - case msal.LogLevel.Verbose: - console.debug(message); - return; - case msal.LogLevel.Warning: - console.warn(message); - return; - } - }, - }, - }, - }; - - /** - * An optional silentRequest object can be used to achieve silent SSO - * between applications by providing a "login_hint" property. 
- */ - - // const silentRequest = { - // scopes: ["openid", "profile"], - // loginHint: "example@domain.net" - // }; - - // exporting config object for jest - if (typeof exports !== 'undefined') { - module.exports = { - msalConfig: msalConfig, - loginRequest: loginRequest, - }; - } - ``` --1. Replace the following values with the values from the Azure portal: - - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center. - - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). -2. Save the file. --## Adding code to the redirection file --A redirection file is required to handle the response from the sign-in page. It is used to extract the access token from the URL fragment and use it to call the protected API. It is also used to handle errors that occur during the authentication process. --1. Open *public/authRedirect.js* and add the following code snippet: -- ```javascript - // Create the main myMSALObj instance - // configuration parameters are located at authConfig.js - const myMSALObj = new msal.PublicClientApplication(msalConfig); - - let username = ""; - - /** - * A promise handler needs to be registered for handling the - * response returned from redirect flow. 
For more information, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md#redirect-apis - */ - myMSALObj.handleRedirectPromise() - .then(handleResponse) - .catch((error) => { - console.error(error); - }); - - function selectAccount() { - - /** - * See here for more info on account retrieval: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md - */ - - const currentAccounts = myMSALObj.getAllAccounts(); - - if (!currentAccounts) { - return; - } else if (currentAccounts.length > 1) { - // Add your account choosing logic here - console.warn("Multiple accounts detected."); - } else if (currentAccounts.length === 1) { - welcomeUser(currentAccounts[0].username); - updateTable(currentAccounts[0]); - } - } - - function handleResponse(response) { - - /** - * To see the full list of response object properties, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response - */ - - if (response !== null) { - welcomeUser(response.account.username); - updateTable(response.account); - } else { - selectAccount(); - } - } - - function signIn() { - - /** - * You can pass a custom request object below. This will override the initial configuration. For more information, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request - */ - - myMSALObj.loginRedirect(loginRequest); - } - - function signOut() { - - /** - * You can pass a custom request object below. This will override the initial configuration. For more information, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request - */ - - // Choose which account to logout from by passing a username. 
- const logoutRequest = { - account: myMSALObj.getAccountByUsername(username), - postLogoutRedirectUri: '/signout', // remove this line if you would like navigate to index page after logout. - - }; - - myMSALObj.logoutRedirect(logoutRequest); - } - ``` --1. Save the file. --## Adding code to the *authPopup.js* file --The application uses *authPopup.js* to handle the authentication flow when the user signs in using the pop-up window. The pop-up window is used when the user is already signed in and the application needs to get an access token for a different resource. --1. Open *public/authPopup.js* and add the following code snippet: -- ```javascript - // Create the main myMSALObj instance - // configuration parameters are located at authConfig.js - const myMSALObj = new msal.PublicClientApplication(msalConfig); - - let username = ""; - - function selectAccount () { - - /** - * See here for more info on account retrieval: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md - */ - - const currentAccounts = myMSALObj.getAllAccounts(); - - if (!currentAccounts || currentAccounts.length < 1) { - return; - } else if (currentAccounts.length > 1) { - // Add your account choosing logic here - console.warn("Multiple accounts detected."); - } else if (currentAccounts.length === 1) { - username = currentAccounts[0].username - welcomeUser(currentAccounts[0].username); - updateTable(currentAccounts[0]); - } - } - - function handleResponse(response) { - - /** - * To see the full list of response object properties, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response - */ - - if (response !== null) { - username = response.account.username - welcomeUser(username); - updateTable(response.account); - } else { - selectAccount(); - } - } - - function signIn() { - - /** - * You can pass a custom request object below. 
This will override the initial configuration. For more information, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request - */ - - myMSALObj.loginPopup(loginRequest) - .then(handleResponse) - .catch(error => { - console.error(error); - }); - } - - function signOut() { - - /** - * You can pass a custom request object below. This will override the initial configuration. For more information, visit: - * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request - */ - - // Choose which account to logout from by passing a username. - const logoutRequest = { - account: myMSALObj.getAccountByUsername(username), - mainWindowRedirectUri: '/signout' - }; - - myMSALObj.logoutPopup(logoutRequest); - } - - selectAccount(); - ``` --1. Save the file. --## Next steps --> [!div class="nextstepaction"] -> [Sign in and sign out of the vanilla JS SPA](./how-to-single-page-app-vanillajs-sign-in-sign-out.md) |
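Note that the *authConfig.js* snippet in this row exports a `loginRequest` object that is never defined in the excerpt, and both *authRedirect.js* and *authPopup.js* pass it to `loginRedirect`/`loginPopup`. A minimal definition, as the MSAL samples typically declare it alongside `msalConfig` (the scope list is an assumption; add your API scopes if the app calls a protected API):

```javascript
/**
 * Scopes requested at sign-in. "openid" and "profile" are standard OIDC
 * scopes; extend this array with API scopes as needed.
 */
const loginRequest = {
  scopes: ['openid', 'profile'],
};
```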
active-directory | How To Single Page App Vanillajs Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-app.md | - Title: Tutorial - Prepare a vanilla JavaScript single-page app (SPA) for authentication in a customer tenant -description: Learn how to prepare a vanilla JavaScript single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant. --------- Previously updated : 06/09/2023-#Customer intent: As a developer, I want to learn how to configure vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant. ---# Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant --In the [previous article](./how-to-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant. --In this tutorial; --> [!div class="checklist"] -> * Create a vanilla JavaScript project in Visual Studio Code -> * Install required packages -> * Add code to *server.js* to create a server --## Prerequisites --* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md). -* Although any integrated development environment (IDE) that supports vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page. -* [Node.js](https://nodejs.org/en/download/). --## Create a new vanilla JS project and install dependencies --1. 
Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project. -1. Open a new terminal by selecting **Terminal** > **New Terminal**. -1. Run the following command to create a new vanilla JS project: -- ```powershell - npm init -y - ``` -1. Create additional folders and files to achieve the following project structure: -- ``` - └── public - └── authConfig.js - └── authPopup.js - └── authRedirect.js - └── index.html - └── signout.html - └── styles.css - └── ui.js - └── server.js - ``` - -## Install app dependencies --1. In the **Terminal**, run the following command to install the required dependencies for the project: -- ```powershell - npm install express morgan @azure/msal-browser - ``` --## Edit the *server.js* file --**Express** is a web application framework for **Node.js**. It's used to create a server that hosts the application. **Morgan** is the middleware that logs HTTP requests to the console. The server file is used to host these dependencies and contains the routes for the application. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/). --1. Add the following code snippet to the *server.js* file: -- ```javascript - const express = require('express'); - const morgan = require('morgan'); - const path = require('path'); - - const DEFAULT_PORT = process.env.PORT || 3000; - - // initialize express. - const app = express(); - - // Configure morgan module to log all requests. - app.use(morgan('dev')); - - // serve public assets.
- app.use(express.static('public')); - - // serve msal-browser module - app.use(express.static(path.join(__dirname, "node_modules/@azure/msal-browser/lib"))); - - // set up a route for signout.html - app.get('/signout', (req, res) => { - res.sendFile(path.join(__dirname + '/public/signout.html')); - }); - - // set up a route for redirect.html - app.get('/redirect', (req, res) => { - res.sendFile(path.join(__dirname + '/public/redirect.html')); - }); - - // set up a route for index.html - app.get('/', (req, res) => { - res.sendFile(path.join(__dirname + '/public/index.html')); - }); - - app.listen(DEFAULT_PORT, () => { - console.log(`Sample app listening on port ${DEFAULT_PORT}!`); - }); -- ``` --In this code, the **app** variable is initialized with the **express** module and **express** is used to serve the public assets. **Msal-browser** is served as a static asset and is used to initiate the authentication flow. --## Next steps --> [!div class="nextstepaction"] -> [Configure SPA for authentication](how-to-single-page-app-vanillajs-configure-authentication.md) |
active-directory | How To Single Page App Vanillajs Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-prepare-tenant.md | - Title: Tutorial - Prepare your customer tenant to authenticate users in a Vanilla JavaScript single-page application -description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a Vanilla JavaScript single-page app (SPA). --------- Previously updated : 06/09/2023-#Customer intent: As a developer, I want to learn how to configure a vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. ---# Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app --This tutorial series demonstrates how to build a vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) library to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences. --In this tutorial; --> [!div class="checklist"] -> * Register a SPA in the Microsoft Entra admin center, and record its identifiers -> * Define the platform and URLs -> * Grant permissions to the SPA to access the Microsoft Graph API -> * Create a sign in and sign out user flow in the Microsoft Entra admin center -> * Associate your SPA with the user flow --## Prerequisites --- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- This Azure account must have permissions to manage applications. 
Any of the following Azure AD roles include the required permissions:-- * Application administrator - * Application developer - * Cloud application administrator --- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one.--## Register the SPA and record identifiers ---## Add a platform redirect URL ---## Grant API permissions ---## Create a user flow ---## Associate the SPA with the user flow ---## Next steps --> [!div class="nextstepaction"] -> [Prepare your Vanilla JS SPA](how-to-single-page-app-vanillajs-prepare-app.md) |
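Adding the platform redirect URL from the steps above can also be done programmatically with a Microsoft Graph PATCH on the application object. A hedged sketch — the object ID and URL are placeholders, and the request is only constructed:

```javascript
// Sketch: Graph PATCH that sets a SPA redirect URI on an app registration,
// mirroring the "Add a platform redirect URL" step. Values are placeholders.
function buildRedirectUriPatch(appObjectId, redirectUri) {
  return {
    method: 'PATCH',
    url: `https://graph.microsoft.com/v1.0/applications/${appObjectId}`,
    body: { spa: { redirectUris: [redirectUri] } },
  };
}
```

Note the URL uses the application's *object* ID, not the application (client) ID recorded for *authConfig.js*.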
active-directory | How To Single Page App Vanillajs Sign In Sign Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-single-page-app-vanillajs-sign-in-sign-out.md | - Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant -description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant. -------- Previously updated : 05/25/2023-#Customer intent: As a developer, I want to learn how to configure Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. ---# Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant --In the [previous article](how-to-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality. --In this tutorial: --> [!div class="checklist"] -> * Add code to the *index.html* file to create the user interface -> * Add code to the *signout.html* file to create the sign-out page -> * Sign in and sign out of the application --## Prerequisites --* Completion of the prerequisites and steps in [Create components for authentication and authorization](how-to-single-page-app-vanillajs-configure-authentication.md). --## Add code to the *index.html* file --The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button. --1.
Open *public/https://docsupdatetracker.net/index.html* and add the following code snippet: -- ```html - <!DOCTYPE html> - <html lang="en"> - - <head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no"> - <title>Microsoft identity platform</title> - <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> - <link rel="stylesheet" href="./styles.css"> - - <!-- adding Bootstrap 5 for UI components --> - <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet" - integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous"> - - <!-- msal.min.js can be used in the place of msal-browser.js --> - <script src="/msal-browser.min.js"></script> - </head> - - <body> - <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle"> - <a class="navbar-brand" href="/">Microsoft identity platform</a> - <div class="navbar-collapse justify-content-end"> - <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button> - <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button> - </div> - </nav> - <br> - <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js - </h5> - <h5 id="welcome-div" class="card-header text-center d-none"></h5> - <br> - <div class="table-responsive-ms" id="table"> - <table id="table-div" class="table table-striped d-none"> - <thead id="table-head-div"> - <tr> - <th>Claim Type</th> - <th>Value</th> - <th>Description</th> - </tr> - </thead> - <tbody id="table-body-div"> - </tbody> - </table> - </div> - <!-- importing bootstrap.js and supporting js libraries --> - <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" - integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"> - </script> - <script
src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js" - integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3" - crossorigin="anonymous"></script> - <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js" - integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3" - crossorigin="anonymous"></script> - - <!-- importing app scripts (load order is important) --> - <script type="text/javascript" src="./authConfig.js"></script> - <script type="text/javascript" src="./ui.js"></script> - <script type="text/javascript" src="./claimUtils.js"></script> - <!-- <script type="text/javascript" src="./authRedirect.js"></script> --> - <!-- uncomment the above line and comment the line below if you would like to use the redirect flow --> - <script type="text/javascript" src="./authPopup.js"></script> - </body> - - </html> - ``` --1. Save the file. --## Add code to the *signout.html* file --1. Open *public/signout.html* and add the following code snippet: -- ```html - <!DOCTYPE html> - <html lang="en"> - <head> - <meta charset="UTF-8"> - <meta name="viewport" content="width=device-width, initial-scale=1.0"> - <title>Azure AD | Vanilla JavaScript SPA</title> - <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> - - <!-- adding Bootstrap 4 for UI components --> - <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous"> - </head> - <body> - <div class="jumbotron" style="margin: 10%"> - <h1>Goodbye!</h1> - <p>You have signed out and your cache has been cleared.</p> - <a class="btn btn-primary" href="/" role="button">Take me back</a> - </div> - </body> - </html> - ``` --1. Save the file. 
--## Add code to the *ui.js* file --Now that authentication is configured, you can build the user interface (UI) that lets users sign in and sign out when the project runs. The UI uses [Bootstrap](https://getbootstrap.com/) to create a responsive layout that contains **Sign-In** and **Sign-Out** buttons. --1. Open *public/ui.js* and add the following code snippet: -- ```javascript - // Select DOM elements to work with - const signInButton = document.getElementById('signIn'); - const signOutButton = document.getElementById('signOut'); - const titleDiv = document.getElementById('title-div'); - const welcomeDiv = document.getElementById('welcome-div'); - const tableDiv = document.getElementById('table-div'); - const tableBody = document.getElementById('table-body-div'); - - function welcomeUser(username) { - signInButton.classList.add('d-none'); - signOutButton.classList.remove('d-none'); - titleDiv.classList.add('d-none'); - welcomeDiv.classList.remove('d-none'); - welcomeDiv.innerHTML = `Welcome ${username}!`; - }; - - function updateTable(account) { - tableDiv.classList.remove('d-none'); - - const tokenClaims = createClaimsTable(account.idTokenClaims); - - Object.keys(tokenClaims).forEach((key) => { - let row = tableBody.insertRow(0); - let cell1 = row.insertCell(0); - let cell2 = row.insertCell(1); - let cell3 = row.insertCell(2); - cell1.innerHTML = tokenClaims[key][0]; - cell2.innerHTML = tokenClaims[key][1]; - cell3.innerHTML = tokenClaims[key][2]; - }); - }; - ``` --1. Save the file. --## Add code to the *styles.css* file --1. Open *public/styles.css* and add the following code snippet: -- ```css - .navbarStyle { - padding: .5rem 1rem !important; - } - - .table-responsive-ms { - max-height: 39rem !important; - padding-left: 10%; - padding-right: 10%; - } - ``` --1. Save the file.
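The *ui.js* code above calls `createClaimsTable`, which comes from the *claimUtils.js* script loaded by the main page but isn't shown in this tutorial. The following is a minimal sketch of the shape *ui.js* expects — an object of `[claim type, value, description]` triples — with a hypothetical `describeClaim` helper; it is illustrative only, not the sample's actual implementation:

```javascript
// Minimal sketch of createClaimsTable (hypothetical implementation).
// ui.js expects an object whose values are [claim type, value, description]
// triples, one per claim in the ID token.
function createClaimsTable(claims) {
    const claimsTable = {};
    Object.keys(claims).forEach((key, index) => {
        claimsTable[index] = [key, claims[key], describeClaim(key)];
    });
    return claimsTable;
}

// Hypothetical helper: short descriptions for a few common ID token claims.
function describeClaim(claimName) {
    const descriptions = {
        aud: "Identifies the intended recipient of the token (the client ID)",
        iss: "Identifies the security token service that issued the token",
        name: "Human-readable display name of the signed-in user",
    };
    return descriptions[claimName] || "No description available";
}
```

Because `updateTable` inserts each row with `insertRow(0)`, the claims render in reverse of the order they appear in this object.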
--## Run your project and sign in --Now that all the required code snippets have been added, the application can be run and tested in a web browser. --1. Open a new terminal and run the following command to start your Express web server: - ```powershell - npm start - ``` -1. Open a new private browser window and navigate to the application URI, `http://localhost:3000/`. -1. Select **No account? Create one**, which starts the sign-up flow. -1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application. -1. After entering a one-time passcode from the customer tenant, enter a new password and more account details to complete the sign-up flow. -- 1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**. --1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. -- :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of signing in to a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png"::: --## Sign out of the application --1. To sign out of the application, select **Sign out** in the navigation bar. -1. A window appears asking which account to sign out of. -1. Upon successful sign out, a final window appears advising you to close all browser windows. --## Next steps --- [Enable self-service password reset](./how-to-enable-password-reset-customers.md) |
active-directory | How To User Flow Add Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-add-application.md | Because you might want the same sign-in experience for all of your customer-faci If you already registered your application in your customer tenant, you can add it to the new user flow. This step activates the sign-up and sign-in experience for users who visit your application. An application can have only one user flow, but a user flow can be used by multiple applications. -1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory** > **External Identities** > **User flows**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). ++1. Browse to **Identity** > **External Identities** > **User flows**. 1. From the list, select your user flow. |
active-directory | How To User Flow Sign Up Sign In Customers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-user-flow-sign-up-sign-in-customers.md | Follow these steps to create a user flow a customer can use to sign in or sign u ### To add a new user flow -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. -1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**. +1. Browse to **Identity** > **External Identities** > **User flows**. 1. Select **New user flow**. Follow these steps to create a user flow a customer can use to sign in or sign u You can choose the order in which the attributes are displayed on the sign-up page. -1. In the [Microsoft Entra admin center](https://entra.microsoft.com/), select **Azure Active Directory**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. In the left pane, select **Azure Active Directory** > **External Identities** > **User flows**. +1. Browse to **Identity** > **External Identities** > **User flows**. 1. From the list, select your user flow. |
active-directory | How To Web App Node Use Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/how-to-web-app-node-use-certificate.md | -In production, you should purchase a certificate signed by a well-known certificate authority, and use [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) to manage certificate access and lifetime for you. However, for testing purposes, you can create a self-signed certificate and configure your apps to authenticate with it. +In production, you should purchase a certificate signed by a well-known certificate authority, and use [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) to manage certificate access and lifetime for you. However, for testing purposes, you can create a self-signed certificate and configure your apps to authenticate with it. -In this article, you learn to generate a self-signed certificate by using [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) on the Azure portal, OpenSSL or Windows PowerShell. +In this article, you learn to generate a self-signed certificate by using [Azure Key Vault](https://azure.microsoft.com/products/key-vault/) on the Azure portal, OpenSSL or Windows PowerShell. If you have a client secret already, you'll learn how to safely delete it. When needed, you can also create a self-signed certificate programmatically by using [.NET](/azure/key-vault/certificates/quick-create-net), [Node.js](/azure/key-vault/certificates/quick-create-node), [Go](/azure/key-vault/certificates/quick-create-go), [Python](/azure/key-vault/certificates/quick-create-python) or [Java](/azure/key-vault/certificates/quick-create-java) client libraries. 
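For the OpenSSL option, a single command is typically enough to produce the certificate and private key. The file names and subject below are placeholders, not values the article requires:

```shell
# Create a 2048-bit RSA key and a self-signed certificate valid for 365 days.
# -nodes leaves the private key unencrypted, which is acceptable for local
# testing only; -subj skips the interactive prompts.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout example.key -out example.crt \
  -subj "/CN=example.com"
```

You can then read the certificate's SHA-1 thumbprint with `openssl x509 -in example.crt -noout -fingerprint`.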
After the command finishes execution, you should have a *.crt* and a *.key* file [!INCLUDE [active-directory-customers-app-integration-add-user-flow](./includes/register-app/add-client-app-certificate.md)] + ## Configure your Node.js app to use certificate Once you associate your app registration with the certificate, you need to update your app code to start using the certificate: -1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then update it to look similar to the following code: +1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then update it to look similar to the following code. If you have a client secret present, make sure you remove it: ```javascript require('dotenv').config(); Once you associate your app registration with the certificate, you need to updat auth: { clientId: process.env.CLIENT_ID || 'Enter_the_Application_Id_Here', // 'Application (client) ID' of app registration in Azure portal - this value is a GUID authority: process.env.AUTHORITY || `https://${TENANT_SUBDOMAIN}.ciamlogin.com/`, - //clientSecret: process.env.CLIENT_SECRET || 'Enter_the_Client_Secret_Here', // Client secret generated from the app registration in Azure portal clientCertificate: { thumbprint: "YOUR_CERT_THUMBPRINT", // replace with thumbprint obtained during step 2 above privateKey: privateKey Once you associate your app registration with the certificate, you need to updat You can use your existing certificate directly from Azure Key Vault: -1. Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then comment the `clientSecret` property: +1. 
Locate the file that contains your MSAL configuration object, such as `msalConfig` in *authConfig.js*, then remove the `clientSecret` property: ```javascript const msalConfig = { auth: { clientId: process.env.CLIENT_ID || 'Enter_the_Application_Id_Here', // 'Application (client) ID' of app registration in Azure portal - this value is a GUID authority: process.env.AUTHORITY || `https://${TENANT_SUBDOMAIN}.ciamlogin.com/`, - //clientSecret: process.env.CLIENT_SECRET || 'Enter_the_Client_Secret_Here', // Client secret generated from the app registration in Azure portal }, //... }; |
active-directory | Microsoft Graph Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/microsoft-graph-operations.md | During registration, you'll specify a **Redirect URI** which redirects the user The following steps show you how to register your app in the Microsoft Entra admin center: -1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com/). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. If you have access to multiple tenants, make sure you use the directory that contains your Azure AD for customers tenant: +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. - 1. Select the **Directories + subscriptions** icon :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the portal toolbar. -- 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD for customers directory in the **Directory name** list, and then select **Switch**. --1. On the sidebar menu, select **Azure Active Directory**. --1. Select **Applications**, then select **App Registrations**. +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **+ New registration**. |
active-directory | Quickstart Tenant Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/quickstart-tenant-setup.md | In this quickstart, you'll learn how to create a tenant with customer configurat ## Create a new tenant with customer configurations -1. Sign in to your organization's [Microsoft Entra admin center](https://entra.microsoft.com/). -1. From the left menu, select **Azure Active Directory** > **Overview**. -1. Select **Manage tenants** at the top of the page. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. Browse to **Identity** > **Overview** > **Manage tenants**. 1. Select **Create**. :::image type="content" source="media/how-to-create-customer-tenant-portal/create-tenant.png" alt-text="Screenshot of the create tenant option."::: In this quickstart, you'll learn how to create a tenant with customer configurat If you're not going to continue to use this tenant, you can delete it using the following steps: -1. Ensure that you're signed in to the directory that you want to delete through the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the Azure portal. Switch to the target directory if needed. -1. From the left menu, select **Azure Active Directory** > **Overview**. -1. Select **Manage tenants** at the top of the page. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). +1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to your customer tenant. +1. Browse to **Identity** > **Overview** > **Manage tenants**. 1. Select the tenant you want to delete, and then select **Delete**. 
:::image type="content" source="media/how-to-create-customer-tenant-portal/delete-tenant.png" alt-text="Screenshot that shows how to delete the tenant."::: |
active-directory | Sample Single Page App Vanillajs Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/sample-single-page-app-vanillajs-sign-in.md | Title: Sign in users in a sample vanilla JavaScript single-page application -description: Learn how to configure a sample JavaSCript single-page application (SPA) to sign in and sign out users. +description: Learn how to configure a sample JavaScript single-page application (SPA) to sign in and sign out users. If you choose to download the `.zip` file, extract the sample app file to a fold ``` 1. Open a web browser and navigate to `http://localhost:3000/`.-1. Select **No account? Create one**, which starts the sign-up flow. -1. In the **Create account** window, enter the email address registered to your customer tenant, which starts the sign-up flow as a user for your application. -1. After entering a one-time passcode from the customer tenant, enter a new password and more account details, this sign-up flow is completed. -1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**. +1. Sign in with an account registered to the customer tenant. +1. Once signed in, the display name is shown next to the **Sign out** button as shown in the following screenshot. 1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of sign in into a vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png"::: |
active-directory | Samples Ciam All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/samples-ciam-all.md | These samples and how-to guides demonstrate how to integrate a single-page appli > [!div class="mx-tdCol2BreakAll"] > | Language/<br/>Platform | Code sample guide | Build and integrate guide | > | - | -- | - |-> | JavaScript, Vanilla | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) | +> | JavaScript, Vanilla | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) | > | JavaScript, Angular | • [Sign in users](./sample-single-page-app-angular-sign-in.md) | | > | JavaScript, React | • [Sign in users](./sample-single-page-app-react-sign-in.md) | • [Sign in users](./tutorial-single-page-app-react-sign-in-prepare-tenant.md) | These samples and how-to guides demonstrate how to write a daemon application th > [!div class="mx-tdCol2BreakAll"] > | App type | Code sample guide | Build and integrate guide | > | - | -- | - |-> | Single-page application | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](how-to-single-page-app-vanillajs-prepare-tenant.md) | +> | Single-page application | • [Sign in users](./sample-single-page-app-vanillajs-sign-in.md) | • [Sign in users](tutorial-single-page-app-vanillajs-prepare-tenant.md) | ### JavaScript, Angular |
active-directory | Tutorial Single Page App Vanillajs Configure Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-configure-authentication.md | + + Title: Tutorial - Handle authentication flows in a Vanilla JavaScript single-page app +description: Learn how to configure authentication for a Vanilla JavaScript single-page app (SPA) with your Azure Active Directory (AD) for customers tenant. +++++++++ Last updated : 08/17/2023+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app ++In the [previous article](./tutorial-single-page-app-vanillajs-prepare-app.md), you created a Vanilla JavaScript (JS) single-page application (SPA) and a server to host it. This tutorial demonstrates how to configure the application to authenticate and authorize users to access protected resources. ++In this tutorial: ++> [!div class="checklist"] +> * Configure the settings for the application +> * Add code to *authRedirect.js* to handle the authentication flow +> * Add code to *authPopup.js* to handle the authentication flow ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare a single-page application for authentication](tutorial-single-page-app-vanillajs-prepare-app.md). ++## Edit the authentication configuration file ++The application uses the [authorization code flow with PKCE](../../develop/v2-oauth2-auth-code-flow.md) to authenticate users. It's a browser-based flow that doesn't require a back-end server or a client secret. The flow redirects the user to the sign-in page, where the user signs in and consents to the permissions that the application requests.
The purpose of *authConfig.js* is to configure the authentication flow. ++1. Open *public/authConfig.js* and add the following code snippet: ++ ```javascript + /** + * Configuration object to be passed to MSAL instance on creation. + * For a full list of MSAL.js configuration parameters, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/configuration.md + */ + const msalConfig = { + auth: { + clientId: 'Enter_the_Application_Id_Here', // This is the ONLY mandatory field that you need to supply. + authority: 'https://Enter_the_Tenant_Subdomain_Here.ciamlogin.com/', // Replace "Enter_the_Tenant_Subdomain_Here" with your tenant subdomain + redirectUri: '/', // You must register this URI on Azure Portal/App Registration. Defaults to window.location.href e.g. http://localhost:3000/ + navigateToLoginRequestUrl: true, // If "true", will navigate back to the original request location before processing the auth code response. + }, + cache: { + cacheLocation: 'sessionStorage', // Configures cache location. "sessionStorage" is more secure, but "localStorage" gives you SSO. + storeAuthStateInCookie: false, // set this to true if you have to support IE + }, + system: { + loggerOptions: { + loggerCallback: (level, message, containsPii) => { + if (containsPii) { + return; + } + switch (level) { + case msal.LogLevel.Error: + console.error(message); + return; + case msal.LogLevel.Info: + console.info(message); + return; + case msal.LogLevel.Verbose: + console.debug(message); + return; + case msal.LogLevel.Warning: + console.warn(message); + return; + } + }, + }, + }, + }; + + /** + * An optional silentRequest object can be used to achieve silent SSO + * between applications by providing a "login_hint" property. 
+ */ + + // const silentRequest = { + // scopes: ["openid", "profile"], + // loginHint: "example@domain.net" + // }; + + /** + * The loginRequest object is passed to the login methods. Scopes listed here + * are requested at sign-in; MSAL.js adds the OIDC scopes (openid, profile) + * by default. This object is referenced by the export below and by + * authRedirect.js/authPopup.js. + */ + const loginRequest = { + scopes: [], + }; + + // exporting config object for jest + if (typeof exports !== 'undefined') { + module.exports = { + msalConfig: msalConfig, + loginRequest: loginRequest, + }; + } + ``` ++1. Replace the following values with the values from the Azure portal: + - Find the `Enter_the_Application_Id_Here` value and replace it with the **Application ID (clientId)** of the app you registered in the Microsoft Entra admin center. + - In **Authority**, find `Enter_the_Tenant_Subdomain_Here` and replace it with the subdomain of your tenant. For example, if your tenant primary domain is `contoso.onmicrosoft.com`, use `contoso`. If you don't have your tenant name, [learn how to read your tenant details](how-to-create-customer-tenant-portal.md#get-the-customer-tenant-details). +2. Save the file. ++## Add code to the redirection file ++A redirection file is required to handle the response from the sign-in page. It processes the authentication response returned in the redirect, and handles any errors that occur during the authentication process. ++1. Open *public/authRedirect.js* and add the following code snippet: ++ ```javascript + // Create the main myMSALObj instance + // configuration parameters are located at authConfig.js + const myMSALObj = new msal.PublicClientApplication(msalConfig); + + let username = ""; + + /** + * A promise handler needs to be registered for handling the + * response returned from redirect flow.
For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/initialization.md#redirect-apis + */ + myMSALObj.handleRedirectPromise() + .then(handleResponse) + .catch((error) => { + console.error(error); + }); + + function selectAccount() { + + /** + * See here for more info on account retrieval: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md + */ + + const currentAccounts = myMSALObj.getAllAccounts(); + + if (!currentAccounts || currentAccounts.length < 1) { + return; + } else if (currentAccounts.length > 1) { + // Add your account choosing logic here + console.warn("Multiple accounts detected."); + } else if (currentAccounts.length === 1) { + username = currentAccounts[0].username; + welcomeUser(currentAccounts[0].username); + updateTable(currentAccounts[0]); + } + } + + function handleResponse(response) { + + /** + * To see the full list of response object properties, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response + */ + + if (response !== null) { + username = response.account.username; + welcomeUser(response.account.username); + updateTable(response.account); + } else { + selectAccount(); + } + } + + function signIn() { + + /** + * You can pass a custom request object below. This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + myMSALObj.loginRedirect(loginRequest); + } + + function signOut() { + + /** + * You can pass a custom request object below. This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + // Choose which account to logout from by passing a username.
+ const logoutRequest = { + account: myMSALObj.getAccountByUsername(username), + postLogoutRedirectUri: '/signout', // remove this line if you would like to navigate to the index page after logout. + + }; + + myMSALObj.logoutRedirect(logoutRequest); + } + ``` ++1. Save the file. ++## Add code to the *authPopup.js* file ++The application uses *authPopup.js* to handle the authentication flow when the user signs in using the pop-up window. The pop-up window authenticates the user in a separate window while the current page stays loaded, and is an alternative to the full-page redirect flow. ++1. Open *public/authPopup.js* and add the following code snippet: ++ ```javascript + // Create the main myMSALObj instance + // configuration parameters are located at authConfig.js + const myMSALObj = new msal.PublicClientApplication(msalConfig); + + let username = ""; + + function selectAccount () { + + /** + * See here for more info on account retrieval: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-common/docs/Accounts.md + */ + + const currentAccounts = myMSALObj.getAllAccounts(); + + if (!currentAccounts || currentAccounts.length < 1) { + return; + } else if (currentAccounts.length > 1) { + // Add your account choosing logic here + console.warn("Multiple accounts detected."); + } else if (currentAccounts.length === 1) { + username = currentAccounts[0].username + welcomeUser(currentAccounts[0].username); + updateTable(currentAccounts[0]); + } + } + + function handleResponse(response) { + + /** + * To see the full list of response object properties, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#response + */ + + if (response !== null) { + username = response.account.username + welcomeUser(username); + updateTable(response.account); + } else { + selectAccount(); + } + } + + function signIn() { + + /** + * You can pass a custom request object below.
This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + myMSALObj.loginPopup(loginRequest) + .then(handleResponse) + .catch(error => { + console.error(error); + }); + } + + function signOut() { + + /** + * You can pass a custom request object below. This will override the initial configuration. For more information, visit: + * https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-browser/docs/request-response-object.md#request + */ + + // Choose which account to logout from by passing a username. + const logoutRequest = { + account: myMSALObj.getAccountByUsername(username), + mainWindowRedirectUri: '/signout' + }; + + myMSALObj.logoutPopup(logoutRequest); + } + + selectAccount(); + ``` ++1. Save the file. ++## Next steps ++> [!div class="nextstepaction"] +> [Sign in and sign out of the Vanilla JS SPA](./tutorial-single-page-app-vanillajs-sign-in-sign-out.md) |
active-directory | Tutorial Single Page App Vanillajs Prepare App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-app.md | + + Title: Tutorial - Prepare a Vanilla JavaScript single-page app (SPA) for authentication in a customer tenant +description: Learn how to prepare a Vanilla JavaScript single-page app (SPA) for authentication and authorization with your Azure Active Directory (AD) for customers tenant. +++++++++ Last updated : 08/17/2023+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure AD for customers tenant. +++# Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant ++In the [previous article](tutorial-single-page-app-vanillajs-prepare-tenant.md), you registered an application and configured user flows in your Azure Active Directory (AD) for customers tenant. This article shows you how to create a Vanilla JavaScript (JS) single-page app (SPA) and configure it to sign in and sign out users with your customer tenant. ++In this tutorial: ++> [!div class="checklist"] +> * Create a Vanilla JavaScript project in Visual Studio Code +> * Install required packages +> * Add code to *server.js* to create a server ++## Prerequisites ++* Completion of the prerequisites and steps in [Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md). +* Although any integrated development environment (IDE) that supports Vanilla JS applications can be used, **Visual Studio Code** is recommended for this guide. It can be downloaded from the [Downloads](https://visualstudio.microsoft.com/downloads) page. +* [Node.js](https://nodejs.org/en/download/). ++## Create a new Vanilla JS project and install dependencies ++1.
Open Visual Studio Code, select **File** > **Open Folder...**. Navigate to and select the location in which to create your project. +1. Open a new terminal by selecting **Terminal** > **New Terminal**. +1. Run the following command to create a new Vanilla JS project: ++ ```powershell + npm init -y + ``` +1. Create additional folders and files to achieve the following project structure: ++ ``` + └── public + └── authConfig.js + └── authPopup.js + └── authRedirect.js + └── claimUtils.js + └── index.html + └── signout.html + └── styles.css + └── ui.js + └── server.js + ``` + +## Install app dependencies ++1. In the **Terminal**, run the following command to install the required dependencies for the project: ++ ```powershell + npm install express morgan @azure/msal-browser + ``` ++## Edit the *server.js* file ++**Express** is a web application framework for **Node.js**. It's used to create a server that hosts the application. **Morgan** is the middleware that logs HTTP requests to the console. The server file is used to host these dependencies and contains the routes for the application. Authentication and authorization are handled by the [Microsoft Authentication Library for JavaScript (MSAL.js)](/javascript/api/overview/). ++1. Add the following code snippet to the *server.js* file: ++ ```javascript + const express = require('express'); + const morgan = require('morgan'); + const path = require('path'); + + const DEFAULT_PORT = process.env.PORT || 3000; + + // initialize express. + const app = express(); + + // Configure morgan module to log all requests. + app.use(morgan('dev')); + + // serve public assets. 
+ app.use(express.static('public')); + + // serve msal-browser module + app.use(express.static(path.join(__dirname, "node_modules/@azure/msal-browser/lib"))); + + // set up a route for signout.html + app.get('/signout', (req, res) => { + res.sendFile(path.join(__dirname + '/public/signout.html')); + }); + + // set up a route for redirect.html + app.get('/redirect', (req, res) => { + res.sendFile(path.join(__dirname + '/public/redirect.html')); + }); + + // set up a route for index.html + app.get('/', (req, res) => { + res.sendFile(path.join(__dirname + '/public/index.html')); + }); + + app.listen(DEFAULT_PORT, () => { + console.log(`Sample app listening on port ${DEFAULT_PORT}!`); + }); ++ ``` ++In this code, the **app** variable is initialized with the **express** module and **express** is used to serve the public assets. **Msal-browser** is served as a static asset and is used to initiate the authentication flow. ++## Next steps ++> [!div class="nextstepaction"] +> [Configure SPA for authentication](tutorial-single-page-app-vanillajs-configure-authentication.md) |
active-directory | Tutorial Single Page App Vanillajs Prepare Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-prepare-tenant.md | + + Title: Tutorial - Prepare your customer tenant to authenticate users in a Vanilla JavaScript single-page application +description: Learn how to configure your Azure Active Directory (AD) for customers tenant for authentication with a Vanilla JavaScript single-page app (SPA). +++++++++ Last updated : 08/17/2023+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app ++This tutorial series demonstrates how to build a Vanilla JavaScript single-page application (SPA) and prepare it for authentication using the Microsoft Entra admin center. You'll use the [Microsoft Authentication Library for JavaScript](/javascript/api/overview/msal-overview) to authenticate your app with your Azure Active Directory (Azure AD) for customers tenant. Finally, you'll run the application and test the sign-in and sign-out experiences. ++In this tutorial: ++> [!div class="checklist"] +> * Register a SPA in the Microsoft Entra admin center, and record its identifiers +> * Define the platform and URLs +> * Grant permissions to the SPA to access the Microsoft Graph API +> * Create a sign-in and sign-out user flow in the Microsoft Entra admin center +> * Associate your SPA with the user flow ++## Prerequisites ++- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- This Azure account must have permissions to manage applications. 
Any of the following Azure AD roles include the required permissions: ++ * Application administrator + * Application developer + * Cloud application administrator ++- An Azure AD for customers tenant. If you haven't already, [create one now](https://aka.ms/ciam-free-trial?wt.mc_id=ciamcustomertenantfreetrial_linkclick_content_cnl). You can use an existing customer tenant if you have one. ++## Register the SPA and record identifiers +++## Add a platform redirect URL +++## Grant API permissions +++## Create a user flow +++## Associate the SPA with the user flow +++## Next steps ++> [!div class="nextstepaction"] +> [Prepare your Vanilla JS SPA](tutorial-single-page-app-vanillajs-prepare-app.md) |
active-directory | Tutorial Single Page App Vanillajs Sign In Sign Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/tutorial-single-page-app-vanillajs-sign-in-sign-out.md | + + Title: Tutorial - Add sign-in and sign-out to a Vanilla JavaScript single-page app (SPA) for a customer tenant +description: Learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with your Azure Active Directory (AD) for customers tenant. ++++++++ Last updated : 08/02/2023+#Customer intent: As a developer, I want to learn how to configure a Vanilla JavaScript single-page app (SPA) to sign in and sign out users with my Azure Active Directory (AD) for customers tenant. +++# Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant ++In the [previous article](tutorial-single-page-app-vanillajs-configure-authentication.md), you edited the popup and redirection files that handle the sign-in page response. This tutorial demonstrates how to build a responsive user interface (UI) that contains a **Sign-In** and **Sign-Out** button and run the project to test the sign-in and sign-out functionality. ++In this tutorial: ++> [!div class="checklist"] +> * Add code to the *index.html* file to create the user interface +> * Add code to the *signout.html* file to create the sign-out page +> * Sign in and sign out of the application ++## Prerequisites ++* Completion of the prerequisites and steps in [Create components for authentication and authorization](tutorial-single-page-app-vanillajs-configure-authentication.md). ++## Add code to the *index.html* file ++The main page of the SPA, *index.html*, is the first page that is loaded when the application is started. It's also the page that is loaded when the user selects the **Sign-Out** button. ++1. 
Open *public/index.html* and add the following code snippet: ++ ```html + <!DOCTYPE html> + <html lang="en"> + + <head> + <meta charset="UTF-8"> + <meta name="viewport" content="width=device-width, initial-scale=1.0, shrink-to-fit=no"> + <title>Microsoft identity platform</title> + <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> + <link rel="stylesheet" href="./styles.css"> + + <!-- adding Bootstrap 5 for UI components --> + <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/css/bootstrap.min.css" rel="stylesheet" + integrity="sha384-Zenh87qX5JnK2Jl0vWa8Ck2rdkQ2Bzep5IDxbcnCeuOxjzrPF/et3URy9Bv1WTRi" crossorigin="anonymous"> + + <!-- msal.min.js can be used in the place of msal-browser.js --> + <script src="/msal-browser.min.js"></script> + </head> + + <body> + <nav class="navbar navbar-expand-sm navbar-dark bg-primary navbarStyle"> + <a class="navbar-brand" href="/">Microsoft identity platform</a> + <div class="navbar-collapse justify-content-end"> + <button type="button" id="signIn" class="btn btn-secondary" onclick="signIn()">Sign-in</button> + <button type="button" id="signOut" class="btn btn-success d-none" onclick="signOut()">Sign-out</button> + </div> + </nav> + <br> + <h5 id="title-div" class="card-header text-center">Vanilla JavaScript single-page application secured with MSAL.js + </h5> + <h5 id="welcome-div" class="card-header text-center d-none"></h5> + <br> + <div class="table-responsive-ms" id="table"> + <table id="table-div" class="table table-striped d-none"> + <thead id="table-head-div"> + <tr> + <th>Claim Type</th> + <th>Value</th> + <th>Description</th> + </tr> + </thead> + <tbody id="table-body-div"> + </tbody> + </table> + </div> + <!-- importing bootstrap.js and supporting js libraries --> + <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" + integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"> + </script> + <script 
src="https://cdn.jsdelivr.net/npm/@popperjs/core@2.11.6/dist/umd/popper.min.js" + integrity="sha384-oBqDVmMz9ATKxIep9tiCxS/Z9fNfEXiDAYTujMAeBAsjFuCZSmKbSSUnQlmh/jp3" + crossorigin="anonymous"></script> + <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.2.2/dist/js/bootstrap.bundle.min.js" + integrity="sha384-OERcA2EqjJCMA+/3y+gxIOqMEjwtxJY7qPCqsdltbNJuaOe923+mo//f6V8Qbsw3" + crossorigin="anonymous"></script> + + <!-- importing app scripts (load order is important) --> + <script type="text/javascript" src="./authConfig.js"></script> + <script type="text/javascript" src="./ui.js"></script> + <script type="text/javascript" src="./claimUtils.js"></script> + <!-- <script type="text/javascript" src="./authRedirect.js"></script> --> + <!-- uncomment the above line and comment the line below if you would like to use the redirect flow --> + <script type="text/javascript" src="./authPopup.js"></script> + </body> + + </html> + ``` ++1. Save the file. ++## Add code to the *claimUtils.js* file ++1. Open *public/claimUtils.js* and add the following code snippet: + + ```javascript + /** + * Populate claims table with appropriate description + * @param {Object} claims ID token claims + * @returns claimsObject + */ + const createClaimsTable = (claims) => { + let claimsObj = {}; + let index = 0; + + Object.keys(claims).forEach((key) => { + if (typeof claims[key] !== 'string' && typeof claims[key] !== 'number') return; + switch (key) { + case 'aud': + populateClaim( + key, + claims[key], + "Identifies the intended recipient of the token. In ID tokens, the audience is your app's Application ID, assigned to your app in the Azure portal.", + index, + claimsObj + ); + index++; + break; + case 'iss': + populateClaim( + key, + claims[key], + 'Identifies the issuer, or authorization server that constructs and returns the token. It also identifies the Azure AD tenant for which the user was authenticated. If the token was issued by the v2.0 endpoint, the URI will end in /v2.0. 
The GUID that indicates that the user is a consumer user from a Microsoft account is 9188040d-6c67-4c5b-b112-36a304b66dad.', + index, + claimsObj + ); + index++; + break; + case 'iat': + populateClaim( + key, + changeDateFormat(claims[key]), + 'Issued At indicates when the authentication for this token occurred.', + index, + claimsObj + ); + index++; + break; + case 'nbf': + populateClaim( + key, + changeDateFormat(claims[key]), + 'The nbf (not before) claim identifies the time (as UNIX timestamp) before which the JWT must not be accepted for processing.', + index, + claimsObj + ); + index++; + break; + case 'exp': + populateClaim( + key, + changeDateFormat(claims[key]), + "The exp (expiration time) claim identifies the expiration time (as UNIX timestamp) on or after which the JWT must not be accepted for processing. It's important to note that in certain circumstances, a resource may reject the token before this time. For example, if a change in authentication is required or a token revocation has been detected.", + index, + claimsObj + ); + index++; + break; + case 'name': + populateClaim( + key, + claims[key], + "The principal about which the token asserts information, such as the user of an application. This value is immutable and can't be reassigned or reused. It can be used to perform authorization checks safely, such as when the token is used to access a resource. By default, the subject claim is populated with the object ID of the user in the directory", + index, + claimsObj + ); + index++; + break; + case 'preferred_username': + populateClaim( + key, + claims[key], + 'The primary username that represents the user. It could be an email address, phone number, or a generic username without a specified format. Its value is mutable and might change over time. Since it is mutable, this value must not be used to make authorization decisions. It can be used for username hints, however, and in human-readable UI as a username. 
The profile scope is required in order to receive this claim.', + index, + claimsObj + ); + index++; + break; + case 'nonce': + populateClaim( + key, + claims[key], + 'The nonce matches the parameter included in the original /authorize request to the IDP. If it does not match, your application should reject the token.', + index, + claimsObj + ); + index++; + break; + case 'oid': + populateClaim( + key, + claims[key], + 'The oid (user’s object id) is the only claim that should be used to uniquely identify a user in an Azure AD tenant. The token might have one or more of the following claims, which might seem like unique identifiers, but are not and should not be used as such.', + index, + claimsObj + ); + index++; + break; + case 'tid': + populateClaim( + key, + claims[key], + 'The tenant ID. You will use this claim to ensure that only users from the current Azure AD tenant can access this app.', + index, + claimsObj + ); + index++; + break; + case 'upn': + populateClaim( + key, + claims[key], + '(user principal name) – might be unique amongst the active set of users in a tenant but tend to get reassigned to new employees as employees leave the organization and others take their place or might change to reflect a personal change like marriage.', + index, + claimsObj + ); + index++; + break; + case 'email': + populateClaim( + key, + claims[key], + 'Email might be unique amongst the active set of users in a tenant but tend to get reassigned to new employees as employees leave the organization and others take their place.', + index, + claimsObj + ); + index++; + break; + case 'acct': + populateClaim( + key, + claims[key], + 'Available as an optional claim, it lets you know what the type of user (homed, guest) is. 
For example, for an individual’s access to their data you might not care for this claim, but you would use this along with tenant id (tid) to control access to say a company-wide dashboard to just employees (homed users) and not contractors (guest users).', + index, + claimsObj + ); + index++; + break; + case 'sid': + populateClaim(key, claims[key], 'Session ID, used for per-session user sign-out.', index, claimsObj); + index++; + break; + case 'sub': + populateClaim( + key, + claims[key], + 'The sub claim is a pairwise identifier - it is unique to a particular application ID. If a single user signs into two different apps using two different client IDs, those apps will receive two different values for the subject claim.', + index, + claimsObj + ); + index++; + break; + case 'ver': + populateClaim( + key, + claims[key], + 'Version of the token issued by the Microsoft identity platform', + index, + claimsObj + ); + index++; + break; + case 'auth_time': + populateClaim( + key, + claims[key], + 'The time at which a user last entered credentials, represented in epoch time. There is no discrimination between that authentication being a fresh sign-in, a single sign-on (SSO) session, or another sign-in type.', + index, + claimsObj + ); + index++; + break; + case 'at_hash': + populateClaim( + key, + claims[key], + 'An access token hash included in an ID token only when the token is issued together with an OAuth 2.0 access token. 
An access token hash can be used to validate the authenticity of an access token', + index, + claimsObj + ); + index++; + break; + case 'uti': + case 'rh': + index++; + break; + default: + populateClaim(key, claims[key], '', index, claimsObj); + index++; + } + }); + + return claimsObj; + }; + + /** + * Populates claim, description, and value into a claimsObject + * @param {string} claim + * @param {string} value + * @param {string} description + * @param {number} index + * @param {Object} claimsObject + */ + const populateClaim = (claim, value, description, index, claimsObject) => { + let claimsArray = []; + claimsArray[0] = claim; + claimsArray[1] = value; + claimsArray[2] = description; + claimsObject[index] = claimsArray; + }; + + /** + * Transforms Unix timestamp to date and returns a string value of that date + * @param {string} date Unix timestamp + * @returns + */ + const changeDateFormat = (date) => { + let dateObj = new Date(date * 1000); + return `${date} - [${dateObj.toString()}]`; + }; + ``` ++1. Save the file. ++## Add code to the *signout.html* file ++1. Open *public/signout.html* and add the following code snippet: ++ ```html + <!DOCTYPE html> + <html lang="en"> + <head> + <meta charset="UTF-8"> + <meta name="viewport" content="width=device-width, initial-scale=1.0"> + <title>Azure AD | Vanilla JavaScript SPA</title> + <link rel="SHORTCUT ICON" href="./favicon.svg" type="image/x-icon"> + + <!-- adding Bootstrap 4 for UI components --> + <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGsO3+Hhxv8T/Q5PaXtkKtu6ug5TOeNV6gBiFeWPGFN9MuhOf23Q9Ifjh" crossorigin="anonymous"> + </head> + <body> + <div class="jumbotron" style="margin: 10%"> + <h1>Goodbye!</h1> + <p>You have signed out and your cache has been cleared.</p> + <a class="btn btn-primary" href="/" role="button">Take me back</a> + </div> + </body> + </html> + ``` ++1. Save the file. 
++## Add code to the *ui.js* file ++When authorization has been configured, the user interface can be created to allow users to sign in and sign out when the project is run. To build the user interface (UI) for the application, [Bootstrap](https://getbootstrap.com/) is used to create a responsive UI that contains a **Sign-In** and **Sign-Out** button. ++1. Open *public/ui.js* and add the following code snippet: ++ ```javascript + // Select DOM elements to work with + const signInButton = document.getElementById('signIn'); + const signOutButton = document.getElementById('signOut'); + const titleDiv = document.getElementById('title-div'); + const welcomeDiv = document.getElementById('welcome-div'); + const tableDiv = document.getElementById('table-div'); + const tableBody = document.getElementById('table-body-div'); + + function welcomeUser(username) { + signInButton.classList.add('d-none'); + signOutButton.classList.remove('d-none'); + titleDiv.classList.add('d-none'); + welcomeDiv.classList.remove('d-none'); + welcomeDiv.innerHTML = `Welcome ${username}!`; + }; + + function updateTable(account) { + tableDiv.classList.remove('d-none'); + + const tokenClaims = createClaimsTable(account.idTokenClaims); + + Object.keys(tokenClaims).forEach((key) => { + let row = tableBody.insertRow(0); + let cell1 = row.insertCell(0); + let cell2 = row.insertCell(1); + let cell3 = row.insertCell(2); + cell1.innerHTML = tokenClaims[key][0]; + cell2.innerHTML = tokenClaims[key][1]; + cell3.innerHTML = tokenClaims[key][2]; + }); + }; + ``` ++1. Save the file. ++## Add code to the *styles.css* file ++1. Open *public/styles.css* and add the following code snippet: ++ ```css + .navbarStyle { + padding: .5rem 1rem !important; + } + + .table-responsive-ms { + max-height: 39rem !important; + padding-left: 10%; + padding-right: 10%; + } + ``` ++1. Save the file. 
++## Run your project and sign in ++Now that all the required code snippets have been added, the application can be run and tested in a web browser. ++1. Open a new terminal and run the following command to start your express web server: + ```powershell + npm start + ``` +1. Open a new private browser window, and enter the application URI, `http://localhost:3000/`, into the address bar. +1. Select **No account? Create one**, which starts the sign-up flow. +1. In the **Create account** window, enter the email address registered to your Azure Active Directory (AD) for customers tenant, which starts the sign-up flow as a user for your application. +1. After entering a one-time passcode from the customer tenant, enter a new password and more account details to complete the sign-up flow. ++ 1. If a window appears prompting you to **Stay signed in**, choose either **Yes** or **No**. ++1. The SPA will now display a button saying **Request Profile Information**. Select it to display profile data. ++ :::image type="content" source="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png" alt-text="Screenshot of signing in to a Vanilla JS SPA." lightbox="media/how-to-spa-vanillajs-sign-in-sign-in-out/display-vanillajs-welcome.png"::: ++## Sign out of the application ++1. To sign out of the application, select **Sign out** in the navigation bar. +1. A window appears asking which account to sign out of. +1. Upon successful sign out, a final window appears advising you to close all browser windows. ++## Next steps ++- [Enable self-service password reset](./how-to-enable-password-reset-customers.md) |
active-directory | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customers/whats-new-docs.md | Title: "What's new in Azure Active Directory for customers" description: "New and updated documentation for the Azure Active Directory for customers documentation." Previously updated : 08/01/2023 Last updated : 08/17/2023 Welcome to what's new in Azure Active Directory for customers documentation. Thi - [Add user attributes to token claims](how-to-add-attributes-to-token.md) - Added attributes to token claims: fixed steps for updating the app manifest - [Tutorial: Prepare a React single-page app (SPA) for authentication in a customer tenant](./tutorial-single-page-app-react-sign-in-prepare-app.md) - JavaScript tutorial edits, code sample updates and fixed SPA aligning content styling - [Tutorial: Add sign-in and sign-out to a React single-page app (SPA) for a customer tenant](./tutorial-single-page-app-react-sign-in-sign-out.md) - JavaScript tutorial edits and fixed SPA aligning content styling-- [Tutorial: Handle authentication flows in a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content styling-- [Tutorial: Prepare a vanilla JavaScript single-page app for authentication in a customer tenant](how-to-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling-- [Tutorial: Prepare your customer tenant to authenticate a vanilla JavaScript single-page app](how-to-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling-- [Tutorial: Add sign-in and sign-out to a vanilla JavaScript single-page app for a customer tenant](how-to-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling+- [Tutorial: Handle authentication flows in a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-configure-authentication.md) - Fixed SPA aligning content 
styling +- [Tutorial: Prepare a Vanilla JavaScript single-page app for authentication in a customer tenant](tutorial-single-page-app-vanillajs-prepare-app.md) - Fixed SPA aligning content styling +- [Tutorial: Prepare your customer tenant to authenticate a Vanilla JavaScript single-page app](tutorial-single-page-app-vanillajs-prepare-tenant.md) - Fixed SPA aligning content styling +- [Tutorial: Add sign-in and sign-out to a Vanilla JavaScript single-page app for a customer tenant](tutorial-single-page-app-vanillajs-sign-in-sign-out.md) - Fixed SPA aligning content styling - [Tutorial: Prepare your customer tenant to authenticate users in a React single-page app (SPA)](tutorial-single-page-app-react-sign-in-prepare-tenant.md) - Fixed SPA aligning content styling - [Tutorial: Prepare an ASP.NET web app for authentication in a customer tenant](tutorial-web-app-dotnet-sign-in-prepare-app.md) - ASP.NET web app fixes - [Tutorial: Prepare your customer tenant to authenticate users in an ASP.NET web app](tutorial-web-app-dotnet-sign-in-prepare-tenant.md) - ASP.NET web app fixes |
active-directory | Customize Invitation Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md | description: Azure Active Directory B2B collaboration supports your cross-compan + Last updated 12/02/2022 -# Customer intent: As a tenant administrator, I want to customize the invitation process with the API. +# Customer intent: As a tenant administrator, I want to customize the invitation process with the API. # Azure Active Directory B2B collaboration API and customization Check out the invitation API reference in [https://developer.microsoft.com/graph - [What is Azure AD B2B collaboration?](what-is-b2b.md) - [Add and invite guest users](add-users-administrator.md) - [The elements of the B2B collaboration invitation email](invitation-email-elements.md)- |
active-directory | Direct Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md | Last updated 03/15/2023 -+ |
active-directory | External Collaboration Settings Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md | description: Learn how to enable Active Directory B2B external collaboration and + Last updated 10/24/2022 |
active-directory | Facebook Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md | Last updated 01/20/2023 -+ --# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login. +# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login. # Add Facebook as an identity provider for External Identities |
active-directory | Google Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md | Last updated 01/20/2023 -+ |
active-directory | Invite Internal Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md | description: If you have internal user accounts for partners, distributors, supp + Last updated 07/27/2023 |
active-directory | Tenant Restrictions V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tenant-restrictions-v2.md | -> The **Tenant restrictions** settings, which are included with cross-tenant access settings, are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> The **Tenant restrictions** settings, which are included with cross-tenant access settings, are preview features of Azure Active Directory. For more information about previews, see [Universal License Terms for Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). For increased security, you can limit what your users can access when they use an external account to sign in from your networks or devices. With the **Tenant restrictions** settings included with [cross-tenant access settings](cross-tenant-access-overview.md), you can control the external apps that your Windows device users can access when they're using external accounts. For example, let's say a user in your organization has created a separate accoun :::image type="content" source="media/tenant-restrictions-v2/authentication-flow.png" alt-text="Diagram illustrating tenant restrictions v2."::: -| | | ++| Steps | Description | ||| |**1** | Contoso configures **Tenant restrictions** in their cross-tenant access settings to block all external accounts and external apps. Contoso enforces the policy on each Windows device by updating the local computer configuration with Contoso's tenant ID and the tenant restrictions policy ID. | |**2** | A user with a Contoso-managed Windows device tries to sign in to an external app using an account from an unknown tenant. The Windows device adds an HTTP header to the authentication request. 
The header contains Contoso's tenant ID and the tenant restrictions policy ID. | |**3** | *Authentication plane protection:* Azure AD uses the header in the authentication request to look up the tenant restrictions policy in the Azure AD cloud. Because Contoso's policy blocks external accounts from accessing external tenants, the request is blocked at the authentication level. | |**4** | *Data plane protection:* The user tries to access the external application by copying an authentication response token they obtained outside of Contoso's network and pasting it into the Windows device. However, Azure AD compares the claim in the token to the HTTP header added by the Windows device. Because they don't match, Azure AD blocks the session so the user can't access the application. |-||| + This article describes how to configure tenant restrictions V2 using the Azure portal. You can also use the [Microsoft Graph cross-tenant access API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-beta&preserve-view=true) to create these same tenant restrictions policies. Settings for tenant restrictions V2 are located in the Azure portal under **Cros 1. Under **Applies to**, select one of the following: - **All external applications**: Applies the action you chose under **Access status** to all external applications. If you block access to all external applications, you also need to block access for all of your users and groups (on the **Users and groups** tab).- - **Select external applications**: Lets you choose the external applications you want the action under **Access status** to apply to. To select applications, choose **Add Microsoft applications** or **Add other applications**. Then search by the application name or the application ID (either the *client app ID* or the *resource app ID*) and select the app. 
([See a list of IDs for commonly used Microsoft applications.](https://learn.microsoft.com/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) If you want to add more apps, use the **Add** button. When you're done, select **Submit**. + - **Select external applications**: Lets you choose the external applications you want the action under **Access status** to apply to. To select applications, choose **Add Microsoft applications** or **Add other applications**. Then search by the application name or the application ID (either the *client app ID* or the *resource app ID*) and select the app. ([See a list of IDs for commonly used Microsoft applications.](/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) If you want to add more apps, use the **Add** button. When you're done, select **Submit**. :::image type="content" source="media/tenant-restrictions-v2/tenant-restrictions-default-applications-applies-to.png" alt-text="Screenshot showing selecting the external applications tab."::: Suppose you use tenant restrictions to block access by default, but you want to 1. If you chose **Select external applications**, do the following for each application you want to add: - Select **Add Microsoft applications** or **Add other applications**. For our Microsoft Learn example, we choose **Add other applications**.- - In the search box, type the application name or the application ID (either the *client app ID* or the *resource app ID*). ([See a list of IDs for commonly used Microsoft applications.](https://learn.microsoft.com/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) For our Microsoft Learn example, we enter the application ID `18fbca16-2224-45f6-85b0-f7bf2b39b3f3`. + - In the search box, type the application name or the application ID (either the *client app ID* or the *resource app ID*). 
([See a list of IDs for commonly used Microsoft applications.](/troubleshoot/azure/active-directory/verify-first-party-apps-sign-in)) For our Microsoft Learn example, we enter the application ID `18fbca16-2224-45f6-85b0-f7bf2b39b3f3`. - Select the application in the search results, and then select **Add**. - Repeat for each application you want to add. - When you're done selecting applications, select **Submit**. |
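The tenant restrictions V2 row above describes a managed Windows device injecting an HTTP header that carries the tenant ID and the tenant restrictions policy ID, which Azure AD then compares against the token's claims. As a rough proxy-side illustration only — the header name `sec-Restrict-Tenant-Access-Policy` and the colon-separated `<tenantId>:<policyId>` value format are assumptions drawn from the tenant restrictions V2 documentation, not from this changelog — the injection and lookup could be sketched as:

```python
# Hypothetical sketch of the header a managed device/proxy injects for
# tenant restrictions V2. Header name and value format are assumptions.
TR_HEADER = "sec-Restrict-Tenant-Access-Policy"

def build_tr_header(tenant_id: str, policy_id: str) -> dict:
    """Extra HTTP header a managed device or corporate proxy would add."""
    return {TR_HEADER: f"{tenant_id}:{policy_id}"}

def parse_tr_header(headers: dict) -> tuple:
    """Split the header back into (tenant_id, policy_id) for policy lookup."""
    tenant_id, _, policy_id = headers[TR_HEADER].partition(":")
    return tenant_id, policy_id

# Placeholder GUIDs standing in for Contoso's tenant and policy IDs
headers = build_tr_header(
    "11111111-2222-3333-4444-555555555555",
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
)
print(parse_tr_header(headers))
```

On the data plane, Azure AD performs the equivalent of the `parse_tr_header` step and blocks the session when the token's tenant claim doesn't match the header, as step 4 in the row describes.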
active-directory | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md | Last updated 05/23/2023 tags: active-directory -+ |
active-directory | User Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md | Last updated 05/18/2023 -+ --# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption. +# Customer intent: As a tenant administrator, I want to learn about B2B collaboration guest user properties and states before and after invitation redemption. # Properties of an Azure Active Directory B2B collaboration user |
active-directory | Custom Security Attributes Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-add.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). [Custom security attributes](custom-security-attributes-overview.md) in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. This article describes how to add, edit, or deactivate custom security attribute definitions. |
active-directory | Custom Security Attributes Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-manage.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). For people in your organization to effectively work with [custom security attributes](custom-security-attributes-overview.md), you must grant the appropriate access. Depending on the information you plan to include in custom security attributes, you might want to restrict custom security attributes or you might want to make them broadly accessible in your organization. This article describes how to manage access to custom security attributes. |
active-directory | Custom Security Attributes Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-overview.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). Custom security attributes in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. These attributes can be used to store information, categorize objects, or enforce fine-grained access control over specific Azure resources. Custom security attributes can be used with [Azure attribute-based access control (Azure ABAC)](../../role-based-access-control/conditions-overview.md). |
active-directory | Custom Security Attributes Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/custom-security-attributes-troubleshoot.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Symptom - Custom security attributes page is disabled |
active-directory | Data Storage Eu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/data-storage-eu.md | The following sections provide information about customer data that doesn't meet ## Services permanently excluded from the EU Data Residency and EU Data Boundary -* **Reason for customer data egress** - Some forms of communication rely on a network that is operated by global providers, such as phone calls and SMS. Device vendor-specific services such Apple Push Notifications, may be outside of Europe. +* **Reason for customer data egress** - Some forms of communication, such as phone calls or text messaging platforms like SMS, RCS, or WhatsApp, rely on a network that is operated by global providers. Device vendor-specific services, such as push notifications from Apple or Google, may be outside of Europe. * **Types of customer data being egressed** - User account data (phone number). * **Customer data location at rest** - In EU Data Boundary. * **Customer data processing** - Some processing may occur globally. |
active-directory | How To Create Delete Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-create-delete-users.md | -The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +The updated experience for creating new users covered in this article is available as an Azure AD preview feature. This feature is enabled by default, but you can opt out by going to **Azure AD** > **Preview features** and disabling the **Create user experience** feature. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). Instructions for the legacy create user process can be found in the [Add or delete users](./add-users.md) article. |
active-directory | Identity Secure Score | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/identity-secure-score.md | -# What is the identity secure score in Azure Active Directory? +# What is identity secure score? -How secure is your Azure AD tenant? If you don't know how to answer this question, this article explains how the identity secure score helps you to monitor and improve your identity security posture. --## What is an identity secure score? --The identity secure score is percentage that functions as an indicator for how aligned you are with Microsoft's best practice recommendations for security. Each improvement action in identity secure score is tailored to your specific configuration. +The identity secure score is shown as a percentage that functions as an indicator for how aligned you are with Microsoft's recommendations for security. Each improvement action in identity secure score is tailored to your configuration. ![Secure score](./media/identity-secure-score/identity-secure-score-overview.png) -The score helps you to: +This score helps to: - Objectively measure your identity security posture - Plan identity security improvements By following the improvement actions, you can: ## How do I get my secure score? -The identity secure score is available in all editions of Azure AD. Organizations can access their identity secure score from the **Azure portal** > **Azure Active Directory** > **Security** > **Identity Secure Score**. +Identity secure score is available to free and paid customers. Organizations can access their identity secure score in the [Microsoft Entra admin center](https://entra.microsoft.com/) under **Protection** > **Identity Secure Score**. ## How does it work? -Every 48 hours, Azure looks at your security configuration and compares your settings with the recommended best practices. Based on the outcome of this evaluation, a new score is calculated for your directory. 
It's possible that your security configuration isn't fully aligned with the best practice guidance and the improvement actions are only partially met. In these scenarios, you will only be awarded a portion of the max score available for the control. +Every 48 hours, Azure looks at your security configuration and compares your settings with the recommended best practices. Based on the outcome of this evaluation, a new score is calculated for your directory. It's possible that your security configuration isn't fully aligned with the best practice guidance and the improvement actions are only partially met. In these scenarios, you're awarded a portion of the max score available for the control. -Each recommendation is measured based on your Azure AD configuration. If you are using third-party products to enable a best practice recommendation, you can indicate this configuration in the settings of an improvement action. You also have the option to set recommendations to be ignored if they don't apply to your environment. An ignored recommendation does not contribute to the calculation of your score. +Each recommendation is measured based on your Azure AD configuration. If you're using third-party products to enable a best practice recommendation, you can indicate this configuration in the settings of an improvement action. You may set recommendations to be ignored if they don't apply to your environment. An ignored recommendation doesn't contribute to the calculation of your score. ![Ignore or mark action as covered by third party](./media/identity-secure-score/identity-secure-score-ignore-or-third-party-reccomendations.png) - **To address** - You recognize that the improvement action is necessary and plan to address it at some point in the future. This state also applies to actions that are detected as partially, but not fully completed.
- **Planned** - There are concrete plans in place to complete the improvement action.-- **Risk accepted** - Security should always be balanced with usability, and not every recommendation will work for your environment. When that is the case, you can choose to accept the risk, or the remaining risk, and not enact the improvement action. You won't be given any points, but the action will no longer be visible in the list of improvement actions. You can view this action in history or undo it at any time.-- **Resolved through third party** and **Resolved through alternate mitigation** - The improvement action has already been addressed by a third-party application or software, or an internal tool. You'll gain the points that the action is worth, so your score better reflects your overall security posture. If a third party or internal tool no longer covers the control, you can choose another status. Keep in mind, Microsoft will have no visibility into the completeness of implementation if the improvement action is marked as either of these statuses.+- **Risk accepted** - Security should always be balanced with usability, and not every recommendation works for everyone. When that is the case, you can choose to accept the risk, or the remaining risk, and not enact the improvement action. You aren't awarded any points, and the action isn't visible in the list of improvement actions. You can view this action in history or undo it at any time. +- **Resolved through third party** and **Resolved through alternate mitigation** - The improvement action has already been addressed by a third-party application or software, or an internal tool. You're awarded the points the action is worth, so your score better reflects your overall security posture. If a third party or internal tool no longer covers the control, you can choose another status. 
Keep in mind, Microsoft has no visibility into the completeness of implementation if the improvement action is marked as either of these statuses. ## How does it help me? To access identity secure score, you must be assigned one of the following roles With read and write access, you can make changes and directly interact with identity secure score. -* Global administrator -* Security administrator -* Exchange administrator -* SharePoint administrator +* Global Administrator +* Security Administrator +* Exchange Administrator +* SharePoint Administrator #### Read-only roles With read-only access, you aren't able to edit status for an improvement action. -* Helpdesk administrator -* User administrator -* Service support administrator -* Security reader -* Security operator -* Global reader +* Helpdesk Administrator +* User Administrator +* Service support Administrator +* Security Reader +* Security Operator +* Global Reader ### How are controls scored? -Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration. For example, if the improvement recommendation states you'll get a maximum of 10.71% if you protect all your users with MFA and you only have 5 of 100 total users protected, you would be given a partial score around 0.53% (5 protected / 100 total * 10.71% maximum = 0.53% partial score). +Controls can be scored in two ways. Some are scored in a binary fashion - you get 100% of the score if you have the feature or setting configured based on our recommendation. Other scores are calculated as a percentage of the total configuration.
For example, if the improvement recommendation states there's a maximum of 10.71% increase if you protect all your users with MFA and you have 5 of 100 total users protected, you're given a partial score around 0.53% (5 protected / 100 total * 10.71% maximum = 0.53% partial score). ### What does [Not Scored] mean? -Actions labeled as [Not Scored] are ones you can perform in your organization but won't be scored because they aren't hooked up in the tool (yet!). So, you can still improve your security, but you won't get credit for those actions right now. --In addition, the recommended actions: -* Protect all users with a user risk policy -* Protect all users with a sign-in risk policy --Also won't give you credits when configured using Conditional Access Policies, yet, for the same reason as above. For now, these actions give credits only when configured through Identity Protection policies. +Actions labeled as [Not Scored] are ones you can perform in your organization but aren't scored. So, you can still improve your security, but you aren't given credit for those actions right now. ### How often is my score updated? The score is calculated once per day (around 1:00 AM PST). If you make a change ### My score changed. How do I figure out why? -Head over to the [Microsoft 365 Defender portal](https://security.microsoft.com/), where youΓÇÖll find your complete Microsoft secure score. You can easily see all the changes to your secure score by reviewing the in-depth changes on the history tab. +Head over to the [Microsoft 365 Defender portal](https://security.microsoft.com/), where you find your complete Microsoft secure score. You can easily see all the changes to your secure score by reviewing the in-depth changes on the history tab. ### Does the secure score measure my risk of getting breached? -In short, no. The secure score does not express an absolute measure of how likely you are to get breached. 
It expresses the extent to which you have adopted features that can offset the risk of being breached. No service can guarantee that you will not be breached, and the secure score should not be interpreted as a guarantee in any way. +No, secure score doesn't express an absolute measure of how likely you are to get breached. It expresses the extent to which you have adopted features that can offset risk. No service can guarantee protection, and the secure score shouldn't be interpreted as a guarantee in any way. ### How should I interpret my score? |
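The partial-score arithmetic quoted in the identity secure score row above (5 protected / 100 total * 10.71% maximum, roughly 0.53%) can be expressed as a small helper; the function name is illustrative, not part of any Microsoft API:

```python
# Worked version of the partial-score formula for percentage-based controls:
# (users meeting the recommendation / total users) * control's maximum score.

def partial_score(protected: int, total: int, max_score_pct: float) -> float:
    """Percentage of a control's maximum score earned so far."""
    return protected / total * max_score_pct

# Example from the article: 5 of 100 users protected by MFA,
# against a control worth a maximum of 10.71%.
print(partial_score(5, 100, 10.71))
```

With all 100 users protected the helper returns the full 10.71%, matching the binary-style case where a fully met recommendation earns the control's whole score.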
active-directory | New Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/new-name.md | -To unify the [Microsoft Entra](/entra) product family, reflect the progression to modern multicloud identity security, and simplify secure access experiences for all, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID. +To communicate the multicloud, multiplatform functionality of the products, alleviate confusion with Windows Server Active Directory, and unify the [Microsoft Entra](/entra) product family, we're renaming Azure Active Directory (Azure AD) to Microsoft Entra ID. -## No action is required from you +## No interruptions to usage or service If you're using Azure AD today or are currently deploying Azure AD in your organizations, you can continue to use the service without interruption. All existing deployments, configurations, and integrations will continue to function as they do today without any action from you. You can continue to use familiar Azure AD capabilities that you can access through the Azure portal, Microsoft 365 admin center, and the [Microsoft Entra admin center](https://entra.microsoft.com). -## Only the name is changing - All features and capabilities are still available in the product. Licensing, terms, service-level agreements, product certifications, support and pricing remain the same. +To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling. + Service plan display names will change on October 1, 2023. Microsoft Entra ID Free, Microsoft Entra ID P1, and Microsoft Entra ID P2 will be the new names of standalone offers, and all capabilities included in the current Azure AD plans remain the same. 
Microsoft Entra ID – currently known as Azure AD – will continue to be included in Microsoft 365 licensing plans, including Microsoft 365 E3 and Microsoft 365 E5. Details on pricing and what's included are available on the [pricing and free trials page](https://aka.ms/PricingEntra). :::image type="content" source="./media/new-name/azure-ad-new-name.png" alt-text="Diagram showing the new name for Azure AD and Azure AD External Identities." border="false" lightbox="./media/new-name/azure-ad-new-name-high-res.png"::: During 2023, you may see both the current Azure AD name and the new Microsoft Entra ID name in support area paths. For self-service support, look for the topic path of "Microsoft Entra" or "Azure Active Directory/Microsoft Entra ID." -## Identity developer and devops experiences aren't impacted by the rename +## Guide to Azure AD name changes and exceptions -To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling. +We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, or partners of Microsoft to update your experiences and use the new name by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces. -Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts. +### Product name -Naming is also not changing for: +Microsoft Entra ID is the new name for Azure AD. Please replace the product names Azure Active Directory, Azure AD, and AAD with Microsoft Entra ID.
++- Microsoft Entra is the name for the product family of identity and network access solutions. +- Microsoft Entra ID is one of the products within that family. +- Acronym usage is not encouraged, but if you must replace AAD with an acronym due to space limitations, please use ME-ID. -- [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Use to acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API.-- [Microsoft Graph](/graph) - Get programmatic access to organizations, user, and application data stored in Microsoft Entra ID.-- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph.-- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory," and all related Windows Server identity services associated with Active Directory.-- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services) nor [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services) nor the product name "Active Directory" or any corresponding features.-- [Azure Active Directory B2C](../../active-directory-b2c/index.yml) will continue to be available as an Azure service.-- [Any deprecated or retired functionality, feature, or service](what-is-deprecated.md) of Azure AD.+### Logo/icon ++Please change the Azure AD product icon in your experiences. The Azure AD icons are now at end-of-life. 
++| **Azure AD product icons** | **Microsoft Entra ID product icon** | +|:--:|:--:| +| ![Azure AD product icon](./media/new-name/azure-ad-icon-1.png) ![Alternative Azure AD product icon](./media/new-name/azure-ad-icon-2.png) | ![Microsoft Entra ID product icon](./media/new-name/microsoft-entra-id-icon.png) | ++You can download the new Microsoft Entra ID icon here: [Microsoft Entra architecture icons](../architecture/architecture-icons.md) ++### Feature names ++Capabilities or services formerly known as "Azure Active Directory <feature name>" or "Azure AD <feature name>" will be branded as Microsoft Entra product family features. This is done across our portfolio to avoid naming length and complexity, and because many features work across all the products. For example: ++- "Azure AD Conditional Access" is now "Microsoft Entra Conditional Access" +- "Azure AD single sign-on" is now "Microsoft Entra single sign-on" ++See the [Glossary of updated terminology](#glossary-of-updated-terminology) later in this article for more examples. ++### Exceptions and clarifications to the Azure AD name change ++Names aren't changing for Active Directory, developer tools, Azure AD B2C, nor deprecated or retired functionality, features, or services. ++Don't rename the following features, functionality, or services. ++#### Azure AD renaming exceptions and clarifications ++| **Correct terminology** | **Details** | +|-|-| +| Active Directory <br/><br/>• Windows Server Active Directory <br/>• Active Directory Federation Services (AD FS) <br/>• Active Directory Domain Services (AD DS) <br/>• Active Directory <br/>• Any Active Directory feature(s) | Windows Server Active Directory, commonly known as Active Directory, and related features and services associated with Active Directory aren't branded with Microsoft Entra. 
| +| Authentication library <br/><br/>• Azure AD Authentication Library (ADAL) <br/>• Microsoft Authentication Library (MSAL) | Azure Active Directory Authentication Library (ADAL) is deprecated. While existing apps that use ADAL will continue to work, Microsoft will no longer release security fixes on ADAL. Migrate applications to the Microsoft Authentication Library (MSAL) to avoid putting your app's security at risk. <br/><br/>[Microsoft Authentication Library (MSAL)](../develop/msal-overview.md) - Provides security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. | +| B2C <br/><br/>• Azure Active Directory B2C <br/>• Azure AD B2C | [Azure Active Directory B2C](/azure/active-directory-b2c) isn't being renamed. Microsoft Entra External ID for customers is Microsoft's new customer identity and access management (CIAM) solution. | +| Graph <br/><br/>• Azure Active Directory Graph <br/>• Azure AD Graph <br/>• Microsoft Graph | Azure Active Directory (Azure AD) Graph is deprecated. Going forward, we will make no further investment in Azure AD Graph, and Azure AD Graph APIs have no SLA or maintenance commitment beyond security-related fixes. Investments in new features and functionalities will only be made in Microsoft Graph.<br/><br/>[Microsoft Graph](/graph) - Grants programmatic access to organization, user, and application data stored in Microsoft Entra ID. | +| PowerShell <br/><br/>• Azure Active Directory PowerShell <br/>• Azure AD PowerShell <br/>• Microsoft Graph PowerShell | Azure AD PowerShell for Graph is planned for deprecation on March 30, 2024. For more info on the deprecation plans, see the deprecation update. We encourage you to migrate to Microsoft Graph PowerShell, which is the recommended module for interacting with Azure AD. 
<br/><br/>[Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) - Acts as an API wrapper for the Microsoft Graph APIs and helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph. | +| Accounts <br/><br/>• Microsoft account <br/>• Work or school account | For end user sign-ins and account experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md). | +| Microsoft identity platform | The Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts. | +| <br/>• Azure AD Sync <br/>• DirSync | DirSync and Azure AD Sync aren't supported and no longer work. If you're still using DirSync or Azure AD Sync, you must upgrade to Microsoft Entra Connect to resume your sync process. For more info, see [Microsoft Entra Connect](/azure/active-directory/hybrid/connect/how-to-dirsync-upgrade-get-started). | ++## Glossary of updated terminology ++Features of the identity and network access products are attributed to Microsoft Entra, the product family, not the individual product name. ++You're not required to use the Microsoft Entra attribution with features. Only use if needed to clarify whether you're talking about a concept versus the feature in a specific product, or when comparing a Microsoft Entra feature with a competing feature. ++Only official product names are capitalized, plus Conditional Access and My * apps.
++| **Category** | **Old terminology** | **Correct name as of July 2023** | +|-||-| +| **Microsoft Entra product family** | Microsoft Azure Active Directory<br/> Azure Active Directory<br/> Azure Active Directory (Azure AD)<br/> Azure AD<br/> AAD | Microsoft Entra ID<br/> (Second use: Microsoft Entra ID is preferred, ID is acceptable in product/UI experiences, ME-ID if abbreviation is necessary) | +| | Azure Active Directory External Identities<br/> Azure AD External Identities | Microsoft Entra External ID<br/> (Second use: External ID) | +| | Azure Active Directory Identity Governance<br/> Azure AD Identity Governance<br/> Microsoft Entra Identity Governance | Microsoft Entra ID Governance<br/> (Second use: ID Governance) | +| | *New* | Microsoft Entra Internet Access<br/> (Second use: Internet Access) | +| | Cloud Knox | Microsoft Entra Permissions Management<br/> (Second use: Permissions Management) | +| | *New* | Microsoft Entra Private Access<br/> (Second use: Private Access) | +| | Azure Active Directory Verifiable Credentials<br/> Azure AD Verifiable Credentials | Microsoft Entra Verified ID<br/> (Second use: Verified ID) | +| | Azure Active Directory Workload Identities<br/> Azure AD Workload Identities | Microsoft Entra Workload ID<br/> (Second use: Workload ID) | +| | Azure Active Directory Domain Services<br/> Azure AD Domain Services | Microsoft Entra Domain Services<br/> (Second use: Domain Services) | +| **Microsoft Entra ID SKUs** | Azure Active Directory Premium P1 | Microsoft Entra ID P1 | +| | Azure Active Directory Premium P1 for faculty | Microsoft Entra ID P1 for faculty | +| | Azure Active Directory Premium P1 for students | Microsoft Entra ID P1 for students | +| | Azure Active Directory Premium P1 for government | Microsoft Entra ID P1 for government | +| | Azure Active Directory Premium P2 | Microsoft Entra ID P2 | +| | Azure Active Directory Premium P2 for faculty | Microsoft Entra ID P2 for faculty | +| | Azure Active Directory Premium 
P2 for students | Microsoft Entra ID P2 for students | +| | Azure Active Directory Premium P2 for government | Microsoft Entra ID P2 for government | +| | Azure Active Directory Premium F2 | Microsoft Entra ID F2 | +| **Microsoft Entra ID service plans** | Azure Active Directory Free | Microsoft Entra ID Free | +| | Azure Active Directory Premium P1 | Microsoft Entra ID P1 | +| | Azure Active Directory Premium P2 | Microsoft Entra ID P2 | +| | Azure Active Directory for education | Microsoft Entra ID for education | +| **Features and functionality** | Azure AD access token authentication<br/> Azure Active Directory access token authentication | Microsoft Entra access token authenticationΓÇ»| +| | Azure AD account<br/> Azure Active Directory account | Microsoft Entra account<br/><br/> This terminology is only used with IT admins and developers. End users authenticate with a work or school account. | +| | Azure AD activity logs | Microsoft Entra activity logs | +| | Azure AD admin<br/> Azure Active Directory admin | Microsoft Entra admin | +| | Azure AD admin center<br/> Azure Active Directory admin center | Replace with Microsoft Entra admin center and update link to entra.microsoft.com | +| | Azure AD application proxy<br/> Azure Active Directory application proxy | Microsoft Entra application proxy | +| | Azure AD audit log | Microsoft Entra audit log | +| | Azure AD authentication<br/> authenticate with an Azure AD identity<br/> authenticate with Azure AD<br/> authentication to Azure AD | Microsoft Entra authentication<br/> authenticate with a Microsoft Entra identity<br/> authenticate with Microsoft Entra<br/> authentication to Microsoft Entra<br/><br/> This terminology is only used with administrators. End users authenticate with a work or school account. 
| +| | Azure AD B2B<br/> Azure Active Directory B2B | Microsoft Entra B2B | +| | Azure AD built-in roles<br/> Azure Active Directory built-in roles | Microsoft Entra built-in roles | +| | Azure AD Conditional Access<br/> Azure Active Directory Conditional Access | Microsoft Entra Conditional Access<br/> (Second use: Conditional Access) | +| | Azure AD cloud-only identities<br/> Azure Active Directory cloud-only identities | Microsoft Entra cloud-only identities | +| | Azure AD Connect<br/> Azure Active Directory Connect | Microsoft Entra Connect | +| | Azure AD Connect Sync<br/> Azure Active Directory Connect Sync | Microsoft Entra Connect Sync | +| | Azure AD domain<br/> Azure Active Directory domain | Microsoft Entra domain | +| | Azure AD Domain Services<br/> Azure Active Directory Domain Services | Microsoft Entra Domain Services | +| | Azure AD enterprise application<br/> Azure Active Directory enterprise application | Microsoft Entra enterprise application | +| | Azure AD federation services<br/> Azure Active Directory federation services | Active Directory Federation Services | +| | Azure AD groups<br/> Azure Active Directory groups | Microsoft Entra groups | +| | Azure AD hybrid identities<br/> Azure Active Directory hybrid identities | Microsoft Entra hybrid identities | +| | Azure AD identities<br/> Azure Active Directory identities | Microsoft Entra identities | +| | Azure AD identity protection<br/> Azure Active Directory identity protection | Microsoft Entra ID Protection | +| | Azure AD integrated authentication<br/> Azure Active Directory integrated authentication | Microsoft Entra integrated authentication | +| | Azure AD join<br/> Azure AD joined<br/> Azure Active Directory join<br/> Azure Active Directory joined | Microsoft Entra join<br/> Microsoft Entra joined | +| | Azure AD login<br/> Azure Active Directory login | Microsoft Entra login | +| | Azure AD managed identities<br/> Azure Active Directory managed identities | Microsoft Entra managed 
identities | +| | Azure AD multifactor authentication (MFA)<br/> Azure Active Directory multifactor authentication (MFA) | Microsoft Entra multifactor authentication (MFA)<br/> (Second use: MFA) | +| | Azure AD OAuth and OpenID Connect<br/> Azure Active Directory OAuth and OpenID Connect | Microsoft Entra ID OAuth and OpenID Connect | +| | Azure AD object<br/> Azure Active Directory object | Microsoft Entra object | +| | Azure Active Directory-only authentication<br/> Azure AD-only authentication | Microsoft Entra-only authentication | +| | Azure AD pass-through authentication (PTA)<br/> Azure Active Directory pass-through authentication (PTA) | Microsoft Entra pass-through authentication | +| | Azure AD password authentication<br/> Azure Active Directory password authentication | Microsoft Entra password authentication | +| | Azure AD password hash synchronization (PHS)<br/> Azure Active Directory password hash synchronization (PHS) | Microsoft Entra password hash synchronization | +| | Azure AD password protection<br/> Azure Active Directory password protection | Microsoft Entra password protection | +| | Azure AD principal ID<br/> Azure Active Directory principal ID | Microsoft Entra principal ID | +| | Azure AD Privileged Identity Management (PIM)<br/> Azure Active Directory Privileged Identity Management (PIM) | Microsoft Entra Privileged Identity Management (PIM) | +| | Azure AD registered<br/> Azure Active Directory registered | Microsoft Entra registered | +| | Azure AD reporting and monitoring<br/> Azure Active Directory reporting and monitoring | Microsoft Entra reporting and monitoring | +| | Azure AD role<br/> Azure Active Directory role | Microsoft Entra role | +| | Azure AD schema<br/> Azure Active Directory schema | Microsoft Entra schema | +| | Azure AD Seamless single sign-on (SSO)<br/> Azure Active Directory Seamless single sign-on (SSO) | Microsoft Entra seamless single sign-on (SSO)<br/> (Second use: SSO) | +| | Azure AD self-service password 
reset (SSPR)<br/> Azure Active Directory self-service password reset (SSPR) | Microsoft Entra self-service password reset (SSPR) | +| | Azure AD service principal<br/> Azure Active Directory service principal | Microsoft Entra service principal | +| | Azure AD tenant<br/> Azure Active Directory tenant | Microsoft Entra tenant | +| | Create a user in Azure AD<br/> Create a user in Azure Active Directory | Create a user in Microsoft Entra | +| | Federated with Azure AD<br/> Federated with Azure Active Directory | Federated with Microsoft Entra | +| | Hybrid Azure AD Join<br/> Hybrid Azure AD Joined | Microsoft Entra hybrid join<br/> Microsoft Entra hybrid joined | +| | Managed identities in Azure AD for Azure SQL | Managed identities in Microsoft Entra for Azure SQL | +| **Acronym usage** | AAD | ME-ID<br/><br/> Note that this isn't an official abbreviation for the product but may be used in code or when absolute shortest form is required. | ## Frequently asked questions ### When is the name change happening? -The name change will start appearing across Microsoft experiences after a 30-day notification period, which started July 11, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences to be completed by the end of 2023. +The name change will appear across Microsoft experiences starting August 15, 2023. Display names for SKUs and service plans will change on October 1, 2023. We expect most naming text string changes in Microsoft experiences and partner experiences to be completed by the end of 2023. ### Why is the name being changed? No, only the name Azure AD is going away. Capabilities remain the same. ### What will happen to the Azure AD capabilities and features like App Gallery or Conditional Access? +All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption. 
+ The naming of features changes to Microsoft Entra. For example: - Azure AD tenant -> Microsoft Entra tenant - Azure AD account -> Microsoft Entra account-- Azure AD joined -> Microsoft Entra joined-- Azure AD Conditional Access -> Microsoft Entra Conditional Access -All features and capabilities remain unchanged aside from the name. Customers can continue to use all features without any interruption. +See the [Glossary of updated terminology](#glossary-of-updated-terminology) for more examples. ### Are licenses changing? Are there any changes to pricing? There are no changes to the identity features and functionality available in Mic In addition to the capabilities they already have, Microsoft 365 E5 customers will also get access to new identity protection capabilities like token protection, Conditional Access based on GPS-based location and step-up authentication for the most sensitive actions. Microsoft 365 E5 includes Microsoft Entra P2, currently known as Azure AD Premium P2. -### How and when are customers being notified? --The name changes are publicly announced as of July 11, 2023. +### What's changing for identity developer and devops experience? -Banners, alerts, and message center posts will notify users of the name change. These will be displayed on the tenant overview page, portals including Azure, Microsoft 365, and Microsoft Entra admin center, and Microsoft Learn. --### What if I use the Azure AD name in my content or app? +Identity developer and devops experiences aren't being renamed. To make the transition seamless, all existing login URLs, APIs, PowerShell cmdlets, and Microsoft Authentication Libraries (MSAL) stay the same, as do developer experiences and tooling. -We'd like your help spreading the word about the name change and implementing it in your own experiences. 
If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD–enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the following section ([Azure AD name changes and exceptions](#azure-ad-name-changes-and-exceptions)) to make the name change in your content and product experiences by the end of 2023. --## Azure AD name changes and exceptions --We encourage content creators, organizations with internal documentation for IT or identity security admins, developers of Azure AD-enabled apps, independent software vendors, or partners of Microsoft to stay current with the new naming guidance by updating copy by the end of 2023. We recommend changing the name in customer-facing experiences, prioritizing highly visible surfaces. --### Product name +Many technical components either have low visibility to customers (for example, sign-in URLs), or usually aren't branded, like APIs. -Replace the product name "Azure Active Directory" or "Azure AD" or "AAD" with Microsoft Entra ID. --*Microsoft Entra* is the correct name for the family of identity and network access solutions, one of which is *Microsoft Entra ID.* --### Logo/icon +Microsoft identity platform encompasses all our identity and access developer assets. It will continue to provide the resources to help you build applications that your users and customers can sign in to using their Microsoft identities or social accounts. -Azure AD is becoming Microsoft Entra ID, and the product icon is also being updated. Work with your Microsoft partner organization to obtain the new product icon.
+Naming is also not changing for: -### Feature names +- [Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-overview) – Acquire security tokens from the Microsoft identity platform to authenticate users and access secured web APIs to provide secure access to Microsoft Graph, other Microsoft APIs, third-party web APIs, or your own web API. +- [Microsoft Graph](/graph) – Get programmatic access to organizational, user, and application data stored in Microsoft Entra ID. +- [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) – Acts as an API wrapper for the Microsoft Graph APIs; helps administer every Microsoft Entra ID feature that has an API in Microsoft Graph. +- [Windows Server Active Directory](/troubleshoot/windows-server/identity/active-directory-overview), commonly known as "Active Directory", and all related Windows Server identity services associated with Active Directory. +- [Active Directory Federation Services (AD FS)](/windows-server/identity/active-directory-federation-services) and [Active Directory Domain Services (AD DS)](/windows-server/identity/ad-ds/active-directory-domain-services), as well as the product name "Active Directory" and any corresponding features. +- [Azure Active Directory B2C](/azure/active-directory-b2c) will continue to be available as an Azure service. +- Any deprecated or retired functionality, feature, or service of Azure Active Directory. -Capabilities or services formerly known as "Azure Active Directory <feature name>" or "Azure AD <feature name>" will be branded as Microsoft Entra product family features. For example: +### How and when are customers being notified? -- "Azure AD Conditional Access" is becoming "Microsoft Entra Conditional Access"-- "Azure AD single sign-on" is becoming "Microsoft Entra single sign-on"-- "Azure AD tenant" is becoming "Microsoft Entra tenant"+The name changes were publicly announced on July 11, 2023.
-### Exceptions to Azure AD name change +Banners, alerts, and message center posts notified users of the name change. The change was also displayed on the tenant overview page in the portals including Azure, Microsoft 365, and Microsoft Entra admin center, and Microsoft Learn. -Products or features that are being deprecated aren't being renamed. These products or features include: ### What if I use the Azure AD name in my content or app? -- Azure AD Authentication Library (ADAL), replaced by [Microsoft Authentication Library (MSAL)](../develop/msal-overview.md)-- Azure AD Graph, replaced by [Microsoft Graph](/graph)-- Azure Active Directory PowerShell for Graph (Azure AD PowerShell), replaced by [Microsoft Graph PowerShell](/powershell/microsoftgraph)+We'd like your help spreading the word about the name change and implementing it in your own experiences. If you're a content creator, author of internal documentation for IT or identity security admins, developer of Azure AD–enabled apps, independent software vendor, or Microsoft partner, we hope you use the naming guidance outlined in the [Glossary of updated terminology](#glossary-of-updated-terminology) to make the name change in your content and product experiences by the end of 2023. -Names that don't have "Azure AD" also aren't changing. These products or features include Active Directory Federation Services (AD FS), Microsoft identity platform, and Windows Server Active Directory Domain Services (AD DS). +## Revision history -End users shouldn't be exposed to the Azure AD or Microsoft Entra ID name. For sign-ins and account user experiences, follow guidance for work and school accounts in [Sign in with Microsoft branding guidelines](../develop/howto-add-branding-in-apps.md).
+| Date | Change description | +||--| +| August 29, 2023 | <br/>• In the [glossary](#glossary-of-updated-terminology), corrected the entry for "Azure AD activity logs" to separate "Azure AD audit log", which is a distinct type of activity log. <br/>• Added Azure AD Sync and DirSync to the [Azure AD renaming exceptions and clarifications](#azure-ad-renaming-exceptions-and-clarifications) section. | +| August 18, 2023 | <br/>• Updated the article to include a new section [Glossary of updated terminology](#glossary-of-updated-terminology), which includes the old and new terminology.<br/>• Updated info and added link to usage of the Microsoft Entra ID icon, and updates to verbiage in some sections. | +| July 11, 2023 | Published the original guidance as part of the [Microsoft Entra moment and related announcement](https://www.microsoft.com/security/blog/2023/07/11/microsoft-entra-expands-into-security-service-edge-and-azure-ad-becomes-microsoft-entra-id/?culture=en-us&country=us). | ## Next steps |
active-directory | Scenario Azure First Sap Identity Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/scenario-azure-first-sap-identity-integration.md | This document provides advice on the **technical design and configuration** of S | [IDS](https://help.sap.com/viewer/65de2977205c403bbc107264b8eccf4b/Cloud/en-US/d6a8db70bdde459f92f2837349f95090.html) | SAP ID Service. An instance of IAS used by SAP to authenticate customers and partners to SAP-operated PaaS and SaaS services. | | [IPS](https://help.sap.com/viewer/f48e822d6d484fa5ade7dda78b64d9f5/Cloud/en-US/2d2685d469a54a56b886105a06ccdae6.html) | SAP Cloud Identity Services - Identity Provisioning Service. IPS helps to synchronize identities between different stores / target systems. | | [XSUAA](https://blogs.sap.com/2019/01/07/uaa-xsuaa-platform-uaa-cfuaa-what-is-it-all-about/) | Extended Services for Cloud Foundry User Account and Authentication. XSUAA is a multi-tenant OAuth authorization server within the SAP BTP. |-| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multi-cloud offering for BTP (AWS, Azure, GCP, Alibaba). | +| [CF](https://www.cloudfoundry.org/) | Cloud Foundry. Cloud Foundry is the environment on which SAP built their multicloud offering for BTP (AWS, Azure, GCP, Alibaba). | | [Fiori](https://www.sap.com/products/fiori.html) | The web-based user experience of SAP (as opposed to the desktop-based experience). 
| ## Overview Regardless of where the authorization information comes from, it can then be emi ## Next Steps - Learn more about the initial setup in [this tutorial](../saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial.md)-- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#azure-ad) and beyond+- Discover additional [SAP integration scenarios with Azure AD](../../sap/workloads/integration-get-started.md#microsoft-entra-id-formerly-azure-ad) and beyond |
active-directory | Security Defaults | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/security-defaults.md | description: Get protected from common identity threats using Azure AD security + Last updated 07/31/2023 After security defaults are enabled in your tenant, all authentication requests Organizations use various Azure services managed through the Azure Resource Manager API, including: - Azure portal -- Microsoft Entra Admin Center+- Microsoft Entra admin center - Azure PowerShell - Azure CLI It's important to verify the identity of users who want to access Azure Resource After you enable security defaults in your tenant, any user accessing the following services must complete multifactor authentication: - Azure portal+- Microsoft Entra admin center - Azure PowerShell - Azure CLI |
active-directory | Users Default Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/users-default-permissions.md | Users and contacts | <ul><li>Enumerate the list of all users and contacts<li>Rea Groups | <ul><li>Create security groups<li>Create Microsoft 365 groups<li>Enumerate the list of all groups<li>Read all properties of groups<li>Read non-hidden group memberships<li>Read hidden Microsoft 365 group memberships for joined groups<li>Manage properties, ownership, and membership of groups that the user owns<li>Add guests to owned groups<li>Manage dynamic membership settings<li>Delete owned groups<li>Restore owned Microsoft 365 groups</li></ul> | <ul><li>Read properties of non-hidden groups, including membership and ownership (even non-joined groups)<li>Read hidden Microsoft 365 group memberships for joined groups<li>Search for groups by display name or object ID (if allowed)</li></ul> | <ul><li>Read object ID for joined groups<li>Read membership and ownership of joined groups in some Microsoft 365 apps (if allowed)</li></ul> Applications | <ul><li>Register (create) new applications<li>Enumerate the list of all applications<li>Read properties of registered and enterprise applications<li>Manage application properties, assignments, and credentials for owned applications<li>Create or delete application passwords for users<li>Delete owned applications<li>Restore owned applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications<li>List permissions granted to applications</ul> | <ul><li>Read properties of registered and enterprise applications</li><li>List permissions granted to applications</li></ul> Devices</li></ul> | <ul><li>Enumerate the list of all devices<li>Read all properties of devices<li>Manage all properties of owned devices</li></ul> | No permissions | No permissions-Organization | <ul><li>Read all company information<li>Read all 
domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> +Organization | <ul><li>Read all company information<li>Read all domains<li>Read configuration of certificate-based authentication<li>Read all partner contracts</li><li>Read multi-tenant organization basic details and active tenants</li></ul> | <ul><li>Read company display name<li>Read all domains<li>Read configuration of certificate-based authentication</li></ul> | <ul><li>Read company display name<li>Read all domains</li></ul> Roles and scopes | <ul><li>Read all administrative roles and memberships<li>Read all properties and membership of administrative units</li></ul> | No permissions | No permissions Subscriptions | <ul><li>Read all licensing subscriptions<li>Enable service plan memberships</li></ul> | No permissions | No permissions Policies | <ul><li>Read all properties of policies<li>Manage all properties of owned policies</li></ul> | No permissions | No permissions |
active-directory | What Is Deprecated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/what-is-deprecated.md | |
active-directory | Whats New Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md | The What's new in Azure Active Directory? release notes provide information abou +## February 2023 ++### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal ++**Type:** New feature +**Service category:** Privileged Identity Management +**Product capability:** Privileged Identity Management ++Privileged Identity Management (PIM) role activation has been expanded to the Billing and AD extensions in the Azure portal. Shortcuts have been added to Subscriptions (billing) and Access Control (AD) to allow users to activate PIM roles directly from these settings. From the Subscriptions settings, select **View eligible subscriptions** in the horizontal command menu to check your eligible, active, and expired assignments. From there, you can activate an eligible assignment in the same pane. In Access control (IAM) for a resource, you can now select **View my access** to see your currently active and eligible role assignments and activate directly. By integrating PIM capabilities into different Azure portal blades, this new feature allows users to gain temporary access to view or edit subscriptions and resources more easily. +++For more information, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md). ++++### General Availability - Follow Azure AD best practices with recommendations ++**Type:** New feature +**Service category:** Reporting +**Product capability:** Monitoring & Reporting ++Azure AD recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant.
During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure AD Overview. ++This release includes our first 3 recommendations: ++- Convert from per-user MFA to Conditional Access MFA +- Migrate applications from AD FS to Azure AD +- Minimize MFA prompts from known devices +++For more information, see: ++- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md) +- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview) ++++### Public Preview - Azure AD PIM + Conditional Access integration ++**Type:** New feature +**Service category:** Privileged Identity Management +**Product capability:** Privileged Identity Management ++Now you can require users who are eligible for a role to satisfy Conditional Access policy requirements for activation: use a specific authentication method enforced through Authentication Strengths, activate from an Intune-compliant device, comply with Terms of Use, and use 3rd party MFA and satisfy location requirements. ++For more information, see: [Configure Azure AD role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md). +++++### General Availability - More information on why a sign-in was flagged as "unfamiliar" ++**Type:** Changed feature +**Service category:** Identity Protection +**Product capability:** Identity Security & Protection ++Unfamiliar sign-in properties risk detection now provides risk reasons as to which properties are unfamiliar for customers to better investigate that risk.
++Identity Protection now surfaces the unfamiliar properties in the Azure portal UX and in the API as *Additional Info* with a user-friendly description explaining that *the following properties are unfamiliar for this sign-in of the given user*. ++There's no additional work to enable this feature; the unfamiliar properties are shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md). +++++### General Availability - New Federated Apps available in Azure AD Application gallery - February 2023 ++++**Type:** New feature +**Service category:** Enterprise Apps +**Product capability:** 3rd Party Integration ++In February 2023 we've added the following new applications in our App gallery with Federation support: ++[PROCAS](https://accounting.procas.com/), [Tanium Cloud SSO](../saas-apps/tanium-sso-tutorial.md), [LeanDNA](../saas-apps/leandna-tutorial.md), [CalendarAnything LWC](https://silverlinecrm.com/calendaranything/), [courses.work](../saas-apps/courseswork-tutorial.md), [Udemy Business SAML](../saas-apps/udemy-business-saml-tutorial.md), [Canva](../saas-apps/canva-tutorial.md), [Kno2fy](../saas-apps/kno2fy-tutorial.md), [IT-Conductor](../saas-apps/it-conductor-tutorial.md), [ナレッジワーク(Knowledge Work)](../saas-apps/knowledge-work-tutorial.md), [Valotalive Digital Signage Microsoft 365 integration](https://store.valotalive.com/#main), [Priority Matrix HIPAA](https://hipaa.prioritymatrix.com/), [Priority Matrix Government](https://hipaa.prioritymatrix.com/), [Beable](../saas-apps/beable-tutorial.md), [Grain](https://grain.com/app?dialog=integrations&integration=microsoft+teams), [DojoNavi](../saas-apps/dojonavi-tutorial.md), [Global Validity Access Manager](https://myaccessmanager.com/), [FieldEquip](https://app.fieldequip.com/), [Peoplevine](https://control.peoplevine.com/), [Respondent](../saas-apps/respondent-tutorial.md), [WebTMA](../saas-apps/webtma-tutorial.md), [ClearIP](https://clearip.com/login),
[Pennylane](../saas-apps/pennylane-tutorial.md), [VsimpleSSO](https://app.vsimple.com/login), [Compliance Genie](../saas-apps/compliance-genie-tutorial.md), [Dataminr Corporate](https://dmcorp.okta.com/), [Talon](../saas-apps/talon-tutorial.md). +++You can also find the documentation for all the applications here: https://aka.ms/AppsTutorial. ++For listing your application in the Azure AD app gallery, read the details here: https://aka.ms/AzureADAppRequest. ++++### Public Preview - New provisioning connectors in the Azure AD Application Gallery - February 2023 ++**Type:** New feature +**Service category:** App Provisioning +**Product capability:** 3rd Party Integration + ++We've added the following new applications in our App gallery with Provisioning support. You can now automate the creation, update, and deletion of user accounts for these newly integrated apps: ++- [Atmos](../saas-apps/atmos-provisioning-tutorial.md) +++For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md). +++++ ## January 2023 ### Public Preview - Cross-tenant synchronization For more information on how to enable this feature, see: [Cloud Sync directory e **Service category:** Audit **Product capability:** Monitoring & Reporting -This feature analyzes uploaded client-side logs, also known as diagnostic logs, from a Windows 10+ device that is having an issue(s) and suggests remediation steps to resolve the issue(s). Admins can work with end user to collect client-side logs, and then upload them to this troubleshooter in the Entra Portal. For more information, see: [Troubleshooting Windows devices in Azure AD](../devices/troubleshoot-device-windows-joined.md).
+This feature analyzes uploaded client-side logs, also known as diagnostic logs, from a Windows 10+ device that is having issues and suggests remediation steps to resolve them. Admins can work with end users to collect client-side logs, and then upload them to this troubleshooter in the Microsoft Entra admin center. For more information, see: [Troubleshooting Windows devices in Azure AD](../devices/troubleshoot-device-windows-joined.md). The ability for users to create tenants from the Manage Tenant overview has been **Service category:** My Apps **Product capability:** End User Experiences -We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md). +We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Microsoft Entra admin centers. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews.
As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md). Customers can now meet their complex audit and recertification requirements thro Currently, users can self-service leave for an organization without the visibility of their IT administrators. Some organizations may want more control over this self-service process. -With this feature, IT administrators can now allow or restrict external identities to leave an organization by Microsoft provided self-service controls via Azure Active Directory in the Microsoft Entra portal. In order to restrict users to leave an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties. +With this feature, IT administrators can now allow or restrict external identities to leave an organization by Microsoft provided self-service controls via Azure Active Directory in the Microsoft Entra admin center. In order to restrict users to leave an organization, customers need to include "Global privacy contact" and "Privacy statement URL" under tenant properties. 
A new policy API is available for the administrators to control tenant wide policy: [externalIdentitiesPolicy resource type](/graph/api/resources/externalidentitiespolicy?view=graph-rest-beta&preserve-view=true) Identity Protection risk detections (alerts) are now also available in Microsoft In August 2022, we've added the following 40 new applications in our App gallery with Federation support -[Albourne Castle](https://village.albourne.com/castle), [Adra by Trintech](../saas-apps/adra-by-trintech-tutorial.md), [workhub](../saas-apps/workhub-tutorial.md), [4DX](../saas-apps/4dx-tutorial.md), [Ecospend IAM V1](https://iamapi.sb.ecospend.com/account/login), [TigerGraph](../saas-apps/tigergraph-tutorial.md), [Sketch](../saas-apps/sketch-tutorial.md), [Lattice](../saas-apps/lattice-tutorial.md), [snapADDY Single Sign On](https://app.snapaddy.com/login), [RELAYTO Content Experience Platform](https://relayto.com/signin), [oVice](https://tour.ovice.in/login), [Arena](../saas-apps/arena-tutorial.md), [QReserve](../saas-apps/qreserve-tutorial.md), [Curator](../saas-apps/curator-tutorial.md), [NetMotion Mobility](../saas-apps/netmotion-mobility-tutorial.md), [HackNotice](../saas-apps/hacknotice-tutorial.md), [ERA_EHS_CORE](../saas-apps/era-ehs-core-tutorial.md), [AnyClip Teams Connector](https://videomanager.anyclip.com/login), [Wiz SSO](../saas-apps/wiz-sso-tutorial.md), [Tango Reserve by AgilQuest (EU Instance)](../saas-apps/tango-reserve-tutorial.md), [valid8Me](../saas-apps/valid8me-tutorial.md), [Ahrtemis](../saas-apps/ahrtemis-tutorial.md), [KPMG Leasing Tool](../saas-apps/kpmg-tool-tutorial.md) [Mist Cloud Admin SSO](../saas-apps/mist-cloud-admin-tutorial.md), [Work-Happy](https://live.work-happy.com/?azure=true), [Ediwin SaaS EDI](../saas-apps/ediwin-saas-edi-tutorial.md), [LUSID](../saas-apps/lusid-tutorial.md), [Next Gen Math](https://nextgenmath.com/), [Total ID](https://www.tokyo-shoseki.co.jp/ict/), [Cheetah For 
Benelux](../saas-apps/cheetah-for-benelux-tutorial.md), [Live Center Australia](https://au.livecenter.com/), [Shop Floor Insight](https://www.dmsiworks.com/apps/shop-floor-insight), [Warehouse Insight](https://www.dmsiworks.com/apps/warehouse-insight), [myAOS](../saas-apps/myaos-tutorial.md), [Hero](https://admin.linc-ed.com/), [FigBytes](../saas-apps/figbytes-tutorial.md), [VerosoftDesign](https://verosoft-design.vercel.app/), [ViewpointOne - UK](https://identity-uk.team.viewpoint.com/), [EyeRate Reviews](https://azure-login.eyeratereviews.com/), [Lytx DriveCam](../saas-apps/lytx-drivecam-tutorial.md) +[Albourne Castle](https://village.albourne.com/castle), [Adra by Trintech](../saas-apps/adra-by-trintech-tutorial.md), [workhub](../saas-apps/workhub-tutorial.md), [4DX](../saas-apps/4dx-tutorial.md), [Ecospend IAM V1](https://iamapi.sb.ecospend.com/account/login), [TigerGraph](../saas-apps/tigergraph-tutorial.md), [Sketch](../saas-apps/sketch-tutorial.md), [Lattice](../saas-apps/lattice-tutorial.md), [snapADDY Single Sign On](https://app.snapaddy.com/login), [RELAYTO Content Experience Platform](https://relayto.com/signin), [oVice](https://tour.ovice.in/login), [Arena](../saas-apps/arena-tutorial.md), [QReserve](../saas-apps/qreserve-tutorial.md), [Curator](../saas-apps/curator-tutorial.md), [NetMotion Mobility](../saas-apps/netmotion-mobility-tutorial.md), [HackNotice](../saas-apps/hacknotice-tutorial.md), [ERA_EHS_CORE](../saas-apps/era-ehs-core-tutorial.md), [AnyClip Teams Connector](https://videomanager.anyclip.com/login), [Wiz SSO](../saas-apps/wiz-sso-tutorial.md), [Tango Reserve by AgilQuest (EU Instance)](../saas-apps/tango-reserve-tutorial.md), [valid8Me](../saas-apps/valid8me-tutorial.md), [Ahrtemis](../saas-apps/ahrtemis-tutorial.md), [KPMG Leasing Tool](../saas-apps/kpmg-tool-tutorial.md) [Mist Cloud Admin SSO](../saas-apps/mist-cloud-admin-tutorial.md), [Ediwin SaaS EDI](../saas-apps/ediwin-saas-edi-tutorial.md), 
[LUSID](../saas-apps/lusid-tutorial.md), [Next Gen Math](https://nextgenmath.com/), [Total ID](https://www.tokyo-shoseki.co.jp/ict/), [Cheetah For Benelux](../saas-apps/cheetah-for-benelux-tutorial.md), [Live Center Australia](https://au.livecenter.com/), [Shop Floor Insight](https://www.dmsiworks.com/apps/shop-floor-insight), [Warehouse Insight](https://www.dmsiworks.com/apps/warehouse-insight), [myAOS](../saas-apps/myaos-tutorial.md), [Hero](https://admin.linc-ed.com/), [FigBytes](../saas-apps/figbytes-tutorial.md), [VerosoftDesign](https://verosoft-design.vercel.app/), [ViewpointOne - UK](https://identity-uk.team.viewpoint.com/), [EyeRate Reviews](https://azure-login.eyeratereviews.com/), [Lytx DriveCam](../saas-apps/lytx-drivecam-tutorial.md) You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial, For listing your application in the Azure AD app gallery, please read the detail -## February 2022 - --- --### General Availability - France digital accessibility requirement --**Type:** Plan for change -**Service category:** Other -**Product capability:** End User Experiences - --This change provides users who are signing into Azure Active Directory on iOS, Android, and Web UI flavors information about the accessibility of Microsoft's online services via a link on the sign-in page. This ensures that the France digital accessibility compliance requirements are met. The change will only be available for French language experiences.[Learn more](https://www.microsoft.com/fr-fr/accessibility/accessibilite/accessibility-statement) - --- --### General Availability - Downloadable access review history report --**Type:** New feature -**Service category:** Access Reviews -**Product capability:** Identity Governance - --With Azure Active Directory (Azure AD) Access Reviews, you can create a downloadable review history to help your organization gain more insight. 
The report pulls the decisions that were taken by reviewers when a report is created. These reports can be constructed to include specific access reviews, for a specific time frame, and can be filtered to include different review types and review results. [Learn more](../governance/access-reviews-downloadable-review-history.md) - ----- --### Public Preview of Identity Protection for Workload Identities --**Type:** New feature -**Service category:** Identity Protection -**Product capability:** Identity Security & Protection - --Azure AD Identity Protection is extending its core capabilities of detecting, investigating, and remediating identity-based risk to workload identities. This allows organizations to better protect their applications, service principals, and managed identities. We're also extending Conditional Access so you can block at-risk workload identities. [Learn more](../identity-protection/concept-workload-identity-risk.md) - --- --### Public Preview - Cross-tenant access settings for B2B collaboration --**Type:** New feature -**Service category:** B2B -**Product capability:** Collaboration -- --Cross-tenant access settings enable you to control how users in your organization collaborate with members of external Azure AD organizations. Now you have granular inbound and outbound access control settings that work on a per org, user, group, and application basis. These settings also make it possible for you to trust security claims from external Azure AD organizations like multi-factor authentication (MFA), device compliance, and hybrid Azure AD joined devices.
[Learn more](../external-identities/cross-tenant-access-overview.md) - --- --### Public preview - Create Azure AD access reviews with multiple stages of reviewers --**Type:** New feature -**Service category:** Access Reviews -**Product capability:** Identity Governance - --Use multi-stage reviews to create Azure AD access reviews in sequential stages, each with its own set of reviewers and configurations. Supports multiple stages of reviewers to satisfy scenarios such as: independent groups of reviewers reaching quorum, escalations to other reviewers, and reducing burden by allowing for later stage reviewers to see a filtered-down list. For public preview, multi-stage reviews are only supported on reviews of groups and applications. [Learn more](../governance/create-access-review.md) - --- --### New Federated Apps available in Azure AD Application gallery - February 2022 --**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** Third Party Integration - --In February 2022 we added the following 20 new applications in our App gallery with Federation support: --[Embark](../saas-apps/embark-tutorial.md), [FENCE-Mobile RemoteManager SSO](../saas-apps/fence-mobile-remotemanager-sso-tutorial.md), [カオナビ](../saas-apps/kao-navi-tutorial.md), [Adobe Identity Management (OIDC)](../saas-apps/adobe-identity-management-tutorial.md), [AppRemo](../saas-apps/appremo-tutorial.md), [Live Center](https://livecenter.norkon.net/Login), [Offishall](https://app.offishall.io/), [MoveWORK Flow](https://www.movework-flow.fm/login), [Cirros SL](https://www.cirros.net/), [ePMX Procurement Software](https://azure.epmxweb.com/admin/index.php?), [Vanta O365](https://app.vanta.com/connections), [Hubble](../saas-apps/hubble-tutorial.md), [Medigold Gateway](https://gateway.medigoldcore.com), [クラウドログ](../saas-apps/crowd-log-tutorial.md),[Amazing People Schools](../saas-apps/amazing-people-schools-tutorial.md), [XplicitTrust Network 
Access](https://console.xplicittrust.com/#/dashboard), [Spike Email - Mail & Team Chat](https://spikenow.com/web/), [AltheaSuite](https://planmanager.altheasuite.com/), [Balsamiq Wireframes](../saas-apps/balsamiq-wireframes-tutorial.md). --You can also find the documentation of all the applications from here: [https://aka.ms/AppsTutorial](../saas-apps/tutorial-list.md), --For listing your application in the Azure AD app gallery, please read the details here: [https://aka.ms/AzureADAppRequest](../manage-apps/v2-howto-app-gallery-listing.md) -- --- --### Two new MDA detections in Identity Protection --**Type:** New feature -**Service category:** Identity Protection -**Product capability:** Identity Security & Protection - --Identity Protection has added two new detections from Microsoft Defender for Cloud Apps (formerly MCAS). The Mass Access to Sensitive Files detection detects anomalous user activity, and the Unusual Addition of Credentials to an OAuth app detects suspicious service principal activity. [Learn more](../identity-protection/concept-identity-protection-risks.md) - --- --### Public preview - New provisioning connectors in the Azure AD Application Gallery - February 2022 --**Type:** New feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration - --You can now automate creating, updating, and deleting user accounts for these newly integrated apps: --- [BullseyeTDP](../saas-apps/bullseyetdp-provisioning-tutorial.md)-- [GitHub Enterprise Managed User (OIDC)](../saas-apps/github-enterprise-managed-user-oidc-provisioning-tutorial.md)-- [Gong](../saas-apps/gong-provisioning-tutorial.md)-- [LanSchool Air](../saas-apps/lanschool-air-provisioning-tutorial.md)-- [ProdPad](../saas-apps/prodpad-provisioning-tutorial.md)--For more information about how to better secure your organization by using automated user account provisioning, see [Automate user provisioning to SaaS applications with Azure
AD](../app-provisioning/user-provisioning.md). - --- --### General Availability - Privileged Identity Management (PIM) role activation for SharePoint Online enhancements --**Type:** Changed feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management - --We've improved the Privileged Identity Management (PIM) time to role activation for SharePoint Online. Now, when activating a role in PIM for SharePoint Online, you should be able to use your permissions right away in SharePoint Online. This change rolls out in stages, so you might not yet see these improvements in your organization. [Learn more](../privileged-identity-management/pim-how-to-activate-role.md) - -- |
active-directory | Whats New Sovereign Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md | In the **All Devices** settings under the Registered column, you can now select **Service category:** My Apps **Product capability:** End User Experiences -We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Entra portals. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md). +We have consolidated relevant app launcher settings in a new App launchers section in the Azure and Microsoft Entra admin centers. The entry point can be found under Enterprise applications, where Collections used to be. You can find the Collections option by selecting App launchers. In addition, we've added a new App launchers Settings option. This option has some settings you may already be familiar with like the Microsoft 365 settings. The new Settings options also have controls for previews. As an admin, you can choose to try out new app launcher features while they are in preview. Enabling a preview feature means that the feature turns on for your organization. This enabled feature reflects in the My Apps portal, and other app launchers for all of your users. 
To learn more about the preview settings, see: [End-user experiences for applications](../manage-apps/end-user-experiences.md). |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md | Starting July 2023, we're modernizing the following Terms of Use end user experi No functionalities are removed. The new PDF viewer adds functionality and the limited visual changes in the end-user experiences will be communicated in a future update. If your organization has allow-listed only certain domains, you must ensure your allowlist includes the domains ‘myaccount.microsoft.com’ and ‘*.myaccount.microsoft.com’ for Terms of Use to continue working as expected. ---## February 2023 --### General Availability - Expanding Privileged Identity Management Role Activation across the Azure portal --**Type:** New feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management --Privileged Identity Management (PIM) role activation has been expanded to the Billing and AD extensions in the Azure portal. Shortcuts have been added to Subscriptions (billing) and Access Control (IAM) to allow users to activate PIM roles directly from these settings. From the Subscriptions settings, select **View eligible subscriptions** in the horizontal command menu to check your eligible, active, and expired assignments. From there, you can activate an eligible assignment in the same pane. In Access control (IAM) for a resource, you can now select **View my access** to see your currently active and eligible role assignments and activate directly. By integrating PIM capabilities into different Azure portal blades, this new feature allows users to gain temporary access to view or edit subscriptions and resources more easily. ---For more information, see: [Activate my Azure resource roles in Privileged Identity Management](../privileged-identity-management/pim-resource-roles-activate-your-roles.md).
----### General Availability - Follow Azure AD best practices with recommendations --**Type:** New feature -**Service category:** Reporting -**Product capability:** Monitoring & Reporting --Azure AD recommendations help you improve your tenant posture by surfacing opportunities to implement best practices. On a daily basis, Azure AD analyzes the configuration of your tenant. During this analysis, Azure AD compares the data of a recommendation with the actual configuration of your tenant. If a recommendation is flagged as applicable to your tenant, the recommendation appears in the Recommendations section of the Azure AD Overview. --This release includes our first 3 recommendations: --- Convert from per-user MFA to Conditional Access MFA-- Migrate applications from AD FS to Azure AD-- Minimize MFA prompts from known devices---For more information, see: --- [What are Azure Active Directory recommendations?](../reports-monitoring/overview-recommendations.md)-- [Use the Azure AD recommendations API to implement Azure AD best practices for your tenant](/graph/api/resources/recommendations-api-overview)----### Public Preview - Azure AD PIM + Conditional Access integration --**Type:** New feature -**Service category:** Privileged Identity Management -**Product capability:** Privileged Identity Management --Now you can require users who are eligible for a role to satisfy Conditional Access policy requirements for activation: use specific authentication method enforced through Authentication Strengths, activate from Intune compliant device, comply with Terms of Use, and use 3rd party MFA and satisfy location requirements. --For more information, see: [Configure Azure AD role settings in Privileged Identity Management](../privileged-identity-management/pim-how-to-change-default-settings.md).
-----### General Availability - More information on why a sign-in was flagged as "unfamiliar" --**Type:** Changed feature -**Service category:** Identity Protection -**Product capability:** Identity Security & Protection --Unfamiliar sign-in properties risk detection now provides risk reasons as to which properties are unfamiliar for customers to better investigate that risk. --Identity Protection now surfaces the unfamiliar properties in the Azure portal on UX and in API as *Additional Info* with a user-friendly description explaining that *the following properties are unfamiliar for this sign-in of the given user*. --There's no additional work to enable this feature, the unfamiliar properties are shown by default. For more information, see: [Sign-in risk](../identity-protection/concept-identity-protection-risks.md). -----### General Availability - New Federated Apps available in Azure AD Application gallery - February 2023 ----**Type:** New feature -**Service category:** Enterprise Apps -**Product capability:** 3rd Party Integration --In February 2023 we've added the following 10 new applications in our App gallery with Federation support: --[PROCAS](https://accounting.procas.com/), [Tanium Cloud SSO](../saas-apps/tanium-sso-tutorial.md), [LeanDNA](../saas-apps/leandna-tutorial.md), [CalendarAnything LWC](https://silverlinecrm.com/calendaranything/), [courses.work](../saas-apps/courseswork-tutorial.md), [Udemy Business SAML](../saas-apps/udemy-business-saml-tutorial.md), [Canva](../saas-apps/canva-tutorial.md), [Kno2fy](../saas-apps/kno2fy-tutorial.md), [IT-Conductor](../saas-apps/it-conductor-tutorial.md), [ナレッジワーク(Knowledge Work)](../saas-apps/knowledge-work-tutorial.md), [Valotalive Digital Signage Microsoft 365 integration](https://store.valotalive.com/#main), [Priority Matrix HIPAA](https://hipaa.prioritymatrix.com/), [Priority Matrix Government](https://hipaa.prioritymatrix.com/), [Beable](../saas-apps/beable-tutorial.md), 
[Grain](https://grain.com/app?dialog=integrations&integration=microsoft+teams), [DojoNavi](../saas-apps/dojonavi-tutorial.md), [Global Validity Access Manager](https://myaccessmanager.com/), [FieldEquip](https://app.fieldequip.com/), [Peoplevine](https://control.peoplevine.com/), [Respondent](../saas-apps/respondent-tutorial.md), [WebTMA](../saas-apps/webtma-tutorial.md), [ClearIP](https://clearip.com/login), [Pennylane](../saas-apps/pennylane-tutorial.md), [VsimpleSSO](https://app.vsimple.com/login), [Compliance Genie](../saas-apps/compliance-genie-tutorial.md), [Dataminr Corporate](https://dmcorp.okta.com/), [Talon](../saas-apps/talon-tutorial.md). ---You can also find the documentation of all the applications from here https://aka.ms/AppsTutorial. --For listing your application in the Azure AD app gallery, read the details here https://aka.ms/AzureADAppRequest ----### Public Preview - New provisioning connectors in the Azure AD Application Gallery - February 2023 --**Type:** New feature -**Service category:** App Provisioning -**Product capability:** 3rd Party Integration - --We've added the following new applications in our App gallery with Provisioning support. You can now automate creating, updating, and deleting of user accounts for these newly integrated apps: --- [Atmos](../saas-apps/atmos-provisioning-tutorial.md)---For more information about how to better secure your organization by using automated user account provisioning, see: [Automate user provisioning to SaaS applications with Azure AD](../app-provisioning/user-provisioning.md). -- |
active-directory | Access Reviews Downloadable Review History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-downloadable-review-history.md | Review history and request review history are available for any user if they're **Prerequisite role:** All users authorized to view access reviews -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, under **Access Reviews** select **Review history**. +1. Browse to **Identity governance** > **Access Reviews** > **Review History**. 1. Select **New report**. The reports provide details on a per-user basis showing the following informatio | Element name | Description | | | | | AccessReviewId | Review object ID |-| AccessReviewSeriesId | Object ID of the review series, if the review is an instance of a recurring review. If the review is one time, the value is am empty GUID. | +| AccessReviewSeriesId | Object ID of the review series, if the review is an instance of a recurring review. If the review is one time, the value is an empty GUID. | | ReviewType | Review types include group, application, Azure AD role, Azure role, and access package| |ResourceDisplayName | Display Name of the resource being reviewed | | ResourceId | ID of the resource being reviewed | |
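The review history described in this row can also be requested programmatically through Microsoft Graph. As a hedged sketch (the `accessReviewHistoryDefinition` resource shape and property names below reflect my reading of the Graph API and should be verified against the current reference; the group ID is a placeholder), the request body for `POST /identityGovernance/accessReviews/historyDefinitions` might be built like this:

```python
import json
from datetime import datetime, timezone

def build_history_definition(display_name, start, end, review_filter):
    """Build a request body for creating an access review history definition.

    Property names assumed from the Microsoft Graph accessReviewHistoryDefinition
    resource type; confirm against the current API reference before use.
    """
    return {
        "displayName": display_name,
        # Decisions to include in the downloadable report.
        "decisions": ["approve", "deny", "dontKnow", "notReviewed"],
        "reviewHistoryPeriodStartDateTime": start.isoformat(),
        "reviewHistoryPeriodEndDateTime": end.isoformat(),
        # Scope: an OData query selecting which review definitions to include.
        "scopes": [
            {
                "@odata.type": "#microsoft.graph.accessReviewQueryScope",
                "queryType": "MicrosoftGraph",
                "query": review_filter,
            }
        ],
    }

body = build_history_definition(
    "Q1 group membership review history",
    datetime(2023, 1, 1, tzinfo=timezone.utc),
    datetime(2023, 3, 31, tzinfo=timezone.utc),
    "/identityGovernance/accessReviews/definitions?$filter=contains(scope/query, 'accessPackage')",
)
print(json.dumps(body, indent=2))
```

Once the definition is created, the service generates download URIs for the report; the CSV columns correspond to the elements in the table above (AccessReviewId, AccessReviewSeriesId, ReviewType, and so on).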
active-directory | Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/apps.md | Microsoft Entra identity governance can be integrated with many other applicatio | SAML-based apps | | ● | | [SAP Analytics Cloud](../../active-directory/saas-apps/sap-analytics-cloud-provisioning-tutorial.md) | ● | ● | | [SAP Cloud Platform](../../active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md) | ● | ● |-| [SAP ECC 7.0](../../active-directory/app-provisioning/on-premises-sap-connector-configure.md) | ● | | -| SAP R/3 | ● | | +| [SAP R/3 and ERP](../../active-directory/app-provisioning/on-premises-sap-connector-configure.md) | ● | | | [SAP HANA](../../active-directory/saas-apps/saphana-tutorial.md) | ● | ● | | [SAP SuccessFactors to Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-tutorial.md) | ● | ● | | [SAP SuccessFactors to Azure Active Directory](../../active-directory/saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md) | ● | ● | |
active-directory | Check Status Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md | When a workflow is created, it's important to check its status, and run history You're able to retrieve run information of a workflow using Lifecycle Workflows. To check the runs of a workflow using the Azure portal, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. Select **Azure Active Directory** and then select **Identity Governance**. --1. On the left menu, select **Lifecycle Workflows**. --1. On the Lifecycle Workflows overview page, select **Workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **workflows**. 1. Select the workflow whose run history you want to view. |
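Run history is also exposed through Microsoft Graph under the lifecycle workflows API. As an illustrative sketch (the endpoint path follows the `identityGovernance/lifecycleWorkflows` API as I understand it, and the workflow ID is a placeholder), the request URL for listing a workflow's runs, optionally narrowed to a time window, could be assembled like this:

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def workflow_runs_url(workflow_id, started_after=None):
    """Build the URL for listing runs of a lifecycle workflow.

    Path assumed from the Microsoft Graph lifecycle workflows API; verify the
    segment names against the current Graph reference.
    """
    url = f"{GRAPH_BASE}/identityGovernance/lifecycleWorkflows/workflows/{workflow_id}/runs"
    if started_after:
        # OData filter to restrict the run history to a window of interest.
        url += "?$filter=" + quote(f"startedDateTime ge {started_after}")
    return url

url = workflow_runs_url(
    "00000000-0000-0000-0000-000000000000",  # placeholder workflow ID
    "2023-01-01T00:00:00Z",
)
```

Sending a GET to this URL with an app token holding the appropriate `LifecycleWorkflows` permission would return the same run records the admin center shows.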
active-directory | Check Workflow Execution Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-workflow-execution-scope.md | Workflow scheduling will automatically process the workflow for users meeting th To check the users who fall under the execution scope of a workflow, you'd follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. Type in **Identity Governance** on the search bar near the top of the page and select it. --1. In the left menu, select **Lifecycle workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **workflows**. 1. From the list of workflows, select the workflow you want to check the execution scope of. |
active-directory | Complete Access Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/complete-access-review.md | For more information, see [License requirements](access-reviews-overview.md#lice ## View the status of an access review-- You can track the progress of access reviews as they're completed. -1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/). - -1. In the left menu, select **Access reviews**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Access Reviews**. 1. In the list, select an access review. Manually or automatically applying results doesn't have an effect on a group tha On review creation, the creator can choose between two options for denied guest users in an access review. - Denied guest users can have their access to the resource removed. This is the default.+ - The denied guest user can be blocked from signing in for 30 days, then deleted from the tenant. During the 30-day period, an administrator can restore the guest user's access to the tenant. After the 30-day period, if the guest user hasn't been granted access to the resource again, they're removed from the tenant permanently. In addition, using the Microsoft Entra admin center, a Global Administrator can explicitly [permanently delete a recently deleted user](../fundamentals/users-restore.md) before that time period is reached. Once a user has been permanently deleted, the data about that guest user will be removed from active access reviews. Audit information about deleted users remains in the audit log. ### Actions taken on denied B2B direct connect users |
active-directory | Create Access Review Pim For Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review-pim-for-groups.md | For more information, see [License requirements](access-reviews-overview.md#lice ### Scope +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. +1. Browse to **Identity governance** > **Access Reviews**. -2. On the left menu, select **Access reviews**. --3. Select **New access review** to create a new access review. +1. Select **New access review** to create a new access review. ![Screenshot that shows the Access reviews pane in Identity Governance.](./media/create-access-review/access-reviews.png) -4. In the **Select what to review** box, select **Teams + Groups**. +1. In the **Select what to review** box, select **Teams + Groups**. ![Screenshot that shows creating an access review.](./media/create-access-review/select-what-review.png) -5. Select **Teams + Groups** and then select **Select Teams + groups** under **Review Scope**. A list of groups to choose from appears on the right. +1. Select **Teams + Groups** and then select **Select Teams + groups** under **Review Scope**. A list of groups to choose from appears on the right. ![Screenshot that shows selecting Teams + Groups.](./media/create-access-review/create-pim-review.png) |
active-directory | Create Access Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-access-review.md | If you're reviewing access to an application, then before creating the review, s ### Scope +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. --2. On the left menu, select **Access reviews**. +1. Browse to **Identity governance** > **Access Reviews**. 3. Select **New access review** to create a new access review. B2B direct connect users and teams are included in access reviews of the Teams-e Use the following instructions to create an access review on a team with shared channels: -1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator, User Admin or Identity Governance Admin. - -1. Open the [Identity Governance](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/) page. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. On the left menu, select **Access reviews**. +1. Browse to **Identity governance** > **Access Reviews**. 1. Select **+ New access review**. Use the following instructions to create an access review on a team with shared The prerequisite role is a Global or User administrator. -1. Sign in to the [Azure portal](https://portal.azure.com) and open the [Identity Governance page](https://portal.azure.com/#blade/Microsoft_AAD_ERM/DashboardBlade/). +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. On the menu on the left, under **Access reviews**, select **Settings**. +1. Browse to **Identity governance** > **Access Reviews** > **Settings**. 1. On the **Delegate who can create and manage access reviews** page, set **Group owners can create and manage access reviews for groups they own** to **Yes**. |
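The access reviews created through the portal steps in this row can also be created with the Microsoft Graph `accessReviewScheduleDefinition` API. The sketch below builds a request body for `POST /identityGovernance/accessReviews/definitions` for a recurring review of one group's membership, reviewed by the group's owners; treat the property names as my reading of the resource type rather than an authoritative template, and note the group ID is a placeholder:

```python
def build_review_definition(group_id):
    """Request-body sketch for creating a recurring access review.

    Shapes assumed from the Microsoft Graph accessReviewScheduleDefinition,
    accessReviewQueryScope, and accessReviewScheduleSettings resources.
    """
    return {
        "displayName": "Quarterly guest review",
        "scope": {
            # Review the transitive members of one group.
            "query": f"/groups/{group_id}/transitiveMembers",
            "queryType": "MicrosoftGraph",
        },
        # The group's owners act as reviewers.
        "reviewers": [
            {"query": f"/groups/{group_id}/owners", "queryType": "MicrosoftGraph"}
        ],
        "settings": {
            "mailNotificationsEnabled": True,
            "defaultDecisionEnabled": False,
            "instanceDurationInDays": 7,
            "recurrence": {
                # One review instance every three months, indefinitely.
                "pattern": {"type": "absoluteMonthly", "interval": 3},
                "range": {"type": "noEnd", "startDate": "2023-03-01"},
            },
        },
    }

definition = build_review_definition("00000000-0000-0000-0000-000000000000")
```

The same body shape, with a different `scope` query, covers reviews of application assignments or directory roles.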
active-directory | Create Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md | Lifecycle workflows allow for tasks associated with the lifecycle process to be - **Tasks**: Actions taken when a workflow is triggered. - **Execution conditions**: The who and when of a workflow. These conditions define which users (scope) this workflow should run against, and when (trigger) the workflow should run. -You can create and customize workflows for common scenarios by using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Azure portal, any workflow that you create must be based on a template. If you want to create a workflow without using a template, use Microsoft Graph. +You can create and customize workflows for common scenarios by using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Microsoft Entra admin center, any workflow that you create must be based on a template. If you want to create a workflow without using a template, use Microsoft Graph. ## Prerequisites [!INCLUDE [Microsoft Entra ID Governance license](../../../includes/active-directory-entra-governance-license.md)] -## Create a lifecycle workflow by using a template in the Azure portal +## Create a lifecycle workflow by using a template in the Microsoft Entra admin center -If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios. -To create a workflow based on a template: --1. Sign in to the [Azure portal](https://portal.azure.com). +If you're using the Microsoft Entra admin center to create a workflow, you can customize existing templates to meet your organization's needs. These templates include one for pre-hire common scenarios. -1. 
Select **Azure Active Directory** > **Identity Governance**. +To create a workflow based on a template: -1. On the left menu, select **Lifecycle Workflows**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. Select **Workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **Create a workflow**. 1. On the **Choose a workflow** page, select the workflow template that you want to use. |
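As this row notes, creating a workflow without a template requires Microsoft Graph. As a hedged sketch of `POST /identityGovernance/lifecycleWorkflows/workflows` (the `@odata.type` values and property names follow the lifecycle workflows resources as I understand them; the task definition ID and scope rule are placeholders), a joiner workflow body might look like:

```python
def build_onboarding_workflow():
    """Sketch of a request body for creating a lifecycle workflow via Graph.

    Resource shapes assumed from the Microsoft Graph lifecycle workflows API
    (triggerAndScopeBasedConditions, timeBasedAttributeTrigger); the
    taskDefinitionId below is a placeholder, not a real ID.
    """
    return {
        "category": "joiner",
        "displayName": "Pre-hire onboarding",
        "isEnabled": True,
        "executionConditions": {
            "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
            "scope": {
                "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
                # Which users the workflow applies to.
                "rule": "department eq 'Sales'",
            },
            "trigger": {
                "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
                "timeBasedAttribute": "employeeHireDate",
                # Negative offset: run 7 days before the hire date.
                "offsetInDays": -7,
            },
        },
        "tasks": [
            {
                "displayName": "Notify manager",
                "isEnabled": True,
                "taskDefinitionId": "00000000-0000-0000-0000-000000000000",  # placeholder
                "arguments": [],
            }
        ],
    }

workflow = build_onboarding_workflow()
```

The template-based flow in the admin center produces the same kind of object; Graph simply lets you author the conditions and tasks directly.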
active-directory | Customize Workflow Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-email.md | For more information on these customizable parameters, see [Common email task pa When you're customizing an email sent via lifecycle workflows, you can choose to customize either a new task or an existing task. You do these customizations the same way whether the task is new or existing, but the following steps walk you through updating an existing task. To customize emails sent from tasks within workflows by using the Azure portal: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). +1. Browse to **Identity governance** > **Lifecycle workflows** > **workflows**. -1. On the search bar near the top of the page, enter **Identity Governance** and select the result. -1. On the left menu, select **Lifecycle workflows**. --1. On the left menu, select **Workflows**. --1. Select **Tasks**. +1. Select the workflow that contains the email tasks you want to customize. 1. On the pane that lists tasks, select the task for which you want to customize the email. |
active-directory | Customize Workflow Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md | When you create workflows by using lifecycle workflows, you can fully customize Workflows that you create within lifecycle workflows follow the same schedule that you define on the **Workflow settings** pane. To adjust the schedule, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. On the search bar near the top of the page, enter **Identity Governance** and select the result. --1. On the left menu, select **Lifecycle workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows**. 1. On the **Lifecycle workflows** overview page, select **Workflow settings**. |
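The schedule that the Customize Workflow Schedule row above adjusts on the **Workflow settings** pane can also be read and changed through Microsoft Graph. A hedged sketch follows; `workflowScheduleIntervalInHours` is the tenant-wide settings property as best recalled from the beta `lifecycleWorkflows` settings resource, so confirm it against the Graph reference before use:

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings

PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
Content-Type: application/json

{
  "workflowScheduleIntervalInHours": 3
}
```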
active-directory | Delete Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md | When a workflow is deleted, it enters a soft-delete state. During this period, y [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. On the search bar near the top of the page, enter **Identity Governance**. Then select **Identity Governance** in the results. --1. On the left menu, select **Lifecycle Workflows**. --1. Select **Workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. 1. On the **Workflows** page, select the workflow that you want to delete. Then select **Delete**. |
active-directory | Entitlement Management Access Package Approval Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-approval-policy.md | -Each access package must have one or more access package assignment policies, before a user can be assigned access. When an access package is created in the Entra portal, the Entra portal automatically creates the first access package assignment policy for that access package. The policy determines who can request access, and who if anyone must approve access. +Each access package must have one or more access package assignment policies before a user can be assigned access. When an access package is created in the Microsoft Entra admin center, the Microsoft Entra admin center automatically creates the first access package assignment policy for that access package. The policy determines who can request access, and who, if anyone, must approve access. As an access package manager, you can change the approval and requestor information settings for an access package at any time by editing an existing policy or adding a new additional policy for requesting access. Follow these steps to specify the approval settings for requests for the access **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. ++1. On the **Access packages** page, open an access package. 1. 
Either select a policy to edit or add a new policy to the access package 1. Select **Policies** and then **Add policy** if you want to create a new policy. For example, if you listed Alice and Bob as the first stage approver(s), list Ca ## Collect additional requestor information for approval -In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or Multiple Choice questions at the time of request. There's a limit of 20 questions per policy and a limit of 25 answers for Multiple Choice questions. The questions will then be shown to approvers to help them make a decision. +In order to make sure users are getting access to the right access packages, you can require requestors to answer custom text field or Multiple Choice questions at the time of request. The questions will then be shown to approvers to help them make a decision. 1. Go to the **Requestor information** tab and select the **Questions** sub tab. |
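The requestor-information questions described in the row above can also be defined directly on the assignment policy through Microsoft Graph. A hedged sketch of the `questions` fragment of a beta `accessPackageAssignmentPolicy` (the `@odata.type` values and property names reflect the beta schema as best recalled; verify against the Graph reference):

```json
"questions": [
  {
    "@odata.type": "#microsoft.graph.accessPackageTextInputQuestion",
    "isRequired": true,
    "isSingleLineQuestion": true,
    "text": { "defaultText": "What is your cost center?" }
  },
  {
    "@odata.type": "#microsoft.graph.accessPackageMultipleChoiceQuestion",
    "isRequired": false,
    "text": { "defaultText": "Which region are you based in?" },
    "choices": [
      { "actualValue": "AMER", "displayValue": { "defaultText": "Americas" } },
      { "actualValue": "EMEA", "displayValue": { "defaultText": "Europe, Middle East, and Africa" } }
    ]
  }
]
```

Approvers then see the requestor's answers alongside the request, as the row describes.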
active-directory | Entitlement Management Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-assignments.md | To use entitlement management and assign users to access packages, you must have **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. ++1. On the **Access packages** page open an access package. 1. Select **Assignments** to see a list of active assignments. You can also retrieve assignments in an access package using Microsoft Graph. A ### View assignments with PowerShell -You can perform this query in PowerShell with the `Get-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet. 
+You can perform this query in PowerShell with the `Get-MgEntitlementManagementAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0. This cmdlet takes as a parameter the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet. ```powershell Connect-MgGraph -Scopes "EntitlementManagement.Read.All"-Select-MgProfile -Name "beta" -$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -$assignments = Get-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop -$assignments | ft Id,AssignmentState,TargetId,{$_.Target.DisplayName} +$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayName eq 'Marketing Campaign'" +$assignments = Get-MgEntitlementManagementAssignment -AccessPackageId $accesspackage.Id -ExpandProperty target -All -ErrorAction Stop +$assignments | ft Id,state,{$_.Target.id},{$_.Target.displayName} ``` ## Directly assign a user In some cases, you might want to directly assign specific users to an access pac **Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package. +1. 
On the **Access packages** page open an access package. 1. In the left menu, select **Assignments**. Entitlement management also allows you to directly assign external users to an a **Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package in which you want to add a user. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. ++1. On the **Access packages** page open an access package. 1. In the left menu, select **Assignments**. You can also directly assign a user to an access package using Microsoft Graph. ### Assign a user to an access package with PowerShell -You can assign a user to an access package in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. 
This cmdlet takes as parameters -* the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet, -* the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy`cmdlet, -* the object ID of the target user, if the user is already present in your directory. +You can assign a user to an access package in PowerShell with the `New-MgEntitlementManagementAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0. ```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"-Select-MgProfile -Name "beta" -$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies" -$policy = $accesspackage.AccessPackageAssignmentPolicies[0] -$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetId "a43ee6df-3cc5-491a-ad9d-ea964ef8e464" +$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty assignmentpolicies +$policy = $accesspackage.AssignmentPolicies[0] +$userid = "cdbdf152-82ce-479c-b5b8-df90f561d5c7" +$params = @{ + requestType = "adminAdd" + assignment = @{ + targetId = $userid + assignmentPolicyId = $policy.Id + accessPackageId = $accesspackage.Id + } +} +New-MgEntitlementManagementAssignmentRequest -BodyParameter $params ``` -You can also assign multiple users that are in your directory to an access package using PowerShell with the `New-MgEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity 
Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.1 or later. This cmdlet takes as parameters +You can also assign multiple users that are in your directory to an access package using PowerShell with the `New-MgBetaEntitlementManagementAccessPackageAssignment` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.4.0 or later. This cmdlet takes as parameters * the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet, * the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy`cmdlet, * the object IDs of the target users, either as an array of strings, or as a list of user members returned from the `Get-MgGroupMember` cmdlet. For example, if you want to ensure all the users who are currently members of a ```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Directory.Read.All"-Select-MgProfile -Name "beta" -$members = Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15" -$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies" -$policy = $accesspackage.AccessPackageAssignmentPolicies[0] -$req = New-MgEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -RequiredGroupMember $members +$members = Get-MgGroupMember -GroupId "a34abd69-6bf8-4abd-ab6b-78218b77dc15" -All ++$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies" +$policy = $accesspackage.AssignmentPolicies[0] +$req = New-MgBetaEntitlementManagementAccessPackageAssignment -AccessPackageId $accesspackage.Id -AssignmentPolicyId 
$policy.Id -RequiredGroupMember $members ``` -If you wish to add an assignment for a user who is not yet in your directory, you can use the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. This cmdlet takes as parameters +If you wish to add an assignment for a user who is not yet in your directory, you can use the `New-MgBetaEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) beta module version 2.1.x or later. This script illustrates using the Microsoft Graph PowerShell cmdlets beta module version 2.4.0. This cmdlet takes as parameters * the access package ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackage` cmdlet, * the access package assignment policy ID, which is included in the response from the `Get-MgEntitlementManagementAccessPackageAssignmentPolicy` cmdlet, * the email address of the target user. 
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"-Select-MgProfile -Name "beta" -$accesspackage = Get-MgEntitlementManagementAccessPackage -DisplayNameEq "Marketing Campaign" -ExpandProperty "accessPackageAssignmentPolicies" -$policy = $accesspackage.AccessPackageAssignmentPolicies[0] -$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com" +$accesspackage = Get-MgEntitlementManagementAccessPackage -Filter "displayname eq 'Marketing Campaign'" -ExpandProperty "assignmentPolicies" +$policy = $accesspackage.AssignmentPolicies[0] +$req = New-MgBetaEntitlementManagementAccessPackageAssignmentRequest -AccessPackageId $accesspackage.Id -AssignmentPolicyId $policy.Id -TargetEmail "sample@example.com" ``` ## Remove an assignment You can remove an assignment that a user or an administrator had previously requ **Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package. +1. On the **Access packages** page open an access package. 1. In the left menu, select **Assignments**. 
You can also remove an assignment of a user to an access package using Microsoft ### Remove an assignment with PowerShell -You can remove a user's assignment in PowerShell with the `New-MgEntitlementManagementAccessPackageAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. +You can remove a user's assignment in PowerShell with the `New-MgEntitlementManagementAssignmentRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later. This script illustrates using the Microsoft Graph PowerShell cmdlets module version 2.4.0. 
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"-Select-MgProfile -Name "beta" -$assignments = Get-MgEntitlementManagementAccessPackageAssignment -Filter "accessPackageId eq '9f573551-f8e2-48f4-bf48-06efbb37c7b8' and assignmentState eq 'Delivered'" -All -ErrorAction Stop -$toRemove = $assignments | Where-Object {$_.targetId -eq '76fd6e6a-c390-42f0-879e-93ca093321e7'} -$req = New-MgEntitlementManagementAccessPackageAssignmentRequest -AccessPackageAssignmentId $toRemove.Id -RequestType "AdminRemove" +$accessPackageId = "9f573551-f8e2-48f4-bf48-06efbb37c7b8" +$userId = "040a792f-4c5f-4395-902f-f0d9d192ab2c" +$filter = "accessPackage/Id eq '" + $accessPackageId + "' and state eq 'Delivered' and target/objectId eq '" + $userId + "'" +$assignment = Get-MgEntitlementManagementAssignment -Filter $filter -ExpandProperty target -all -ErrorAction stop +if ($assignment -ne $null) { + $params = @{ + requestType = "adminRemove" + assignment = @{ id = $assignment.id } + } + New-MgEntitlementManagementAssignmentRequest -BodyParameter $params +} ``` ## Next steps |
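The removal script in the row above builds an OData `$filter` to locate the delivered assignment and then submits an `adminRemove` request. For reference, a hedged sketch of the same flow as raw Microsoft Graph calls, reusing the IDs from the script (treat the exact filter syntax as something to verify against the `assignments` API reference):

```http
GET https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignments?$filter=accessPackage/id eq '9f573551-f8e2-48f4-bf48-06efbb37c7b8' and state eq 'Delivered' and target/objectId eq '040a792f-4c5f-4395-902f-f0d9d192ab2c'&$expand=target

POST https://graph.microsoft.com/v1.0/identityGovernance/entitlementManagement/assignmentRequests
Content-Type: application/json

{
  "requestType": "adminRemove",
  "assignment": { "id": "<assignment-id-from-the-GET-above>" }
}
```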
active-directory | Entitlement Management Access Package Auto Assignment Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-auto-assignment-policy.md | You'll need to have attributes populated on the users who will be in scope for b ## Create an automatic assignment policy -To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new policy for an access package. +To create a policy for an access package, you need to start from the access package's policy tab. Follow these steps to create a new automatic assignment policy for an access package. **Prerequisite role:** Global administrator or Identity Governance administrator -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, click **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. Click **Policies** and then **Add auto-assignment policy** to create a new policy. +1. On the **Access packages** page, open an access package. -1. In the first tab, you'll specify the rule. Click **Edit**. +1. Select **Policies** and then **Add auto-assignment policy** to create a new policy. ++1. In the first tab, you'll specify the rule. Select **Edit**. 1. Provide a dynamic membership rule, using the [membership rule builder](../enterprise-users/groups-dynamic-membership.md) or by selecting **Edit** on the rule syntax text box. > [!NOTE]- > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires the you to be in the Global administrator role. 
For more information, see [rule builder in the Azure portal](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal). + > The rule builder might not be able to display some rules constructed in the text box, and validating a rule currently requires you to be in the Global administrator role. For more information, see [rule builder in the Entra admin center](../enterprise-users/groups-create-rule.md#rule-builder-in-the-azure-portal). ![Screenshot of an access package automatic assignment policy rule configuration.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-rule-configuration.png) -1. Click **Save** to close the dynamic membership rule editor, then click **Next** to open the **Custom Extensions** tab. +1. Select **Save** to close the dynamic membership rule editor. +1. By default, the checkboxes to automatically create and remove assignments should remain checked. +1. If you wish users to retain access for a limited time after they go out of scope, you can specify a duration in hours or days. For example, when an employee leaves the sales department, you might want to let them retain access for 7 days so they can use sales apps and transfer ownership of their resources in those apps to another employee. +1. Select **Next** to open the **Custom Extensions** tab. 1. If you have [custom extensions](entitlement-management-logic-apps-integration.md) in your catalog you wish to have run when the policy assigns or removes access, you can add them to this policy. Then select **Next** to open the **Review** tab. To create a policy for an access package, you need to start from the access pack ![Screenshot of an access package automatic assignment policy review tab.](./media/entitlement-management-access-package-auto-assignment-policy/auto-assignment-review.png) -1. Click **Create** to save the policy. +1. Select **Create** to save the policy. 
> [!NOTE] > At this time, Entitlement management will automatically create a dynamic security group corresponding to each policy, in order to evaluate the users in scope. This group should not be modified except by Entitlement Management itself. This group may also be modified or deleted automatically by Entitlement Management, so don't use this group for other applications or scenarios. |
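The auto-assignment policy in the row above is driven by a dynamic membership rule, which uses the same expression syntax as dynamic group membership rules. A minimal illustrative example (the attribute values are placeholders, not values from the row):

```
(user.department -eq "Sales") -and (user.country -eq "US")
```

Users matching the rule are assigned the access package automatically, and the dynamic security group mentioned in the note is what evaluates this expression behind the scenes.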
active-directory | Entitlement Management Access Package Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-create.md | Title: Create an access package in entitlement management -description: Learn how to create an access package of resources that you want to share in Azure Active Directory entitlement management. +description: Learn how to create an access package of resources that you want to share in Microsoft Entra entitlement management. documentationCenter: '' Then once the access package is created, you can [change the hidden setting](ent To complete the following steps, you need a role of global administrator, Identity Governance administrator, user administrator, catalog owner, or access package manager. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Select **Azure Active Directory**, and then select **Identity Governance**. --1. On the left menu, select **Access packages**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. 1. Select **New access package**. - ![Screenshot that shows the button for creating a new access package in the Azure portal.](./media/entitlement-management-shared/access-packages-list.png) + ![Screenshot that shows the button for creating a new access package in the Microsoft Entra admin center.](./media/entitlement-management-shared/access-packages-list.png) ## Configure basics You can create an access package by using Microsoft Graph. 
A user in an appropri ### Create an access package by using Microsoft PowerShell -You can also create an access package in PowerShell by using the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. +You can also create an access package in PowerShell by using the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) beta module version 2.1.x or later. This script illustrates using the Microsoft Graph PowerShell cmdlets beta module version 2.4.0. -First, retrieve the ID of the catalog (and of the resources and their roles in that catalog) that you want to include in the access package. Use a script similar to the following example: +First, retrieve the ID of the catalog (and of the resource and its roles in that catalog) that you want to include in the access package. 
Use a script similar to the following example: ```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"-Select-MgProfile -Name "beta" -$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'" -$rsc = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes" -$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc[0].Id + "')" -$rr = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource" +$catalog = Get-MgBetaEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'" ++$rsc = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes" +$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc.Id + "')" +$rr = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource" ``` Then, create the access package: $params = @{ Description = "outside sales representatives" } -$ap = New-MgEntitlementManagementAccessPackage -BodyParameter $params +$ap = New-MgBetaEntitlementManagementAccessPackage -BodyParameter $params ``` -After you create the access package, assign the resource roles to it. For example, if you want to include the second resource role of the first resource returned earlier as a resource role of the new access package, you can use a script similar to this one: +After you create the access package, assign the resource roles to it. 
For example, if you want to include the second resource role of the resource returned earlier as a resource role of the new access package, you can use a script similar to this one: ```powershell $rparams = @{ $rparams = @{ DisplayName = $rr[2].DisplayName OriginSystem = $rr[2].OriginSystem AccessPackageResource = @{- Id = $rsc[0].Id - ResourceType = $rsc[0].ResourceType - OriginId = $rsc[0].OriginId - OriginSystem = $rsc[0].OriginSystem + Id = $rsc.Id + ResourceType = $rsc.ResourceType + OriginId = $rsc.OriginId + OriginSystem = $rsc.OriginSystem } } AccessPackageResourceScope = @{- OriginId = $rsc[0].OriginId - OriginSystem = $rsc[0].OriginSystem + OriginId = $rsc.OriginId + OriginSystem = $rsc.OriginSystem } }-New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams +New-MgBetaEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $ap.Id -BodyParameter $rparams ``` Finally, create the policies. In this policy, only the administrator can assign access, and there are no access reviews. For more examples, see [Create an assignment policy through PowerShell](entitlement-management-access-package-request-policy.md#create-an-access-package-assignment-policy-through-powershell) and [Create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true). |
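The row above ends by creating a policy in which only an administrator can assign access and there are no access reviews. A hedged sketch of what that policy body might look like against the beta `accessPackageAssignmentPolicies` endpoint (property names follow the beta schema as best recalled; the documents linked in the row are authoritative):

```http
POST https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackageAssignmentPolicies
Content-Type: application/json

{
  "accessPackageId": "<access-package-id>",
  "displayName": "Direct assignment only",
  "description": "Only administrators can assign access",
  "requestorSettings": {
    "scopeType": "NoSubjects",
    "acceptRequests": false
  },
  "requestApprovalSettings": {
    "isApprovalRequired": false
  },
  "accessReviewSettings": null
}
```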
active-directory | Entitlement Management Access Package Edit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-edit.md | Follow these steps to change the **Hidden** setting for an access package. **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. ++1. On the **Access packages** page, open an access package. 1. On the Overview page, select **Edit**. An access package can only be deleted if it has no active user assignments. Foll **Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package. +1. On the **Access packages** page, open the access package. 1. In the left menu, select **Assignments** and remove access for all users. |
active-directory | Entitlement Management Access Package First | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-first.md | Title: Tutorial - Manage access to resources in entitlement management -description: Step-by-step tutorial for how to create your first access package using the Azure portal in entitlement management. +description: Step-by-step tutorial for how to create your first access package using the Microsoft Entra admin center in entitlement management. documentationCenter: '' In this tutorial, you learn how to: > * Allow a user in your directory to request access > * Demonstrate how an internal user can request the access package -For a step-by-step demonstration of the process of deploying Azure Active Directory entitlement management, including creating your first access package, view the following video: +For a step-by-step demonstration of the process of deploying Microsoft Entra entitlement management, including creating your first access package, view the following video: >[!VIDEO https://www.youtube.com/embed/zaaKvaaYwI4] -This rest of this article uses the Azure portal to configure and demonstrate entitlement management. +The rest of this article uses the Microsoft Entra admin center to configure and demonstrate entitlement management. ## Prerequisites A resource directory has one or more resources to share. In this step, you creat ![Diagram that shows the users and groups for this tutorial.](./media/entitlement-management-access-package-first/elm-users-groups.png) -1. Sign in to the [Azure portal](https://portal.azure.com) as a Global administrator or User administrator. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a Global administrator or User administrator. -1. In the left navigation, select **Azure Active Directory**. +1. In the left navigation, select **Identity**. 1. [Create two users](../fundamentals/add-users.md). 
Use the following names or different names. An *access package* is a bundle of resources that a team or project needs and is ![Diagram that describes the relationship between the access package elements.](./media/entitlement-management-access-package-first/elm-access-package.png) -1. In the Azure portal, in the left navigation, select **Azure Active Directory**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Identity Governance** +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages**. If you see **Access denied**, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory. +1. On the **Access packages** page open an access package. ++1. When opening the access package if you see **Access denied**, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory. 1. Select **New access package**. In this step, you perform the steps as the **internal requestor** and request ac **Prerequisite role:** Internal requestor -1. Sign out of the Azure portal. +1. Sign out of the Microsoft Entra admin center. 1. In a new browser window, navigate to the My Access portal link you copied in the previous step. In this step, you confirm that the **internal requestor** was assigned the acces 1. Sign out of the My Access portal. -1. Sign in to the [Azure portal](https://portal.azure.com) as **Admin1**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as **Admin1**. -1. Select **Azure Active Directory** and then select **Identity Governance**. +1. Select **Identity Governance**. 1. In the left menu, select **Access packages**. 
In this step, you confirm that the **internal requestor** was assigned the acces :::image type="content" source="./media/entitlement-management-access-package-first/request-details.png" alt-text="Screenshot of the access package request details." lightbox="./media/entitlement-management-access-package-first/request-details.png"::: -1. In the left navigation, select **Azure Active Directory**. +1. In the left navigation, select **Identity**. 1. Select **Groups** and open the **Marketing resources** group. In this step, you remove the changes you made and delete the **Marketing Campaig **Prerequisite role:** Global administrator or User administrator -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. In the Microsoft Entra admin center, select **Identity Governance**. 1. Open the **Marketing Campaign** access package. In this step, you remove the changes you made and delete the **Marketing Campaig 1. For **Marketing Campaign**, select the ellipsis (**...**) and then select **Delete**. In the message that appears, select **Yes**. -1. In Azure Active Directory, delete any users you created such as **Requestor1** and **Admin1**. +1. In **Identity**, delete any users you created such as **Requestor1** and **Admin1**. 1. Delete the **Marketing resources** group. |
active-directory | Entitlement Management Access Package Incompatible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-incompatible.md | To use entitlement management and assign users to access packages, you must have Follow these steps to change the list of incompatible groups or other access packages for an existing access package: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Select **Azure Active Directory**, and then select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package which users will request. +1. On the **Access packages** page open the access package which users will request. 1. In the left menu, select **Separation of duties**. New-MgEntitlementManagementAccessPackageIncompatibleAccessPackageByRef -AccessPa Follow these steps to view the list of other access packages that have indicated that they're incompatible with an existing access package: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Select **Azure Active Directory**, and then select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package. +1. On the **Access packages** page open the access package. 1. In the left menu, select **Separation of duties**. 
If you've configured incompatible access settings on an access package that alre Follow these steps to view the list of users who have assignments to two access packages. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Select **Azure Active Directory**, and then select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package where you've configured another access package as incompatible. +1. On the **Access packages** page open the access package where you've configured another access package as incompatible. 1. In the left menu, select **Separation of duties**. If you're configuring incompatible access settings on an access package that alr Follow these steps to view the list of users who have assignments to two access packages. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Select **Azure Active Directory**, and then select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package where you'll be configuring incompatible assignments. +1. Open the access package where you'll be configuring incompatible assignments. 1. In the left menu, select **Assignments**. -1. In the **Status** field, ensure that **Delivered** status is selected. +1. In the **Status** field, ensure that **Delivered** status is selected. -1. 
Select the **Download** button and save the resulting CSV file as the first file with a list of assignments. +1. Select the **Download** button and save the resulting CSV file as the first file with a list of assignments. -1. In the navigation bar, select **Identity Governance**. +1. In the navigation bar, select **Identity Governance**. 1. In the left menu, select **Access packages** and then open the access package that you plan to indicate as incompatible. 1. In the left menu, select **Assignments**. -1. In the **Status** field, ensure that the **Delivered** status is selected. +1. In the **Status** field, ensure that the **Delivered** status is selected. -1. Select the **Download** button and save the resulting CSV file as the second file with a list of assignments. +1. Select the **Download** button and save the resulting CSV file as the second file with a list of assignments. -1. Use a spreadsheet program such as Excel to open the two files. +1. Use a spreadsheet program such as Excel to open the two files. -1. Users who are listed in both files will have already-existing incompatible assignments. +1. Users who are listed in both files will have already-existing incompatible assignments. ### Identifying users who already have incompatible access programmatically |
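The manual spreadsheet comparison of the two downloaded assignment files can also be scripted. This is a minimal sketch using `Compare-Object`; the file names and the `ObjectId` column are assumptions — match them to the actual columns in your exported CSV files.

```powershell
# Sketch: find users who appear in both downloaded assignment files.
# The file names and the "ObjectId" column are assumptions; adjust them
# to match your actual exports from the two access packages.
$first  = Import-Csv -Path .\package1-assignments.csv
$second = Import-Csv -Path .\package2-assignments.csv

# Rows present in both files indicate already-existing incompatible assignments.
Compare-Object -ReferenceObject $first -DifferenceObject $second `
    -Property ObjectId -IncludeEqual -ExcludeDifferent |
    Select-Object -ExpandProperty ObjectId
```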
active-directory | Entitlement Management Access Package Lifecycle Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-lifecycle-policy.md | To change the lifecycle settings for an access package, you need to open the cor **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, click **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. Click **Policies** and then click the policy that has the lifecycle settings you want to edit. +1. On the **Access packages** page open the access package that you want to edit. ++1. Select **Policies** and then select the policy that has the lifecycle settings you want to edit. The Policy details pane opens at the bottom of the page. ![Access package - Policy details pane](./media/entitlement-management-shared/policy-details.png) -1. Click **Edit** to edit the policy. +1. Select **Edit** to edit the policy. ![Access package - Edit policy](./media/entitlement-management-shared/policy-edit.png) -1. Click the **Lifecycle** tab to open the lifecycle settings. +1. Select the **Lifecycle** tab to open the lifecycle settings. [!INCLUDE [Entitlement management lifecycle policy](../../../includes/active-directory-entitlement-management-lifecycle-policy.md)] |
active-directory | Entitlement Management Access Package Manage Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-manage-lifecycle.md | Entitlement management allows you to gain visibility into the state of a guest u - **Blank** - The lifecycle for the guest user isn't determined. This happens when the guest user had an access package assigned before managing user lifecycle was possible. > [!NOTE]-> When a guest user is set as **Governed**, based on ELM tenant settings their account will be deleted or disabled in specified days after their last access package assignment expires. Learn more about ELM settings here: [Manage external access with Azure Active Directory entitlement management](../architecture/6-secure-access-entitlement-managment.md). +> When a guest user is set as **Governed**, based on ELM tenant settings their account will be deleted or disabled in specified days after their last access package assignment expires. Learn more about ELM settings here: [Manage external access with Microsoft Entra entitlement management](../architecture/6-secure-access-entitlement-managment.md). You can directly convert ungoverned users to be governed by using the **Mark Guests as Governed (preview)** functionality in the top menu bar. To manage user lifecycle, you'd follow these steps: **Prerequisite role:** Global administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. 
Browse to **Identity governance** > **Entitlement management** > **Access package**. ++1. On the **Access packages** page open the access package you want to manage guest user lifecycle of. 1. In the left menu, select **Assignments**. |
active-directory | Entitlement Management Access Package Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-request-policy.md | If you have a set of users that should have different request and approval setti **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, click **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. Click **Policies** and then **Add policy**. +1. On the **Access packages** page open the access package you want to edit. ++1. Select **Policies** and then **Add policy**. 1. You will start on the **Basics** tab. Type a name and a description for the policy. ![Create policy with name and description](./media/entitlement-management-access-package-request-policy/policy-name-description.png) -1. Click **Next** to open the **Requests** tab. +1. Select **Next** to open the **Requests** tab. 1. Change the **Users who can request access** setting. Use the steps in the following sections to change the setting to one of the following options: - [For users in your directory](#for-users-in-your-directory) Follow these steps if you want to allow users not in your directory to request t ![Access package - Requests - For users not in your directory](./media/entitlement-management-access-package-request-policy/for-users-not-in-your-directory.png) -1. Select one of the following options: +1. 
Select whether the users who can request access are required to be affiliated with an existing connected organization, or can be anyone on the Internet. A connected organization is one that you have a pre-existing relationship with, which might have an external Azure AD directory or another identity provider. Select one of the following options:

| | Description |
| --- | --- |
| **Specific connected organizations** | Choose this option if you want to select from a list of organizations that your administrator previously added. All users from the selected organizations can request this access package. |
-| **All configured connected organizations** | Choose this option if all users from all your configured connected organizations can request this access package. Only users from configured connected organizations can request access packages that are shown to users from all configured organizations. |
+| **All configured connected organizations** | Choose this option if all users from all your configured connected organizations can request this access package. Only users from configured connected organizations can request access packages, so if a user is not from an Azure AD tenant, domain or identity provider associated with an existing connected organization, they will not be able to request. |
| **All users (All connected organizations + any new external users)** | Choose this option if any user on the internet should be able to request this access package. If they don't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. The automatically created connected organization will be in a **proposed** state. For more information about the proposed state, see [State property of connected organizations](entitlement-management-organization.md#state-property-of-connected-organizations). 
| - A connected organization is an external Azure AD directory or domain that you have a relationship with. 1. If you selected **Specific connected organizations**, click **Add directories** to select from a list of connected organizations that your administrator previously added. Follow these steps if you want to allow users not in your directory to request t > [!NOTE] > All users from the selected connected organizations can request this access package. For a connected organization that has an Azure AD directory, users from all verified domains associated with the Azure AD directory can request, unless those domains are blocked by the Azure B2B allow or deny list. For more information, see [Allow or block invitations to B2B users from specific organizations](../external-identities/allow-deny-list.md). -1. If you want to require approval, use the steps in [Change approval settings for an access package in entitlement management](entitlement-management-access-package-approval-policy.md) to configure approval settings. +1. Next, use the steps in [Change approval settings for an access package in entitlement management](entitlement-management-access-package-approval-policy.md) to configure approval settings to specify who should approve requests from users not in your organization. 1. Go to the [Enable requests](#enable-requests) section. Follow these steps if you want to allow users not in your directory to request t Follow these steps if you want to bypass access requests and allow administrators to directly assign specific users to this access package. Users won't have to request the access package. You can still set lifecycle settings, but there are no request settings. -1. In the **Users who can request access** section, click **None (administrator direct assignments only**. +1. In the **Users who can request access** section, click **None (administrator direct assignments only)**. 
![Access package - Requests - None administrator direct assignments only](./media/entitlement-management-access-package-request-policy/none-admin-direct-assignments-only.png) To change the request and approval settings for an access package, you need to o **Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, click **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. Click **Policies** and then click the policy you want to edit. +1. On the **Access packages** page open the access package whose policy request settings you want to edit. ++1. Select **Policies** and then click the policy you want to edit. The Policy details pane opens at the bottom of the page. ![Access package - Policy details pane](./media/entitlement-management-shared/policy-details.png) -1. Click **Edit** to edit the policy. +1. Select **Edit** to edit the policy. ![Access package - Edit policy](./media/entitlement-management-shared/policy-edit.png) -1. Click the **Requests** tab to open the request settings. +1. Select the **Requests** tab to open the request settings. 1. Use the steps in the previous sections to change the request settings as needed. To change the request and approval settings for an access package, you need to o ![Access package - Policy- Enable policy setting](./media/entitlement-management-access-package-approval-policy/enable-requests.png) -1. Click **Next**. +1. Select **Next**. 1. 
If you want to require requestors to provide additional information when requesting access to an access package, use the steps in [Change approval and requestor information settings for an access package in entitlement management](entitlement-management-access-package-approval-policy.md#collect-additional-requestor-information-for-approval) to configure requestor information. 1. Configure lifecycle settings. -1. If you are editing a policy click **Update**. If you are adding a new policy, click **Create**. +1. If you are editing a policy select **Update**. If you are adding a new policy, select **Create**. ## Create an access package assignment policy programmatically You can create a policy using Microsoft Graph. A user in an appropriate role wit ### Create an access package assignment policy through PowerShell -You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. +You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later module version. -This script below illustrates using the `beta` profile, to create a policy for direct assignment to an access package. In this policy, only the administrator can assign access, and there are no access reviews. 
See [Create an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md#create-an-access-package-assignment-policy-through-powershell) for an example of how to create an automatic assignment policy, and [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-beta&preserve-view=true) for more examples. +This script below illustrates creating a policy for direct assignment to an access package. In this policy, only the administrator can assign access, and there are no approvals or access reviews. See [Create an automatic assignment policy](entitlement-management-access-package-auto-assignment-policy.md#create-an-access-package-assignment-policy-through-powershell) for an example of how to create an automatic assignment policy, and [create an accessPackageAssignmentPolicy](/graph/api/entitlementmanagement-post-assignmentpolicies?tabs=http&view=graph-rest-v1.0&preserve-view=true) for more examples. 
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"

$apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d"

-$pparams = @{
-    AccessPackageId = $apid
-    DisplayName = "direct"
-    Description = "direct assignments by administrator"
-    AccessReviewSettings = $null
-    RequestorSettings = @{
-        ScopeType = "NoSubjects"
-        AcceptRequests = $true
-        AllowedRequestors = @(
-        )
-    }
-    RequestApprovalSettings = @{
-        IsApprovalRequired = $false
-        IsApprovalRequiredForExtension = $false
-        IsRequestorJustificationRequired = $false
-        ApprovalMode = "NoApproval"
-        ApprovalStages = @(
-        )
-    }
+$params = @{
+    displayName = "New Policy"
+    description = "policy for assignment"
+    allowedTargetScope = "notSpecified"
+    specificAllowedTargets = @(
+    )
+    expiration = @{
+        endDateTime = $null
+        duration = $null
+        type = "noExpiration"
+    }
+    requestorSettings = @{
+        enableTargetsToSelfAddAccess = $false
+        enableTargetsToSelfUpdateAccess = $false
+        enableTargetsToSelfRemoveAccess = $false
+        allowCustomAssignmentSchedule = $true
+        enableOnBehalfRequestorsToAddAccess = $false
+        enableOnBehalfRequestorsToUpdateAccess = $false
+        enableOnBehalfRequestorsToRemoveAccess = $false
+        onBehalfRequestors = @(
+        )
+    }
+    requestApprovalSettings = @{
+        isApprovalRequiredForAdd = $false
+        isApprovalRequiredForUpdate = $false
+        stages = @(
+        )
+    }
+    accessPackage = @{
+        id = $apid
+    }
}
-New-MgEntitlementManagementAccessPackageAssignmentPolicy -BodyParameter $pparams
+
+New-MgEntitlementManagementAssignmentPolicy -BodyParameter $params
```

## Prevent requests from users with incompatible access |
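After running a policy-creation script like the one in this row, you can confirm the policy exists from the same session. This is a hedged sketch, assuming the v1.0 `Microsoft.Graph.Identity.Governance` module; the `accessPackage/id` filter expression is an assumption and may need adjusting for your module version.

```powershell
# Sketch: list assignment policies to confirm a new policy was created.
# Assumes the Microsoft.Graph.Identity.Governance (v1.0) module; the
# accessPackage/id filter is an assumption and may need adjusting.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

Get-MgEntitlementManagementAssignmentPolicy -Filter "accessPackage/id eq 'cdd5f06b-752a-4c9f-97a6-82f4eda6c76d'" |
    Select-Object Id, DisplayName, AllowedTargetScope
```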
active-directory | Entitlement Management Access Package Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-requests.md | In entitlement management, you can see who has requested access packages, the po **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, click **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. Click **Requests**. +1. On the **Access packages** page open the access package you want to view requests of. -1. Click a specific request to see additional details. +1. Select **Requests**. ++1. Select a specific request to see additional details. ![List of requests for an access package](./media/entitlement-management-access-package-requests/requests-list.png) You can also retrieve requests for an access package using Microsoft Graph. A u You can also remove a completed request that is no longer needed. To remove a request: -1. In the Azure portal, click **Azure Active Directory** and then click **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, click **Access packages** and then open the access package. +1. 
On the **Access packages** page open the access package you want to remove requests for. -1. Click **Requests**. +1. Select **Requests**. 1. Find the request you want to remove from the access package. |
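The request list shown in the admin center can also be retrieved programmatically, for example to find completed requests before removing them. A minimal sketch; the cmdlet and property names assume the v1.0 `Microsoft.Graph.Identity.Governance` module.

```powershell
# Sketch: list access package assignment requests and their state.
# Assumes the Microsoft.Graph.Identity.Governance (v1.0) module.
Connect-MgGraph -Scopes "EntitlementManagement.Read.All"

Get-MgEntitlementManagementAssignmentRequest |
    Select-Object Id, State, Status, CreatedDateTime
```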
active-directory | Entitlement Management Access Package Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-resources.md | If you need to add resources to an access package, you should check whether the **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. ++1. On the **Access packages** page open the access package whose catalog you want to check for resources. 1. In the left menu, select **Catalog** and then open the catalog. If you want some users to receive different roles than others, then you need to **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package. +1. On the **Access packages** page open the access package you want to add resource roles to. 1. In the left menu, select **Resource roles**. You can add a resource role to an access package using Microsoft Graph. 
A user i ### Add resource roles to an access package with Microsoft PowerShell -You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.16.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. This script illustrates using the Graph `beta` profile and Microsoft Graph PowerShell cmdlets module version 1.x.x. +You can also create an access package in PowerShell with the cmdlets from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) beta module version 2.1.x or later. This script illustrates using the Microsoft Graph PowerShell cmdlets beta module version 2.4.0. -First, you would retrieve the ID of the catalog, and of the resources and their roles in that catalog that you wish to include in the access package, using a script similar to the following. +First, you would retrieve the ID of the catalog, and of the resource and its roles in that catalog that you wish to include in the access package, using a script similar to the following. This assumes there is a single application resource in the catalog. 
```powershell
Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All"
-Select-MgProfile -Name "beta"
-$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
-$rsc = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes"
-$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc[0].Id + "')"
-$rr = Get-MgEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource"
+$catalog = Get-MgBetaEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'"
+
+$rsc = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResource -AccessPackageCatalogId $catalog.Id -Filter "resourceType eq 'Application'" -ExpandProperty "accessPackageResourceScopes"
+$filt = "(originSystem eq 'AadApplication' and accessPackageResource/id eq '" + $rsc.Id + "')"
+$rr = Get-MgBetaEntitlementManagementAccessPackageCatalogAccessPackageResourceRole -AccessPackageCatalogId $catalog.Id -Filter $filt -ExpandProperty "accessPackageResource"
```

-Then, assign the resource roles to the access package. For example, if you wished to include the second resource role of the first resource returned earlier as a resource role of an access package, you would use a script similar to the following.
+Then, assign the resource role from that resource to the access package. For example, if you wished to include the second resource role of the resource returned earlier as a resource role of an access package, you would use a script similar to the following. 
```powershell $apid = "cdd5f06b-752a-4c9f-97a6-82f4eda6c76d" $rparams = @{ DisplayName = $rr[2].DisplayName OriginSystem = $rr[2].OriginSystem AccessPackageResource = @{- Id = $rsc[0].Id - ResourceType = $rsc[0].ResourceType - OriginId = $rsc[0].OriginId - OriginSystem = $rsc[0].OriginSystem + Id = $rsc.Id + ResourceType = $rsc.ResourceType + OriginId = $rsc.OriginId + OriginSystem = $rsc.OriginSystem } } AccessPackageResourceScope = @{- OriginId = $rsc[0].OriginId - OriginSystem = $rsc[0].OriginSystem + OriginId = $rsc.OriginId + OriginSystem = $rsc.OriginSystem } }-New-MgEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $apid -BodyParameter $rparams +New-MgBetaEntitlementManagementAccessPackageResourceRoleScope -AccessPackageId $apid -BodyParameter $rparams ``` ## Remove resource roles **Prerequisite role:** Global administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -1. In the left menu, select **Access packages** and then open the access package. +1. On the **Access packages** page open the access package you want to remove resource roles for. 1. In the left menu, select **Resource roles**. |
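For readers working with the Graph API directly rather than the cmdlets, the `$rparams` hashtable in the diff above corresponds to a JSON request body posted to the beta `accessPackageResourceRoleScopes` endpoint. The following Python snippet is a minimal sketch that only builds that body; the GUIDs, the role display name, and the exact schema nesting are illustrative assumptions to verify against the current Graph beta reference, not values taken from this article.

```python
# Sketch: build the JSON body for creating an accessPackageResourceRoleScope
# (assumed beta Graph schema, mirroring the $rparams hashtable in the diff).
# All IDs and the role display name below are placeholders.

def build_role_scope_body(role_name, origin_system,
                          resource_id, resource_type, origin_id):
    """Return a dict mirroring the $rparams hashtable in the PowerShell diff."""
    resource = {
        "id": resource_id,
        "resourceType": resource_type,
        "originId": origin_id,
        "originSystem": origin_system,
    }
    return {
        "displayName": role_name,
        "originSystem": origin_system,
        "accessPackageResource": resource,
        "accessPackageResourceScope": {
            "originId": origin_id,
            "originSystem": origin_system,
        },
    }

body = build_role_scope_body(
    "Users",                                  # placeholder role name
    "AadApplication",
    "00000000-0000-0000-0000-000000000000",   # placeholder resource id
    "Application",
    "11111111-1111-1111-1111-111111111111",   # placeholder app originId
)
```

Building the body separately like this makes it easier to inspect or log the payload before handing it to whatever Graph client performs the POST.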
active-directory | Entitlement Management Access Package Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-package-settings.md | In order for the external user from another directory to use the My Access porta **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. ++1. On the **Access packages** page, open the access package you want to share a request link for. 1. On the Overview page, check the **Hidden** setting. If the **Hidden** setting is **No**, then even users who do not have the My Access portal link can browse and request the access package. If you do not wish to have them browse for the access package, then change the setting to **Yes**. |
active-directory | Entitlement Management Access Reviews Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-access-reviews-create.md | For more information, see [License requirements](entitlement-management-overview You can enable access reviews when [creating a new access package](entitlement-management-access-package-create.md) or [editing an existing access package assignment policy](entitlement-management-access-package-lifecycle-policy.md). If you have multiple policies, for different communities of users to request access, you can have independent access review schedules for each policy. Follow these steps to enable access reviews of an access package's assignments: -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. To create a new access policy, in the left menu, select **Access packages**, then select **New access** package. ++1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. ++1. To create a new access policy, select **New access package**. 1. To edit an existing access policy, in the left menu, select **Access packages** and open the access package you want to edit. Then, in the left menu, select **Policies** and select the policy that has the lifecycle settings you want to edit. |
active-directory | Entitlement Management Catalog Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md | A catalog is a container of resources and access packages. You create a catalog To create a catalog: -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. On the left menu, select **Catalogs**. +1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**. - ![Screenshot that shows entitlement management catalogs in the Azure portal.](./media/entitlement-management-catalog-create/catalogs.png) + ![Screenshot that shows entitlement management catalogs in the Entra admin center.](./media/entitlement-management-catalog-create/catalogs.png) 1. Select **New catalog**. To include resources in an access package, the resources must exist in a catalog To add resources to a catalog: -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. On the left menu, select **Catalogs** and then open the catalog you want to add resources to. +1. Browse to **Identity governance** > **Catalogs**. ++1. On the **Catalogs** page open the catalog you want to add resources to. 1. On the left menu, select **Resources**. You can also add a resource to a catalog by using Microsoft Graph. 
A user in an ### Add a resource to a catalog with PowerShell -You can also add a resource to a catalog in PowerShell with the `New-MgEntitlementManagementAccessPackageResourceRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 1.6.0 or a later 1.x.x module version, or Microsoft Graph PowerShell cmdlets beta module version 2.1.x or later beta module version. The following example shows how to add a group to a catalog as a resource using Microsoft Graph beta and Microsoft Graph PowerShell cmdlets module version 1.x.x. +You can also add a resource to a catalog in PowerShell with the `New-MgEntitlementManagementResourceRequest` cmdlet from the [Microsoft Graph PowerShell cmdlets for Identity Governance](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.Governance/) module version 2.1.x or later. The following example shows how to add a group to a catalog as a resource using Microsoft Graph PowerShell cmdlets module version 2.4.0. 
```powershell Connect-MgGraph -Scopes "EntitlementManagement.ReadWrite.All,Group.ReadWrite.All"-Select-MgProfile -Name "beta" + $g = Get-MgGroup -Filter "displayName eq 'Marketing'"-Import-Module Microsoft.Graph.Identity.Governance -$catalog = Get-MgEntitlementManagementAccessPackageCatalog -Filter "displayName eq 'Marketing'" -$nr = New-Object Microsoft.Graph.PowerShell.Models.MicrosoftGraphAccessPackageResource -$nr.OriginId = $g.Id -$nr.OriginSystem = "AadGroup" -$rr = New-MgEntitlementManagementAccessPackageResourceRequest -CatalogId $catalog.Id -AccessPackageResource $nr -$ar = Get-MgEntitlementManagementAccessPackageCatalog -AccessPackageCatalogId $catalog.Id -ExpandProperty accessPackageResources -$ar.AccessPackageResources ++$catalog = Get-MgEntitlementManagementCatalog -Filter "displayName eq 'Marketing'" +$params = @{ + requestType = "adminAdd" + resource = @{ + originId = $g.Id + originSystem = "AadGroup" + } + catalog = @{ id = $catalog.id } +} ++New-MgEntitlementManagementResourceRequest -BodyParameter $params +sleep 5 +$ar = Get-MgEntitlementManagementCatalog -AccessPackageCatalogId $catalog.Id -ExpandProperty resources +$ar.resources ``` ## Remove resources from a catalog You can remove resources from a catalog. A resource can be removed from a catalo To remove resources from a catalog: -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Catalogs**. -1. On the left menu, select **Catalogs** and then open the catalog you want to remove resources from. +1. On the **Catalogs** page open the catalog you want to remove resources from. 1. On the left menu, select **Resources**. The user who created a catalog becomes the first catalog owner. 
To delegate mana To assign a user to the catalog owner role: -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. On the left menu, select **Catalogs** and then open the catalog you want to add administrators to. +1. Browse to **Identity governance** > **Catalogs**. ++1. On the **Catalogs** page open the catalog you want to add administrators to. 1. On the left menu, select **Roles and administrators**. You can edit the name and description for a catalog. Users see this information To edit a catalog: -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Catalogs**. -1. On the left menu, select **Catalogs** and then open the catalog you want to edit. +1. On the **Catalogs** page open the catalog you want to edit. 1. On the catalog's **Overview** page, select **Edit**. You can delete a catalog, but only if it doesn't have any access packages. To delete a catalog: -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Catalogs**. -1. On the left menu, select **Catalogs** and then open the catalog you want to delete. +1. On the **Catalogs** page open the catalog you want to delete. 1. On the catalog's **Overview** page, select **Delete**. |
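The `$params` hashtable that the updated catalog script passes to `New-MgEntitlementManagementResourceRequest` serializes to a `resourceRequest` JSON body. As a language-neutral illustration, here is a minimal Python sketch that only constructs that body; the group and catalog GUIDs are placeholders, and the field names are assumptions to check against the current Graph `resourceRequest` reference.

```python
# Sketch: build the resourceRequest body used when an administrator adds an
# Azure AD group to a catalog (mirrors $params in the PowerShell diff above).
# Both GUIDs below are placeholders, not real identifiers.

def build_resource_request(group_object_id, catalog_id):
    """Return an adminAdd resourceRequest body for an Azure AD group."""
    return {
        "requestType": "adminAdd",          # administrator-initiated add
        "resource": {
            "originId": group_object_id,    # object id of the group
            "originSystem": "AadGroup",
        },
        "catalog": {"id": catalog_id},
    }

req = build_resource_request(
    "22222222-2222-2222-2222-222222222222",   # placeholder group id
    "33333333-3333-3333-3333-333333333333",   # placeholder catalog id
)
```

The same builder could be reused for other origin systems (for example, SharePoint sites or applications) by parameterizing `originSystem`, which is why the PowerShell example keeps the request as a plain hashtable.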
active-directory | Entitlement Management Custom Teams Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-custom-teams-extension.md | Prerequisite roles: Global administrator, Identity Governance administrator, or To create a Logic App and custom extension in a catalog, you'd follow these steps: -1. Navigate To Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement) +1. Navigate to the Microsoft Entra admin center: [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement) 1. In the left menu, select **Catalogs**. |
active-directory | Entitlement Management Delegate Catalog | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-catalog.md | Follow these steps to assign a user to the catalog creator role. **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, in the **Entitlement management** section, select **Settings**. +1. Browse to **Identity governance** > **Entitlement management** > **Settings**. 1. Select **Edit**. Follow these steps to assign a user to the catalog creator role. 1. Select **Save**. -## Allow delegated roles to access the Azure portal +## Allow delegated roles to access the Microsoft Entra admin center -To allow delegated roles, such as catalog creators and access package managers, to access the Azure portal to manage access packages, you should check the administration portal setting. +To allow delegated roles, such as catalog creators and access package managers, to access the Microsoft Entra admin center to manage access packages, you should check the administration portal setting. -**Prerequisite role:** Global administrator or User administrator +**Prerequisite role:** Global administrator, Identity Governance administrator, or User administrator -1. In the Azure portal, select **Azure Active Directory** and then select **Users**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **User settings**. +1. 
Browse to **Identity** > **Users** > **User settings**. 1. Make sure **Restrict access to Azure AD administration portal** is set to **No**. |
active-directory | Entitlement Management Delegate Managers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate-managers.md | Follow these steps to assign a user to the access package manager role: **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, or Catalog owner -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Catalogs** and then open the catalog you want to add administrators to. +1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**. ++1. On the **Catalogs** page, open the catalog you want to add administrators to. 1. In the left menu, select **Roles and administrators**. Follow these steps to remove a user from the access package manager role: **Prerequisite role:** Global administrator, User administrator, or Catalog owner -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**. -1. In the left menu, select **Catalogs** and then open the catalog you want to add administrators to. +1. On the **Catalogs** page, open the catalog you want to remove administrators from. 1. In the left menu, select **Roles and administrators**. |
active-directory | Entitlement Management Delegate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-delegate.md | For managing external collaboration, where the individual external users for a c * To allow users in external directories from connected organizations to be able to request access packages in a catalog, the catalog setting of **Enabled for external users** needs to be set to **Yes**. Changing this setting can be done by an administrator or a catalog owner of the catalog. * The access package must also have a policy set [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory). This policy can be created by an administrator, catalog owner or access package manager of the catalog.-* An access package with that policy will allow users in scope to be able to request access, including users not already in your directory. If their request is approved, or does not require approval, then the user will be automatically be added to your directory. +* An access package with that policy will allow users in scope to be able to request access, including users not already in your directory. If their request is approved, or does not require approval, then the user will be automatically added to your directory. * If the policy setting was for **All users**, and the user was not part of an existing connected organization, then a new proposed connected organization is automatically created. You can [view the list of connected organizations](entitlement-management-organization.md#view-the-list-of-connected-organizations) and remove organizations that are no longer needed. You can also configure what happens when an external user brought in by entitlement management loses their last assignment to any access packages. 
You can block them from signing in to this directory, or have their guest account removed, in the settings to [manage the lifecycle of external users](entitlement-management-external-users.md#manage-the-lifecycle-of-external-users). You can prevent users who are not in administrative roles from inviting individu To prevent delegated employees from configuring entitlement management to let external users request for external collaboration, then be sure to communicate this constraint to all global administrators, identity governance administrators, catalog creators, and catalog owners, as they are able to change catalogs, so that they do not inadvertently permit new collaboration in new or updated catalogs. They should ensure that catalogs are set with **Enabled for external users** to **No**, and do not have any access packages with policies for allowing a user not in the directory to request. -You can view the list of catalogs currently enabled for external users in the Azure portal. +You can view the list of catalogs currently enabled for external users in the Microsoft Entra admin center. -1. In the Azure portal, select **Azure Active Directory** > **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. On the left menu, select **Catalogs**. +1. Browse to **Identity governance** > **Entitlement management** > **Catalogs**. 1. Change the filter setting for **Enabled for external users** to **Yes**. |
active-directory | Entitlement Management External Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-external-users.md | - + Title: Govern access for external users in entitlement management description: Learn about the settings you can specify to govern access for external users in entitlement management. The following diagram and steps provide an overview of how external users are gr 1. If the policy settings include an expiration date, then later when the access package assignment for the external user expires, the external user's access rights from that access package are removed. -1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user is blocked from signing in and the guest user account is removed from your directory. +1. Depending on the lifecycle of external users settings, when the external user no longer has any access package assignments, the external user will be blocked from signing in, and the external user account will be removed from your directory. ## Settings for external users To ensure people outside of your organization can request access packages and ge ![Edit catalog settings](./media/entitlement-management-shared/catalog-edit.png) - If you're an administrator or catalog owner, you can view the list of catalogs currently enabled for external users in the Azure portal list of catalogs, by changing the filter setting for **Enabled for external users** to **Yes**. If any of those catalogs shown in that filtered view have a non-zero number of access packages, those access packages may have a policy [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory) that allow external users to request. 
+ If you're an administrator or catalog owner, you can view the list of catalogs currently enabled for external users in the Microsoft Entra admin center list of catalogs, by changing the filter setting for **Enabled for external users** to **Yes**. If any of those catalogs shown in that filtered view have a non-zero number of access packages, those access packages may have a policy [for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory) that allows external users to request access. ### Configure your Azure AD B2B external collaboration settings To ensure people outside of your organization can request access packages and ge :::image type="content" source="media/entitlement-management-external-users/exclude-app-guests-selection.png" alt-text="Screenshot of the exclude guests app selection."::: > [!NOTE]-> The Entitlement Management app includes the entitlement management side of MyAccess, the Entitlement Management side of Azure portal and the Entitlement Management part of MS graph. The latter two require additional permissions for access, hence won't be accessed by guests unless explicit permission is provided. +> The Entitlement Management app includes the entitlement management side of MyAccess, the Entitlement Management side of the Microsoft Entra admin center, and the Entitlement Management part of Microsoft Graph. The latter two require additional permissions for access, hence won't be accessed by guests unless explicit permission is provided. ### Review your SharePoint Online external sharing settings To ensure people outside of your organization can request access packages and ge ### Review your Microsoft 365 group sharing settings -- If you want to include Microsoft 365 groups in your access packages for external users, make sure the **Let users add new guests to the organization** is set to **On** to allow guest access. 
For more information, see [Manage guest access to Microsoft 365 Groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups?view=microsoft-365-worldwide#manage-groups-guest-access).+- If you want to include Microsoft 365 groups in your access packages for external users, make sure the **Let users add new guests to the organization** is set to **On** to allow guest access. For more information, see [Manage guest access to Microsoft 365 Groups](/microsoft-365/admin/create-groups/manage-guest-access-in-groups#manage-groups-guest-access). - If you want external users to be able to access the SharePoint Online site and resources associated with a Microsoft 365 group, make sure you turn on SharePoint Online external sharing. For more information, see [Turn external sharing on or off](/sharepoint/turn-external-sharing-on-or-off#change-the-organization-level-external-sharing-setting). To ensure people outside of your organization can request access packages and ge ## Manage the lifecycle of external users -You can select what happens when an external user, who was invited to your directory through making an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they're blocked from signing in to your directory. After 30 days, their guest user account is removed from your directory. +You can select what happens when an external user, who was invited to your directory through making an access package request, no longer has any access package assignments. This can happen if the user relinquishes all their access package assignments, or their last access package assignment expires. By default, when an external user no longer has any access package assignments, they're blocked from signing in to your directory. 
After 30 days, their guest user account is removed from your directory. You can also configure that an external user is not blocked from signing in or deleted, or that an external user is not blocked from signing in but is deleted (preview). **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, in the **Entitlement management** section, select **Settings**. +1. Browse to **Identity governance** > **Entitlement management** > **Settings**. 1. Select **Edit**. You can select what happens when an external user, who was invited to your direc 1. Once an external user loses their last assignment to any access packages, if you want to block them from signing in to this directory, set the **Block external user from signing in to this directory** to **Yes**. > [!NOTE]- > If a user is blocked from signing in to this directory, then the user will be unable to re-request the access package or request additional access in this directory. Do not configure blocking them from signing in if they will subsequently need to request access to other access packages. + > Entitlement management only blocks external guest user accounts from signing in that were invited through entitlement management or that were added to entitlement management for lifecycle management. Also, note that a user will be blocked from signing in even if that user was added to resources in this directory that were not access package assignments. If a user is blocked from signing in to this directory, then the user will be unable to re-request the access package or request additional access in this directory. 
Do not configure blocking them from signing in if they will subsequently need to request access to this or other access packages. 1. Once an external user loses their last assignment to any access packages, if you want to remove their guest user account in this directory, set **Remove external user** to **Yes**. > [!NOTE]- > Entitlement management only removes accounts that were invited through entitlement management. Also, note that a user will be blocked from signing in and removed from this directory even if that user was added to resources in this directory that were not access package assignments. If the guest was present in this directory prior to receiving access package assignments, they will remain. However, if the guest was invited through an access package assignment, and after being invited was also assigned to a OneDrive for Business or SharePoint Online site, they will still be removed. + > Entitlement management only removes external guest user accounts that were invited through entitlement management or that were added to entitlement management for lifecycle management. Also, note that a user will be removed from this directory even if that user was added to resources in this directory that were not access package assignments. If the guest was present in this directory prior to receiving access package assignments, they will remain. However, if the guest was invited through an access package assignment, and after being invited was also assigned to a OneDrive for Business or SharePoint Online site, they will still be removed. -1. 
If you want to remove the guest user account in this directory, you can set the number of days before it's removed. While an external user is notified when their access package expires, there is no notification when their account is removed. If you want to remove the guest user account as soon as they lose their last assignment to any access packages, set **Number of days before removing external user from this directory** to **0**. 1. Select **Save**. |
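Behind the **Block external user from signing in to this directory** and **Number of days before removing external user from this directory** controls sits the tenant-wide entitlement management settings object in Microsoft Graph. As a rough sketch only, the snippet below builds a PATCH body for those settings; the property names and enum values are assumptions to verify against the current Graph `entitlementManagementSettings` reference before use.

```python
# Sketch: build a PATCH body for the entitlement management settings that
# control the external-user lifecycle. Property names and enum values are
# assumptions based on the Graph entitlementManagementSettings resource.

def build_lifecycle_settings(block_and_delete: bool, days_until_removed: int):
    """Return a settings body for the external-user lifecycle controls."""
    action = "blockSignInAndDelete" if block_and_delete else "none"
    return {
        "externalUserLifecycleAction": action,
        # 0 removes the guest account as soon as the last assignment is lost
        "daysUntilExternalUserDeletedAfterBlocked": days_until_removed,
    }

settings = build_lifecycle_settings(True, 30)
```

Setting `days_until_removed` to `0` mirrors the article's guidance for removing the guest account immediately after the last access package assignment ends.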
active-directory | Entitlement Management Group Licenses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-licenses.md | For more information, see [License requirements](entitlement-management-overview **Prerequisite role:** Global Administrator, Identity Governance Administrator, User Administrator, Catalog Owner, or Access Package Manager -1. In the Azure portal, on the left pane, select **Azure Active Directory**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -2. Under **Manage**, select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. -3. Under **Entitlement Management**, select **Access packages**. +1. On the **Access packages** page, select **New access package**. -4. Select **New access package**. +1. On the **Basics** tab, in the **Name** box, enter **Office Licenses**. In the **Description** box, enter **Access to licenses for Office applications**. -5. On the **Basics** tab, in the **Name** box, enter **Office Licenses**. In the **Description** box, enter **Access to licenses for Office applications**. --6. You can leave **General** in the **Catalog** list. +1. You can leave **General** in the **Catalog** list. ## Step 2: Configure the resources for your access package 1. Select **Next: Resource roles** to go to the **Resource roles** tab. -2. On this tab, you select the resources and the resource role to include in the access package. In this scenario, select **Groups and Teams** and search for your group that has assigned [Office licenses](../enterprise-users/licensing-groups-assign.md). +1. On this tab, you select the resources and the resource role to include in the access package. 
In this scenario, select **Groups and Teams** and search for your group that has assigned [Office licenses](../enterprise-users/licensing-groups-assign.md). -3. In the **Role** list, select **Member**. +1. In the **Role** list, select **Member**. ## Step 3: Configure requests for your access package For more information, see [License requirements](entitlement-management-overview On this tab, you create a request policy. A *policy* defines the rules for access to an access package. You create a policy that allows employees in the resource directory to request the access package. -3. In the **Users who can request access** section, select **For users in your directory** and then select **All members (excluding guests)**. These settings make it so that only members of your directory can request Office licenses. +1. In the **Users who can request access** section, select **For users in your directory** and then select **All members (excluding guests)**. These settings make it so that only members of your directory can request Office licenses. -4. Ensure that **Require approval** is set to **Yes**. +1. Ensure that **Require approval** is set to **Yes**. -5. Leave **Require requestor justification** set to **Yes**. +1. Leave **Require requestor justification** set to **Yes**. -6. Leave **How many stages** set to **1**. +1. Leave **How many stages** set to **1**. -7. Under **Approver**, select **Manager as approver**. This option allows the requestor's manager to approve the request. You can select a different person to be the fallback approver if the system can't find the manager. +1. Under **Approver**, select **Manager as approver**. This option allows the requestor's manager to approve the request. You can select a different person to be the fallback approver if the system can't find the manager. -8. Leave **Decision must be made in how many days?** set to **14**. +1. Leave **Decision must be made in how many days?** set to **14**. -9. 
Leave **Require approver justification** set to **Yes**. +1. Leave **Require approver justification** set to **Yes**. -10. Under **Enable new requests and assignments**, select **Yes** to enable employees to request the access package as soon as it's created. +1. Under **Enable new requests and assignments**, select **Yes** to enable employees to request the access package as soon as it's created. ## Step 4: Configure requestor information for your access package 1. Select **Next** to go to the **Requestor information** tab. -2. On this tab, you can ask questions to collect more information from the requestor. The questions are shown on the request form and can be either required or optional. In this scenario, you haven't been asked to include requestor information for the access package, so you can leave these boxes empty. +1. On this tab, you can ask questions to collect more information from the requestor. The questions are shown on the request form and can be either required or optional. In this scenario, you haven't been asked to include requestor information for the access package, so you can leave these boxes empty. ## Step 5: Configure the lifecycle for your access package 1. Select **Next: Lifecycle** to go to the **Lifecycle** tab. -2. In the **Expiration** section, for **Access package assignments expire**, select **Number of days**. +1. In the **Expiration** section, for **Access package assignments expire**, select **Number of days**. -3. In **Assignments expire after**, enter **365**. This box specifies when members who have access to the access package needs to renew their access. +1. In **Assignments expire after**, enter **365**. This box specifies when members who have access to the access package need to renew their access. -4. You can also configure access reviews, which allow periodic checks of whether the employee still needs access to the access package. A review can be a self-review performed by the employee.
Or you can set the employee's manager or another person as the reviewer. For more information, see [Access reviews](entitlement-management-access-reviews-create.md). +1. You can also configure access reviews, which allow periodic checks of whether the employee still needs access to the access package. A review can be a self-review performed by the employee. Or you can set the employee's manager or another person as the reviewer. For more information, see [Access reviews](entitlement-management-access-reviews-create.md). In this scenario, you want all employees to review whether they still need a license for Office each year. 1. Under **Require access reviews**, select **Yes**.- 2. You can leave **Starting on** set to the current date. This date is when the access review starts. After you create an access review, you can't update its start date. - 3. Under **Review frequency**, select **Annually**, because the review occurs once per year. The **Review frequency** box is where you determine how often the access review runs. - 4. Specify a **Duration (in days)**. The duration box is where you indicate how many days each occurrence of the access review series runs. - 5. Under **Reviewers**, select **Manager**. + 1. You can leave **Starting on** set to the current date. This date is when the access review starts. After you create an access review, you can't update its start date. + 1. Under **Review frequency**, select **Annually**, because the review occurs once per year. The **Review frequency** box is where you determine how often the access review runs. + 1. Specify a **Duration (in days)**. The duration box is where you indicate how many days each occurrence of the access review series runs. + 1. Under **Reviewers**, select **Manager**. ## Step 6: Review and create your access package For more information, see [License requirements](entitlement-management-overview On this tab, you can review the configuration for your access package before you create it. 
If there are any problems, you can use the tabs to go to a specific point in the process to make edits. -3. When you're happy with your configuration, select **Create**. After a moment, you should see a notification stating that the access package is created. +1. When you're happy with your configuration, select **Create**. After a moment, you should see a notification stating that the access package is created. -4. After the access package is created, you'll see the **Overview** page for the package. You'll find the **My Access portal link** here. Copy the link and share it with your team so your team members can request the access package to be assigned licenses for Office. +1. After the access package is created, you'll see the **Overview** page for the package. You'll find the **My Access portal link** here. Copy the link and share it with your team so your team members can request the access package to be assigned licenses for Office. ## Step 7: Clean up resources In this step, you can delete the Office Licenses access package. **Prerequisite role:** Global Administrator, Identity Governance Administrator, or Access Package Manager -1. In the Azure portal, on the left pane, select **Azure Active Directory**. --2. Under **Manage**, select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -3. Under **Entitlement Management**, select **Access packages**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. -4. Open the **Office Licenses** access package. +1. Open the **Office Licenses** access package. -5. Select **Resource Roles**. +1. Select **Resource Roles**. -6. Select the group you added to the access package. On the details pane, select **Remove resource role**. In the message box that appears, select **Yes**. +1. 
Select the group you added to the access package. On the details pane, select **Remove resource role**. In the message box that appears, select **Yes**. -7. Open the list of access packages. +1. Open the list of access packages. -8. For **Office Licenses**, select the ellipsis button (...) and then select **Delete**. In the message box that appears, select **Yes**. +1. For **Office Licenses**, select the ellipsis button (...) and then select **Delete**. In the message box that appears, select **Yes**. ## Next steps |
active-directory | Entitlement Management Group Writeback | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-group-writeback.md | |
active-directory | Entitlement Management Logic Apps Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logic-apps-integration.md | These triggers to Logic Apps are controlled in a tab within access package polic ## Create and add a Logic App workflow to a catalog for use in entitlement management - **Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner or Resource Group Owner -1. Sign in to the [Azure portal](https://portal.azure.com). --1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Catalogs**. +1. Browse to **Identity governance** > **Catalogs**. 1. Select the catalog for which you want to add a custom extension and then in the left menu, select **Custom Extensions**. These triggers to Logic Apps are controlled in a tab within access package polic **Prerequisite roles:** Global administrator, Identity Governance administrator, Catalog owner, or Access package manager -1. Sign in to the [Azure portal](https://portal.azure.com). --1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages**. +1. Browse to **Identity governance** > **Entitlement management** > **Access package**. 1. Select the access package you want to add a custom extension (logic app) to from the list of access packages that have already been created. |
active-directory | Entitlement Management Logs And Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-logs-and-reporting.md | Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub **Prerequisite role**: Global Administrator -1. Sign in to the [Azure portal](https://portal.azure.com) as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user who is a Global Administrator. Make sure you have access to the resource group containing the Azure Monitor workspace. -1. Select **Azure Active Directory** then select **Diagnostic settings** under Monitoring in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace. +1. Select **Identity**, then select **Diagnostic settings** under **Monitoring and health** in the left navigation menu. Check if there's already a setting to send the audit logs to that workspace. 1. If there isn't already a setting, select **Add diagnostic setting**. Use the instructions in [Integrate Azure AD logs with Azure Monitor logs](../reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md) to send the Azure AD audit log to the Azure Monitor workspace. Archiving Azure AD audit logs requires you to have Azure Monitor in an Azure sub 1. Later, to see the range of dates held in your workspace, you can use the *Archived Log Date Range* workbook: - 1. Select **Azure Active Directory** then select **Workbooks**. + 1. Select **Identity**, then select **Workbooks** under **Monitoring and health**. 1. Expand the section **Azure Active Directory Troubleshooting**, and select **Archived Log Date Range**. To view events for an access package, you must have access to the underlying Azu Use the following procedure to view events: -1.
In the Azure portal, select **Azure Active Directory** then select **Workbooks**. If you only have one subscription, move on to step 3. +1. In the Microsoft Entra admin center, select **Identity**, then select **Workbooks**. If you only have one subscription, move on to step 3. 1. If you have multiple subscriptions, select the subscription that contains the workspace. Use the following procedure to view events: ![View app role assignments](./media/entitlement-management-access-package-incompatible/workbook-ara.png) -## Create custom Azure Monitor queries using the Azure portal +## Create custom Azure Monitor queries using the Microsoft Entra admin center You can create your own queries on Azure AD audit events, including entitlement management events. -1. In Azure Active Directory of the Azure portal, select **Logs** under the Monitoring section in the left navigation menu to create a new query page. +1. In the **Identity** section of the Microsoft Entra admin center, select **Logs** under **Monitoring** in the left navigation menu to create a new query page. 1. Your workspace should be shown in the upper left of the query page. If you have multiple Azure Monitor workspaces, and the workspace you're using to store Azure AD audit events isn't shown, select **Select Scope**. Then, select the correct subscription and workspace. |
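The custom-query steps above can be illustrated with a short sketch. The Python below (an illustrative example, not part of the source article) only builds the KQL text that the **Logs** query page would run; it assumes the standard Azure Monitor `AuditLogs` table and `Category` column used for Azure AD audit events, and leaves workspace selection and authentication to the portal or SDK.

```python
def entitlement_audit_query(days: int = 14) -> str:
    """Build a KQL query for entitlement management audit events
    stored in a Log Analytics workspace (AuditLogs table, standard
    Azure AD audit log schema)."""
    return (
        "AuditLogs\n"
        f"| where TimeGenerated > ago({days}d)\n"
        '| where Category == "EntitlementManagement"\n'
        "| project TimeGenerated, OperationName, Result, InitiatedBy\n"
        "| order by TimeGenerated desc"
    )

if __name__ == "__main__":
    # Paste the printed KQL into the Logs query page, or run it with
    # the azure-monitor-query package against your workspace ID.
    print(entitlement_audit_query())
```

Swap `EntitlementManagement` for `UserManagement` to see the companion processing records mentioned in the reports article.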
active-directory | Entitlement Management Onboard External User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-onboard-external-user.md | For more information, see [License requirements](entitlement-management-overview **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, or Access package manager -1. In the Azure portal, in the left navigation, select **Azure Active Directory**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -2. In the left menu, select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. -3. In the left menu, select **Access packages**. If you see Access denied, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory. +3. On the **Access packages** page, if you see **Access denied**, ensure that a Microsoft Azure AD Premium P2 or Microsoft Entra ID Governance license is present in your directory. 4. Select **New access package**. In this step, you can delete the **External user package** access package. **Prerequisite role:** Global administrator, Identity Governance administrator or Access package manager -1. In the **Azure portal**, in the left navigation, select **Azure Active Directory**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -2. In the left menu, select **Identity Governance**. --3. In the left menu, select **Access Packages**. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. 4. Open the **External user package** access package. |
active-directory | Entitlement Management Organization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-organization.md | -With entitlement management, you can collaborate with people outside your organization. If you frequently collaborate with users in an external Azure AD directory or domain, you can add them as a connected organization. This article describes how to add a connected organization so that you can allow users outside your organization to request resources in your directory. +With entitlement management, you can collaborate with people outside your organization. If you frequently collaborate with many users from specific external organizations, you can add those organizations' identity sources as connected organizations. A connected organization makes it simpler for people from those organizations to request access. This article describes how to add a connected organization so that you can allow users outside your organization to request resources in your directory. ## What is a connected organization? A connected organization is another organization that you have a relationship with. In order for the users in that organization to be able to access your resources, such as your SharePoint Online sites or apps, you'll need a representation of that organization's users in your directory. Because in most cases the users in that organization aren't already in your Azure AD directory, you can use entitlement management to bring them into your Azure AD directory as needed. +If you want to provide a path for anyone to request access, and you are not sure which organizations those new users might be from, then you can configure an [access package assignment policy for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory).
In that policy, select the option of **All users (All connected organizations + any new external users)**. If the requestor is approved, and they don’t belong to a connected organization in your directory, a connected organization will automatically be created for them. ++If you want to only allow individuals from designated organizations to request access, then first create those connected organizations. Second, configure an [access package assignment policy for users not in your directory](entitlement-management-access-package-request-policy.md#for-users-not-in-your-directory), select the option of **Specific connected organizations**, and select the organizations you created. ++ There are four ways that entitlement management lets you specify the users that form a connected organization. It could be * users in another Azure AD directory (from any Microsoft cloud), * users in another non-Azure AD directory that has been configured for direct federation, * users in another non-Azure AD directory, whose email addresses all have the same domain name in common, or-* users with a Microsoft Account, such as from the domain *live.com*, if you have a business need for collaboration with users which have no common organization. +* users with a Microsoft Account, such as from the domain *live.com*, if you have a business need for collaboration with users that have no common organization. For example, suppose you work at Woodgrove Bank and you want to collaborate with two external organizations. You want to give users from both external organizations access to the same resources, but these two organizations have different configurations: -- Graphic Design Institute uses Azure AD, and their users have a user principal name that ends with *graphicdesigninstitute.com*.-- Contoso does not yet use Azure AD. Contoso users have a user principal name that ends with *contoso.com*.+- Contoso does not yet use Azure AD. Contoso users have an email address that ends with *contoso.com*. 
+- Graphic Design Institute uses Azure AD, and at least some of their users have a user principal name that ends with *graphicdesigninstitute.com*. ++In this case, you can configure two connected organizations, then one access package with one policy. -In this case, you can configure one access package, with one policy, and two connected organizations. You create one connected organization for Graphic Design Institute and one for Contoso. If you then specify the two connected organizations in a policy for **users not yet in your directory**, users from each organization, with a user principal name that matches one of the connected organizations, can request the access package. Users with a user principal name that has a domain of contoso.com would match the Contoso-connected organization and would also be allowed to request the package. Users with a user principal name that has a domain of *graphicdesigninstitute.com* and are using an organizational account would match the Graphic Design Institute-connected organization and be allowed to submit requests. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches another [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to the Graphic Design Institute tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy. If you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, that includes users from those domains that aren't yet part of Azure AD directories who'll authenticate using email OTP when accessing your resources. +1. 
Ensure that you have [email one-time passcode (OTP) authentication](../external-identities/one-time-passcode.md) turned on, so that users from those domains that aren't yet part of Azure AD directories can authenticate using an email one-time passcode when requesting access or later accessing your resources. In addition, you may need to [configure your Azure AD B2B external collaboration settings](entitlement-management-external-users.md?#configure-your-azure-ad-b2b-external-collaboration-settings) to allow external users access. +1. Create a connected organization for Contoso. When you specify the domain *contoso.com*, entitlement management will recognize that there is no existing Azure AD tenant associated with that domain, and that users from that connected organization will be recognized if they authenticate with an email one-time passcode with a *contoso.com* email address domain. +1. Create another connected organization for Graphic Design Institute. When you specify the domain *graphicdesigninstitute.com*, entitlement management will recognize that there is a tenant associated with that domain. +1. In a catalog that allows external users to request, create an access package. +1. In that access package, create an access package assignment policy for **users not yet in your directory**. In that policy, select the option **Specific connected organizations** and specify the two connected organizations. This will allow users from each organization, with an identity source that matches one of the connected organizations, to request the access package. +1. When external users with a user principal name that has a domain of *contoso.com* request the access package, they will authenticate using email. This email domain will match the Contoso-connected organization and the user will be allowed to request the package.
After they request, [how access works for external users](entitlement-management-external-users.md?#how-access-works-for-external-users) describes how the B2B user is then invited and access is assigned for the external user. +1. In addition, external users that are using an organizational account from the Graphic Design Institute tenant would match the Graphic Design Institute-connected organization and be allowed to request the access package. And, because Graphic Design Institute uses Azure AD, any users with a principal name that matches another [verified domain](../fundamentals/add-custom-domain.md#verify-your-custom-domain-name) that's added to the Graphic Design Institute tenant, such as *graphicdesigninstitute.example*, would also be able to request access packages by using the same policy. ![Connected organization example](./media/entitlement-management-organization/connected-organization-example.png) For a demonstration of how to add a connected organization, watch the following **Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator* -1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left pane, select **Connected organizations**. +1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**. 1. In the search box, you can search for a connected organization by the name of the connected organization. However, you cannot search for a domain name. To add an external Azure AD directory or domain as a connected organization, fol **Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator* -1. 
In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left pane, select **Connected organizations**, and then select **Add connected organization**. +1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**. ++1. On the **Connected organizations** page select **Add connected organization**. ![The "Add connected organization" button](./media/entitlement-management-organization/connected-organization.png) If the connected organization changes to a different domain, the organization's **Prerequisite role**: *Global administrator* or *User administrator* -1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**. -1. In the left pane, select **Connected organizations**, and then select the connected organization to open it. +1. On the **Connected organizations** page select the connected organization you want to update. 1. In the connected organization's overview pane, select **Edit** to change the organization name, description, or state. If you no longer have a relationship with an external Azure AD directory or doma **Prerequisite role**: *Global administrator* or *User administrator* -1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**. -1. In the left pane, select **Connected organizations**, and then select the connected organization to open it. +1. On the **Connected organizations** page select the connected organization you want to delete to open it. 1. In the connected organization's overview pane, select **Delete** to delete it. foreach ($c in $co) { There are two different states for connected organizations in entitlement management, configured and proposed: -- A configured connected organization is a fully functional connected organization that allows users within that organization access to access packages. When an admin creates a new connected organization in the Azure portal, it will be in the **configured** state by default since the administrator created and wants to use this connected organization. Additionally, when a connected org is created programmatically via the API, the default state should be **configured** unless set to another state explicitly. +- A **configured** connected organization is a fully functional connected organization that allows users within that organization access to access packages. When an admin creates a new connected organization in the Azure portal, it will be in the **configured** state by default since the administrator created and wants to use this connected organization. Additionally, when a connected org is created programmatically via the API, the default state should be **configured** unless set to another state explicitly. Configured connected organizations will show up in the pickers for connected organizations and will be in scope for any policies that target “all configured connected organizations”. 
-- A proposed connected organization is a connected organization that has been automatically created, but hasn't had an administrator create or approve the organization. When a user signs up for an access package outside of a configured connected organization, any automatically created connected organizations will be in the **proposed** state since no administrator in the tenant set-up that partnership. +- A **proposed** connected organization is a connected organization that has been automatically created, but hasn't had an administrator create or approve the organization. When a user signs up for an access package outside of a configured connected organization, any automatically created connected organizations will be in the **proposed** state since no administrator in the tenant set up that partnership. Proposed connected organizations are not in scope for the “all configured connected organizations” setting on any policies, but can be used only in policies targeting specific organizations. |
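Since the row above notes that connected organizations can also be created and inspected programmatically via the API, here is a hedged Python sketch. It only constructs the Microsoft Graph URL for listing connected organizations by `state` (`configured` or `proposed`); whether the service honors `$filter` on this property should be checked against the current Graph reference, and authentication is omitted.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def connected_orgs_url(state=None):
    """Build the Graph URL for listing connected organizations,
    optionally filtered by state ('configured' or 'proposed')."""
    url = f"{GRAPH_BASE}/identityGovernance/entitlementManagement/connectedOrganizations"
    if state is not None:
        if state not in ("configured", "proposed"):
            raise ValueError("state must be 'configured' or 'proposed'")
        # $filter support on 'state' is an assumption; verify in the reference
        url += f"?$filter=state eq '{state}'"
    return url
```

A GET to the returned URL with an `Authorization: Bearer` header (and `EntitlementManagement.Read.All` or similar permission) would return the organizations in that state.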
active-directory | Entitlement Management Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reports.md | This report enables you to list all of the users who are assigned to an access p **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. Select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package of interest. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. ++1. On the **Access packages** page select the access package of interest. 1. In the left menu, select **Assignments**, then select **Download**. This report enables you to list all of the access packages a user can request an **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. Select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Reports**. +1. Browse to **Identity governance** > **Entitlement management** > **Reports**. 1. Select **Access packages for a user**. This report enables you to list the resources currently assigned to a user in en **Prerequisite role:** Global administrator, Identity Governance administrator or User administrator -1. Select **Azure Active Directory** and then select **Identity Governance**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Reports**. +1. Browse to **Identity governance** > **Entitlement management** > **Reports**. 1. Select **Resource assignments for a user**. This report enables you to list the resources currently assigned to a user in en To get additional details on how a user requested and received access to an access package, you can use the Azure AD audit log. In particular, you can use the log records in the `EntitlementManagement` and `UserManagement` categories to get additional details on the processing steps for each request. -1. Select **Azure Active Directory** and then select **Audit logs**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Audit logs**. 1. At the top, change the **Category** to either `EntitlementManagement` or `UserManagement`, depending on the audit record you're looking for. When the user's access package assignment expires, is canceled by the user, or r **Prerequisite role**: *Global administrator*, *Identity Governance administrator*, or *User administrator* -1. In the Azure portal, select **Azure Active Directory**, and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). ++1. Browse to **Identity governance** > **Entitlement management** > **Connected organizations**. -1. In the left pane, select **Connected organizations**, and then select **Download**. +1. On the **Connected organizations** page select **Download**. 
## View events for an access package To view events for an access package, you must have access to the underlying Azu - Reports reader - Application administrator -1. In the Azure portal, select **Azure Active Directory** then select **Workbooks**. If you only have one subscription, move on to step 3. +1. In the Microsoft Entra admin center, select **Identity** then select **Workbooks** under **Monitoring & health**. If you only have one subscription, move on to step 3. 1. If you have multiple subscriptions, select the subscription that contains the workspace. |
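The reports row above downloads assignments through the portal; the same listing is exposed by the entitlement management API. The sketch below is an illustrative URL builder only: it assumes the documented `assignments` collection with a filter on the access package ID (a GUID your tenant assigns), and leaves paging and authentication out.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def assignments_url(access_package_id):
    """Build the Graph URL for listing the assignments of a single
    access package; the ID is whatever GUID your tenant assigned."""
    return (f"{GRAPH_BASE}/identityGovernance/entitlementManagement/assignments"
            f"?$filter=accessPackage/id eq '{access_package_id}'")
```

Iterating the `value` array of the response (following `@odata.nextLink` for paging) gives the same rows as the portal's **Download** button.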
active-directory | Entitlement Management Reprocess Access Package Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-assignments.md | To use entitlement management and assign users to access packages, you must have If you have users who are in the "Delivered" state but don't have access to resources that are a part of the access package, you'll likely need to reprocess the assignments to reassign those users to the access package's resources. Follow these steps to reprocess assignments for an existing access package: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Select **Azure Active Directory**, and then select **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. -1. In the left menu, select **Access packages** and then open the access package with the user assignment you want to reprocess. +1. On the **Access packages** page, open the access package with the user assignment you want to reprocess. 1. Underneath **Manage** on the left side, select **Assignments**. |
active-directory | Entitlement Management Reprocess Access Package Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-reprocess-access-package-requests.md | To use entitlement management and assign users to access packages, you must have If you have a set of users whose requests are in the "Partially Delivered" or "Failed" state, you might need to reprocess some of those requests. Follow these steps to reprocess requests for an existing access package: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Click **Azure Active Directory**, and then click **Identity Governance**. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. -1. In the left menu, click **Access packages** and then open the access package. +1. On the **Access packages** page, open the access package. -1. Underneath **Manage** on the left side, click **Requests**. +1. Underneath **Manage** on the left side, select **Requests**. 1. Select all users whose requests you wish to reprocess. -1. Click **Reprocess**. +1. Select **Reprocess**. ## Next steps |
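Besides the admin center steps above, a failed or partially delivered request can be reprocessed through Microsoft Graph. The sketch below only assembles the call; the `reprocess` action on entitlement management assignment requests is documented in the Graph reference, so verify the exact path and the required `EntitlementManagement.ReadWrite.All` permission there before relying on it:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def reprocess_call(request_id: str) -> tuple:
    """Return the (HTTP method, URL) pair for reprocessing one access package
    assignment request. The POST is sent with an empty body."""
    url = (f"{GRAPH_BASE}/identityGovernance/entitlementManagement"
           f"/assignmentRequests/{request_id}/reprocess")
    return ("POST", url)

# "<request-id>" is a placeholder for the id of a request whose state is
# "Partially Delivered" or "Failed".
method, url = reprocess_call("<request-id>")
```

A `204 No Content` response indicates the request was accepted for reprocessing.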
active-directory | Entitlement Management Request Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-access.md | Title: Request an access package - entitlement management -description: Learn how to use the My Access portal to request access to an access package in Azure Active Directory entitlement management. +description: Learn how to use the My Access portal to request access to an access package in Microsoft Entra entitlement management. documentationCenter: '' |
active-directory | Entitlement Management Request Approve | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-request-approve.md | Title: Approve or deny access requests - entitlement management -description: Learn how to use the My Access portal to approve or deny requests to an access package in Azure Active Directory entitlement management. +description: Learn how to use the My Access portal to approve or deny requests to an access package in Microsoft Entra entitlement management. documentationCenter: '' |
active-directory | Entitlement Management Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-scenarios.md | Title: Common scenarios in entitlement management -description: Learn the high-level steps you should follow for common scenarios in Azure Active Directory entitlement management. +description: Learn the high-level steps you should follow for common scenarios in Microsoft Entra entitlement management. documentationCenter: '' |
active-directory | Entitlement Management Ticketed Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-ticketed-provisioning.md | To add a Logic App workflow to an existing catalog, you use an ARM template for Provide the Azure subscription, resource group details, along with the Logic App name and the Catalog ID to associate the Logic App with, and select purchase. For more information on how to create a new catalog, please follow the steps in this document: [Create and manage a catalog of resources in entitlement management](entitlement-management-catalog-create.md). -1. Navigate To Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement) +1. Navigate to the [Identity Governance page in the Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement). 1. In the left menu, select **Catalogs**. After setting up custom extensibility in the catalog, administrators can create With Azure, you're able to use [Azure Key Vault](/azure/key-vault/secrets/about-secrets) to store application secrets such as passwords. To register an application with secrets within the Azure portal, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. Search for and select Azure Active Directory. +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Under Manage, select App registrations > New registration. 
After registering your application, you must add a client secret by following th To authorize the created application to call the [MS Graph resume API](/graph/api/accesspackageassignmentrequest-resume) you'd do the following steps: -1. Navigate to the Entra portal [Identity Governance - Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement) +1. Navigate to the [Identity Governance page in the Microsoft Entra admin center](https://entra.microsoft.com/#view/Microsoft_AAD_ERM/DashboardBlade/~/elmEntitlement). 1. In the left menu, select **Catalogs**. At this point it's time to configure ServiceNow for resuming the entitlement man 1. Sign in to ServiceNow and navigate to the Application Registry. 1. Select "*New*" and then select "**Connect to a third party OAuth Provider**". 1. Provide a name for the application, and select Client Credentials in the Default Grant type.- 1. Enter the Client Name, ID, Client Secret, Authorization URL, Token URL that were generated when you registered the Azure Active Directory application in the Azure portal. + 1. Enter the Client Name, ID, Client Secret, Authorization URL, Token URL that were generated when you registered the Azure Active Directory application in the Microsoft Entra admin center. 1. Submit the application. :::image type="content" source="media/entitlement-management-servicenow-integration/entitlement-management-servicenow-application-registry.png" alt-text="Screenshot of the application registry within ServiceNow." lightbox="media/entitlement-management-servicenow-integration/entitlement-management-servicenow-application-registry.png"::: 1. Create a System Web Service REST API message by following these steps: |
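The ServiceNow OAuth provider configured above uses the client credentials grant, so the token request it sends to the Microsoft identity platform has a fixed, well-documented shape. A minimal sketch of that request (tenant ID, client ID, and secret are placeholders for the values from your app registration):

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the client-credentials token request for the Microsoft identity
    platform v2.0 endpoint; the JSON response contains an access_token
    usable against Microsoft Graph."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
        # ".default" requests all application permissions already granted to the app.
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

url, body = build_token_request("<tenant-id>", "<client-id>", "<client-secret>")
```

The body is sent as `application/x-www-form-urlencoded`; ServiceNow performs the same exchange internally once the provider record is saved.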
active-directory | Entitlement Management Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-troubleshoot.md | This article describes some items you should check to help you troubleshoot enti * Roles for applications are defined by the application itself and are managed in Azure AD. If an application doesn't have any resource roles, entitlement management assigns users to a **Default Access** role. - The Azure portal may also show service principals for services that can't be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they can't be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services. + The Microsoft Entra admin center may also show service principals for services that can't be selected as applications. In particular, **Exchange Online** and **SharePoint Online** are services, not applications that have resource roles in the directory, so they can't be included in an access package. Instead, use group-based licensing to establish an appropriate license for a user who needs access to those services. * Applications that only support Personal Microsoft Account users for authentication, and don't support organizational accounts in your directory, don't have application roles and can't be added to access package catalogs. This article describes some items you should check to help you troubleshoot enti * When a user who isn't yet in your directory signs in to the My Access portal to request an access package, be sure they authenticate using their organizational account. The organizational account can be either an account in the resource directory, or in a directory that is included in one of the policies of the access package. 
If the user's account isn't an organizational account, or the directory where they authenticate isn't included in the policy, then the user won't see the access package. For more information, see [Request access to an access package](entitlement-management-request-access.md). -* If a user is blocked from signing in to the resource directory, they won't be able to request access in the My Access portal. Before the user can request access, you must remove the sign-in block from the user's profile. To remove the sign-in block, in the Azure portal, select **Azure Active Directory**, select **Users**, select the user, and then select **Profile**. Edit the **Settings** section and change **Block sign in** to **No**. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/how-to-manage-user-profile-info.md). You can also check if the user was blocked due to an [Identity Protection policy](../identity-protection/howto-identity-protection-remediate-unblock.md). +* If a user is blocked from signing in to the resource directory, they won't be able to request access in the My Access portal. Before the user can request access, you must remove the sign-in block from the user's profile. To remove the sign-in block, in the Microsoft Entra admin center, select **Identity**, select **Users**, select the user, and then select **Profile**. Edit the **Settings** section and change **Block sign in** to **No**. For more information, see [Add or update a user's profile information using Azure Active Directory](../fundamentals/how-to-manage-user-profile-info.md). You can also check if the user was blocked due to an [Identity Protection policy](../identity-protection/howto-identity-protection-remediate-unblock.md). * In the My Access portal, if a user is both a requestor and an approver, they won't see their request for an access package on the **Approvals** page. 
This behavior is intentional - a user can't approve their own request. Ensure that the access package they're requesting has additional approvers configured on the policy. For more information, see [Change request and approval settings for an access package](entitlement-management-access-package-request-policy.md). This article describes some items you should check to help you troubleshoot enti **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages** to open an access package. 1. Select **Requests**. You can only reprocess a request that has a status of **Delivery failed** or **P **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages** to open an access package. 1. Select **Requests**. 
You can only cancel a pending request that hasn't yet been delivered or whose de **Prerequisite role:** Global administrator, Identity Governance administrator, User administrator, Catalog owner, Access package manager or Access package assignment manager -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then open the access package. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages** to open an access package. 1. Select **Requests**. |
active-directory | Entitlement Management Verified Id Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-verified-id-settings.md | To add a verified ID requirement to an access package, you must start from the a > [!NOTE] > Identity Governance administrator, User administrator, Catalog owner, or Access package manager will be able to add verified ID requirements to access packages soon. -1. In the Azure portal, select **Azure Active Directory** and then select **Identity Governance**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least an [Identity Governance Administrator](../roles/permissions-reference.md#identity-governance-administrator). -1. In the left menu, select **Access packages** and then select **+ New access package**. +1. Browse to **Identity governance** > **Entitlement management** > **Access packages**. ++1. On the **Access packages** page, select **+ New access package**. 1. On the **Requests** tab, scroll to the **Required Verified Ids** section. |
active-directory | Manage Workflow Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md | You can update the following basic information without creating a new workflow. If you change any other parameters, a new version is required to be created as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article. -If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you must manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph). +If done via the Microsoft Entra admin center, the new version is created automatically. If done using Microsoft Graph, you must manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph). -## Edit the properties of a workflow using the Azure portal +## Edit the properties of a workflow using the Microsoft Entra admin center +To edit the properties of a workflow using the Microsoft Entra admin center, you do the following steps: -To edit the properties of a workflow using the Azure portal, you do the following steps: +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Type in **Identity Governance** on the search bar near the top of the page and select it. --1. On the left menu, select **Lifecycle workflows**. --1. On the left menu, select **Workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. 1. Here you see a list of all of your current workflows. 
Select the workflow that you want to edit. |
active-directory | Manage Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md | -Changing a workflow's tasks or execution conditions requires the creation of a new version of that workflow. Tasks within workflows can be added, reordered, and removed at will. Updating a workflow's tasks or execution conditions within the Azure portal will trigger the creation of a new version of the workflow automatically. Making these updates in Microsoft Graph will require the new workflow version to be created manually. +Changing a workflow's tasks or execution conditions requires the creation of a new version of that workflow. Tasks within workflows can be added, reordered, and removed at will. Updating a workflow's tasks or execution conditions within the Microsoft Entra admin center will trigger the creation of a new version of the workflow automatically. Making these updates in Microsoft Graph will require the new workflow version to be created manually. -## Edit the tasks of a workflow using the Azure portal +## Edit the tasks of a workflow using the Microsoft Entra admin center +Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Microsoft Entra admin center, you complete the following steps: +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, you complete the following steps: --1. Sign in to the [Azure portal](https://portal.azure.com). --1. Type in **Identity Governance** on the search bar near the top of the page and select it. --1. In the left menu, select **Lifecycle workflows**. --1. In the left menu, select **workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. 
-1. On the left side of the screen, select **Tasks**. +1. Select the workflow that you want to edit the tasks of, and on the left side of the screen, select **Tasks**. 1. You can add a task to the workflow by selecting the **Add task** button. Tasks within workflows can be added, edited, reordered, and removed at will. To 1. After making changes, select **save** to capture changes to the tasks. -## Edit the execution conditions of a workflow using the Azure portal +## Edit the execution conditions of a workflow using the Microsoft Entra admin center -To edit the execution conditions of a workflow using the Azure portal, you do the following steps: +To edit the execution conditions of a workflow using the Microsoft Entra admin center, you do the following steps: 1. On the left menu of Lifecycle Workflows, select **Workflows**. To edit the execution conditions of a workflow using the Azure portal, you do th 1. After making changes, select **save** to capture changes to the execution conditions. -## See versions of a workflow using the Azure portal +## See versions of a workflow using the Microsoft Entra admin center 1. On the left menu of Lifecycle Workflows, select **Workflows**. |
active-directory | On Demand Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md | -## Run a workflow on-demand in the Azure portal +## Run a workflow on-demand in the Microsoft Entra admin center --Use the following steps to run a workflow on-demand. +Use the following steps to run a workflow on-demand: >[!NOTE] >To be run on demand, the workflow must be enabled. -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Type in **Identity Governance** on the search bar near the top of the page and select it. --1. On the left menu, select **Lifecycle workflows**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. select **Workflows** +1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. 1. On the workflow screen, select the specific workflow you want to run. |
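The same on-demand run can be triggered from Microsoft Graph with the lifecycle workflows `activate` action. The sketch below only assembles the call; check the path, the request body shape, and the required `LifecycleWorkflows.ReadWrite.All` permission against the current Graph reference before using it:

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def activate_workflow_call(workflow_id: str, user_ids: list):
    """Return (method, url, body) for running a lifecycle workflow on demand
    for the given users. As noted above, the workflow must be enabled."""
    url = (f"{GRAPH_BASE}/identityGovernance/lifecycleWorkflows"
           f"/workflows/{workflow_id}/activate")
    body = json.dumps({"subjects": [{"id": uid} for uid in user_ids]})
    return ("POST", url, body)

# "<workflow-id>" and "<user-id>" are placeholders for real object IDs.
method, url, body = activate_workflow_call("<workflow-id>", ["<user-id>"])
```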
active-directory | Perform Access Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/perform-access-review.md | If you're the second-stage or third-stage reviewer, you'll also see the decision Approve or deny access as outlined in [Review access for one or more users](#review-access-for-one-or-more-users). > [!NOTE]-> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Azure portal. This action will close the active stage and start the next stage. +> The next stage of the review won't become active until the duration specified during the access review setup has passed. If the administrator believes a stage is done but the review duration for this stage has not expired yet, they can use the **Stop current stage** button in the overview of the access review in the Microsoft Entra admin center. This action will close the active stage and start the next stage. ### Review access for B2B direct connect users in Teams shared channels and Microsoft 365 groups (preview) |
active-directory | Sap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/sap.md | After your users are in Azure AD, you can provision accounts into the various Sa ### Provision identities into on-premises SAP systems that SAP IPS doesn't support -Customers who have yet to transition from applications such as SAP ERP Central Component (SAP ECC) to SAP S/4HANA can still rely on the Azure AD provisioning service to provision user accounts. Within SAP ECC, you expose the necessary Business Application Programming Interfaces (BAPIs) for creating, updating, and deleting users. Within Azure AD, you have two options: +Customers who have yet to transition from applications such as SAP R/3 and SAP ERP Central Component (SAP ECC) to SAP S/4HANA can still rely on the Azure AD provisioning service to provision user accounts. Within SAP R/3 and SAP ECC, you expose the necessary Business Application Programming Interfaces (BAPIs) for creating, updating, and deleting users. Within Azure AD, you have two options: -* Use the lightweight Azure AD provisioning agent and [web services connector](/azure/active-directory/app-provisioning/on-premises-web-services-connector) to [provision users into apps such as SAP ECC](/azure/active-directory/app-provisioning/on-premises-sap-connector-configure?branch=pr-en-us-243167). +* Use the lightweight Azure AD provisioning agent and [web services connector](/azure/active-directory/app-provisioning/on-premises-web-services-connector) to [provision users into apps such as SAP ECC](/azure/active-directory/app-provisioning/on-premises-sap-connector-configure). * In scenarios where you need to do more complex group and role management, use [Microsoft Identity Manager](/microsoft-identity-manager/reference/microsoft-identity-manager-2016-ma-ws) to manage access to your legacy SAP applications. ## Trigger custom workflows |
active-directory | Trigger Custom Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md | Lifecycle Workflows can be used to trigger custom tasks via an extension to Azur For more information about Lifecycle Workflows extensibility, see: [Workflow Extensibility](lifecycle-workflow-extensibility.md). -## Create a custom task extension using the Azure portal +## Create a custom task extension using the Microsoft Entra admin center To use a custom task extension in your workflow, first a custom task extension must be created to be linked with an Azure Logic App. You're able to create a Logic App at the same time you're creating a custom task extension. To do this, you complete these steps: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). -1. Select **Azure Active Directory** and then select **Identity Governance**. --1. In the left menu, select **Lifecycle Workflows**. +1. Browse to **Identity governance** > **Lifecycle workflows** > **Workflows**. 1. On the Lifecycle workflows screen, select **Custom task extension**. |
active-directory | Tutorial Offboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md | Title: Execute employee termination tasks by using lifecycle workflows -description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows in the Azure portal. +description: Learn how to remove users from an organization in real time on their last day of work by using lifecycle workflows in the Microsoft Entra admin center. -This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows in the Azure portal. +This tutorial provides a step-by-step guide on how to execute a real-time employee termination by using lifecycle workflows in the Microsoft Entra admin center. This *leaver* scenario runs a workflow on demand and accomplishes the following tasks: The leaver scenario includes the following steps: ## Create a workflow by using the leaver template +Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Microsoft Entra admin center: -Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination by using lifecycle workflows in the Azure portal: --1. Sign in to the [Azure portal](https://portal.azure.com). -2. On the right, select **Azure Active Directory**. -3. Select **Identity Governance**. -4. Select **Lifecycle workflows**. -5. On the **Overview** tab, select **New workflow**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). +2. Select **Identity Governance**. +3. Select **Lifecycle workflows**. +4. On the **Overview** tab, select **New workflow**. 
:::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of the Overview tab and the button for creating a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: -6. From the collection of templates, choose **Select** under **Real-time employee termination**. +5. From the collection of templates, choose **Select** under **Real-time employee termination**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting a workflow template for real-time employee termination." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: -7. Configure basic information about the workflow, and then select **Next: Review tasks**. +6. Configure basic information about the workflow, and then select **Next: Review tasks**. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of the tab for basic workflow information." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png"::: -8. Inspect the tasks if you want, but no additional configuration is needed. Select **Next: Select users** when you're finished. +7. Inspect the tasks if you want, but no additional configuration is needed. Select **Next: Select users** when you're finished. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of the tab for reviewing template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png"::: -9. Choose the **Select users to run now** option. It allows you to select users for which the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on demand later at any time, as needed. +8. Choose the **Select users to run now** option. It allows you to select users for which the workflow will be executed immediately after creation. 
Regardless of the selection, you can run the workflow on demand later at any time, as needed. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Screenshot of the option for selecting users to run now." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png"::: -10. Select **Add users** to designate the users for this workflow. +9. Select **Add users** to designate the users for this workflow. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of the button for adding users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png"::: -11. A panel with the list of available users appears on the right side of the window. Choose **Select** when you're done with your selection. +10. A panel with the list of available users appears on the right side of the window. Choose **Select** when you're done with your selection. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of a list of available users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png"::: -12. Select **Next: Review and create** when you're satisfied with your selection of users. +11. Select **Next: Review and create** when you're satisfied with your selection of users. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of added users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png"::: -13. Verify that the information is correct, and then select **Create**. +12. Verify that the information is correct, and then select **Create**. :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of the tab for reviewing workflow choices, along with the button for creating the workflow." 
lightbox="media/tutorial-lifecycle-workflows/real-time-create.png"::: To run the workflow immediately, you can use the on-demand feature. > [!NOTE] > You currently can't run a workflow on demand if it's set to **Disabled**. You need to set the workflow to **Enabled** to use the on-demand feature. -To run a workflow on demand for users by using the Azure portal: +To run a workflow on demand for users by using the Microsoft Entra admin center: 1. On the workflow screen, select the specific workflow that you want to run. 2. Select **Run on demand**. |
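The on-demand steps above can also be driven programmatically: Lifecycle Workflows exposes an `activate` action on the workflow resource in Microsoft Graph. Below is a minimal sketch using the Microsoft Graph PowerShell SDK; the workflow ID and user object ID are placeholders, and the URI shape should be verified against your Graph API version:

```powershell
# Requires the Microsoft Graph PowerShell SDK (Connect-MgGraph) and
# consent to the LifecycleWorkflows.ReadWrite.All permission.
Connect-MgGraph -Scopes 'LifecycleWorkflows.ReadWrite.All'

# Run a specific workflow now for one or more chosen users.
$body = @{ subjects = @( @{ id = '<user-object-id>' } ) }
Invoke-MgGraphRequest -Method POST `
    -Uri 'v1.0/identityGovernance/lifecycleWorkflows/workflows/<workflow-id>/activate' `
    -Body $body
```

As in the portal, the workflow must be enabled before an on-demand run succeeds.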
active-directory | Tutorial Onboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md | Title: 'Automate employee onboarding tasks before their first day of work with Azure portal' -description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal. + Title: 'Automate employee onboarding tasks before their first day of work with the Microsoft Entra admin center' +description: Tutorial for onboarding users to an organization using Lifecycle workflows with the Microsoft Entra admin center. -# Automate employee onboarding tasks before their first day of work with Azure portal +# Automate employee onboarding tasks before their first day of work with the Microsoft Entra admin center -This tutorial provides a step-by-step guide on how to automate prehire tasks with Lifecycle workflows using the Azure portal. +This tutorial provides a step-by-step guide on how to automate prehire tasks with Lifecycle workflows using the Microsoft Entra admin center. This prehire scenario generates a temporary access pass for our new employee and sends it via email to the user's new manager. 

Detailed breakdown of the relevant attributes: The pre-hire scenario can be broken down into the following: - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager- - **Prerequisite:** Editing the attributes required for this scenario in the portal + - **Prerequisite:** Editing the attributes required for this scenario in the admin center - **Prerequisite:** Edit the attributes for this scenario using Microsoft Graph Explorer - **Prerequisite:** Enabling and using Temporary Access Pass (TAP) - Creating the lifecycle management workflow The pre-hire scenario can be broken down into the following: ## Create a workflow using prehire template +Use the following steps to create a pre-hire workflow that generates a TAP and sends it via email to the user's manager using the Microsoft Entra admin center. -Use the following steps to create a pre-hire workflow that generates a TAP and send it via email to the user's manager using the Azure portal. --1. Sign in to the [Azure portal](https://portal.azure.com). -2. On the right, select **Azure Active Directory**. -3. Select **Identity Governance**. -4. Select **Lifecycle workflows**. -5. On the **Overview** page, select **New workflow**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). +2. Select **Identity Governance**. +3. Select **Lifecycle workflows**. +4. On the **Overview** page, select **New workflow**. :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: -6. From the templates, select **select** under **Onboard pre-hire employee**. +5. From the templates, select **Select** under **Onboard pre-hire employee**. 
:::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: -7. Next, you configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**. +6. Next, you configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow triggers two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png"::: -8. Next, you configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters). +7. Next, you configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**. 
For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters). :::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png"::: -9. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you're finished. +8. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you're finished. :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-review-create.png" alt-text="Screenshot of reviewing an on-board workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-review-create.png"::: -10. On the review blade, verify the information is correct and select **Create**. +9. On the review blade, verify the information is correct and select **Create**. :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-create.png" alt-text="Screenshot of creating an onboard workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-create.png"::: ## Run the workflow Now that the workflow is created, it will automatically run the workflow every 3 >[!NOTE] >Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. -To run a workflow on-demand, for users using the Azure portal, do the following steps: +To run a workflow on-demand, for users using the Microsoft Entra admin center, do the following steps: 1. On the workflow screen, select the specific workflow you want to run. 2. Select **Run on demand**. |
active-directory | Tutorial Prepare User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-user-accounts.md | |
active-directory | Tutorial Scheduled Leaver Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md | Title: Automate employee offboarding tasks after their last day of work with Azure portal -description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Azure portal. + Title: Automate employee offboarding tasks after their last day of work with the Microsoft Entra admin center +description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with the Microsoft Entra admin center. -# Automate employee offboarding tasks after their last day of work with Azure portal +# Automate employee offboarding tasks after their last day of work with the Microsoft Entra admin center -This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal. +This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Microsoft Entra admin center. This post off-boarding scenario runs a scheduled workflow and accomplishes the following tasks: The scheduled leaver scenario can be broken down into the following: ## Create a workflow using scheduled leaver template +Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Microsoft Entra admin center. -Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal. -- 1. Sign in to the [Azure portal](https://portal.azure.com). - 2. On the right, select **Azure Active Directory**. - 3. Select **Identity Governance**. - 4. 
Select **Lifecycle workflows**. - 5. On the **Overview** page, select **New workflow**. + 1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Lifecycle Workflows Administrator](../roles/permissions-reference.md#lifecycle-workflows-administrator). + 2. Select **Identity Governance**. + 3. Select **Lifecycle workflows**. + 4. On the **Overview** page, select **New workflow**. :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: - 6. From the templates, select **Select** under **Post-offboarding of an employee**. + 5. From the templates, select **Select** under **Post-offboarding of an employee**. :::image type="content" source="media/tutorial-lifecycle-workflows/select-leaver-template.png" alt-text="Screenshot of selecting a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-leaver-template.png"::: - 7. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. + 6. Next, you'll configure the basic information about the workflow. This information includes when the workflow triggers, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png"::: - 8. 
Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters) + 7. Next, you'll configure the scope. The scope determines which users this workflow runs against. In this case, it is on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. For a full list of supported user properties, see [Supported user properties and query parameters](/graph/api/resources/identitygovernance-rulebasedsubjectset?view=graph-rest-beta&preserve-view=true#supported-user-properties-and-query-parameters) :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png"::: - 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished. + 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you're finished. :::image type="content" source="media/tutorial-lifecycle-workflows/review-leaver-tasks.png" alt-text="Screenshot of leaver workflow tasks." lightbox="media/tutorial-lifecycle-workflows/review-leaver-tasks.png"::: -10. On the review blade, verify the information is correct and select **Create**. +9. On the review blade, verify the information is correct and select **Create**. 
:::image type="content" source="media/tutorial-lifecycle-workflows/create-leaver-workflow.png" alt-text="Screenshot of a leaver workflow being created." lightbox="media/tutorial-lifecycle-workflows/create-leaver-workflow.png"::: >[!NOTE] Now that the workflow is created, it will automatically run the workflow every 3 >[!NOTE] >Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. -To run a workflow on-demand, for users using the Azure portal, do the following steps: +To run a workflow on-demand, for users using the Microsoft Entra admin center, do the following steps: 1. On the workflow screen, select the specific workflow you want to run. 2. Select **Run on demand**. |
active-directory | Understanding Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md | A workflow can be broken down into the following three main parts: ## Templates -Creating a workflow via the Azure portal requires the use of a template. A Lifecycle Workflow template is a framework that is used for predefined tasks, and helps automate the creation of a workflow. +Creating a workflow via the Microsoft Entra admin center requires the use of a template. A Lifecycle Workflow template is a framework that is used for predefined tasks, and helps automate the creation of a workflow. [![Understanding workflow template diagram.](media/understanding-lifecycle-workflows/workflow-3.png)](media/understanding-lifecycle-workflows/workflow-3.png#lightbox) The **My Feed** section of the workflow overview contains a quick peek into when The **Quick Action** section allows you to quickly take action with your workflow. These quick actions can either be making the workflow do something, or used for history or editing purposes. The actions you can take are: - Run on Demand: Allows you to quickly run the workflow on demand. For more information on this process, see: [Run a workflow on-demand](on-demand-workflow.md)-- Edit tasks: Allows you to add, delete, edit, or reorder tasks within the workflow. For more information on this process, see: [Edit the tasks of a workflow using the Microsoft Entra admin center](manage-workflow-tasks.md#edit-the-tasks-of-a-workflow-using-the-microsoft-entra-admin-center) - View Workflow History: Allows you to view the history of the workflow. 
For more information on the three history perspectives, see: [Lifecycle Workflows history](lifecycle-workflow-history.md) Actions taken from the overview of a workflow allow you to quickly complete tasks, which can normally be done via the manage section of a workflow. The offset determines how many days before or after the time-based attribute the > [!NOTE]-> The offsetInDays value in the Azure portal is shown as *Days from event*. When you schedule a workflow to run, this value is used as the baseline for who a workflow will run. Currently there is a 3 day window in processing scheduled workflows. For example, if you schedule a workflow to run for users who joined 7 days ago, a user who meets the execution conditions for the workflow, but joined between 7 to 10 days ago would have the workflow ran for them. +> The offsetInDays value in the Microsoft Entra admin center is shown as *Days from event*. When you schedule a workflow to run, this value is used as the baseline for whom a workflow will run. Currently there is a 3-day window in processing scheduled workflows. For example, if you schedule a workflow to run for users who joined 7 days ago, a user who meets the execution conditions for the workflow, but joined between 7 to 10 days ago would have the workflow run for them. ## Configure scope For a detailed guide on setting the execution conditions for a workflow, see: [C While newly created workflows are enabled by default, scheduling is an option that must be enabled manually. To verify whether the workflow is scheduled, you can view the **Scheduled** column. -Once scheduling is enabled, the workflow is evaluated every three hours to determine whether or not it should run based on the execution conditions. +Once scheduling is enabled, the workflow is evaluated based on the interval that is set within your workflow settings (default of three hours) to determine whether or not it should run based on the execution conditions. 
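The evaluation interval mentioned above is a tenant-wide Lifecycle Workflows setting rather than a per-workflow one. A hedged sketch of reading and changing it through Microsoft Graph with the Graph PowerShell SDK follows; the `workflowScheduleIntervalInHours` property name comes from the lifecycle workflows settings resource, so confirm it against your API version:

```powershell
# Requires the Microsoft Graph PowerShell SDK and LifecycleWorkflows permissions.
Connect-MgGraph -Scopes 'LifecycleWorkflows.ReadWrite.All'

# Read the current tenant-wide evaluation interval (default: 3 hours).
Invoke-MgGraphRequest -Method GET -Uri 'v1.0/identityGovernance/lifecycleWorkflows/settings'

# Have scheduled workflows evaluated every hour instead.
Invoke-MgGraphRequest -Method PATCH -Uri 'v1.0/identityGovernance/lifecycleWorkflows/settings' `
    -Body @{ workflowScheduleIntervalInHours = 1 }
```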
[![Workflow template schedule.](media/understanding-lifecycle-workflows/workflow-10.png)](media/understanding-lifecycle-workflows/workflow-10.png#lightbox) For more information, see: [Lifecycle Workflows Versioning](lifecycle-workflow-v ## Next steps-- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)+- [Create a custom workflow using the Microsoft Entra admin center](tutorial-onboard-custom-workflow-portal.md) - [Create a Lifecycle workflow](create-lifecycle-workflow.md) |
active-directory | Custom Attribute Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/custom-attribute-mapping.md | |
active-directory | How To Inbound Synch Ms Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-inbound-synch-ms-graph.md | |
active-directory | How To Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/how-to-install.md | To update an existing agent to use the Group Managed Service Account created dur >[!IMPORTANT] > After you've installed the agent, you must configure and enable it before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md). -## Enable password writeback in Azure AD Connect cloud sync +++## Enable password writeback in cloud sync ++You can enable password writeback in SSPR directly in the portal or through PowerShell. ++### Enable password writeback in the portal +To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent, using the portal, complete the following steps: ++ 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global Administrator account. + 2. Search for and select **Azure Active Directory**, select **Password reset**, then choose **On-premises integration**. + 3. Check the option for **Enable password write back for synced users**. + 4. (optional) If Azure AD Connect provisioning agents are detected, you can additionally check the option for **Write back passwords with Azure AD Connect cloud sync**. + 5. Set the option for **Allow users to unlock accounts without resetting their password** to *Yes*. + 6. When ready, select **Save**. ++### Using PowerShell To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent, use the `Set-AADCloudSyncPasswordWritebackConfiguration` cmdlet and the tenant's global administrator credentials: 
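The PowerShell route introduced above can be sketched as follows. The module path assumes the provisioning agent's default install location, and the cmdlet runs on the server hosting the agent; treat this as an outline to adapt rather than a verbatim procedure:

```powershell
# Load the cloud sync PowerShell module that ships with the provisioning agent
# (adjust the path if the agent was installed elsewhere).
Import-Module 'C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.PowerShell.dll'

# Enable password writeback so SSPR can detect the cloud sync agent,
# supplying global administrator credentials when prompted.
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential (Get-Credential)

# To turn it off again later:
# Set-AADCloudSyncPasswordWritebackConfiguration -Enable $false -Credential (Get-Credential)
```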
active-directory | Migrate Azure Ad Connect To Cloud Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/migrate-azure-ad-connect-to-cloud-sync.md | |
active-directory | Reference Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/cloud-sync/reference-powershell.md | |
active-directory | How To Bypassdirsyncoverrides | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-bypassdirsyncoverrides.md | |
active-directory | How To Connect Emergency Ad Fs Certificate Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-emergency-ad-fs-certificate-rotation.md | |
active-directory | How To Connect Fed O365 Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-o365-certs.md | ms.assetid: 543b7dc1-ccc9-407f-85a1-a9944c0ba1be na+ Last updated 01/26/2023 |
active-directory | How To Connect Fed Saml Idp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-saml-idp.md | description: This document describes using a SAML 2.0 compliant Idp for single s -+ na |
active-directory | How To Connect Fed Single Adfs Multitenant Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-fed-single-adfs-multitenant-federation.md | ms.assetid: na+ Last updated 01/26/2023 |
active-directory | How To Connect Install Existing Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-existing-tenant.md | description: This topic describes how to use Connect when you have an existing A + Last updated 01/26/2023 |
active-directory | How To Connect Install Multiple Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-multiple-domains.md | ms.assetid: 5595fb2f-2131-4304-8a31-c52559128ea4 na+ Last updated 01/26/2023 |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-install-prerequisites.md | ms.assetid: 91b88fda-bca6-49a8-898f-8d906a661f07 na+ Last updated 05/02/2023 |
active-directory | How To Connect Password Hash Synchronization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-password-hash-synchronization.md | |
active-directory | How To Connect Sync Change The Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-change-the-configuration.md | |
active-directory | How To Connect Sync Feature Preferreddatalocation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-sync-feature-preferreddatalocation.md | description: Describes how to put your Microsoft 365 user resources close to the + Last updated 01/26/2023 |
active-directory | How To Connect Syncservice Duplicate Attribute Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-duplicate-attribute-resiliency.md | ms.assetid: 537a92b7-7a84-4c89-88b0-9bce0eacd931 na+ Last updated 01/26/2023 |
active-directory | How To Connect Syncservice Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/how-to-connect-syncservice-features.md | ms.assetid: 213aab20-0a61-434a-9545-c4637628da81 na+ Last updated 01/26/2023 |
active-directory | Migrate From Federation To Cloud Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/migrate-from-federation-to-cloud-authentication.md | description: This article has information about moving your hybrid identity envi + Last updated 04/04/2023 |
active-directory | Reference Connect Accounts Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-accounts-permissions.md | |
active-directory | Reference Connect Adsynctools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-adsynctools.md | Accept wildcard characters: False ``` #### CommonParameters This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters).+ ## Export-ADSyncToolsAadDisconnectors ### SYNOPSIS Export Azure AD Disconnector objects This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable Use ObjectType argument in case you want to export Disconnectors for a given object type only ### OUTPUTS Exports a CSV file with Disconnector objects containing: - UserPrincipalName, Mail, SourceAnchor, DistinguishedName, CsObjectId, ObjectType, ConnectorId and CloudAnchor +## Export-ADSyncToolsAadPublicFolders +### SYNOPSIS +Exports all synchronized Mail-Enabled Public Folder objects from Azure AD to a CSV file +### SYNTAX +``` +Export-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-Path] <Object> [<CommonParameters>] +``` +### DESCRIPTION +This function exports to a CSV file all the synchronized Mail-Enabled Public Folders (MEPF) present in Azure AD. +It can be used in conjunction with Remove-ADSyncToolsAadPublicFolders to identify and remove orphaned Mail-Enabled Public Folders in Azure AD. +This function requires the credentials of a Global Administrator in Azure AD and authentication with MFA is not supported. +NOTE: If DirSync has been disabled on the tenant, you will need to temporarily re-enable DirSync in order to remove orphaned Mail Enabled Public Folders from Azure AD. 
+### EXAMPLES +#### EXAMPLE 1 +``` +Export-ADSyncToolsAadPublicFolders -Credential $(Get-Credential) -Path <file_name> +``` +### PARAMETERS +#### -Credential +Azure AD Global Admin Credential +```yaml +Type: PSCredential +Parameter Sets: (All) +Aliases: +Required: true +Position: 1 +Default value: None +Accept pipeline input: True (ByPropertyName) +Accept wildcard characters: False +``` +#### -Path +Path for output file +```yaml +Type: String +Parameter Sets: (All) +Aliases: +Required: true +Position: 2 +Default value: None +Accept pipeline input: false (ByPropertyName) +Accept wildcard characters: false +``` +#### CommonParameters +This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters). +### INPUTS ++### OUTPUTS +This cmdlet creates the <filename> containing all synced Mail-Enabled PublicFolder objects in CSV format. + ## Export-ADSyncToolsHybridAadJoinReport ### SYNOPSIS Generates a report of certificates stored in Active Directory Computer objects, specifically, InputCsvFilename must point to a CSV file with at least 2 columns: SourceAnchor, ### OUTPUTS Shows results from ExportDeletions operation DISCLAIMER: Other than User objects that have a Recycle Bin, any other object types DELETED with this function cannot be RECOVERED!++## Remove-ADSyncToolsAadPublicFolders +### SYNOPSIS +Removes synchronized Mail-Enabled Public Folders (MEPF) present from AzureAD. +You can specify one SourceAnchor/ImmutableID for the target MEPF object to delete, or provide a CSV list with a batch of objects to delete when used in conjunction with Export-ADSyncToolsAadPublicFolders. +This function requires the credentials of a Global Administrator in Azure AD and authentication with MFA is not supported. 
+NOTE: If DirSync has been disabled on the tenant, you'll need to temporarily re-enable DirSync in order to remove orphaned Mail Enabled Public Folders from Azure AD. +### SYNTAX +``` +Remove-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-InputCsvFilename] <Object> [-WhatIf] [-Confirm] [<CommonParameters>] +Remove-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-SourceAnchor] <Object> [-WhatIf] [-Confirm] [<CommonParameters>] +``` +### DESCRIPTION +This function removes synchronized Mail-Enabled Public Folders (MEPF) from Azure AD. +It can be used in conjunction with Export-ADSyncToolsAadPublicFolders to identify and remove orphaned Mail-Enabled Public Folders in Azure AD. +This function requires the credentials of a Global Administrator in Azure AD and authentication with MFA is not supported. +NOTE: If DirSync has been disabled on the tenant, you will need to temporarily re-enable DirSync in order to remove orphaned Mail Enabled Public Folders from Azure AD. +### EXAMPLES +#### EXAMPLE 1 +``` +Remove-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-InputCsvFilename] <Object> [-WhatIf] [-Confirm] [<CommonParameters>] +``` +#### EXAMPLE 2 +``` +Remove-ADSyncToolsAadPublicFolders [-Credential] <PSCredential> [-SourceAnchor] <Object> [-WhatIf] [-Confirm] [<CommonParameters>] +``` +### PARAMETERS +#### -Credential +Azure AD Global Admin Credential +```yaml +Type: PSCredential +Parameter Sets: (All) +Aliases: +Required: true +Position: 1 +Default value: None +Accept pipeline input: True (ByPropertyName) +Accept wildcard characters: False +``` +#### -InputCsvFilename +Path for input CSV file +```yaml +Type: String +Parameter Sets: InputCsv +Aliases: +Required: true +Position: 2 +Default value: None +Accept pipeline input: true (ByPropertyName) +Accept wildcard characters: false +``` +#### -SourceAnchor +Target SourceAnchor/ImmutableID +```yaml +Type: String +Parameter Sets: SourceAnchor +Aliases: +Required: true +Position: 2 +Default value: None +Accept pipeline input: true (ByPropertyName) +Accept wildcard characters: false +``` +#### CommonParameters +This cmdlet 
supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](/powershell/module/microsoft.powershell.core/about/about_commonparameters). +### INPUTS +The CSV input file can be generated using Export-ADSyncToolsAadPublicFolders. +Path parameters must point to a CSV file with at least 2 columns: SourceAnchor, SyncObjectType. +### OUTPUTS +Shows results from ExportDeletions operation. + ## Remove-ADSyncToolsExpiredCertificates ### SYNOPSIS Script to Remove Expired Certificates from UserCertificate Attribute |
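Taken together, the export and remove cmdlets documented above form an export-review-remove loop for orphaned Mail-Enabled Public Folders. A short sketch of that flow (the CSV path is illustrative, and `-WhatIf` gives a dry run before anything is deleted):

```powershell
# Both cmdlets need non-MFA Global Administrator credentials, per the notes above.
$cred = Get-Credential

# 1. Export every synced Mail-Enabled Public Folder for review.
Export-ADSyncToolsAadPublicFolders -Credential $cred -Path 'C:\Temp\MEPF.csv'

# 2. Trim the CSV down to the orphaned entries, then preview the deletions.
Remove-ADSyncToolsAadPublicFolders -Credential $cred -InputCsvFilename 'C:\Temp\MEPF.csv' -WhatIf

# 3. Remove them for real, or target a single object by its anchor instead.
Remove-ADSyncToolsAadPublicFolders -Credential $cred -InputCsvFilename 'C:\Temp\MEPF.csv'
# Remove-ADSyncToolsAadPublicFolders -Credential $cred -SourceAnchor '<ImmutableID>'
```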
active-directory | Reference Connect Version History Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history-archive.md | Last updated 01/19/2023 -+ # Azure AD Connect: Version release history archive |
active-directory | Reference Connect Version History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/reference-connect-version-history.md | To read more about autoupgrade, see [Azure AD Connect: Automatic upgrade](how-to - We have enabled Auto Upgrade for tenants with custom synchronization rules. Note that deleted (not disabled) default rules will be re-created and enabled upon Auto Upgrade. - We have added Microsoft Azure AD Connect Agent Updater service to the install. This new service will be used for future auto upgrades. - We have removed the Synchronization Service WebService Connector Config program from the install.+ - Default sync rule “In from AD – User Common” was updated to flow the employeeType attribute. ### Bug Fixes - We have made improvements to accessibility. |
active-directory | Tshoot Connect Connectivity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-connectivity.md | |
active-directory | Tshoot Connect Object Not Syncing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-object-not-syncing.md | ms.assetid: na+ Last updated 01/19/2023 |
active-directory | Tshoot Connect Recover From Localdb 10Gb Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-recover-from-localdb-10gb-limit.md | By default, Azure AD Connect retains up to seven days’ worth of run history da 2. Go to the **Operations** tab. -3. Under **Actions**, select **Clear Runs**… +3. Under **Actions**, select **Clear Runs**. -4. You can either choose **Clear all runs** or **Clear runs before… \<date>** option. It is recommended that you start by clearing run history data that are older than two days. If you continue to run into DB size issue, then choose the **Clear all runs** option. +4. You can either choose **Clear all runs** or **Clear runs before... \<date>** option. It is recommended that you start by clearing run history data that are older than two days. If you continue to run into DB size issue, then choose the **Clear all runs** option. ### Shorten retention period for run history data This step is to reduce the likelihood of running into the 10-GB limit issue after multiple sync cycles. |
active-directory | Tshoot Connect Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sso.md | |
active-directory | Tshoot Connect Sync Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/connect/tshoot-connect-sync-errors.md | |
active-directory | Verify Sync Tool Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/verify-sync-tool-version.md | + + Title: 'Verify your version of cloud sync or connect sync' +description: This article describes the steps to verify the version of the provisioning agent or connect sync. ++documentationcenter: '' +++editor: '' +++ na + Last updated : 08/17/2023++++++# Verify your version of the provisioning agent or connect sync +This article describes the steps to verify the installed version of the provisioning agent and connect sync. ++## Verify the provisioning agent +To see what version of the provisioning agent you're using, use the following steps: +++## Verify connect sync +To see what version of connect sync you're using, use the following steps: ++### On the local server ++To verify that the agent is running, follow these steps: ++ 1. Sign in to the server with an administrator account. + 2. Open **Services** either by navigating to it or by going to *Start/Run/Services.msc*. + 3. Under **Services**, make sure that **Microsoft Azure AD Sync** is present and the status is **Running**. +++### Verify the connect sync version ++To verify the version of the agent that is running, follow these steps: ++1. Navigate to 'C:\Program Files\Microsoft Azure AD Connect' +2. Right-click on **AzureADConnect.exe** and select **Properties**. +3. Select the **Details** tab and view the version number next to **Product version**. ++## Next steps +- [Common scenarios](common-scenarios.md) +- [Choosing the right sync tool](https://setup.microsoft.com/azure/add-or-sync-users-to-azure-ad) +- [Steps to start](get-started.md) +- [Prerequisites](prerequisites.md) |
active-directory | Concept Identity Protection B2b | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-b2b.md | From the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ ### Manually dismiss user's risk -If password reset isn't an option for you from the Azure portal, you can choose to manually dismiss user risk. Dismissing user risk doesn't have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It's important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state. +If password reset isn't an option for you, you can choose to manually dismiss user risk. Dismissing user risk doesn't have any impact on the user's existing password, but this process will change the user's Risk State from At Risk to Dismissed. It's important that you change the user's password using whatever means are available to you in order to bring the identity back to a safe state. To dismiss user risk, go to the [Risky users report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/SecurityMenuBlade/RiskyUsers) in the Azure AD Security menu. Search for the impacted user using the 'User' filter and select the user. Select the "dismiss user risk" option from the top toolbar. This action may take a few minutes to complete and update the user risk state in the report. |
active-directory | Concept Identity Protection Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-identity-protection-security-overview.md | Title: Azure Active Directory Identity Protection security overview -description: Learn how the Security overview gives you an insight into your organization’s security posture. +description: Learn how the security overview gives you an insight into your organization’s security posture. Previously updated : 07/07/2023 Last updated : 08/23/2023 -The [Security overview](https://aka.ms/IdentityProtectionRefresh) in the Azure portal gives you an insight into your organization’s security posture. It helps identify potential attacks and understand the effectiveness of your policies. +The Security overview gives insight into your organization’s security posture. It helps identify potential attacks and understand the effectiveness of your policies. The ‘Security overview’ is broadly divided into two sections: -- Trends, on the left, provide a timeline of risk in your organization.-- Tiles, on the right, highlight the key ongoing issues in your organization and suggest how to quickly take action.+- Trend graphs provide a timeline of risk in your organization. +- Tiles highlight the key ongoing issues in your organization and suggest how to quickly take action. -You can find the security overview page in the **Azure portal** > **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. +You can find the security overview page in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Identity Protection** > **Overview**. -## Trends --### New risky users detected --This chart shows the number of new risky users that were detected over the chosen time period. You can filter the view of this chart by user risk level (low, medium, high).
Hover over the UTC date increments to see the number of risky users detected for that day. Selecting this chart will bring you to the ‘Risky users’ report. To remediate users that are at risk, consider changing their password. --### New risky sign-ins detected --This chart shows the number of risky sign-ins detected over the chosen time period. You can filter the view of this chart by the sign-in risk type (real-time or aggregate) and the sign-in risk level (low, medium, high). Unprotected sign-ins are successful real-time risk sign-ins that weren't MFA challenged. (Note: Sign-ins that are risky because of offline detections can't be protected in real-time by sign-in risk policies). Hover over the UTC date increments to see the number of sign-ins detected at risk for that day. Selecting this chart will bring you to the ‘Risky sign-ins’ report. --## Tiles - -### High risk users --The ‘High risk users’ tile shows the latest count of users with high probability of identity compromise. These users should be a top priority for investigation. Selecting the ‘High risk users’ tile will redirect to a filtered view of the ‘Risky users’ report showing only users with a risk level of high. Using this report, you can learn more and remediate these users with a password reset. ---### Medium risk users -The ‘Medium risk users’ tile shows the latest count of users with medium probability of identity compromise. Selecting the ‘Medium risk users’ tile will take you to a view of the ‘Risky users’ report showing only users with a risk level of medium. Using this report, you can further investigate and remediate these users. --### Unprotected risky sign-ins --The ‘Unprotected risky sign-ins’ tile shows the last week’s count of successful, real-time risky sign-ins that weren't blocked or MFA challenged by a Conditional Access policy, Identity Protection risk policy, or per-user MFA.
These successful sign-ins are potentially compromised and not challenged for MFA. To protect such sign-ins in future, apply a sign-in risk policy. Selecting the ‘Unprotected risky sign-ins’ tile will take you to the sign-in risk policy configuration blade where you can configure the sign-in risk policy. --### Legacy authentication --The ‘Legacy authentication’ tile shows the last week’s count of legacy authentications with risk present in your organization. Legacy authentication protocols don't support modern security methods such as MFA. To prevent legacy authentication, you can apply a Conditional Access policy. Selecting the ‘Legacy authentication’ tile will redirect you to the ‘Identity Secure Score’. --### Identity Secure Score --The Identity Secure Score measures and compares your security posture to industry patterns. If you select the **Identity Secure Score** tile, it will redirect to [Identity Secure Score](../fundamentals/identity-secure-score.md) where you can learn more about improving your security posture. +The security overview page is being replaced by the [Microsoft Entra ID Protection dashboard](id-protection-dashboard.md). ## Next steps - [What is risk](concept-identity-protection-risks.md) - [Policies available to mitigate risks](concept-identity-protection-policies.md)+- [Identity Secure Score](../fundamentals/identity-secure-score.md) |
active-directory | Concept Workload Identity Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/concept-workload-identity-risk.md | These differences make workload identities harder to manage and put them at high To make use of workload identity risk, including the new **Risky workload identities** blade and the **Workload identity detections** tab in the **Risk detections** blade in the portal, you must have the following. -- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.+- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade). - One of the following administrator roles assigned- - Global Administrator - Security Administrator - Security Operator - Security Reader Users assigned the Conditional Access administrator role can create policies that use risk as a condition.+ - Global Administrator ## Workload identity risk detections We detect risk on workload identities across sign-in behavior and offline indica Organizations can find workload identities that have been flagged for risk in one of two locations: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Security** > **Risky workload identities**. -1. Or browse to **Azure Active Directory** > **Security** > **Risk detections**. - 1. Select the **Workload identity detections** tab.' +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Reader](../roles/permissions-reference.md#security-reader). +1. Browse to **Protection** > **Identity Protection** > **Risky workload identities**. 
:::image type="content" source="media/concept-workload-identity-risk/workload-identity-detections-in-risk-detections-report.png" alt-text="Screenshot showing risks detected against workload identities in the report." lightbox="media/concept-workload-identity-risk/workload-identity-detections-in-risk-detections-report.png"::: For improved security and resilience of your workload identities, Continuous Acc ## Investigate risky workload identities -Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal. +Identity Protection provides organizations with two reports they can use to investigate workload identity risk. These reports are the risky workload identities, and risk detections for workload identities. All reports allow for downloading of events in .CSV format for further analysis. Some of the key questions to answer during your investigation include: The [Azure Active Directory security operations guide for Applications](../archi Once you determine if the workload identity was compromised, dismiss the account’s risk, or confirm the account as compromised in the Risky workload identities report. You can also select “Disable service principal” if you want to block the account from further sign-ins. ## Remediate risky workload identities |
active-directory | Howto Export Risk Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-export-risk-data.md | Azure AD stores reports and security signals for a defined period of time. When | Azure AD MFA usage | 30 days | 30 days | 30 days | | Risky sign-ins | 7 days | 30 days | 30 days | -Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the **Azure portal** > **Azure Active Directory**, **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one. +Organizations can choose to store data for longer periods by changing diagnostic settings in Azure AD to send **RiskyUsers**, **UserRiskEvents**, **RiskyServicePrincipals**, and **ServicePrincipalRiskEvents** data to a Log Analytics workspace, archive data to a storage account, stream data to an event hub, or send data to a partner solution. Find these options in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Diagnostic settings** > **Edit setting**. If you don't have a diagnostic setting, follow the instructions in the article [Create diagnostic settings to send platform logs and metrics to different destinations](../../azure-monitor/essentials/diagnostic-settings.md) to create one. 
[ ![Diagnostic settings screen in Azure AD showing existing configuration](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png) ](./media/howto-export-risk-data/change-diagnostic-setting-in-portal.png#lightbox) Organizations can choose to store data for longer periods by changing diagnostic Log Analytics allows organizations to query data using built in queries or custom created Kusto queries, for more information, see [Get started with log queries in Azure Monitor](../../azure-monitor/logs/get-started-queries.md). -Once enabled you'll find access to Log Analytics in the **Azure portal** > **Azure AD** > **Log Analytics**. The following tables are of most interest to Identity Protection administrators: +Once enabled you'll find access to Log Analytics in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Identity** > **Monitoring & health** > **Log Analytics**. The following tables are of most interest to Identity Protection administrators: - AADRiskyUsers - Provides data like the **Risky users** report in Identity Protection. - AADUserRiskEvents - Provides data like the **Risk detections** report in Identity Protection. |
active-directory | Howto Identity Protection Configure Mfa Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-mfa-policy.md | For more information on Azure AD multifactor authentication, see [What is Azure ## Policy configuration --1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **MFA registration policy**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator) +1. Browse to **Protection** > **Identity Protection** > **MFA registration policy**. 1. Under **Assignments** > **Users** 1. Under **Include**, select **All users** or **Select individuals and groups** if limiting your rollout. 1. Under **Exclude**, select **Users and groups** and choose your organization's emergency access or break-glass accounts. |
active-directory | Howto Identity Protection Configure Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-notifications.md | As an administrator, you can set: - **The user risk level that triggers the generation of this email** - By default, the risk level is set to “High” risk. - **The recipients of this email** - Users in the Global Administrator, Security Administrator, or Security Reader roles are automatically added to this list. We attempt to send emails to the first 20 members of each role. If a user is enrolled in PIM to elevate to one of these roles on demand, then **they will only receive emails if they are elevated at the time the email is sent**.- - Optionally you can **Add custom email here** users defined must have the appropriate permissions to view the linked reports in the Azure portal. + - Optionally you can **Add custom email here** users defined must have the appropriate permissions to view the linked reports. -Configure the users at risk email in the **Azure portal** under **Azure Active Directory** > **Security** > **Identity Protection** > **Users at risk detected alerts**. +Configure the users at risk email in the [Microsoft Entra admin center](https://entra.microsoft.com) under **Protection** > **Identity Protection** > **Users at risk detected alerts**. ## Weekly digest email Users in the Global Administrator, Security Administrator, or Security Reader ro As an administrator, you can switch sending a weekly digest email on or off and choose the users assigned to receive the email. -Configure the weekly digest email in the **Azure portal** under **Azure Active Directory** > **Security** > **Identity Protection** > **Weekly digest**. +Configure the weekly digest email in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Identity Protection** > **Weekly digest**. ## See also |
active-directory | Howto Identity Protection Configure Risk Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-configure-risk-policies.md | Before organizations enable remediation policies, they may want to [investigate] ### User risk policy in Conditional Access -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. After confirming your settings using [report-only mode](../conditional-access/ho ### Sign-in risk policy in Conditional Access -1. Sign in to the **Azure portal** as a Conditional Access Administrator, Security Administrator, or Global Administrator. -1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Conditional Access Administrator](../roles/permissions-reference.md#conditional-access-administrator). +1. Browse to **Protection** > **Conditional Access**. 1. Select **New policy**. 1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies. 1. Under **Assignments**, select **Users or workload identities**. If you already have risk policies enabled in Identity Protection, we highly reco 1. 
**Create an equivalent** [user risk-based](#user-risk-policy-in-conditional-access) and [sign-in risk-based ](#sign-in-risk-policy-in-conditional-access) policy in Conditional Access in report-only mode. You can create a policy with the steps above or using [Conditional Access templates](../conditional-access/concept-conditional-access-policy-common.md) based on Microsoft's recommendations and your organizational requirements. 1. Ensure that the new Conditional Access risk policy works as expected by testing it in [report-only mode](../conditional-access/howto-conditional-access-insights-reporting.md). 1. **Enable** the new Conditional Access risk policy. You can choose to have both policies running side-by-side to confirm the new policies are working as expected before turning off the Identity Protection risk policies.- 1. Browse back to **Azure Active Directory** > **Security** > **Conditional Access**. + 1. Browse back to **Protection** > **Conditional Access**. 1. Select this new policy to edit it. 1. Set **Enable policy** to **On** to enable the policy 1. **Disable** the old risk policies in Identity Protection.- 1. Browse to **Azure Active Directory** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy. + 1. Browse to **Protection** > **Identity Protection** > Select the **User risk** or **Sign-in risk** policy. 1. Set **Enforce policy** to **Off** 1. Create other risk policies if needed in [Conditional Access](../conditional-access/concept-conditional-access-policy-common.md). |
active-directory | Howto Identity Protection Investigate Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-investigate-risk.md | -All three reports allow for downloading of events in .CSV format for further analysis outside of the Azure portal. The risky users and risky sign-ins reports allow for downloading the most recent 2500 entries, while the risk detections report allows for downloading the most recent 5000 records. +All three reports allow for downloading of events in .CSV format for further analysis. The risky users and risky sign-ins reports allow for downloading the most recent 2500 entries, while the risk detections report allows for downloading the most recent 5000 records. Organizations can take advantage of the Microsoft Graph API integrations to aggregate data with other sources they may have access to as an organization. -The three reports are found in the **Azure portal** > **Azure Active Directory** > **Security**. +The three reports are found in the [Microsoft Entra admin center](https://entra.microsoft.com) > **Protection** > **Identity Protection**. ## Navigating the reports To view and investigate risks on a userΓÇÖs account, select the ΓÇ£Detections no The Risk history tab also shows all the events that have led to a user risk change in the last 90 days. This list includes risk detections that increased the userΓÇÖs risk and admin remediation actions that lowered the userΓÇÖs risk. View it to understand how the userΓÇÖs risk has changed. With the information provided by the risky users report, administrators can find: Administrators can then choose to take action on these events. Administrators ca ## Risky sign-ins The risky sign-ins report contains filterable data for up to the past 30 days (one month). Administrators can then choose to take action on these events. 
Administrators ca ## Risk detections The risk detections report contains filterable data for up to the past 90 days (three months). |
active-directory | Howto Identity Protection Remediate Unblock | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md | Administrators are given two options when resetting a password for their users: If, after investigation, you confirm that the user account isn't at risk of being compromised, you can choose to dismiss the risky user. -To **Dismiss user risk**, search for and select **Azure AD Risky users** in the Azure portal or the Entra portal, select the affected user, and select **Dismiss user(s) risk**. +To Dismiss user risk in the [Microsoft Entra admin center](https://entra.microsoft.com), browse to **Protection** > **Identity Protection** > **Risky users**, select the affected user, and select **Dismiss user(s) risk**. When you select **Dismiss user risk**, the user is no longer at risk, and all the risky sign-ins of this user and corresponding risk detections are dismissed as well. |
active-directory | Howto Identity Protection Simulate Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-simulate-risk.md | Simulating the atypical travel condition is difficult because the algorithm uses **To simulate an atypical travel risk detection, perform the following steps**: 1. Using your standard browser, navigate to [https://myapps.microsoft.com](https://myapps.microsoft.com). -2. Enter the credentials of the account you want to generate an atypical travel risk detection for. -3. Change your user agent. You can change user agent in Microsoft Edge from Developer Tools (F12). -4. Change your IP address. You can change your IP address by using a VPN, a Tor add-on, or creating a new virtual machine in Azure in a different data center. -5. Sign-in to [https://myapps.microsoft.com](https://myapps.microsoft.com) using the same credentials as before and within a few minutes after the previous sign-in. +1. Enter the credentials of the account you want to generate an atypical travel risk detection for. +1. Change your user agent. You can change user agent in Microsoft Edge from Developer Tools (F12). +1. Change your IP address. You can change your IP address by using a VPN, a Tor add-on, or creating a new virtual machine in Azure in a different data center. +1. Sign-in to [https://myapps.microsoft.com](https://myapps.microsoft.com) using the same credentials as before and within a few minutes after the previous sign-in. The sign-in shows up in the Identity Protection dashboard within 2-4 hours. ## Leaked Credentials for Workload Identities - This risk detection indicates that the application's valid credentials have been leaked. This leak can occur when someone checks in the credentials in a public code artifact on GitHub. 
Therefore, to simulate this detection, you need a GitHub account and can [sign up a GitHub account](https://docs.github.com/get-started/signing-up-for-github) if you don't have one already. -**To simulate Leaked Credentials in GitHub for Workload Identities, perform the following steps**: -1. Sign in to the [Azure portal](https://portal.azure.com). -2. Browse to **Azure Active Directory** > **App registrations**. -3. Select **New registration** to register a new application or reuse an existing stale application. -4. Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and select **Add**. Record the secret's value for later use for your GitHub Commit. +### Simulate Leaked Credentials in GitHub for Workload Identities ++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator). +1. Browse to **Identity** > **Applications** > **App registrations**. +1. Select **New registration** to register a new application or reuse an existing stale application. +1. Select **Certificates & Secrets** > **New client Secret** , add a description of your client secret and set an expiration for the secret or specify a custom lifetime and select **Add**. Record the secret's value for later use for your GitHub Commit. > [!Note] > **You can not retrieve the secret again after you leave this page**. -5. Get the TenantID and Application(Client)ID in the **Overview** page. -6. Ensure you disable the application via **Azure Active Directory** > **Enterprise Application** > **Properties** > Set **Enabled for users to sign-in** to **No**. -7. Create a **public** GitHub Repository, add the following config and commit the change as a file with the .txt extension. +1. Get the TenantID and Application(Client)ID in the **Overview** page. +1. 
Ensure you disable the application via **Identity** > **Applications** > **Enterprise Application** > **Properties** > Set **Enabled for users to sign-in** to **No**. +1. Create a **public** GitHub Repository, add the following config and commit the change as a file with the .txt extension. ```GitHub file "AadClientId": "XXXX-2dd4-4645-98c2-960cf76a4357", "AadSecret": "p3n7Q~XXXX", "AadTenantDomain": "XXXX.onmicrosoft.com", "AadTenantId": "99d4947b-XXX-XXXX-9ace-abceab54bcd4", ```-7. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit. +1. In about 8 hours, you'll be able to view a leaked credential detection under **Azure Active Directory** > **Security** > **Risk Detection** > **Workload identity detections** where the additional info will contain the URL of your GitHub commit. ## Testing risk policies This section provides you with steps for testing the user and the sign-in risk p To test a user risk security policy, perform the following steps: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. -1. Select **Configure user risk policy**. - 1. Under **Assignments** - 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout. - 1. Optionally you can choose to exclude users from the policy. - 1. **Conditions** - **User risk** Microsoft's recommendation is to set this option to **High**. - 1. Under **Controls** - 1. **Access** - Microsoft's recommendation is to **Allow access** and **Require password change**. - 1. **Enforce Policy** - **Off** - 1. **Save** - This action will return you to the **Overview** page. +1. 
Configure a [user risk policy](howto-identity-protection-configure-risk-policies.md#user-risk-policy-in-conditional-access) targeting the users you plan to test with. 1. Elevate the user risk of a test account by, for example, simulating one of the risk detections a few times. 1. Wait a few minutes, and then verify that risk has elevated for your user. If not, simulate more risk detections for the user. 1. Return to your risk policy and set **Enforce Policy** to **On** and **Save** your policy change. To test a user risk security policy, perform the following steps: To test a sign-in risk policy, perform the following steps: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Browse to **Azure Active Directory** > **Security** > **Identity Protection** > **Overview**. -1. Select **Configure sign-in risk policy**. - 1. Under **Assignments** - 1. **Users** - Choose **All users** or **Select individuals and groups** if limiting your rollout. - 1. Optionally you can choose to exclude users from the policy. - 1. **Conditions** - **Sign-in risk** Microsoft's recommendation is to set this option to **Medium and above**. - 1. Under **Controls** - 1. **Access** - Microsoft's recommendation is to **Allow access** and **Require multifactor authentication**. - 1. **Enforce Policy** - **On** - 1. **Save** - This action will return you to the **Overview** page. +1. Configure a [sign-in risk policy](howto-identity-protection-configure-risk-policies.md#sign-in-risk-policy-in-conditional-access) targeting the users you plan to test with. 1. You can now test Sign-in Risk-based Conditional Access by signing in using a risky session (for example, by using the Tor browser). ## Next steps |
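The leaked-credentials simulation above hinges on committing a file that contains the four `Aad*` values from the app registration to a public repository. As a rough illustration, the file content can be generated like this (the helper function and JSON shape are illustrative; only the field names come from the article's sample config):

```python
import json

def build_leak_test_config(client_id: str, secret: str,
                           tenant_domain: str, tenant_id: str) -> str:
    """Hypothetical helper: compose the credential-leak test file the
    article says to commit as a .txt file in a public GitHub repo."""
    # The article's sample is a plain-text fragment of "Key": "Value"
    # pairs; a JSON object keeps the same shape.
    config = {
        "AadClientId": client_id,
        "AadSecret": secret,
        "AadTenantDomain": tenant_domain,
        "AadTenantId": tenant_id,
    }
    return json.dumps(config, indent=2)

# Placeholder values, mirroring the redacted sample in the article.
content = build_leak_test_config(
    "XXXX-2dd4-4645-98c2-960cf76a4357",
    "p3n7Q~XXXX",
    "XXXX.onmicrosoft.com",
    "99d4947b-XXX-XXXX-9ace-abceab54bcd4",
)
```

Per the steps above, committing this content publicly is what surfaces the leaked-credential detection (with the commit URL in the additional info) after roughly 8 hours.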
active-directory | Id Protection Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/id-protection-dashboard.md | To access this new dashboard, you need: Organizations can access the new dashboard by: 1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)**.-1. Browse to **Identity** > **Protection** > **Identity Protection** > **Dashboard (Preview)**. +1. Browse to **Protection** > **Identity Protection** > **Dashboard (Preview)**. ### Metric cards |
active-directory | App Management Powershell Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/app-management-powershell-samples.md | |
active-directory | Assign User Or Group Access Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/assign-user-or-group-access-portal.md | |
active-directory | Certificate Signing Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/certificate-signing-options.md | Title: Advanced certificate signing options in a SAML token -description: Learn how to use advanced certificate signing options in the SAML token for pre-integrated apps in Azure Active Directory +description: Learn how to use advanced certificate signing options in the SAML token for preintegrated apps in Azure Active Directory -Today Azure Active Directory (Azure AD) supports thousands of pre-integrated applications in the Azure Active Directory App Gallery. Over 500 of the applications support single sign-on by using the [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) 2.0 protocol, such as the [NetSuite](https://azuremarketplace.microsoft.com/marketplace/apps/aad.netsuite) application. When a customer authenticates to an application through Azure AD by using SAML, Azure AD sends a token to the application (via an HTTP POST). The application then validates and uses the token to sign in the customer instead of prompting for a username and password. These SAML tokens are signed with the unique certificate that's generated in Azure AD and by specific standard algorithms. +Today Azure Active Directory (Azure AD) supports thousands of preintegrated applications in the Azure Active Directory App Gallery. Over 500 of the applications support single sign-on by using the [Security Assertion Markup Language](https://wikipedia.org/wiki/Security_Assertion_Markup_Language) (SAML) 2.0 protocol, such as the [NetSuite](https://azuremarketplace.microsoft.com/marketplace/apps/aad.netsuite) application. When a customer authenticates to an application through Azure AD by using SAML, Azure AD sends a token to the application (via an HTTP POST). 
The application then validates and uses the token to sign in the customer instead of prompting for a username and password. These SAML tokens are signed with the unique certificate that's generated in Azure AD and by specific standard algorithms. Azure AD uses some of the default settings for the gallery applications. The default values are set up based on the application's requirements. |
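The sign-and-validate flow described above can be sketched end to end. Note this is only a conceptual stand-in: real SAML tokens are XML assertions signed with an asymmetric algorithm (for example RSA-SHA256) using the certificate generated in Azure AD, and the application verifies them against the certificate's public key. The stdlib-only sketch below substitutes a shared-key HMAC so the sign-then-verify handshake is runnable:

```python
import hashlib
import hmac

# Conceptual stand-in for asymmetric XML signing: the "issuer" signs the
# token bytes, the "application" recomputes and compares before trusting
# the sign-in, instead of prompting for a username and password.
def sign_token(token_bytes: bytes, key: bytes) -> bytes:
    return hmac.new(key, token_bytes, hashlib.sha256).digest()

def validate_token(token_bytes: bytes, signature: bytes, key: bytes) -> bool:
    expected = sign_token(token_bytes, key)
    # Constant-time comparison, as signature checks should use.
    return hmac.compare_digest(expected, signature)

key = b"demo-signing-key"  # stand-in for the real certificate key pair
token = b"<Assertion>user@contoso.example</Assertion>"
sig = sign_token(token, key)
```

Any change to the token bytes or to the key invalidates the signature, which is the property the application relies on when it accepts the POSTed token.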
active-directory | Configure Authentication For Federated Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-authentication-for-federated-users-portal.md | |
active-directory | Configure Permission Classifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-permission-classifications.md | |
active-directory | Configure Risk Based Step Up Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-risk-based-step-up-consent.md | |
active-directory | Configure User Consent Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent-groups.md | Title: Configure group owner consent to apps accessing group data -description: Learn manage whether group and team owners can consent to applications that will have access to the group or team's data. +description: Manage group and team owners' consent to applications that should be granted access to the group or team's data. +zone_pivot_groups: enterprise-apps-minus-former-powershell #customer intent: As an admin, I want to configure group owner consent to apps accessing group data using Azure AD -# Configure group owner consent to applications +# Configure group and team owner consent to applications ++In this article, you'll learn how to configure the way group and team owners consent to applications and how to disable all future group and team owners' consent operations to applications. Group and team owners can authorize applications, such as applications published by third-party vendors, to access your organization's data associated with a group. For example, a team owner in Microsoft Teams can allow an app to read all Teams messages in the team, or list the basic profile of a group's members. See [Resource-specific consent in Microsoft Teams](/microsoftteams/resource-specific-consent) to learn more. +Group owner consent can be managed in two separate ways: through *directory settings* and *app consent policy*. In the directory settings, you can enable all group owners, enable only selected group owners, or disable group owners' ability to give consent to applications. On the other hand, by utilizing the app consent policy, you can specify which app consent policy governs the group owner consent for applications. You then have the flexibility to assign either a Microsoft built-in policy or create your own custom policy to effectively manage the consent process for group owners.
++Before utilizing the app consent policy to manage your group owner consent, you need to disable the group owner consent setting that is managed by directory settings. Disabling this setting allows for group owner consent subject to app consent policies. You can learn how to disable the group owner consent setting in various ways in this article. Learn more about [managing group owner consent by app consent policies](manage-group-owner-consent-policies.md) tailored to your needs. ++ ## Prerequisites -To complete the tasks in this guide, you need the following: +To configure group and team owner consent, you need: -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- A Global Administrator role.-- Set up Azure AD PowerShell. See [Azure AD PowerShell](/powershell/azure/)+- A user account. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A Global Administrator or Privileged Administrator role. -## Manage group owner consent to apps +## Manage group owner consent to apps by directory settings [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -You can configure which users are allowed to consent to apps accessing their groups' or teams' data, or you can disable this for all users. +You can configure which users are allowed to consent to apps accessing their groups' or teams' data, or you can disable the setting for all users. 
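The directory-settings route described above ultimately comes down to two values, `EnableGroupSpecificConsent` and `ConstrainGroupSpecificConsentToMembersOfGroupId`. A minimal sketch of the three supported configurations (the helper is hypothetical; only the value names and their semantics come from the Consent Policy Settings template covered in this article):

```python
def group_owner_consent_values(mode: str, group_id: str = "") -> dict:
    """Return the two directory-setting values for group owner consent.

    mode: 'disabled', 'all' (all group owners may consent), or 'group'
    (only owners who are members of the given group may consent).
    """
    if mode == "disabled":
        return {"EnableGroupSpecificConsent": "false",
                "ConstrainGroupSpecificConsentToMembersOfGroupId": ""}
    if mode == "all":
        return {"EnableGroupSpecificConsent": "true",
                "ConstrainGroupSpecificConsentToMembersOfGroupId": ""}
    if mode == "group":
        if not group_id:
            raise ValueError("mode 'group' requires a group object ID")
        return {"EnableGroupSpecificConsent": "true",
                "ConstrainGroupSpecificConsentToMembersOfGroupId": group_id}
    raise ValueError(f"unknown mode: {mode}")
```

The PowerShell and Graph Explorer walkthroughs in this section write exactly these value pairs into the tenant's directory settings.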
+ -# [Portal](#tab/azure-portal) +To configure group and team owner consent settings through the Azure portal: Follow these steps to manage group owner consent to apps accessing group data: Follow these steps to manage group owner consent to apps accessing group data: In this example, all group owners are allowed to consent to apps accessing their groups' data: :::image type="content" source="media/configure-user-consent-groups/group-owner-consent.png" alt-text="Group owner consent settings":::+++To manage group and team owner consent settings through directory setting by Microsoft Graph PowerShell: ++You can use the [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) module to enable or disable group owners' ability to consent to applications accessing your organization's data for the groups they own. The cmdlets used here are included in the [Microsoft.Graph.Identity.SignIns](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.SignIns) module. ++### Connect to Microsoft Graph PowerShell ++Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*. ++change the profile to beta by using the `Select-MgProfile` command +```powershell +Select-MgProfile -Name "beta" +``` +Use the least-privilege permission +```powershell +Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization" ++# If you need to create a new setting based on the templates, please use this permission +Connect-MgGraph -Scopes "Directory.ReadWrite.All" +``` ++### Retrieve the current setting through directory settings ++Retrieve the current value for the **Consent Policy Settings** directory settings in your tenant. 
This requires checking if the directory settings for this feature have been created, and if not, using the values from the corresponding directory settings template. ++```powershell +$consentSettingsTemplateId = "dffd5d46-495d-40a9-8e21-954ff55e198a" # Consent Policy Settings +$settings = Get-MgDirectorySetting | ?{ $_.TemplateId -eq $consentSettingsTemplateId } ++if (-not $settings) { + $template = Get-MgDirectorySettingTemplate -DirectorySettingTemplateId $consentSettingsTemplateId + $body = @{ + "templateId" = $template.Id + "values" = @( + @{ + "name" = "EnableGroupSpecificConsent" + "value" = $true + }, + @{ + "name" = "BlockUserConsentForRiskyApps" + "value" = $true + }, + @{ + "name" = "EnableAdminConsentRequests" + "value" = $true + }, + @{ + "name" = "ConstrainGroupSpecificConsentToMembersOfGroupId" + "value" = "" + } + ) + } + $settings = New-MgDirectorySetting -BodyParameter $body +} ++$enabledValue = $settings.Values | ? { $_.Name -eq "EnableGroupSpecificConsent" } +$limitedToValue = $settings.Values | ? { $_.Name -eq "ConstrainGroupSpecificConsentToMembersOfGroupId" } +``` + +### Understand the setting values ++There are two settings values that define which users would be able to allow an app to access their group's data: ++| Setting | Type | Description | +| - | | | +| _EnableGroupSpecificConsent_ | Boolean | Flag indicating if groups owners are allowed to grant group-specific permissions. | +| _ConstrainGroupSpecificConsentToMembersOfGroupId_ | Guid | If _EnableGroupSpecificConsent_ is set to "True" and this value set to a group's object ID, members of the identified group will be authorized to grant group-specific permissions to the groups they own. 
| ++### Update settings values for the desired configuration ++```powershell +# Disable group-specific consent entirely +$enabledValue.Value = "false" +$limitedToValue.Value = "" +``` ++```powershell +# Enable group-specific consent for all users +$enabledValue.Value = "true" +$limitedToValue.Value = "" +``` ++```powershell +# Enable group-specific consent for users in a given group +$enabledValue.Value = "true" +$limitedToValue.Value = "{group-object-id}" +``` ++### Save your settings ++```powershell +# Update an existing directory settings +Update-MgDirectorySetting -DirectorySettingId $settings.Id -Values $settings.Values +``` ++++To manage group and team owner consent settings through directory setting by [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) : ++### Retrieve the current setting through directory settings ++Retrieve the current value for the **Consent Policy Settings** from directory settings in your tenant. This requires checking if the directory settings for this feature have been created, and if not, using the second MS Graph call to create the corresponding directory settings. +```http +GET https://graph.microsoft.com/beta/settings +``` +Response ++``` http +{ + "@odata.context": "https://graph.microsoft.com/beta/$metadata#settings", + "value": [ + { + "id": "{ directorySettingId }", + "displayName": "Consent Policy Settings", + "templateId": "dffd5d46-495d-40a9-8e21-954ff55e198a", + "values": [ + { + "name": "EnableGroupSpecificConsent", + "value": "true" + }, + { + "name": "BlockUserConsentForRiskyApps", + "value": "true" + }, + { + "name": "EnableAdminConsentRequests", + "value": "true" + }, + { + "name": "ConstrainGroupSpecificConsentToMembersOfGroupId", + "value": "" + } + ] + } + ] +} +``` +++create the corresponding directory settings if the `value` is empty (see below as an example). 
+```http +GET https://graph.microsoft.com/beta/settings ++{ + "@odata.context": "https://graph.microsoft.com/beta/$metadata#settings", + "value": [] +} +``` +++```http +POST https://graph.microsoft.com/beta/settings +{ + "templateId": "dffd5d46-495d-40a9-8e21-954ff55e198a", + "values": [ + { + "name": "EnableGroupSpecificConsent", + "value": "true" + }, + { + "name": "BlockUserConsentForRiskyApps", + "value": "true" + }, + { + "name": "EnableAdminConsentRequests", + "value": "true" + }, + { + "name": "ConstrainGroupSpecificConsentToMembersOfGroupId", + "value": "" + } + ] +} +``` +### Understand the setting values ++There are two settings values that define which users would be able to allow an app to access their group's data: ++| Setting | Type | Description | +| - | | | +| _EnableGroupSpecificConsent_ | Boolean | Flag indicating if groups owners are allowed to grant group-specific permissions. | +| _ConstrainGroupSpecificConsentToMembersOfGroupId_ | Guid | If _EnableGroupSpecificConsent_ is set to "True" and this value set to a group's object ID, members of the identified group will be authorized to grant group-specific permissions to the groups they own. 
| ++### Update settings values for the desired configuration ++Replace `{directorySettingId}` with the actual ID in the `value` collection when retrieving the current setting ++Disable group-specific consent entirely +```http +PATCH https://graph.microsoft.com/beta/settings/{directorySettingId} +{ + "values": [ + { + "name": "EnableGroupSpecificConsent", + "value": "false" + }, + { + "name": "BlockUserConsentForRiskyApps", + "value": "true" + }, + { + "name": "EnableAdminConsentRequests", + "value": "true" + }, + { + "name": "ConstrainGroupSpecificConsentToMembersOfGroupId", + "value": "" + } + ] +} +``` ++Enable group-specific consent for all users +```http +PATCH https://graph.microsoft.com/beta/settings/{directorySettingId} +{ + "values": [ + { + "name": "EnableGroupSpecificConsent", + "value": "true" + }, + { + "name": "BlockUserConsentForRiskyApps", + "value": "true" + }, + { + "name": "EnableAdminConsentRequests", + "value": "true" + }, + { + "name": "ConstrainGroupSpecificConsentToMembersOfGroupId", + "value": "" + } + ] +} +``` +Enable group-specific consent for users in a given group +```http +PATCH https://graph.microsoft.com/beta/settings/{directorySettingId} +{ + "values": [ + { + "name": "EnableGroupSpecificConsent", + "value": "true" + }, + { + "name": "BlockUserConsentForRiskyApps", + "value": "true" + }, + { + "name": "EnableAdminConsentRequests", + "value": "true" + }, + { + "name": "ConstrainGroupSpecificConsentToMembersOfGroupId", + "value": "{group-object-id}" + } + ] +} +``` ++> [!NOTE] +> **User can consent to apps accessing company data on their behalf** setting, when turned off, doesn't disable the **Users can consent to apps accessing company data for groups they own** option. ++## Manage group owner consent to apps by app consent policy ++You can configure which users are allowed to consent to apps accessing their groups' or teams' data through app consent policies. 
To allow group owner consent subject to app consent policies, the group owner consent setting must be disabled. Once disabled, your current policy is read from app consent policies. + -# [PowerShell](#tab/azure-powershell) +To choose which app consent policy governs user consent for applications, you can use the [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) module. The cmdlets used here are included in the [Microsoft.Graph.Identity.SignIns](https://www.powershellgallery.com/packages/Microsoft.Graph.Identity.SignIns) module. -You can use the Azure AD PowerShell Preview module, [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview), to enable or disable group owners' ability to consent to applications accessing your organization's data for the groups they own. +### Connect to Microsoft Graph PowerShell -1. Make sure you're using the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module. This step is important if you have installed both the [AzureAD](/powershell/module/azuread/) module and the [AzureADPreview](/powershell/module/azuread/?preserve-view=true&view=azureadps-2.0-preview) module). +Connect to Microsoft Graph PowerShell using the least-privilege permission needed. For reading the current user consent settings, use *Policy.Read.All*. For reading and changing the user consent settings, use *Policy.ReadWrite.Authorization*. +```powershell +# change the profile to beta by using the `Select-MgProfile` command +Select-MgProfile -Name "beta" +``` +```powershell +Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization" +``` ++### Disable group owner consent to use app consent policies ++1. Check if the `ManagePermissionGrantPoliciesForOwnedResource` is scoped in `group` ++ 1. 
Retrieve the current value for the group owner consent setting ```powershell- Remove-Module AzureAD - Import-Module AzureADPreview + Get-MgPolicyAuthorizationPolicy | select -ExpandProperty DefaultUserRolePermissions | ft PermissionGrantPoliciesAssigned ```+ If `ManagePermissionGrantsForOwnedResource` is returned in `PermissionGrantPoliciesAssigned`, your group owner consent setting **might** have been governed by the app consent policy. -1. Connect to Azure AD PowerShell. + 1. Check if the policy is scoped to `group` + ```powershell + Get-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "microsoft-all-application-permissions-for-group" | ft AdditionalProperties + ``` + If `resourceScopeType` == `group`, your group owner consent setting **has been** governed by the app consent policy. - ```powershell - Connect-AzureAD - ``` +1. To disable group owner consent to utilize app consent policies, ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include the current `ManagePermissionGrantsForSelf.*` policy and other current `ManagePermissionGrantsForOwnedResource.*` policies if any that aren't applicable to groups while updating the collection. This way, you can maintain your current configuration for user consent settings and other resource consent settings. -1. Retrieve the current value for the **Consent Policy Settings** directory settings in your tenant. This requires checking if the directory settings for this feature have been created, and if not, using the values from the corresponding directory settings template.
+```powershell +# only exclude policies that are scoped in group +$body = @{ + "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @( + "managePermissionGrantsForSelf.{current-policy-for-user-consent}", + "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}" + ) +} +Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body - ```powershell - $consentSettingsTemplateId = "dffd5d46-495d-40a9-8e21-954ff55e198a" # Consent Policy Settings - $settings = Get-AzureADDirectorySetting -All $true | Where-Object { $_.TemplateId -eq $consentSettingsTemplateId } +``` - if (-not $settings) { - $template = Get-AzureADDirectorySettingTemplate -Id $consentSettingsTemplateId - $settings = $template.CreateDirectorySetting() - } +### Assign an app consent policy to group owners - $enabledValue = $settings.Values | ? { $_.Name -eq "EnableGroupSpecificConsent" } - $limitedToValue = $settings.Values | ? { $_.Name -eq "ConstrainGroupSpecificConsentToMembersOfGroupId" } - ``` +To allow group owner consent subject to an app consent policy, choose which app consent policy should govern group owners' authorization to grant consent to apps. Ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include the current `ManagePermissionGrantsForSelf.*` policy and other `ManagePermissionGrantsForOwnedResource.*` policies if any while updating the collection. This way, you can maintain your current configuration for user consent settings and other resource consent settings. -1. Understand the setting values. 
There are two settings values that define which users would be able to allow an app to access their group's data: +```powershell +$body = @{ + "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @( + "managePermissionGrantsForSelf.{current-policy-for-user-consent}", + "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}", + "managePermissionGrantsForOwnedResource.{app-consent-policy-id-for-group}" #new app consent policy for groups + ) +} +Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body +``` - | Setting | Type | Description | - | - | | | - | _EnableGroupSpecificConsent_ | Boolean | Flag indicating if groups owners are allowed to grant group-specific permissions. | - | _ConstrainGroupSpecificConsentToMembersOfGroupId_ | Guid | If _EnableGroupSpecificConsent_ is set to "True" and this value set to a group's object ID, members of the identified group will be authorized to grant group-specific permissions to the groups they own. | +Replace `{app-consent-policy-id-for-group}` with the ID of the policy you want to apply. You can choose a [custom app consent policy](manage-group-owner-consent-policies.md#create-a-custom-group-owner-consent-policy) that you've created, or you can choose from the following built-in policies: -1. Update settings values for the desired configuration: +| ID | Description | +|:|:| +| microsoft-pre-approval-apps-for-group | **Allow group owner consent to pre-approved apps only**<br/> Allow group owners consent only for apps preapproved by admins for the groups they own. | +| microsoft-all-application-permissions-for-group | **Allow group owner consent to apps**<br/> This option allows all group owners to consent to any permission that doesn't require admin consent, for any application, for the groups they own. It includes apps that have been preapproved by permission grant preapproval policy for group resource-specific-consent. 
| -  ```powershell - # Disable group-specific consent entirely - $enabledValue.Value = "False" - $limitedToValue.Value = "" - ``` For example, to enable group owner consent subject to the built-in policy `microsoft-all-application-permissions-for-group`, run the following commands: - ```powershell - # Enable group-specific consent for all users - $enabledValue.Value = "True" - $limitedToValue.Value = "" +```powershell +$body = @{ + "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @( + "managePermissionGrantsForSelf.{current-policy-for-user-consent}", + "managePermissionGrantsForOwnedResource.{all-policies-that-are-not-applicable-to-groups}", + "managePermissionGrantsForOwnedResource.microsoft-all-application-permissions-for-group" # policy that is scoped to group + ) +} +Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body +``` ++++Use the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to choose which group owner consent policy governs group owners' ability to consent to applications accessing your organization's data for the groups they own. ++### Disable group owner consent to use app consent policies ++1. Check if `ManagePermissionGrantsForOwnedResource` is scoped to `group` ++ 1. Retrieve the current value for the group owner consent setting + ```http + GET https://graph.microsoft.com/v1.0/policies/authorizationPolicy ```+ If `ManagePermissionGrantsForOwnedResource` is returned in `permissionGrantPolicyIdsAssignedToDefaultUserRole`, your group owner consent setting might have been governed by the app consent policy.
- ```powershell - # Enable group-specific consent for users in a given group - $enabledValue.Value = "True" - $limitedToValue.Value = "{group-object-id}" + 2. Check if the policy is scoped to `group` + ```http + GET https://graph.microsoft.com/beta/policies/permissionGrantPolicies/microsoft-all-application-permissions-for-group ```+ If `resourceScopeType` == `group`, your group owner consent setting has been governed by the app consent policy. ++2. To disable group owner consent to utilize app consent policies, ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include the current `ManagePermissionGrantsForSelf.*` policy and other current `ManagePermissionGrantsForOwnedResource.*` policies if any that aren't applicable to groups. This way, you can maintain your current configuration for user consent settings and other resource consent settings. + ```http + PATCH https://graph.microsoft.com/beta/policies/authorizationPolicy + { + "defaultUserRolePermissions": { + "permissionGrantPoliciesAssigned": [ + "managePermissionGrantsForSelf.{current-policy-for-user-consent}", + "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}" + ] + } + } + ``` -1. Save your settings. ### Assign an app consent policy to group owners - ```powershell - if ($settings.Id) { - # Update an existing directory settings - Set-AzureADDirectorySetting -Id $settings.Id -DirectorySetting $settings - } else { - # Create a new directory settings to override the default setting - New-AzureADDirectorySetting -DirectorySetting $settings - } - ``` To allow group owner consent subject to an app consent policy, choose which app consent policy should govern group owners' authorization to grant consent to apps. Ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include the current `ManagePermissionGrantsForSelf.*` policy and other current `ManagePermissionGrantsForOwnedResource.*` policies if any while updating the collection.
This way, you can maintain your current configuration for user consent settings and other resource consent settings. -+```http +PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy -> [!NOTE] -> "User can consent to apps accessing company data on their behalf" setting, when turned off, does not disable the "Users can consent to apps accessing company data for groups they own" option +{ + "defaultUserRolePermissions": { + "permissionGrantPoliciesAssigned": [ + "managePermissionGrantsForSelf.{current-policy-for-user-consent}", + "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}", + "managePermissionGrantsForOwnedResource.{app-consent-policy-id-for-group}" + ] + } +} +``` -## Next steps Replace `{app-consent-policy-id-for-group}` with the ID of the policy you want to apply for groups. You can choose a [custom app consent policy for groups](manage-group-owner-consent-policies.md) that you've created, or you can choose from the following built-in policies: ++| ID | Description | +|:|:| +| microsoft-pre-approval-apps-for-group | **Allow group owner consent to pre-approved apps only**<br/> Allow group owners to consent only to apps preapproved by admins for the groups they own. | +| microsoft-all-application-permissions-for-group | **Allow group owner consent to apps**<br/> This option allows all group owners to consent to any permission that doesn't require admin consent, for any application, for the groups they own. It includes apps that have been preapproved by permission grant preapproval policy for group resource-specific-consent.
| -To learn more: +For example, to enable group owner consent subject to the built-in policy `microsoft-pre-approval-apps-for-group`, use the following PATCH command: ++```http +PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy ++{ + "defaultUserRolePermissions": { + "permissionGrantPoliciesAssigned": [ + "managePermissionGrantsForSelf.{current-policy-for-user-consent}", + "managePermissionGrantsForOwnedResource.{other-policies-that-are-not-applicable-to-groups}", + "managePermissionGrantsForOwnedResource.microsoft-pre-approval-apps-for-group" + ] + } +} +``` ++## Next steps -* [Configure user consent settings](configure-user-consent.md) -* [Configure the admin consent workflow](configure-admin-consent-workflow.md) -* [Learn how to manage consent to applications and evaluate consent requests](manage-consent-requests.md) -* [Grant tenant-wide admin consent to an application](grant-admin-consent.md) -* [Permissions and consent in the Microsoft identity platform](../develop/permissions-consent-overview.md) +- [Manage group owner consent policies](manage-group-owner-consent-policies.md) To get help or find answers to your questions: -* [Azure AD on Microsoft Q&A](/answers/topics/azure-active-directory.html) +- [Azure AD on Microsoft Q&A](/answers/topics/azure-active-directory.html) |
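Both the PowerShell and Graph Explorer variants above repeat one rule: when you update `permissionGrantPoliciesAssigned`, rebuild the whole collection, keeping the current `managePermissionGrantsForSelf.*` entry and any other `managePermissionGrantsForOwnedResource.*` entries, then add the app consent policy for groups. A small sketch of that merge (the helper name and list shapes are illustrative, not part of any SDK):

```python
def assign_group_owner_policy(assigned: list, group_policy_id: str) -> list:
    """Rebuild the permissionGrantPoliciesAssigned collection: keep the
    existing user-consent and resource-consent entries, then append the
    app consent policy that should govern group owners."""
    new_entry = f"managePermissionGrantsForOwnedResource.{group_policy_id}"
    # Keep every current entry (user consent and other resource policies),
    # dropping only a stale copy of the entry we're about to add.
    kept = [p for p in assigned if p != new_entry]
    return kept + [new_entry]

current = ["managePermissionGrantsForSelf.microsoft-user-default-legacy"]
merged = assign_group_owner_policy(
    current, "microsoft-all-application-permissions-for-group")
```

The resulting list is what goes into the PATCH body's `permissionGrantPoliciesAssigned` array; re-running the merge with the same policy ID leaves the collection unchanged.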
active-directory | Configure User Consent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/configure-user-consent.md | Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization" ``` ### Disable user consent--To disable user consent, set the consent policies that govern user consent to empty: +To disable user consent, ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include other current `ManagePermissionGrantsForOwnedResource.*` policies if any while updating the collection. This way, you can maintain your current configuration for user consent settings and other resource consent settings. ```powershell-Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{ - "PermissionGrantPoliciesAssigned" = @() } +# only exclude user consent policy +$body = @{ + "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @( + "managePermissionGrantsForOwnedResource.{other-current-policies}" + ) +} +Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body + ``` ### Allow user consent subject to an app consent policy--To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps: +To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps. Please ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include other current `ManagePermissionGrantsForOwnedResource.*` policies if any while updating the collection. This way, you can maintain your current configuration for user consent settings and other resource consent settings. 
```powershell-Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{ - "PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.{consent-policy-id}") } +$body = @{ + "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @( + "managePermissionGrantsForSelf.{consent-policy-id}", + "managePermissionGrantsForOwnedResource.{other-current-policies}" + ) +} +Update-MgPolicyAuthorizationPolicy -AuthorizationPolicyId authorizationPolicy -BodyParameter $body ``` Replace `{consent-policy-id}` with the ID of the policy you want to apply. You can choose a [custom app consent policy](manage-app-consent-policies.md#create-a-custom-app-consent-policy) that you've created, or you can choose from the following built-in policies: Replace `{consent-policy-id}` with the ID of the policy you want to apply. You c For example, to enable user consent subject to the built-in policy `microsoft-user-default-low`, run the following commands: ```powershell-Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{ - "PermissionGrantPoliciesAssigned" = @("managePermissionGrantsForSelf.microsoft-user-default-low") } +$body = @{ + "permissionGrantPolicyIdsAssignedToDefaultUserRole" = @( + "managePermissionGrantsForSelf.microsoft-user-default-low", + "managePermissionGrantsForOwnedResource.{other-current-policies}" + ) +} ``` :::zone-end Update-MgPolicyAuthorizationPolicy -DefaultUserRolePermissions @{ Use the [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) to choose which app consent policy governs user consent for applications. -To disable user consent, set the consent policies that govern user consent to empty: +To disable user consent, ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include any other current `ManagePermissionGrantsForOwnedResource.*` policies when you update the collection. 
This way, you can maintain your current configuration for user consent settings and other resource consent settings. ```http PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy { "defaultUserRolePermissions": {- "permissionGrantPoliciesAssigned": [] - } + "permissionGrantPoliciesAssigned": [ + "managePermissionGrantsForOwnedResource.{other-current-policies}" + ] + } } ``` ### Allow user consent subject to an app consent policy -To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps: +To allow user consent, choose which app consent policy should govern users' authorization to grant consent to apps. Ensure that the consent policies (`PermissionGrantPoliciesAssigned`) include any other current `ManagePermissionGrantsForOwnedResource.*` policies when you update the collection. This way, you can maintain your current configuration for user consent settings and other resource consent settings. ```http PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy {- "defaultUserRolePermissions": { - "permissionGrantPoliciesAssigned": ["ManagePermissionGrantsForSelf.microsoft-user-default-legacy"] + "defaultUserRolePermissions": { + "permissionGrantPoliciesAssigned": [ + "managePermissionGrantsForSelf.{consent-policy-id}", + "managePermissionGrantsForOwnedResource.{other-current-policies}" + ] } } ``` PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy { "defaultUserRolePermissions": { "permissionGrantPoliciesAssigned": [- "managePermissionGrantsForSelf.microsoft-user-default-low" + "managePermissionGrantsForSelf.microsoft-user-default-low", + "managePermissionGrantsForOwnedResource.{other-current-policies}" ] } } PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy > [!TIP] > To allow users to request an administrator's review and approval of an application that the user isn't allowed to consent to, [enable the admin consent workflow](configure-admin-consent-workflow.md). 
For example, you might do this when user consent has been disabled or when an application is requesting permissions that the user isn't allowed to grant.+ ## Next steps - [Manage app consent policies](manage-app-consent-policies.md)-- [Configure the admin consent workflow](configure-admin-consent-workflow.md)+- [Configure the admin consent workflow](configure-admin-consent-workflow.md) |
active-directory | Custom Security Attributes Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/custom-security-attributes-apps.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). [Custom security attributes](../fundamentals/custom-security-attributes-overview.md) in Azure Active Directory (Azure AD) are business-specific attributes (key-value pairs) that you can define and assign to Azure AD objects. For example, you can assign a custom security attribute to filter your applications or to help determine who gets access. This article describes how to assign, update, list, or remove custom security attributes for Azure AD enterprise applications. |
active-directory | Delete Application Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md | Last updated 06/21/2023 zone_pivot_groups: enterprise-apps-all-+ #Customer intent: As an administrator of an Azure AD tenant, I want to delete an enterprise application. |
active-directory | Disable User Sign In Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/disable-user-sign-in-portal.md | |
active-directory | Hide Application From User Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md | |
active-directory | Home Realm Discovery Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/home-realm-discovery-policy.md | |
active-directory | Howto Enforce Signed Saml Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-enforce-signed-saml-authentication.md | If enabled Azure Active Directory will validate the requests against the public - Protocol not allowed for signed requests. Only SAML protocol is supported. - Request not signed, but verification is enabled. -- No verification certificate configured for SAML request signature verification. +- No verification certificate configured for SAML request signature verification. For more information about the certificate requirements, see [Certificate signing options](certificate-signing-options.md). - Signature verification failed. - Key identifier in request is missing and two most recently added certificates don't match with the request signature. - Request signed but algorithm missing. -- No certificate matching with provided key identifier. +- No certificate matching with provided key identifier. - Signature algorithm not allowed. Only RSA-SHA256 is supported. > [!NOTE] |
active-directory | Howto Saml Token Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/howto-saml-token-encryption.md | Last updated 06/15/2023 -+ # Configure Azure Active Directory SAML token encryption |
active-directory | Manage App Consent Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-app-consent-policies.md | zone_pivot_groups: enterprise-apps-minus-portal-aad # Manage app consent policies +App consent policies are a way to manage the permissions that apps have to access data in your organization. They're used to control what apps users can consent to and to ensure that apps meet certain criteria before they can access data. These policies help organizations maintain control over their data and ensure they only grant access to trusted apps. ++In this article, you learn how to manage built-in and custom app consent policies to control when consent can be granted. ++ With [Microsoft Graph](/graph/overview) and [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage app consent policies. An app consent policy consists of one or more "include" condition sets and zero or more "exclude" condition sets. For an event to be considered in an app consent policy, it must match *at least* one "include" condition set, and must not match *any* "exclude" condition set. Each condition set consists of several conditions. For an event to match a condi App consent policies where the ID begins with "microsoft-" are built-in policies. Some of these built-in policies are used in existing built-in directory roles. For example, the `microsoft-application-admin` app consent policy describes the conditions under which the Application Administrator and Cloud Application Administrator roles are allowed to grant tenant-wide admin consent. Built-in policies can be used in custom directory roles and to configure user consent settings, but can't be edited or deleted. -## Pre-requisites +## Prerequisites -1. 
A user or service with one of the following roles: +- A user or service with one of the following roles: - Global Administrator directory role - Privileged Role Administrator directory role - A custom directory role with the necessary [permissions to manage app consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies) App consent policies where the ID begins with "microsoft-" are built-in policies :::zone pivot="ms-powershell" -2. Connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true). -+To manage app consent policies for applications with Microsoft Graph PowerShell, connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true). ```powershell Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant" ``` Once the app consent policy has been created, you can [allow user consent](confi ## Delete a custom app consent policy -1. The following shows how you can delete a custom app consent policy. **This action cannot be undone.** -- ```powershell +The following cmdlet shows how you can delete a custom app consent policy. + +```powershell Remove-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "my-custom-policy"- ``` +``` :::zone-end Follow these steps to create a custom app consent policy: 1. Add "include" condition sets. 
- Include delegated permissions classified "low", for apps from verified publishers + Include delegated permissions classified "low" for apps from verified publishers ```http POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/{ my-custom-policy }/includes Follow these steps to create a custom app consent policy: { "permissionType": "delegated",- ΓÇ£PermissionClassification: "low", + "PermissionClassification": "low", "clientApplicationsFromVerifiedPublisherOnly": true } ``` Once the app consent policy has been created, you can [allow user consent](confi ## Delete a custom app consent policy -1. The following shows how you can delete a custom app consent policy. **This action canΓÇÖt be undone.** +1. The following shows how you can delete a custom app consent policy. -```http -DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/ my-custom-policy -``` + ```http + DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/ my-custom-policy + ``` :::zone-end > [!WARNING] > Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy.+ ### Supported conditions The following table provides the list of supported conditions for app consent policies. The following table provides the list of supported conditions for app consent po | PermissionClassification | The [permission classification](configure-permission-classifications.md) for the permission being granted, or "all" to match with any permission classification (including permissions that aren't classified). Default is "all". | | PermissionType | The permission type of the permission being granted. Use "application" for application permissions (for example, app roles) or "delegated" for delegated permissions. <br><br>**Note**: The value "delegatedUserConsentable" indicates delegated permissions that haven't been configured by the API publisher to require admin consent. 
This value may be used in built-in permission grant policies, but can't be used in custom permission grant policies. Required. | | ResourceApplication | The **AppId** of the resource application (for example, the API) for which a permission is being granted, or "any" to match with any resource application or API. Default is "any". |-| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <ul><li>Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object.</li><li>Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object.</li></ol> | +| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <br> - Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object. <br> - Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object. | | ClientApplicationIds | A list of **AppId** values for the client applications to match with, or a list with the single value "all" to match any client application. Default is the single value "all". | | ClientApplicationTenantIds | A list of Azure Active Directory tenant IDs in which the client application is registered, or a list with the single value "all" to match with client apps registered in any tenant. Default is the single value "all". | | ClientApplicationPublisherIds | A list of Microsoft Partner Network (MPN) IDs for [verified publishers](../develop/publisher-verification-overview.md) of the client application, or a list with the single value "all" to match with client apps from any publisher. Default is the single value "all". 
| | ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publisher](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it doesn't have a verified publisher. Default is `$false`. |+|scopeType| The resource scope type the preapproval applies to. Possible values: `group` for [groups](/graph/api/resources/group) and [teams](/graph/api/resources/team), `chat` for [chats](/graph/api/resources/chat?view=graph-rest-1.0&preserve-view=true), or `tenant` for tenant-wide access. Required.| +| sensitivityLabels| The sensitivity labels that are applicable to the scope type and have been preapproved. It allows you to protect sensitive organizational data. Learn about [sensitivity labels](/microsoft-365/compliance/sensitivity-labels). **Note:** Chat resource **does not** support sensitivityLabels yet. | -> [!WARNING] -> Deleted app consent policies cannot be restored. If you accidentally delete a custom app consent policy, you will need to re-create the policy. ## Next steps -To learn more: --* [Manage app consent policies using Microsoft Graph](/graph/api/resources/permissiongrantpolicy) -* [Configure user consent settings](configure-user-consent.md) -* [Configure the admin consent workflow](configure-admin-consent-workflow.md) -* [Learn how to manage consent to applications and evaluate consent requests](manage-consent-requests.md) -* [Grant tenant-wide admin consent to an application](grant-admin-consent.md) -* [Permissions and consent in the Microsoft identity platform](../develop/permissions-consent-overview.md) +- [Manage group owner consent policies](manage-group-owner-consent-policies.md) To get help or find answers to your questions: -* [Azure AD on Microsoft Q&A](/answers/products/) +* [Azure AD on Microsoft Q&A](/answers/products/) |
active-directory | Manage Application Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-application-permissions.md | |
active-directory | Manage Group Owner Consent Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-group-owner-consent-policies.md | + + Title: Manage app consent policies for group owners +description: Learn how to manage built-in and custom app consent policies for group owners to control when consent can be granted. +++++++ Last updated : 08/25/2023++++zone_pivot_groups: enterprise-apps-minus-portal-aad ++#customer intent: As an admin, I want to manage app consent policies for group owners for enterprise applications in Azure AD +++# Manage app consent policies for group owners ++App consent policies are a way to manage the permissions that apps have to access data in your organization. They're used to control what apps users can consent to and to ensure that apps meet certain criteria before they can access data. These policies help organizations maintain control over their data and ensure that it's being accessed only by trusted apps. ++In this article, you learn how to manage built-in and custom app consent policies to control when group owner consent can be granted. ++With [Microsoft Graph](/graph/overview) and [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true), you can view and manage group owner consent policies. ++A group owner consent policy consists of zero or more "include" condition sets and zero or more "exclude" condition sets. For an event to be considered in a group owner consent policy, it must match *at least* one "include" condition set and must not match *any* "exclude" condition set. ++Each condition set consists of several conditions. For an event to match a condition set, *all* conditions in the condition set must be met. ++Group owner consent policies where the ID begins with "microsoft-" are built-in policies. 
For example, the `microsoft-pre-approval-apps-for-group` group owner consent policy describes the conditions under which the group owners are allowed to grant consent to applications from the preapproved list by the admin to access data for the groups they own. Built-in policies can be used in custom directory roles and to configure user consent settings, but can't be edited or deleted. ++## Prerequisites ++- A user or service with one of the following roles: + - Global Administrator directory role + - Privileged Role Administrator directory role + - A custom directory role with the necessary [permissions to manage group owner consent policies](../roles/custom-consent-permissions.md#managing-app-consent-policies) + - The Microsoft Graph app role (application permission) Policy.ReadWrite.PermissionGrant (when connecting as an app or a service) +- To allow group owner consent subject to app consent policies, the group owner consent setting must be disabled. Once disabled, your current policy is read from the app consent policy. To learn how to disable group owner consent, see [Disable group owner consent setting](configure-user-consent-groups.md) + ++To manage group owner consent policies for applications with Microsoft Graph PowerShell, connect to [Microsoft Graph PowerShell](/powershell/microsoftgraph/get-started?view=graph-powershell-1.0&preserve-view=true) and sign in with one of the roles listed in the prerequisites section. You also need to consent to the `Policy.ReadWrite.PermissionGrant` permission. ++ ```powershell + # change the profile to beta by using the `Select-MgProfile` command + Select-MgProfile -Name "beta" + ``` + ```powershell + Connect-MgGraph -Scopes "Policy.ReadWrite.PermissionGrant" + ``` ++## Retrieve the current value for the group owner consent policy ++Learn how to verify if your group owner consent setting has been authorized in other ways. ++1. 
Retrieve the current value for the group owner consent setting ++ ```powershell + Get-MgPolicyAuthorizationPolicy | select -ExpandProperty DefaultUserRolePermissions | ft PermissionGrantPoliciesAssigned + ``` +If `ManagePermissionGrantPoliciesForOwnedResource` is returned in `PermissionGrantPoliciesAssigned`, your group owner consent setting might have been authorized in other ways. ++1. Check if the policy is scoped to `group` +```powershell + Get-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "microsoft-all-application-permissions-for-group" | Select -ExpandProperty AdditionalProperties +``` +If `ResourceScopeType` == `group`, your group owner consent setting has been authorized in other ways. In addition, if the app consent policy for groups has been assigned `microsoft-pre-approval-apps-for-group`, it means the preapproval feature is enabled for your tenant. +++## List existing group owner consent policies ++It's a good idea to start by getting familiar with the existing group owner consent policies in your organization: ++1. List all group owner consent policies: ++ ```powershell + Get-MgPolicyPermissionGrantPolicy | ft Id, DisplayName, Description + ``` ++1. View the "include" condition sets of a policy: ++ ```powershell + Get-MgPolicyPermissionGrantPolicyInclude -PermissionGrantPolicyId "microsoft-all-application-permissions-for-group" | fl + ``` ++1. View the "exclude" condition sets: ++ ```powershell + Get-MgPolicyPermissionGrantPolicyExclude -PermissionGrantPolicyId "microsoft-all-application-permissions-for-group" | fl + ``` ++## Create a custom group owner consent policy ++Follow these steps to create a custom group owner consent policy: ++1. Create a new empty group owner consent policy. ++ ```powershell + New-MgPolicyPermissionGrantPolicy ` + -Id "my-custom-app-consent-policy-for-group" ` + -DisplayName "My first custom app consent policy for group" ` + -Description "This is a sample custom app consent policy for group." 
` + -AdditionalProperties @{includeAllPreApprovedApplications = $false; resourceScopeType = "group"} + ``` +1. Add "include" condition sets. ++ ```powershell + # Include delegated permissions classified "low", for apps from verified publishers + New-MgPolicyPermissionGrantPolicyInclude ` + -PermissionGrantPolicyId "my-custom-app-consent-policy-for-group" ` + -PermissionType "delegated" ` + -PermissionClassification "low" ` + -ClientApplicationsFromVerifiedPublisherOnly + ``` ++ Repeat this step to add more "include" condition sets. ++1. Optionally, add "exclude" condition sets. ++ ```powershell + # Retrieve the service principal for the Azure Management API + $azureApi = Get-MgServicePrincipal -Filter "servicePrincipalNames/any(n:n eq 'https://management.azure.com/')" ++ # Exclude delegated permissions for the Azure Management API + New-MgPolicyPermissionGrantPolicyExclude ` + -PermissionGrantPolicyId "my-custom-app-consent-policy-for-group" ` + -PermissionType "delegated" ` + -ResourceApplication $azureApi.AppId + ``` ++ Repeat this step to add more "exclude" condition sets. ++Once the app consent policy for group has been created, you can [allow group owner consent](configure-user-consent-groups.md) subject to this policy. ++## Delete a custom group owner consent policy ++1. The following shows how you can delete a custom group owner consent policy. ++ ```powershell + Remove-MgPolicyPermissionGrantPolicy -PermissionGrantPolicyId "my-custom-app-consent-policy-for-group" + ``` ++++To manage group owner consent policies, sign in to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer) with one of the roles listed in the prerequisites section. You also need to consent to the `Policy.ReadWrite.PermissionGrant` permission. ++## Retrieve the current value for the group owner consent policy ++Learn how to verify if your group owner consent setting has been authorized in other ways. +1. 
Retrieve the current policy value + ```http + GET /policies/authorizationPolicy + ``` + If `ManagePermissionGrantPoliciesForOwnedResource` appears, your group owner consent setting might have been authorized in other ways. ++1. Check if the policy is scoped to `group` + ```http + GET /policies/permissionGrantPolicies/microsoft-all-application-permissions-for-group + ``` + If `resourceScopeType` == `group`, your group owner consent setting has been authorized in other ways. In addition, if the app consent policy for groups has been assigned `microsoft-pre-approval-apps-for-group`, it means the preapproval feature is enabled for your tenant. ++## List existing group owner consent policies ++It's a good idea to start by getting familiar with the existing group owner consent policies in your organization: ++1. List all app consent policies: ++ ```http + GET /policies/permissionGrantPolicies + ``` ++1. View the "include" condition sets of a policy: ++ ```http + GET /policies/permissionGrantPolicies/microsoft-all-application-permissions-for-group/includes + ``` ++1. View the "exclude" condition sets: ++ ```http + GET /policies/permissionGrantPolicies/microsoft-all-application-permissions-for-group/excludes + ``` ++## Create a custom group owner consent policy ++Follow these steps to create a custom group owner consent policy: ++1. Create a new empty group owner consent policy. ++ ```http + POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies ++ { + "id": "my-custom-app-consent-policy-for-group", + "displayName": "My first custom app consent policy for group", + "description": "This is a sample custom app consent policy for group", + "includeAllPreApprovedApplications": false, + "resourceScopeType": "group" + } + ``` ++1. Add "include" condition sets. 
++ Include delegated permissions classified "low" for apps from verified publishers ++ ```http + POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-app-consent-policy-for-group/includes + + { + "permissionType": "delegated", + "permissionClassification": "low", + "clientApplicationsFromVerifiedPublisherOnly": true + } + ``` ++ Repeat this step to add more "include" condition sets. ++1. Optionally, add "exclude" condition sets. + Exclude delegated permissions for the Azure Management API (appId 46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b) + ```http + POST https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-app-consent-policy-for-group/excludes + + { + "permissionType": "delegated", + "resourceApplication": "46e6adf4-a9cf-4b60-9390-0ba6fb00bf6b" + } + ``` ++ Repeat this step to add more "exclude" condition sets. ++Once the group owner consent policy has been created, you can [allow group owner consent](configure-user-consent.md?tabs=azure-powershell#allow-user-consent-subject-to-an-app-consent-policy) subject to this policy. ++## Delete a custom group owner consent policy ++1. The following shows how you can delete a custom group owner consent policy. ++ ```http + DELETE https://graph.microsoft.com/v1.0/policies/permissionGrantPolicies/my-custom-app-consent-policy-for-group + ``` +++> [!WARNING] +> Deleted group owner consent policies cannot be restored. If you accidentally delete a custom group owner consent policy, you will need to re-create the policy. ++### Supported conditions ++The following table provides the list of supported conditions for group owner consent policies. ++| Condition | Description| +|:|:-| +| PermissionClassification | The [permission classification](configure-permission-classifications.md) for the permission being granted, or "all" to match with any permission classification (including permissions that aren't classified). Default is "all". 
| +| PermissionType | The permission type of the permission being granted. Use "application" for application permissions (for example, app roles) or "delegated" for delegated permissions. <br><br>**Note**: The value "delegatedUserConsentable" indicates delegated permissions that haven't been configured by the API publisher to require admin consent. This value may be used in built-in permission grant policies, but can't be used in custom permission grant policies. Required. | +| ResourceApplication | The **AppId** of the resource application (for example, the API) for which a permission is being granted, or "any" to match with any resource application or API. Default is "any". | +| Permissions | The list of permission IDs for the specific permissions to match with, or a list with the single value "all" to match with any permission. Default is the single value "all". <br> - Delegated permission IDs can be found in the **OAuth2Permissions** property of the API's ServicePrincipal object.<br> - Application permission IDs can be found in the **AppRoles** property of the API's ServicePrincipal object. | +| ClientApplicationIds | A list of **AppId** values for the client applications to match with, or a list with the single value "all" to match any client application. Default is the single value "all". | +| ClientApplicationTenantIds | A list of Azure Active Directory tenant IDs in which the client application is registered, or a list with the single value "all" to match with client apps registered in any tenant. Default is the single value "all". | +| ClientApplicationPublisherIds | A list of Microsoft Partner Network (MPN) IDs for [verified publishers](../develop/publisher-verification-overview.md) of the client application, or a list with the single value "all" to match with client apps from any publisher. Default is the single value "all". 
| +| ClientApplicationsFromVerifiedPublisherOnly | Set this switch to only match on client applications with a [verified publisher](../develop/publisher-verification-overview.md). Disable this switch (`-ClientApplicationsFromVerifiedPublisherOnly:$false`) to match on any client app, even if it doesn't have a verified publisher. Default is `$false`. | ++To get help or find answers to your questions: ++- [Azure AD on Microsoft Q&A](/answers/products/) |
active-directory | Migrate Adfs Application Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-application-activity.md | -The AD FS application activity report in the [Entra portal](https://entra.microsoft.com) lets you quickly identify which of your applications are capable of being migrated to Azure AD. It assesses all AD FS applications for compatibility with Azure AD, checks for any issues, and gives guidance on preparing individual applications for migration. With the AD FS application activity report, you can: +The AD FS application activity report in the [Microsoft Entra admin center](https://entra.microsoft.com) lets you quickly identify which of your applications are capable of being migrated to Azure AD. It assesses all AD FS applications for compatibility with Azure AD, checks for any issues, and gives guidance on preparing individual applications for migration. With the AD FS application activity report, you can: * **Discover AD FS applications and scope your migration.** The AD FS application activity report lists all AD FS applications in your organization that have had an active user login in the last 30 days. The report indicates an app's readiness for migration to Azure AD. The report doesn't display Microsoft-related relying parties in AD FS such as Office 365. For example, relying parties with the name 'urn:federation:MicrosoftOnline'. The AD FS application activity data is available to users who are assigned any o ## Discover AD FS applications that can be migrated -The AD FS application activity report is available in the [Entra portal](https://entra.microsoft.com) under Azure AD **Usage & insights** reporting. The AD FS application activity report analyzes each AD FS application to determine if it can be migrated as-is, or if additional review is needed. 
+The AD FS application activity report is available in the [Microsoft Entra admin center](https://entra.microsoft.com) under Azure AD **Usage & insights** reporting. The AD FS application activity report analyzes each AD FS application to determine if it can be migrated as-is, or if additional review is needed. -1. Sign in to the [Entra portal](https://entra.microsoft.com) with an admin role that has access to AD FS application activity data (global administrator, reports reader, security reader, application administrator, or cloud application administrator). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with an admin role that has access to AD FS application activity data (global administrator, reports reader, security reader, application administrator, or cloud application administrator). 2. Select **Azure Active Directory**, and then select **Enterprise applications**. |
active-directory | Migrate Adfs Apps Stages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-apps-stages.md | Update the configuration of your production app to point to your production Azur Your line-of-business apps are those that your organization developed or those that are a standard packaged product. -Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Entra portal](https://entra.microsoft.com/#home). +Line-of-business apps that use OAuth 2.0, OpenID Connect, or WS-Federation can be integrated with Azure AD as [app registrations](../develop/quickstart-register-app.md). Integrate custom apps that use SAML 2.0 or WS-Federation as [non-gallery applications](add-application-portal.md) on the enterprise applications page in the [Microsoft Entra admin center](https://entra.microsoft.com/#home). ## Next steps |
active-directory | Migrate Adfs Represent Security Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-represent-security-policies.md | Explicit group authorization in AD FS: To map this rule to Azure AD: -1. In the [Entra portal](https://entra.microsoft.com/#home), [create a user group](../fundamentals/how-to-manage-groups.md) that corresponds to the group of users from AD FS. +1. In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), [create a user group](../fundamentals/how-to-manage-groups.md) that corresponds to the group of users from AD FS. 1. Assign app permissions to the group: :::image type="content" source="media/migrate-adfs-represent-security-policies/allow-a-group-explicitly-2.png" alt-text="Screenshot shows how to add a user assignment to the app."::: Explicit user authorization in AD FS: To map this rule to Azure AD: -* In the [Entra portal](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below: +* In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), add a user to the app through the Add Assignment tab of the app as shown below: :::image type="content" source="media/migrate-adfs-represent-security-policies/authorize-a-specific-user-2.png" alt-text="Screenshot shows My SaaS apps in Azure."::: The following are examples of types of MFA rules in AD FS, and how you can map t MFA rule settings in AD FS: ### Example 1: Enforce MFA based on users/groups Emit attributes as Claims rule in AD FS: To map the rule to Azure AD: -1. In the [Entra portal](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration: +1. 
In the [Microsoft Entra admin center](https://entra.microsoft.com/#home), select **Enterprise Applications** and then **Single sign-on** to view the SAML-based sign-on configuration: :::image type="content" source="media/migrate-adfs-represent-security-policies/map-emit-attributes-as-claims-rule-2.png" alt-text="Screenshot shows the Single sign-on page for your Enterprise Application."::: In this table, we've listed some useful Permit and Except options and how they m | From Devices with Specific Trust Level| Set this from the **Device State** control under Assignments -> Conditions| Use the **Exclude** option under Device State Condition and Include **All devices** | | With Specific Claims in the Request| This setting can't be migrated| This setting can't be migrated | -Here's an example of how to configure the Exclude option for trusted locations in the Entra portal: +Here's an example of how to configure the Exclude option for trusted locations in the Microsoft Entra admin center: :::image type="content" source="media/migrate-adfs-represent-security-policies/map-built-in-access-control-policies-3.png" alt-text="Screenshot of mapping access control policies."::: Your existing external users can be set up in these two ways in AD FS: As you progress with your migration, you can take advantage of the benefits that [Azure AD B2B](../external-identities/what-is-b2b.md) offers by migrating these users to use their own corporate identity when such an identity is available. This streamlines the process of signing in for those users, as they're often signed in with their own corporate sign-in. Your organization's administration is easier as well, by not having to manage accounts for external users. - **Federated external Identities**: If you're currently federating with an external organization, you have a few approaches to take:- - [Add Azure Active Directory B2B collaboration users in the Entra portal](../external-identities/add-users-administrator.md). 
You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to. + - [Add Azure Active Directory B2B collaboration users in the Microsoft Entra admin center](../external-identities/add-users-administrator.md). You can proactively send B2B collaboration invitations from the Azure AD administrative portal to the partner organization for individual members to continue using the apps and assets they're used to. - [Create a self-service B2B sign-up workflow](../external-identities/self-service-portal.md) that generates a request for individual users at your partner organization using the B2B invitation API. No matter how your existing external users are configured, they likely have permissions that are associated with their account, either in group membership or specific permissions. Evaluate whether these permissions need to be migrated or cleaned up. |
active-directory | Migrate Adfs Saml Based Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-adfs-saml-based-sso.md | Apps that you can move easily today include SAML 2.0 apps that use the standard The following require more configuration steps to migrate to Azure AD: * Custom authorization or multi-factor authentication (MFA) rules in AD FS. You configure them using the [Azure AD Conditional Access](../conditional-access/overview.md) feature.-* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Entra portal interface. +* Apps with multiple Reply URL endpoints. You configure them in Azure AD using PowerShell or the Microsoft Entra admin center interface. * WS-Federation apps such as SharePoint apps that require SAML version 1.1 tokens. You can configure them manually using PowerShell. You can also add a preintegrated generic template for SharePoint and SAML 1.1 applications from the gallery. We support the SAML 2.0 protocol. * Complex claims issuance transform rules. For information about supported claims mappings, see: * [Claims mapping in Azure Active Directory](../develop/saml-claims-customization.md). Migration requires assessing how the application is configured on-premises, and The following table describes some of the most common mapping of settings from an AD FS Relying Party Trust to an Azure AD Enterprise Application: * AD FS: Find the setting in the AD FS Relying Party Trust for the app. Right-click the relying party and select Properties.-* Azure AD: The setting is configured within [Entra portal](https://entra.microsoft.com/#home) in each application's SSO properties. +* Azure AD: The setting is configured within [Microsoft Entra admin center](https://entra.microsoft.com/#home) in each application's SSO properties. 
| Configuration setting| AD FS| How to configure in Azure AD| SAML Token | | - | - | - | - | The following table describes some of the most common mapping of settings betwee Configure your applications to point to Azure AD versus AD FS for SSO. Here, we're focusing on SaaS apps that use the SAML protocol. However, this concept extends to custom line-of-business apps as well. > [!NOTE]-> The configuration values for Azure AD follow the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Entra portal](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**: +> The configuration values for Azure AD follow the pattern where your Azure Tenant ID replaces {tenant-id} and the Application ID replaces {application-id}. You find this information in the [Microsoft Entra admin center](https://entra.microsoft.com/#home) under **Azure Active Directory > Properties**: * Select Directory ID to see your Tenant ID. * Select Application ID to see your Application ID. SaaS apps need to know where to send authentication requests and how to validate | - | - | - | | **IdP Sign-on URL** <p>Sign-on URL of the IdP from the app's perspective (where the user is redirected for sign-in).| The AD FS sign-on URL is the AD FS federation service name followed by "/adfs/ls/." <p>For example: `https://fs.contoso.com/adfs/ls/`| Replace {tenant-id} with your tenant ID. 
<p>For apps that use the SAML-P protocol: [https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/{tenant-id}/wsfed](https://login.microsoftonline.com/{tenant-id}/wsfed) | | **IdP sign-out URL**<p>Sign-out URL of the IdP from the app's perspective (where the user is redirected when they choose to sign out of the app).| The sign-out URL is either the same as the sign-on URL, or the same URL with "wa=wsignout1.0" appended. For example: `https://fs.contoso.com/adfs/ls/?wa=wsignout1.0`| Replace {tenant-id} with your tenant ID.<p>For apps that use the SAML-P protocol:<p>[https://login.microsoftonline.com/{tenant-id}/saml2](https://login.microsoftonline.com/{tenant-id}/saml2) <p>For apps that use the WS-Federation protocol: [https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0](https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0) |-| **Token signing certificate**<p>The IdP uses the private key of the certificate to sign issued tokens. 
It verifies that the token came from the same IdP that the app is configured to trust.| Find the AD FS token signing certificate in AD FS Management under **Certificates**.| Find it in the Microsoft Entra admin center in the application's **Single sign-on properties** under the header **SAML Signing Certificate**. There, you can download the certificate for upload to the app. <p>If the application has more than one certificate, you can find all certificates in the federation metadata XML file. | | **Identifier/ "issuer"**<p>Identifier of the IdP from the app's perspective (sometimes called the "issuer ID").<p>In the SAML token, the value appears as the Issuer element.| The identifier for AD FS is usually the federation service identifier in AD FS Management under **Service > Edit Federation Service Properties**. For example: `http://fs.contoso.com/adfs/services/trust`| Replace {tenant-id} with your tenant ID.<p>https:\//sts.windows.net/{tenant-id}/ | | **IdP federation metadata**<p>Location of the IdP's publicly available federation metadata. (Some apps use federation metadata as an alternative to the administrator configuring URLs, identifier, and token signing certificate individually.)| Find the AD FS federation metadata URL in AD FS Management under **Service > Endpoints > Metadata > Type: Federation Metadata**. For example: `https://fs.contoso.com/FederationMetadat). | |
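The endpoint mappings in the table above all follow a simple substitution pattern. As a rough illustration (not part of the migration guide — the helper name `azure_ad_endpoints` and the placeholder tenant ID are made up for this sketch), the documented URL patterns can be assembled like this:

```python
# Sketch only: builds the Azure AD IdP URLs described in the mapping table by
# substituting a tenant ID into the documented patterns. The helper name and
# the placeholder tenant ID are illustrative, not part of the article.

SAML_SIGN_ON = "https://login.microsoftonline.com/{tenant_id}/saml2"
WSFED_SIGN_ON = "https://login.microsoftonline.com/{tenant_id}/wsfed"
WSFED_SIGN_OUT = "https://login.microsoftonline.com/common/wsfederation?wa=wsignout1.0"
ISSUER = "https://sts.windows.net/{tenant_id}/"

def azure_ad_endpoints(tenant_id: str, protocol: str) -> dict:
    """Return the IdP URLs an app needs, per the table above."""
    if protocol == "saml-p":
        # For SAML-P, sign-on and sign-out use the same /saml2 endpoint.
        url = SAML_SIGN_ON.format(tenant_id=tenant_id)
        return {"sign_on_url": url, "sign_out_url": url,
                "issuer": ISSUER.format(tenant_id=tenant_id)}
    if protocol == "ws-fed":
        return {"sign_on_url": WSFED_SIGN_ON.format(tenant_id=tenant_id),
                "sign_out_url": WSFED_SIGN_OUT,
                "issuer": ISSUER.format(tenant_id=tenant_id)}
    raise ValueError(f"unknown protocol: {protocol}")

endpoints = azure_ad_endpoints("00000000-0000-0000-0000-000000000000", "saml-p")
```

In practice you would read the tenant ID from **Azure Active Directory > Properties** as the note above describes, rather than hard-coding it.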
active-directory | Migrate Okta Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-federation.md | |
active-directory | Migrate Okta Sync Provisioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/migrate-okta-sync-provisioning.md | |
active-directory | Prevent Domain Hints With Home Realm Discovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/prevent-domain-hints-with-home-realm-discovery.md | Last updated 03/16/2023 zone_pivot_groups: home-realm-discovery--+ #customer intent: As an admin, I want to disable auto-acceleration to federated IDP during sign in using Home Realm Discovery policy # Disable auto-acceleration sign-in |
active-directory | Restore Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/restore-application.md | |
active-directory | Powershell Export Apps With Secrets Beyond Required | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/scripts/powershell-export-apps-with-secrets-beyond-required.md | |
active-directory | What Is Access Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/what-is-access-management.md | |
active-directory | How To Assign App Role Managed Identity Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md | |
active-directory | Qs Configure Powershell Windows Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md | |
active-directory | Services Id Authentication Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-id-authentication-support.md | The following services support Azure AD authentication. New services are added t | Azure App Services | [Configure your App Service or Azure Functions app to use Azure AD login](../../app-service/configure-authentication-provider-aad.md) | | Azure Batch | [Authenticate Batch service solutions with Active Directory](../../batch/batch-aad-auth.md) | | Azure Container Registry | [Authenticate with an Azure container registry](../../container-registry/container-registry-authentication.md) |-| Azure Cognitive Services | [Authenticate requests to Azure Cognitive Services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) | +| Azure AI services | [Authenticate requests to Azure AI services](../../ai-services/authentication.md?tabs=powershell#authenticate-with-azure-active-directory) | | Azure Communication Services | [Authenticate to Azure Communication Services](../../communication-services/concepts/authentication.md) | | Azure Cosmos DB | [Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account](../../cosmos-db/how-to-setup-rbac.md) | | Azure Databricks | [Authenticate using Azure Active Directory tokens](/azure/databricks/dev-tools/api/latest/aad/) |
active-directory | Tutorial Windows Vm Access Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md | SQL DB requires unique Azure AD display names. With this, the Azure AD accounts > [!NOTE] > `VMName` in the following command is the name of the VM that you enabled system assigned identity on in the prerequisites section.+ > + > If you encounter the error "Principal `VMName` has a duplicate display name", append WITH OBJECT_ID='xxx' to the CREATE USER statement. ```sql ALTER ROLE db_datareader ADD MEMBER [VMName] |
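To make the duplicate-display-name workaround in that note concrete, here is a small sketch (not part of the tutorial; the helper name `create_user_sql` is hypothetical) that composes the T-SQL, appending `WITH OBJECT_ID` only when a specific object ID must disambiguate the principal:

```python
# Sketch only: composes CREATE USER ... FROM EXTERNAL PROVIDER for an Azure AD
# principal. The WITH OBJECT_ID clause is the workaround the note above
# describes for "Principal has a duplicate display name". Helper name, VM name,
# and object ID are illustrative placeholders.
from typing import Optional

def create_user_sql(vm_name: str, object_id: Optional[str] = None) -> str:
    """Build the CREATE USER statement, optionally pinning the Azure AD object ID."""
    stmt = f"CREATE USER [{vm_name}] FROM EXTERNAL PROVIDER"
    if object_id is not None:
        # Disambiguates principals that share a display name.
        stmt += f" WITH OBJECT_ID='{object_id}'"
    return stmt + ";"
```

You would run the resulting statement in the database before the `ALTER ROLE db_datareader ADD MEMBER [VMName]` step shown above.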
active-directory | Cross Tenant Synchronization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/cross-tenant-synchronization-overview.md | The following diagram shows how you can use cross-tenant synchronization to enab :::image type="content" source="./media/cross-tenant-synchronization-overview/cross-tenant-synchronization-diagram.png" alt-text="Diagram that shows synchronization of users for multiple tenants." lightbox="./media/cross-tenant-synchronization-overview/cross-tenant-synchronization-diagram.png"::: ## Who should use?- - Organizations that own multiple Azure AD tenants and want to streamline intra-organization cross-tenant application access. - Cross-tenant synchronization is **not** currently suitable for use across organizational boundaries. Does cross-tenant synchronization support deprovisioning users? - Remove the user from a group that is assigned to the cross-tenant synchronization configuration - An attribute on the user changes such that they no longer meet the scoping filter conditions defined on the cross-tenant synchronization configuration -- Currently only regular users, Helpdesk Admins and User Account Admins can be deleted. Users with other Azure AD roles such as directory reader currently cannot be deleted by cross-tenant synchronization. This is subject to change in the future.- - If the user is blocked from sign-in in the source tenant (accountEnabled = false) they will be blocked from sign-in in the target. This is not a deletion, but an update to the accountEnabled property. Does cross-tenant synchronization support restoring users?
active-directory | Multi Tenant Organization Configure Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-configure-graph.md | + + Title: Configure a multi-tenant organization using the Microsoft Graph API (Preview) +description: Learn how to configure a multi-tenant organization in Azure Active Directory using the Microsoft Graph API. +++++++ Last updated : 08/22/2023++++#Customer intent: As a dev, devops, or it admin, I want to +++# Configure a multi-tenant organization using the Microsoft Graph API (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++This article describes the key steps to configure a multi-tenant organization using the Microsoft Graph API. This article uses an example owner tenant named *Cairo* and two member tenants named *Berlin* and *Athens*. ++If you instead want to use the Microsoft 365 admin center to configure a multi-tenant organization, see [Set up a multi-tenant org in Microsoft 365 (Preview)](/microsoft-365/enterprise/set-up-multi-tenant-org) and [Join or leave a multi-tenant organization in Microsoft 365 (Preview)](/microsoft-365/enterprise/join-leave-multi-tenant-org). ++## Prerequisites ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++- Azure AD Premium P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements). +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization. 
+- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions. ++![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant** ++- Azure AD Premium P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements). +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization. +- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions. ++## Step 1: Sign in to the owner tenant ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++These steps describe how to use Microsoft Graph Explorer (recommended), but you can also use Postman, or another REST API client. ++1. Start [Microsoft Graph Explorer tool](https://aka.ms/ge). ++1. Sign in to the owner tenant. ++1. Select your profile and then select **Consent to permissions**. ++1. Consent to the following required permissions. ++ - `MultiTenantOrganization.ReadWrite.All` + - `Policy.Read.All` + - `Policy.ReadWrite.CrossTenantAccess` + - `Application.ReadWrite.All` + - `Directory.ReadWrite.All` ++## Step 2: Create a multi-tenant organization ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++1. In the owner tenant, use the [Create multiTenantOrganization](/graph/api/tenantrelationship-put-multitenantorganization) API to create your multi-tenant organization. This operation can take a few minutes. ++ **Request** ++ ```http + PUT https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization + { + "displayName": "Cairo" + } + ``` ++1. Use the [Get multiTenantOrganization](/graph/api/multitenantorganization-get) API to check that the operation has completed before proceeding. 
++ **Request** ++ ```http + GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization + ``` + + **Response** ++ ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/$entity", + "id": "{mtoId}", + "createdDateTime": "2023-04-05T08:27:10Z", + "displayName": "Cairo", + "description": null + } + ``` ++## Step 3: Add tenants ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++1. In the owner tenant, use the [Add multiTenantOrganizationMember](/graph/api/multitenantorganization-post-tenants) API to add tenants to your multi-tenant organization. ++ **Request** ++ ```http + POST https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants + { + "tenantId": "{memberTenantIdB}", + "displayName": "Berlin" + } + ``` ++ **Request** ++ ```http + POST https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants + { + "tenantId": "{memberTenantIdA}", + "displayName": "Athens" + } + ``` ++1. Use the [List multiTenantOrganizationMembers](/graph/api/multitenantorganization-list-tenants) API to verify that the operation has completed before proceeding. 
++ **Request** ++ ```http + GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants + ``` ++ **Response** ++ ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants", + "value": [ + { + "tenantId": "{ownerTenantId}", + "displayName": "Cairo", + "addedDateTime": "2023-04-05T08:27:10Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "owner", + "state": "active", + "transitionDetails": null + }, + { + "tenantId": "{memberTenantIdB}", + "displayName": "Berlin", + "addedDateTime": "2023-04-05T08:30:44Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "member", + "state": "pending", + "transitionDetails": { + "desiredState": "active", + "desiredRole": "member", + "status": "notStarted", + "details": null + } + }, + { + "tenantId": "{memberTenantIdA}", + "displayName": "Athens", + "addedDateTime": "2023-04-05T08:31:03Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "member", + "state": "pending", + "transitionDetails": { + "desiredState": "active", + "desiredRole": "member", + "status": "notStarted", + "details": null + } + } + ] + } + ``` ++## Step 4: (Optional) Change the role of a tenant ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++By default, tenants added to the multi-tenant organization are member tenants. Optionally, you can change them to owner tenants, which allow them to add other tenants to the multi-tenant organization. You can also change an owner tenant to a member tenant. ++1. In the owner tenant, use the [Update multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-update) API to change a member tenant to an owner tenant. ++ **Request** ++ ```http + PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdB} + { + "role": "owner" + } + ``` ++1. 
Use the [Get multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-get) API to verify the change. ++ **Request** ++ ```http + GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdB} + ``` ++ **Response** ++ ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants/$entity", + "tenantId": "{memberTenantIdB}", + "displayName": "Berlin", + "addedDateTime": "2023-04-05T08:30:44Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "member", + "state": "pending", + "transitionDetails": { + "desiredState": "active", + "desiredRole": "owner", + "status": "notStarted", + "details": null + } + } + ``` ++1. Use the [Update multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-update) API to change an owner tenant to a member tenant. ++ **Request** ++ ```http + PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdB} + { + "role": "member" + } + ``` ++## Step 5: (Optional) Remove a member tenant ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++You can remove any member tenant, including your own. You can't remove owner tenants. Also, you can't remove the original creator tenant, even if it has been changed from owner to member. ++1. In the owner tenant, use the [Remove multiTenantOrganizationMember](/graph/api/multitenantorganization-delete-tenants) API to remove any member tenant. This operation takes a few minutes. ++ **Request** ++ ```http + DELETE https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD} + ``` + +1. Use the [Get multiTenantOrganizationMember](/graph/api/multitenantorganizationmember-get) API to verify the change. 
++ **Request** ++ ```http + GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD} + ``` + + If you check immediately after calling the remove API, it will show a response similar to the following. ++ **Response** ++ ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants/$entity", + "tenantId": "{memberTenantIdD}", + "displayName": "Denver", + "addedDateTime": "2023-04-05T08:40:52Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "member", + "state": "pending", + "transitionDetails": { + "desiredState": "removed", + "desiredRole": "member", + "status": "notStarted", + "details": null + } + } + ``` ++ After the remove operation completes, the response is similar to the following. This is an expected error message. It indicates that the tenant has been removed from the multi-tenant organization. ++ **Response** ++ ```http + { + "error": { + "code": "Directory_ObjectNotFound", + "message": "Unable to read the company information from the directory.", + "innerError": { + "date": "2023-04-05T08:44:07", + "request-id": "75216961-c21d-49ed-8c1f-2cfe51f920f1", + "client-request-id": "30129b19-51e8-41ed-8ba0-1501bac03802" + } + } + } + ``` +## Step 6: Wait ++![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant** ++- To allow for asynchronous processing, wait a **minimum of 2 hours** between creation and joining a multi-tenant organization. ++## Step 7: Sign in to a member tenant ++![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant** ++The Cairo tenant created a multi-tenant organization and added the Berlin and Athens tenants. In these steps you sign in to the Berlin tenant and join the multi-tenant organization created by Cairo. ++1. Start [Microsoft Graph Explorer tool](https://aka.ms/ge). ++1. Sign in to the member tenant. ++1. 
Select your profile and then select **Consent to permissions**. ++1. Consent to the following required permissions. ++ - `MultiTenantOrganization.ReadWrite.All` + - `Policy.Read.All` + - `Policy.ReadWrite.CrossTenantAccess` + - `Application.ReadWrite.All` + - `Directory.ReadWrite.All` ++## Step 8: Join the multi-tenant organization ++![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant** ++1. In the member tenant, use the [Update multiTenantOrganizationJoinRequestRecord](/graph/api/multitenantorganizationjoinrequestrecord-update) API to join the multi-tenant organization. ++ **Request** ++ ```http + PATCH https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/joinRequest + { + "addedByTenantId": "{ownerTenantId}" + } + ``` ++1. Use the [Get multiTenantOrganizationJoinRequestRecord](/graph/api/multitenantorganizationjoinrequestrecord-get) API to verify the join. ++ **Request** ++ ```http + GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/joinRequest + ``` ++ This operation takes a few minutes. If you check immediately after calling the API to join, the response will be similar to the following. ++ **Response** ++ ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/joinRequest/$entity", + "id": "aa87e8a4-9c88-4e67-971d-79c9e43319a3", + "addedByTenantId": "{ownerTenantId}", + "memberState": "active", + "role": "member", + "transitionDetails": { + "desiredMemberState": "active", + "status": "notStarted", + "details": "" + } + } + ``` ++ After the join operation completes, the response is similar to the following. 
++ **Response** ++ ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/joinRequest/$entity", + "id": "aa87e8a4-9c88-4e67-971d-79c9e43319a3", + "addedByTenantId": "{ownerTenantId}", + "memberState": "active", + "role": "member", + "transitionDetails": null + } + ``` ++1. Use the [List multiTenantOrganizationMembers](/graph/api/multitenantorganization-list-tenants) API to check the multi-tenant organization itself. It should reflect the join operation. ++ **Request** ++ ```http + GET https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants + ``` ++ **Response** + + ```http + { + "@odata.context": "https://graph.microsoft.com/beta/$metadata#tenantRelationships/multiTenantOrganization/tenants", + "value": [ + { + "tenantId": "{memberTenantIdA}", + "displayName": "Athens", + "addedDateTime": "2023-04-05T10:14:35Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "member", + "state": "active", + "transitionDetails": null + }, + { + "tenantId": "{memberTenantIdB}", + "displayName": "Berlin", + "addedDateTime": "2023-04-05T08:30:44Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "member", + "state": "active", + "transitionDetails": null + }, + { + "tenantId": "{ownerTenantId}", + "displayName": "Cairo", + "addedDateTime": "2023-04-05T08:27:10Z", + "joinedDateTime": null, + "addedByTenantId": "{ownerTenantId}", + "role": "owner", + "state": "active", + "transitionDetails": null + } + ] + } + ``` ++1. To allow for asynchronous processing, wait **up to 4 hours** for the join operation to complete. ++## Step 9: (Optional) Leave the multi-tenant organization ++![Icon for the member tenant.](./media/common/icon-tenant-member.png)<br/>**Member tenant** ++You can leave a multi-tenant organization that you have joined. 
The process for removing your own tenant from the multi-tenant organization is the same as the process for removing another tenant from the multi-tenant organization. ++If your tenant is the only multi-tenant organization owner, you must designate a new tenant to be the multi-tenant organization owner. For steps, see [Step 4: (Optional) Change the role of a tenant](#step-4-optional-change-the-role-of-a-tenant). ++- In the tenant, use the [Remove multiTenantOrganizationMember](/graph/api/multitenantorganization-delete-tenants) API to remove the tenant. This operation takes a few minutes. ++ **Request** ++ ```http + DELETE https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD} + ``` ++## Step 10: (Optional) Delete the multi-tenant organization ++![Icon for the owner tenant.](./media/common/icon-tenant-owner.png)<br/>**Owner tenant** ++You delete a multi-tenant organization by removing all tenants. The process for removing the final owner tenant is the same as the process for removing all other member tenants. ++- In the final owner tenant, use the [Remove multiTenantOrganizationMember](/graph/api/multitenantorganization-delete-tenants) API to remove the tenant. This operation takes a few minutes. ++ **Request** ++ ```http + DELETE https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization/tenants/{memberTenantIdD} + ``` ++## Next steps ++- [Set up a multi-tenant org in Microsoft 365 (Preview)](/microsoft-365/enterprise/set-up-multi-tenant-org) +- [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs) +- [Configure multi-tenant organization templates using the Microsoft Graph API (Preview)](./multi-tenant-organization-configure-templates.md) |
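As an aside, the join-and-verify flow in Step 8 above lends itself to simple automation: PATCH the join request, then poll the GET endpoint until the asynchronous join finishes. The following Python sketch is illustrative only; the helper names are hypothetical, and it inspects the response shapes shown above rather than calling Microsoft Graph.

```python
import json

GRAPH_BASE = "https://graph.microsoft.com/beta/tenantRelationships/multiTenantOrganization"

def join_body(owner_tenant_id: str) -> str:
    """JSON body for the PATCH against GRAPH_BASE + "/joinRequest"."""
    return json.dumps({"addedByTenantId": owner_tenant_id})

def join_completed(join_request: dict) -> bool:
    """The GET response carries a transitionDetails object while the
    asynchronous join is still running; it becomes null (None in Python)
    once the join operation finishes."""
    return join_request.get("transitionDetails") is None

# Response shapes taken from the examples above.
in_progress = {"memberState": "active", "role": "member",
               "transitionDetails": {"desiredMemberState": "active", "status": "notStarted"}}
finished = {"memberState": "active", "role": "member", "transitionDetails": None}
```

A real script would issue the GET on a timer and stop once `join_completed` returns true, allowing for the up-to-4-hour processing window the article describes.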
active-directory | Multi Tenant Organization Configure Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-configure-templates.md | + + Title: Configure multi-tenant organization templates using Microsoft Graph API (Preview) +description: Learn how to configure multi-tenant organization templates in Azure Active Directory using the Microsoft Graph API. +++++++ Last updated : 08/22/2023++++#Customer intent: As a dev, devops, or it admin, I want to +++# Configure multi-tenant organization templates using the Microsoft Graph API (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++This article describes how to configure a policy template for your multi-tenant organization. ++## Prerequisites ++- Azure AD Premium P1 or P2 license. For more information, see [License requirements](./multi-tenant-organization-overview.md#license-requirements). +- [Security Administrator](../roles/permissions-reference.md#security-administrator) role to configure cross-tenant access settings and templates for the multi-tenant organization. +- [Global Administrator](../roles/permissions-reference.md#global-administrator) role to consent to required permissions. ++## Cross-tenant access policy partner template ++The [cross-tenant access partner configuration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) handles trust settings and automatic user consent settings between partner tenants. For example, you can use these settings to trust multi-factor authentication claims for inbound users from the target partner tenant. 
With the template in an unconfigured state, partner configurations for partner tenants in the multi-tenant organization won't be amended, with all trust settings passed through from default settings. However, if you configure the template, then partner configurations will be amended to correspond to the policy template. ++### Configure inbound and outbound automatic redemption ++To specify which trust settings and automatic user consent settings to apply to your policy template, use the [Update multiTenantOrganizationPartnerConfigurationTemplate](/graph/api/multitenantorganizationpartnerconfigurationtemplate-update) API. If you create or join a multi-tenant organization using the Microsoft 365 admin center, this configuration is handled automatically. ++**Request** ++```http +PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration ++{ + "inboundTrust": { + "isMfaAccepted": true, + "isCompliantDeviceAccepted": true, + "isHybridAzureADJoinedDeviceAccepted": true + }, + "automaticUserConsentSettings": { + "inboundAllowed": true, + "outboundAllowed": true + }, + "templateApplicationLevel": "newPartners,existingPartners" +} +``` ++### Disable the template for existing partners ++To apply this template only to new multi-tenant organization members and exclude existing partners, set the `templateApplicationLevel` parameter to `newPartners` only. ++**Request** ++```http +PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration ++{ + "inboundTrust": { + "isMfaAccepted": true, + "isCompliantDeviceAccepted": true, + "isHybridAzureADJoinedDeviceAccepted": true + }, + "automaticUserConsentSettings": { + "inboundAllowed": true, + "outboundAllowed": true + }, + "templateApplicationLevel": "newPartners" +} +``` ++### Disable the template completely ++To disable the template completely, set the `templateApplicationLevel` parameter to an empty string (`""`).
++**Request** ++```http +PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration ++{ + "inboundTrust": { + "isMfaAccepted": true, + "isCompliantDeviceAccepted": true, + "isHybridAzureADJoinedDeviceAccepted": true + }, + "automaticUserConsentSettings": { + "inboundAllowed": true, + "outboundAllowed": true + }, + "templateApplicationLevel": "" +} +``` ++### Reset the template ++To reset the template to its default state (decline all trust and automatic user consent), use the [multiTenantOrganizationPartnerConfigurationTemplate: resetToDefaultSettings](/graph/api/multitenantorganizationpartnerconfigurationtemplate-resettodefaultsettings) API. ++```http +POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationPartnerConfiguration/resetToDefaultSettings +``` ++## Cross-tenant synchronization template ++The identity synchronization policy governs [cross-tenant synchronization](cross-tenant-synchronization-overview.md), which allows you to share users and groups across tenants in your organization. You can use these settings to allow inbound user synchronization. With the template in an unconfigured state, the identity synchronization policy for partner tenants in the multi-tenant organization won't be amended. However, if you configure the template, then the identity synchronization policy will be amended to correspond to the policy template. ++### Configure inbound user synchronization ++To allow inbound user synchronization in the policy template, use the [Update multiTenantOrganizationIdentitySyncPolicyTemplate](/graph/api/multitenantorganizationidentitysyncpolicytemplate-update) API. If you create or join a multi-tenant organization using the Microsoft 365 admin center, this configuration is handled automatically.
++**Request** ++```http +PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization ++{ + "userSyncInbound": { + "isSyncAllowed": true + }, + "templateApplicationLevel": "newPartners,existingPartners" +} +``` ++### Disable the template for existing partners ++To apply this template only to new multi-tenant organization members and exclude existing partners, set the `templateApplicationLevel` parameter to `newPartners` only. ++**Request** ++```http +PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization ++{ + "userSyncInbound": { + "isSyncAllowed": true + }, + "templateApplicationLevel": "newPartners" +} +``` ++### Disable the template completely ++To disable the template completely, set the `templateApplicationLevel` parameter to an empty string (`""`). ++**Request** ++```http +PATCH https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization ++{ + "userSyncInbound": { + "isSyncAllowed": true + }, + "templateApplicationLevel": "" +} +``` ++### Reset the template ++To reset the template to its default state (decline inbound synchronization), use the [multiTenantOrganizationIdentitySyncPolicyTemplate: resetToDefaultSettings](/graph/api/multitenantorganizationidentitysyncpolicytemplate-resettodefaultsettings) API. ++**Request** ++```http +POST https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy/templates/multiTenantOrganizationIdentitySynchronization/resetToDefaultSettings +``` + +## Next steps ++- [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md) |
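The three `templateApplicationLevel` variants shown in this article differ only in one field of the PATCH body. The following Python sketch makes that concrete; it is a hypothetical helper that only constructs the request bodies shown above and does not call Microsoft Graph.

```python
import json

# Endpoint taken from the requests above.
TEMPLATE_URL = ("https://graph.microsoft.com/beta/policies/crossTenantAccessPolicy"
                "/templates/multiTenantOrganizationIdentitySynchronization")

def sync_template_body(apply_to: str) -> str:
    """PATCH body for the identity synchronization policy template.
    apply_to is "newPartners,existingPartners", "newPartners",
    or "" (template disabled), per the variants in this article."""
    return json.dumps({
        "userSyncInbound": {"isSyncAllowed": True},
        "templateApplicationLevel": apply_to,
    })

all_partners = sync_template_body("newPartners,existingPartners")  # new and existing partners
new_only = sync_template_body("newPartners")                       # exclude existing partners
disabled = sync_template_body("")                                  # template disabled
```

The same pattern applies to the partner configuration template, with the trust and consent settings in place of `userSyncInbound`.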
active-directory | Multi Tenant Organization Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-known-issues.md | + + Title: Known issues for multi-tenant organizations (Preview) +description: Learn about known issues when you work with multi-tenant organizations in Azure Active Directory. +++++++ Last updated : 08/22/2023++++#Customer intent: As a dev, devops, or it admin, I want to +++# Known issues for multi-tenant organizations (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++This article discusses known issues to be aware of when you work with multi-tenant organization functionality across Azure AD and Microsoft 365. To provide feedback about the multi-tenant organization functionality on UserVoice, see [Azure Active Directory (Azure AD) UserVoice](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789?category_id=360892). We watch UserVoice closely so that we can improve the service. ++## Scope ++The experiences and issues described in this article have the following scope. 
++| Scope | Description | +| | | +| In scope | - Azure AD administrator experiences and issues related to multi-tenant organizations to support seamless collaboration experiences in new Teams, with reciprocally provisioned B2B members | +| Related scope | - Microsoft 365 admin center experiences and issues related to multi-tenant organizations<br/>- Microsoft 365 multi-tenant organization people search experiences and issues<br/>- Cross-tenant synchronization issues related to Microsoft 365 | +| Out of scope | - Cross-tenant synchronization unrelated to Microsoft 365<br/>- End user experiences in new Teams<br/>- End user experiences in Power BI<br/>- Tenant migration or consolidation | +| Unsupported scenarios | - Seamless collaboration experience across multi-tenant organizations in classic Teams<br/>- Self-service for multi-tenant organizations larger than 5 tenants or 100,000 internal users per tenant<br/>- Using provisioning or synchronization engines other than Azure AD cross-tenant synchronization<br/>- Multi-tenant organizations in Azure Government or Microsoft Azure operated by 21Vianet<br/>- Cross-cloud multi-tenant organizations | ++## Multi-tenant organization related issues ++- Allow for at least 2 hours between the creation of a multi-tenant organization and any tenant joining the multi-tenant organization. ++- Allow for up to 4 hours between the submission of a multi-tenant organization join request and that join request succeeding and finishing. ++- Self-service of multi-tenant organization functionality is limited to a maximum of 5 tenants and 100,000 internal users per tenant. To request an increase in these limits, submit an Azure AD or Microsoft 365 admin center support request. ++- In the Microsoft Graph APIs, the default limits of 5 tenants and 100,000 internal users per tenant are only enforced at the time of joining. In Microsoft 365 admin center, the default limits are enforced at multi-tenant organization creation time and at time of joining.
++- There are multiple reasons why a join request might fail. If Microsoft 365 admin center doesn't indicate why a join request isn't succeeding, try examining the join request response by using the Microsoft Graph APIs or Microsoft Graph Explorer. ++- If you followed the correct sequence of creating a multi-tenant organization and adding a tenant to the multi-tenant organization, but the added tenant's join request keeps failing, submit a support request to Azure AD or Microsoft 365 admin center. ++- As part of a multi-tenant organization, newly invited B2B users receive an additional user property that includes the home tenant identifier of the B2B user. Already redeemed B2B users don't have this additional user property. Currently, the Microsoft 365 admin center share users functionality and Azure AD cross-tenant synchronization are the only accepted methods to populate this additional user property. ++- As part of a multi-tenant organization, [reset redemption status for a B2B user](../external-identities/reset-redemption-status.md) is currently unavailable and disabled. ++## B2B user or B2B member related issues ++- The promotion of B2B guests to B2B members represents a strategic decision by multi-tenant organizations to consider B2B members as trusted users of the organization. Review the [default permissions](../fundamentals/users-default-permissions.md) for B2B members. ++- To promote B2B guests to B2B members, a source tenant administrator can amend the [attribute mappings](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings), or a target tenant administrator can [change the userType](../fundamentals/how-to-manage-user-profile-info.md#add-or-change-profile-information) if the property is not recurringly synchronized. +++- In [SharePoint OneDrive](/sharepoint/), the promotion of B2B guests to B2B members may not happen automatically.
If faced with a user type mismatch between Azure AD and SharePoint OneDrive, try [Set-SPUser [-SyncFromAD]](/powershell/module/sharepoint-server/set-spuser). ++- In [SharePoint OneDrive](/sharepoint/) user interfaces, sharing a file with *People in Fabrikam* might be counterintuitive, because B2B members in Fabrikam from Contoso count towards *People in Fabrikam*. ++- In [Microsoft Forms](/office365/servicedescriptions/microsoft-forms-service-description), B2B member users may not be able to access forms. ++- In [Microsoft Power BI](/power-bi/enterprise/service-admin-azure-ad-b2b#who-can-you-invite), B2B member users may require license assignment, and the experience is untested. A Power BI preview for B2B members as part of a multi-tenant organization is expected. ++- In [Microsoft Power Apps](/power-platform/), [Microsoft Dynamics 365](/dynamics365/), and other workloads, B2B member users may require license assignment. Experiences for B2B members are untested. ++## User synchronization issues ++- When to use Microsoft 365 admin center to share users: If you haven't previously used Azure AD cross-tenant synchronization, and you intend to establish a [collaborating user set](multi-tenant-organization-microsoft-365.md#collaborating-user-set) topology where the same set of users is shared to all multi-tenant organization tenants, you may want to use the Microsoft 365 admin center share users functionality. ++- When to use Azure AD cross-tenant synchronization: If you're already using Azure AD cross-tenant synchronization, for various [multi-hub multi-spoke topologies](cross-tenant-synchronization-topology.md), you don't need to use the Microsoft 365 admin center share users functionality. Instead, you may want to continue using your existing Azure AD cross-tenant synchronization jobs. ++- Contact objects: The at-scale provisioning of B2B users may collide with contact objects.
The handling or conversion of contact objects is currently not supported. ++- Microsoft 365 admin center / Azure AD: Whether you use the Microsoft 365 admin center share users functionality or Azure AD cross-tenant synchronization, the following items apply: ++ - In the identity platform, both methods are represented as Azure AD cross-tenant synchronization jobs. + - You may adjust the attribute mappings to match your organizations' needs. + - By default, new B2B users are provisioned as B2B members, while existing B2B guests remain B2B guests. + - You can opt to convert B2B guests into B2B members by setting [**Apply this mapping** to **Always**](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings). ++- Microsoft 365 admin center / Azure AD: If you're using Azure AD cross-tenant synchronization to provision your users, rather than the Microsoft 365 admin center share users functionality, Microsoft 365 admin center indicates an **Outbound sync status** of **Not configured**. This is expected preview behavior. Currently, Microsoft 365 admin center only shows the status of Azure AD cross-tenant synchronization jobs created and managed by Microsoft 365 admin center and doesn't display Azure AD cross-tenant synchronizations created and managed in Azure AD. ++- Microsoft 365 admin center / Azure AD: If you view Azure AD cross-tenant synchronization in Azure portal, after adding tenants to or after joining a multi-tenant organization in Microsoft 365 admin center, you'll see a cross-tenant synchronization configuration with the name MTO_Sync_<TenantID>. Refrain from editing or changing the name if you want Microsoft 365 admin center to recognize the configuration as created and managed by Microsoft 365 admin center. ++- Microsoft 365 admin center / Azure AD: There's no established or supported pattern for Microsoft 365 admin center to take control of pre-existing Azure AD cross-tenant synchronization configurations and jobs. 
++- Advantage of using cross-tenant access settings template for identity synchronization: Azure AD cross-tenant synchronization doesn't support establishing a cross-tenant synchronization configuration before the tenant in question allows inbound synchronization in their cross-tenant access settings for identity synchronization. Hence, use of the cross-tenant access settings template for identity synchronization, with `userSyncInbound` set to true, is encouraged, as facilitated by Microsoft 365 admin center. ++- Source of Authority Conflict: Using Azure AD cross-tenant synchronization to target hybrid identities that have been converted to B2B users has not been tested and is not supported. ++- Syncing B2B guests versus B2B members: As your organization rolls out the multi-tenant organization functionality, including provisioning of B2B users across multi-tenant organization tenants, you might want to provision some users as B2B guests, while provisioning other users as B2B members. To achieve this, you may want to establish two Azure AD cross-tenant synchronization configurations in the source tenant, one with userType attribute mappings configured to B2B guest, and another with userType attribute mappings configured to B2B member, each with [**Apply this mapping** set to **Always**](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings). By moving a user from one configuration's scope to the other, you can easily control who will be a B2B guest or a B2B member in the target tenant. ++- Cross-tenant synchronization deprovisioning: By default, when provisioning scope is reduced while a synchronization job is running, users fall out of scope and are soft deleted, unless Target Object Actions for Delete is disabled.
For more information, see [Deprovisioning](cross-tenant-synchronization-overview.md#deprovisioning) and [Define who is in scope for provisioning](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters). ++- Cross-tenant synchronization deprovisioning: Currently, [SkipOutOfScopeDeletions](../app-provisioning/skip-out-of-scope-deletions.md?toc=%2Fazure%2Factive-directory%2Fmulti-tenant-organizations%2Ftoc.json&pivots=cross-tenant-synchronization) works for application provisioning jobs, but not for Azure AD cross-tenant synchronization. To avoid soft deletion of users taken out of scope of cross-tenant synchronization, set [Target Object Actions for Delete](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters) to disabled. ++## Next steps ++- [Known issues for provisioning in Azure Active Directory](../app-provisioning/known-issues.md?toc=/azure/active-directory/multi-tenant-organizations/toc.json&pivots=cross-tenant-synchronization) |
active-directory | Multi Tenant Organization Microsoft 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-microsoft-365.md | + + Title: Multi-tenant organization identity provisioning for Microsoft 365 (Preview) +description: Learn how multi-tenant organization identity provisioning and Microsoft 365 work together. +++++++ Last updated : 08/22/2023++++#Customer intent: As a dev, devops, or it admin, I want to +++# Multi-tenant organization identity provisioning for Microsoft 365 (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++The multi-tenant organization capability is designed for organizations that own multiple Azure Active Directory (Azure AD) tenants and want to streamline intra-organization cross-tenant collaboration in Microsoft 365. It's built on the premise of reciprocal provisioning of B2B member users across multi-tenant organization tenants. ++## Microsoft 365 people search ++Except for [Teams external access](/microsoftteams/communicate-with-users-from-other-organizations) and [Teams shared channels](/microsoftteams/shared-channels#getting-started-with-shared-channels), [Microsoft 365 people search](/microsoft-365/enterprise/multi-tenant-people-search) is typically scoped to within local tenant boundaries. In multi-tenant organizations with an increased need for cross-tenant coworker collaboration, it's recommended to reciprocally provision users from their home tenants into the resource tenants of collaborating coworkers. ++## New Microsoft Teams ++The [new Microsoft Teams](/microsoftteams/new-teams-desktop-admin) experience improves upon Microsoft 365 people search and Teams external access for a unified seamless collaboration experience.
For this improved experience to light up, the multi-tenant organization representation in Azure AD is required, and collaborating users must be provisioned as B2B members. ++## Collaborating user set ++Collaboration in Microsoft 365 is built on the premise of reciprocal provisioning of B2B identities across multi-tenant organization tenants. ++For example, say Annie in tenant A, Bob and Barbara in tenant B, and Charlie in tenant C want to collaborate. Conceptually, these four users represent a collaborating user set of four internal identities across three tenants. +++For people search to succeed, while scoped to local tenant boundaries, the entire collaborating user set must be represented within the scope of each multi-tenant organization tenant A, B, and C, in the form of either internal or B2B identities. +++Depending on your organization's needs, the collaborating user set may contain a subset of collaborating employees, or eventually all employees. ++## Sharing your users ++One of the simpler ways to achieve a collaborating user set in each multi-tenant organization tenant is for each tenant administrator to define their user contribution and synchronize them outbound. Tenant administrators on the receiving end should accept the shared users inbound. ++- Administrator A contributes or shares Annie +- Administrator B contributes or shares Bob and Barbara +- Administrator C contributes or shares Charlie +++Microsoft 365 admin center facilitates orchestration of such a collaborating user set across multi-tenant organization tenants. For more information, see [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs). ++Alternatively, pair-wise configuration of inbound and outbound cross-tenant synchronization can be used to orchestrate such a collaborating user set across multi-tenant organization tenants.
For more information, see [What is a cross-tenant synchronization](cross-tenant-synchronization-overview.md). ++## B2B member users ++To ensure a seamless collaboration experience across the multi-tenant organization in new Microsoft Teams, B2B identities are provisioned as B2B users of [Member userType](../external-identities/user-properties.md#user-type). ++| User synchronization method | Default userType property | +| | | +| [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs) | **Member**<br/> Remains Guest, if the B2B identity already existed as Guest | +| [Cross-tenant synchronization in Azure AD](./cross-tenant-synchronization-overview.md) | **Member**<br/> Remains Guest, if the B2B identity already existed as Guest | ++From a security perspective, you should review the default permissions granted to B2B member users. For more information, see [Compare member and guest default permissions](../fundamentals/users-default-permissions.md#compare-member-and-guest-default-permissions). ++To change the userType from **Guest** to **Member** (or vice versa), a source tenant administrator can amend the [attribute mappings](cross-tenant-synchronization-configure.md#step-9-review-attribute-mappings), or a target tenant administrator can [change the userType](../fundamentals/how-to-manage-user-profile-info.md#add-or-change-profile-information) if the property is not recurringly synchronized. ++## Unsharing your users ++To unshare users, you deprovision users by using the user deprovisioning capabilities available in Azure AD cross-tenant synchronization. By default, when provisioning scope is reduced while a synchronization job is running, users fall out of scope and are soft deleted, unless Target Object Actions for Delete is disabled. 
For more information, see [Deprovisioning](cross-tenant-synchronization-overview.md#deprovisioning) and [Define who is in scope for provisioning](cross-tenant-synchronization-configure.md#step-8-optional-define-who-is-in-scope-for-provisioning-with-scoping-filters). ++## Next steps ++- [Plan for multi-tenant organizations in Microsoft 365](/microsoft-365/enterprise/plan-multi-tenant-org-overview) +- [Set up a multi-tenant org in Microsoft 365](/microsoft-365/enterprise/set-up-multi-tenant-org) |
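The collaborating user set described above can be stated as a simple invariant: every multi-tenant organization tenant must hold the entire set, either as internal users or as provisioned B2B identities. The following Python sketch of that check is purely illustrative (the data model is hypothetical), using the Annie/Bob/Barbara/Charlie example from this article.

```python
def collaborating_set_complete(tenants: dict) -> bool:
    """tenants maps tenant name -> {"internal": set, "b2b": set}.
    The collaborating user set is the union of everyone's internal users;
    people search works everywhere only if each tenant holds the whole set,
    as internal or B2B identities."""
    everyone = set().union(*(t["internal"] for t in tenants.values()))
    return all(t["internal"] | t["b2b"] >= everyone for t in tenants.values())

# Annie in tenant A, Bob and Barbara in tenant B, Charlie in tenant C,
# each reciprocally provisioned into the other tenants as B2B identities.
tenants = {
    "A": {"internal": {"Annie"}, "b2b": {"Bob", "Barbara", "Charlie"}},
    "B": {"internal": {"Bob", "Barbara"}, "b2b": {"Annie", "Charlie"}},
    "C": {"internal": {"Charlie"}, "b2b": {"Annie", "Bob", "Barbara"}},
}
```

If any administrator stops sharing a user outbound, the invariant fails for every other tenant, and people search in those tenants no longer finds that user.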
active-directory | Multi Tenant Organization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-overview.md | + + Title: What is a multi-tenant organization in Azure Active Directory? (Preview) +description: Learn about multi-tenant organizations in Azure Active Directory and Microsoft 365. +++++++ Last updated : 08/22/2023++++#Customer intent: As a dev, devops, or it admin, I want to +++# What is a multi-tenant organization in Azure Active Directory? (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Multi-tenant organization is a feature in Azure Active Directory (Azure AD) and Microsoft 365 that enables you to form a tenant group within your organization. Each pair of tenants in the group is governed by cross-tenant access settings that you can use to configure B2B or cross-tenant synchronization. ++## Why use multi-tenant organization? ++Here are the primary goals of multi-tenant organization: ++- Define a group of tenants belonging to your organization +- Collaborate across your tenants in new Microsoft Teams +- Enable search and discovery of user profiles across your tenants through Microsoft 365 people search ++## Who should use it? ++Organizations that own multiple Azure AD tenants and want to streamline intra-organization cross-tenant collaboration in Microsoft 365. ++The multi-tenant organization capability is built on the assumption of reciprocal provisioning of B2B member users across multi-tenant organization tenants. 
++As such, the multi-tenant organization capability assumes the simultaneous use of Azure AD cross-tenant synchronization or an alternative bulk provisioning engine for [external identities](../external-identities/user-properties.md). ++## Benefits ++Here are the primary benefits of a multi-tenant organization: ++- Differentiate in-organization and out-of-organization external users ++ In Azure AD, external users originating from within a multi-tenant organization can be differentiated from external users originating from outside the multi-tenant organization. This differentiation facilitates the application of different policies for in-organization and out-of-organization external users. +- Improved collaborative experience in Microsoft Teams ++ In new Microsoft Teams, multi-tenant organization users can expect an improved collaborative experience across tenants with chat, calling, and meeting start notifications from all connected tenants across the multi-tenant organization. Tenant switching is more seamless and faster. For more information, see [Microsoft Teams: Advantages of the new architecture](https://techcommunity.microsoft.com/t5/microsoft-teams-blog/microsoft-teams-advantages-of-the-new-architecture/ba-p/3775704). ++- Improved people search experience across tenants ++ Across Microsoft 365 services, the multi-tenant organization people search experience is a collaboration feature that enables search and discovery of people across multiple tenants. Once enabled, users are able to search and discover synced user profiles in a tenant's global address list and view their corresponding people cards. For more information, see [Microsoft 365 multi-tenant organization people search (public preview)](/microsoft-365/enterprise/multi-tenant-people-search). ++## How does a multi-tenant organization work? ++The multi-tenant organization capability enables you to form a tenant group within your organization. 
The following list describes the basic lifecycle of a multi-tenant organization. ++- Define a multi-tenant organization ++ One tenant administrator defines a multi-tenant organization as a grouping of tenants. The grouping of tenants isn't reciprocal until each listed tenant takes action to join the multi-tenant organization. The objective is a reciprocal agreement between all listed tenants. ++- Join a multi-tenant organization ++ Tenant administrators of listed tenants take action to join the multi-tenant organization. After joining, the multi-tenant organization relationship is reciprocal between each and every tenant that joined the multi-tenant organization. ++- Leave a multi-tenant organization ++ Tenant administrators of listed tenants can leave a multi-tenant organization at any time. While a tenant administrator who defined the multi-tenant organization can add and remove listed tenants, they don't control the other tenants. ++A multi-tenant organization is established as a collaboration of equals. Each tenant administrator stays in control of their tenant and their membership in the multi-tenant organization. ++## Cross-tenant access settings ++Administrators staying in control of their resources is a guiding principle for multi-tenant organization collaboration. Cross-tenant access settings are required for each tenant-to-tenant relationship. Tenant administrators explicitly configure, as needed, the following policies: ++- Cross-tenant access partner configurations ++ For more information, see [Configure cross-tenant access settings for B2B collaboration](../external-identities/cross-tenant-access-settings-b2b-collaboration.md) and [crossTenantAccessPolicyConfigurationPartner resource type](/graph/api/resources/crosstenantaccesspolicyconfigurationpartner?view=graph-rest-beta&preserve-view=true).
++- Cross-tenant access identity synchronization ++ For more information, see [Configure cross-tenant synchronization](cross-tenant-synchronization-configure.md) and [crossTenantIdentitySyncPolicyPartner resource type](/graph/api/resources/crosstenantidentitysyncpolicypartner). ++## Multi-tenant organization example ++The following diagram shows three tenants A, B, and C that form a multi-tenant organization. +++| Tenant | Description | +| :---: | --- | +| A | Administrators see a multi-tenant organization consisting of A, B, C.<br/>They also see cross-tenant access settings for B and C. | +| B | Administrators see a multi-tenant organization consisting of A, B, C.<br/>They also see cross-tenant access settings for A and C. | +| C | Administrators see a multi-tenant organization consisting of A, B, C.<br/>They also see cross-tenant access settings for A and B. | ++## Templates for cross-tenant access settings ++To ease the setup of homogeneous cross-tenant access settings applied to partner tenants in the multi-tenant organization, the administrator of each multi-tenant organization tenant can configure optional cross-tenant access settings templates dedicated to the multi-tenant organization. These templates can be used to preconfigure cross-tenant access settings that are applied to any partner tenant newly joining the multi-tenant organization. ++## Tenant role and state ++To facilitate the management of a multi-tenant organization, any given multi-tenant organization tenant has an associated role and state. ++| Tenant role | Description | +| --- | --- | +| Owner | One tenant creates the multi-tenant organization. The tenant that creates the multi-tenant organization receives the owner role. The privilege of the owner tenant is to add tenants into a pending state as well as to remove tenants from the multi-tenant organization. Also, an owner tenant can change the role of other multi-tenant organization tenants.
| +| Member | Following the addition of pending tenants to the multi-tenant organization, pending tenants need to join the multi-tenant organization to turn their state from pending to active. Joined tenants typically start in the member role. Any member tenant has the privilege to leave the multi-tenant organization. | ++| Tenant state | Description | +| --- | --- | +| Pending | A pending tenant has yet to join a multi-tenant organization. While listed in an administrator's view of the multi-tenant organization, a pending tenant isn't yet part of the multi-tenant organization, and as such is hidden from an end user's view of a multi-tenant organization. | +| Active | An active tenant has created or joined the multi-tenant organization. When a pending tenant joins the multi-tenant organization, its state changes from pending to active, and the tenant becomes visible in an end user's view of the multi-tenant organization. | ++## Constraints ++The multi-tenant organization capability has been designed with the following constraints: ++- Any given tenant can only create or join a single multi-tenant organization. +- Any multi-tenant organization must have at least one active owner tenant. +- Each active tenant must have cross-tenant access settings for all active tenants. +- Any active tenant may leave a multi-tenant organization by removing themselves from it. +- A multi-tenant organization is deleted when the only remaining active (owner) tenant leaves.
++## External user segmentation ++By defining a multi-tenant organization, as well as pivoting on the Azure AD user property of userType, [external identities](../external-identities/user-properties.md) are segmented as follows: ++- External members originating from within a multi-tenant organization +- External guests originating from within a multi-tenant organization +- External members originating from outside of your organization +- External guests originating from outside of your organization ++This segmentation of external users, due to the definition of a multi-tenant organization, enables administrators to better differentiate in-organization from out-of-organization external users. ++External members originating from within a multi-tenant organization are called multi-tenant organization members. ++Multi-tenant collaboration capabilities in Microsoft 365 aim to provide a seamless collaboration experience across tenant boundaries when collaborating with multi-tenant organization member users. ++## Get started ++Here are the basic steps to get started using multi-tenant organization. ++### Step 1: Plan your deployment ++For more information, see [Plan for multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/plan-multi-tenant-org-overview). ++### Step 2: Create your multi-tenant organization ++Create your multi-tenant organization using [Microsoft 365 admin center](/microsoft-365/enterprise/set-up-multi-tenant-org) or [Microsoft Graph API](multi-tenant-organization-configure-graph.md): ++- First tenant, soon-to-be owner tenant, creates a multi-tenant organization. +- Owner tenant adds one or more joiner tenants. +- To allow for asynchronous processing, wait a **minimum of 2 hours**. 
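As an illustrative sketch of the Graph requests behind the create steps above (the `tenantRelationships/multiTenantOrganization` endpoint paths and payload shapes are assumptions to verify against the current Microsoft Graph reference, and the tenant ID is a placeholder):

```python
# Hypothetical sketch: build the Microsoft Graph requests an owner tenant might
# send to create a multi-tenant organization and add a pending tenant.
# Endpoint paths and body shapes are assumptions based on the Graph resource
# types this article links to; verify before use.

GRAPH_BETA = "https://graph.microsoft.com/beta"

def create_mto_request(display_name: str) -> dict:
    # First tenant (soon-to-be owner) creates the multi-tenant organization.
    return {
        "method": "PUT",
        "url": f"{GRAPH_BETA}/tenantRelationships/multiTenantOrganization",
        "body": {"displayName": display_name},
    }

def add_tenant_request(tenant_id: str, display_name: str) -> dict:
    # Owner tenant adds a joiner tenant, which starts in the pending state.
    return {
        "method": "POST",
        "url": f"{GRAPH_BETA}/tenantRelationships/multiTenantOrganization/tenants",
        "body": {"tenantId": tenant_id, "displayName": display_name},
    }

requests_to_send = [
    create_mto_request("Contoso multi-tenant organization"),
    add_tenant_request("11111111-1111-1111-1111-111111111111", "Contoso East"),
]
```

After sending these requests with an authenticated client, the asynchronous processing wait described above still applies before the joiner tenant can submit its join request.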
++### Step 3: Join a multi-tenant organization ++Join a multi-tenant organization using [Microsoft 365 admin center](/microsoft-365/enterprise/join-leave-multi-tenant-org) or [Microsoft Graph API](multi-tenant-organization-configure-graph.md): ++- Joiner tenants submit a join request to join the multi-tenant organization of owner tenant. +- To allow for asynchronous processing, wait **up to 4 hours**. ++Your multi-tenant organization is formed. ++### Step 4: Synchronize users ++Depending on your use case, you may want to synchronize users using one of the following methods: ++- [Synchronize users in multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/sync-users-multi-tenant-orgs) +- [Configure cross-tenant synchronization using the Azure portal](cross-tenant-synchronization-configure.md) +- [Configure cross-tenant synchronization using Microsoft Graph API](cross-tenant-synchronization-configure-graph.md) +- Your alternative bulk provisioning engine ++## Limits ++Multi-tenant organizations have the following limits: ++- A maximum of five active tenants per multi-tenant organization +- A maximum of 100,000 internal users per active tenant at the time of joining ++If you want to add more than five tenants or 100,000 internal users per tenant, contact Microsoft support. ++## License requirements ++The multi-tenant organization capability is in preview, and you can start using it if you have Azure AD Premium P1 licenses or above in all multi-tenant organization tenants. Licensing terms will be released at general availability. To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). ++## Next steps ++- [Plan for multi-tenant organizations in Microsoft 365 (Preview)](/microsoft-365/enterprise/plan-multi-tenant-org-overview) +- [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md) |
active-directory | Multi Tenant Organization Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/multi-tenant-organization-templates.md | + + Title: Multi-tenant organization templates (Preview) +description: Learn about multi-tenant organization templates in Azure Active Directory. +++++++ Last updated : 08/22/2023++++#Customer intent: As a dev, devops, or it admin, I want to +++# Multi-tenant organization templates (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Administrators staying in control of their resources is a guiding principle for multi-tenant organization collaboration. Cross-tenant access settings are required for each tenant-to-tenant relationship. Tenant administrators explicitly configure cross-tenant access partner configurations and identity synchronization settings for partner tenants inside the multi-tenant organization. ++To help apply homogeneous cross-tenant access settings to partner tenants in the multi-tenant organization, the administrator of each tenant can configure optional cross-tenant access settings templates dedicated to the multi-tenant organization. This article describes how to use templates to preconfigure cross-tenant access settings that are applied to any partner tenant newly joining the multi-tenant organization. ++## Autogeneration of cross-tenant access settings ++Within a multi-tenant organization, each pair of tenants must have bi-directional [cross-tenant access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md), for both partner configuration and identity synchronization. These settings provide the underlying policy framework for enabling trust and for sharing users and applications.
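Because every pair of tenants needs bi-directional settings, the number of partner configurations grows quickly with the size of the organization. A quick sketch of the directed relationships for a hypothetical three-tenant organization:

```python
from itertools import permutations

# Each ordered pair (source, partner) represents one partner configuration that
# the source tenant's administrator maintains; with n tenants there are n*(n-1)
# such directed relationships. Tenant names here are illustrative.
tenants = ["A", "B", "C"]
relationships = list(permutations(tenants, 2))

print(len(relationships))  # 3 tenants -> 6 directed partner configurations
for source, partner in relationships:
    print(f"Tenant {source} holds cross-tenant access settings for partner {partner}")
```

This quadratic growth is exactly what the autogeneration and templates described below are designed to keep manageable.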
++When your tenant joins a new multi-tenant organization, or when a partner tenant joins your existing multi-tenant organization, cross-tenant access settings to other partner tenants in the enlarged multi-tenant organization, if they don't already exist, are automatically generated in an unconfigured state. In an unconfigured state, these cross-tenant access settings pass through the [default settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md#configure-default-settings). ++Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. Typically, these settings are configured to be nontrusting. For example, cross-tenant trusts for multi-factor authentication and compliant device claims might be disabled and user and group sharing in B2B direct connect or B2B collaboration might be disallowed. ++In multi-tenant organizations, on the other hand, cross-tenant access settings are typically expected to be trusting. For example, cross-tenant trusts for multi-factor authentication and compliant device claims might be enabled and user and group sharing in B2B direct connect or B2B collaboration might be allowed. ++While the autogeneration of cross-tenant access settings for multi-tenant organization partner tenants in and of itself doesn't change any authentication or authorization policy behavior, it allows your organization to easily customize the cross-tenant access settings for partner tenants in the multi-tenant organization on a per-tenant basis. ++## Policy templates at multi-tenant organization formation ++As previously described, in multi-tenant organizations, cross-tenant access settings are typically expected to be trusting. For example, cross-tenant trusts for multi-factor authentication and compliant device claims might be enabled and user and group sharing in B2B direct connect or B2B collaboration might be allowed. 
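As an illustrative sketch of what such "trusting" template values could look like (the `inboundTrust` and `automaticUserConsentSettings` property names follow the Graph cross-tenant access policy schema, but the overall payload shape below is an assumption, not a verbatim Graph request body):

```python
# Hypothetical "trusting" partner-configuration template for a multi-tenant
# organization. Property names are drawn from the Graph cross-tenant access
# policy; the dict shape is a sketch for illustration only.
mto_partner_template = {
    "inboundTrust": {
        "isMfaAccepted": True,              # trust MFA claims from partner tenants
        "isCompliantDeviceAccepted": True,  # trust compliant-device claims
    },
    "automaticUserConsentSettings": {
        "inboundAllowed": True,             # auto-redeem inbound invitations
        "outboundAllowed": True,            # auto-redeem outbound invitations
    },
    # Assumed scoping value: apply the template to new partner tenants only.
    "templateApplicationLevel": "newPartners",
}

def is_trusting(template: dict) -> bool:
    """Return True when all trust and auto-redemption flags are enabled."""
    trust = template["inboundTrust"]
    consent = template["automaticUserConsentSettings"]
    return all([trust["isMfaAccepted"], trust["isCompliantDeviceAccepted"],
                consent["inboundAllowed"], consent["outboundAllowed"]])

print(is_trusting(mto_partner_template))  # True
```

Contrast this with the typically nontrusting defaults described above, where these flags would be disabled or unset.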
++While autogeneration of cross-tenant access settings, as described in the previous section, guarantees the existence of cross-tenant access settings for every multi-tenant organization partner tenant, further maintenance of the cross-tenant access settings for multi-tenant organization partner tenants is conducted individually, on a per-tenant basis. ++To reduce the workload for administrators at the time of multi-tenant organization formation, you can optionally use policy templates for preemptive configuration of cross-tenant access settings. These template settings are applied to all external multi-tenant organization partner tenants when your tenant joins a multi-tenant organization, and to any new partner tenant when that tenant joins your existing multi-tenant organization. ++When a partner tenant joins a multi-tenant organization, [enabling or configuring the optional policy templates](multi-tenant-organization-configure-templates.md) preemptively amends the corresponding [cross-tenant access settings](../external-identities/cross-tenant-access-settings-b2b-collaboration.md), for both partner configuration and identity synchronization. ++As an example, consider the actions of the administrators for an anticipated multi-tenant organization with three tenants, A, B, and C. ++- The administrators of all three tenants enable and configure their respective optional policy templates to enable cross-tenant trusts for multi-factor authentication and compliant device claims and to allow user and group sharing in B2B direct connect and B2B collaboration. +- Administrator A creates the multi-tenant organization and adds tenants B and C as pending tenants to the multi-tenant organization. +- Administrator B joins the multi-tenant organization. Cross-tenant access settings in tenant A for partner tenant B are amended, according to tenant A policy template settings.
Vice versa, cross-tenant access settings in tenant B for partner tenant A are amended, according to tenant B policy template settings. +- Administrator C joins the multi-tenant organization. Cross-tenant access settings in tenants A (and B) for partner tenant C are amended, according to tenant A (and B) policy template settings. Similarly, cross-tenant access settings in tenant C for partner tenants A and B are amended, according to tenant C policy template settings. +- Following the formation of this multi-tenant organization of three tenants, the cross-tenant access settings of all tenant pairs in the multi-tenant organization have been preemptively configured. ++In summary, configuration of the optional policy templates enables you to homogeneously initialize cross-tenant access settings across your multi-tenant organization, while maintaining maximum flexibility to customize your cross-tenant access settings as needed on a per-tenant basis. ++To stop using the policy templates, you can reset them to their default state. For more information, see [Configure multi-tenant organization templates](multi-tenant-organization-configure-templates.md). ++## Policy template scoping and additional properties ++To provide administrators with further configurability, you can choose when cross-tenant access settings are to be amended according to the policy templates.
For example, you can choose to apply the policy templates for the following tenants when a tenant joins a multi-tenant organization: ++| Tenant | Description | +| | | +| Only new partner tenants | Tenants whose cross-tenant access settings are autogenerated | +| Only existing partner tenants | Tenants who already have cross-tenant access settings | +| All partner tenants | Both new partner tenants and existing partner tenants | +| No partner tenants | Policy templates are effectively disabled | ++In this context, *new* partners refer to tenants for which you haven't yet configured cross-tenant access settings, while *existing* partners refer to tenants for which you have already configured cross-tenant access settings. This scoping is specified with the `templateApplicationLevel` property on the cross-tenant access [partner configuration template](/graph/api/resources/multitenantorganizationpartnerconfigurationtemplate) and the `templateApplicationLevel` property on the cross-tenant access [identity synchronization template](/graph/api/resources/multitenantorganizationidentitysyncpolicytemplate). ++Finally, in terms of interpretation of template property values, any template property value of `null` has no effect on the corresponding property value in the targeted cross-tenant access settings, while a defined template property value causes the corresponding property value in the targeted cross-tenant access settings to be amended in accordance with the template. The following table illustrates how template property values are being applied to corresponding cross-tenant access setting values. 
++| Template Value | Initial Partner Settings Value<br/>(Before joining multi-tenant org) | Final Partner Settings Value<br/>(After joining multi-tenant org) | +| --- | --- | --- | +| `null` | <Partner Settings Value> | <Partner Settings Value> | +| <Template Value> | <any value> | <Template Value> | ++## Policy templates used by Microsoft 365 admin center ++When a multi-tenant organization is formed in the Microsoft 365 admin center, an administrator agrees to the following multi-tenant organization template settings: ++- Identity synchronization is set to allow users to synchronize into this tenant +- Cross-tenant access is set to automatically redeem user invitations for both inbound and outbound ++This is achieved by setting the corresponding three template property values to `true`: ++- `automaticUserConsentSettings.inboundAllowed` +- `automaticUserConsentSettings.outboundAllowed` +- `userSyncInbound` ++For more information, see [Join or leave a multi-tenant organization in Microsoft 365](/microsoft-365/enterprise/join-leave-multi-tenant-org). ++## Cross-tenant access settings at the time of multi-tenant organization disassembly ++Currently, there's no equivalent policy template feature supporting the disassembly of a multi-tenant organization. When a partner tenant leaves the multi-tenant organization, each tenant administrator must re-examine the cross-tenant access settings for the partner tenant that left the multi-tenant organization and amend them accordingly. ++The partner tenant that left the multi-tenant organization must re-examine the cross-tenant access settings for all former multi-tenant organization partner tenants, amend them accordingly, and consider resetting the two policy templates for cross-tenant access settings. ++## Next steps ++- [Configure multi-tenant organization templates using the Microsoft Graph API (Preview)](./multi-tenant-organization-configure-templates.md) |
active-directory | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/multi-tenant-organizations/overview.md | Title: What is a multi-tenant organization in Azure Active Directory? -description: Learn about multi-tenant organizations in Azure Active Directory. + Title: Multi-tenant organization scenario and Azure AD capabilities +description: Learn about the multi-tenant organization scenario and capabilities in Azure Active Directory. -# What is a multi-tenant organization in Azure Active Directory? +# Multi-tenant organization scenario and Azure AD capabilities -This article provides an overview of multi-tenant organizations. +This article provides an overview of the multi-tenant organization scenario and the related capabilities in Azure Active Directory (Azure AD). ## What is a tenant? -A *tenant* is an instance of Azure Active Directory (Azure AD) in which information about a single organization resides including organizational objects such as users, groups, and devices and also application registrations, such as Microsoft 365 and third-party applications. A tenant also contains access and compliance policies for resources, such as applications registered in the directory. The primary functions served by a tenant include identity authentication as well as resource access management. +A *tenant* is an instance of Azure AD in which information about a single organization resides including organizational objects such as users, groups, and devices and also application registrations, such as Microsoft 365 and third-party applications. A tenant also contains access and compliance policies for resources, such as applications registered in the directory. The primary functions served by a tenant include identity authentication as well as resource access management. From an Azure AD perspective, a tenant forms an identity and access management scope. 
For example, a tenant administrator makes an application available to some or all the users in the tenant and enforces access policies on that application for users in that tenant. In addition, a tenant contains organizational branding data that drives end-user experiences, such as the organization's email domains and SharePoint URLs used by employees in that organization. From a Microsoft 365 perspective, a tenant forms the default collaboration and licensing boundary. For example, users in Microsoft Teams or Microsoft Outlook can easily find and collaborate with other users in their tenant, but don't have the ability to find or see users in other tenants. The following diagram shows how users in other tenants might not be able to acce As your organization evolves, your IT team must adapt to meet the changing needs. This often includes integrating with an existing tenant or forming a new one. Regardless of how the identity infrastructure is managed, it's critical that users have a seamless experience accessing resources and collaborating. Today, you may be using custom scripts or on-premises solutions to bring the tenants together to provide a seamless experience across tenants. +## B2B direct connect ++To enable users across tenants to collaborate in [Teams Connect shared channels](/microsoftteams/platform/concepts/build-and-test/shared-channels), you can use [Azure AD B2B direct connect](../external-identities/b2b-direct-connect-overview.md). B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration in Teams. When the trust is established, the B2B direct connect user has single sign-on access using credentials from their home tenant. ++Here's the primary constraint with using B2B direct connect across multiple tenants: ++- Currently, B2B direct connect works only with Teams Connect shared channels.
+++For more information, see [B2B direct connect overview](../external-identities/b2b-direct-connect-overview.md). + ## B2B collaboration To enable users across tenants to collaborate, you can use [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. Once the external user has redeemed their invitation or completed sign-up, they're represented in your tenant as a user object. With B2B collaboration, you can securely share your company's applications and services with external users, while maintaining control over your own corporate data. Here are the primary constraints with using B2B collaboration across multiple te :::image type="content" source="./media/overview/multi-tenant-b2b-collaboration.png" alt-text="Diagram that shows using B2B collaboration across tenants." lightbox="./media/overview/multi-tenant-b2b-collaboration.png"::: -## B2B direct connect --To enable users across tenants to collaborate in [Teams Connect shared channels](/microsoftteams/platform/concepts/build-and-test/shared-channels), you can use [Azure AD B2B direct connect](../external-identities/b2b-direct-connect-overview.md). B2B direct connect is a feature of External Identities that lets you set up a mutual trust relationship with another Azure AD organization for seamless collaboration in Teams. When the trust is established, the B2B direct connect user has single sign-on access using credentials from their home tenant. --Here's the primary constraint with using B2B direct connect across multiple tenants: --- Currently, B2B direct connect works only with Teams Connect shared channels.-+For more information, see [B2B collaboration overview](../external-identities/what-is-b2b.md). 
## Cross-tenant synchronization Here are the primary constraints with using cross-tenant synchronization across :::image type="content" source="./media/overview/multi-tenant-cross-tenant-sync.png" alt-text="Diagram that shows using cross-tenant synchronization across tenants." lightbox="./media/overview/multi-tenant-cross-tenant-sync.png"::: +For more information, see [What is cross-tenant synchronization?](./cross-tenant-synchronization-overview.md). ++## Multi-tenant organization (Preview) ++> [!IMPORTANT] +> Multi-tenant organization is currently in PREVIEW. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++[Multi-tenant organization](./multi-tenant-organization-overview.md) is a feature in Azure AD and Microsoft 365 that enables you to form a tenant group within your organization. Each pair of tenants in the group is governed by cross-tenant access settings that you can use to configure B2B or cross-tenant synchronization. ++Here are the primary benefits of a multi-tenant organization: ++- Differentiate in-organization and out-of-organization external users +- Improved collaborative experience in new Microsoft Teams +- Improved people search experience across tenants +++For more information, see [What is a multi-tenant organization in Azure Active Directory?](./multi-tenant-organization-overview.md). + ## Compare multi-tenant capabilities -Depending on the needs of your organization, you can use any combination of cross-tenant synchronization, B2B collaboration, and B2B direct connect. The following table compares the capabilities of each feature. For more information about different external identity scenarios, see [Comparing External Identities feature sets](../external-identities/external-identities-overview.md#comparing-external-identities-feature-sets). 
+Depending on the needs of your organization, you can use any combination of B2B direct connect, B2B collaboration, cross-tenant synchronization, and multi-tenant organization capabilities. B2B direct connect and B2B collaboration are independent capabilities, while cross-tenant synchronization and multi-tenant organization capabilities are independent of each other, though both rely on underlying B2B collaboration. ++The following table compares the capabilities of each feature. For more information about different external identity scenarios, see [Comparing External Identities feature sets](../external-identities/external-identities-overview.md#comparing-external-identities-feature-sets). -| | Cross-tenant synchronization<br/>(internal) | B2B collaboration<br/>(Org-to-org external) | B2B direct connect<br/>(Org-to-org external) | -| | | | | -| **Purpose** | Users can seamlessly access apps/resources across the same organization, even if they're hosted in different tenants. | Users can access apps/resources hosted in external tenants, usually with limited guest privileges. Depending on automatic redemption settings, users might need to accept a consent prompt in each tenant. | Users can access Teams Connect shared channels hosted in external tenants. | -| **Value** | Enables collaboration across organizational tenants. Administrators don't have to manually invite and synchronize users between tenants to ensure continuous access to apps/resources within the organization. | Enables external collaboration. More control and monitoring for administrators by managing the B2B collaboration users. Administrators can limit the access that these external users have to their apps/resources. | Enables external collaboration within Teams Connect shared channels only. More convenient for administrators because they don't have to manage B2B users. 
| -| **Primary administrator workflow** | Configure the cross-tenant synchronization engine to synchronize users between multiple tenants as B2B collaboration users. | Add external users to resource tenant by using the B2B invitation process or build your own onboarding experience using the [B2B collaboration invitation manager](../external-identities/external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration). | Configure cross-tenant access to provide external users inbound access to tenant the credentials for their home tenant. | -| **Trust level** | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. | Low to mid trust. User objects can be tracked easily and managed with granular controls. | Mid trust. B2B direct connect users are less easy to track, mandating a certain level of trust with the external organization. | -| **Effect on users** | Within the same organization, users are synchronized from their home tenant to the resource tenant as B2B collaboration users. | External users are added to a tenant as B2B collaboration users. | Users access the resource tenant using the credentials for their home tenant. User objects aren't created in the resource tenant. | -| **User type** | B2B collaboration user<br/>- External member (default)<br/>- External guest | B2B collaboration user<br/>- External member<br/>- External guest (default) | B2B direct connect user<br/>- N/A | +| | B2B direct connect<br/>(Org-to-org external or internal) | B2B collaboration<br/>(Org-to-org external or internal) | Cross-tenant synchronization<br/>(Org internal) | Multi-tenant organization<br/>(Org internal) | +| | | | | | +| **Purpose** | Users can access Teams Connect shared channels hosted in external tenants. | Users can access apps/resources hosted in external tenants, usually with limited guest privileges. 
Depending on automatic redemption settings, users might need to accept a consent prompt in each tenant. | Users can seamlessly access apps/resources across the same organization, even if they're hosted in different tenants. | Users can more seamlessly collaborate across a multi-tenant organization in new Teams and people search. | +| **Value** | Enables external collaboration within Teams Connect shared channels only. More convenient for administrators because they don't have to manage B2B users. | Enables external collaboration. More control and monitoring for administrators by managing the B2B collaboration users. Administrators can limit the access that these external users have to their apps/resources. | Enables collaboration across organizational tenants. Administrators don't have to manually invite and synchronize users between tenants to ensure continuous access to apps/resources within the organization. | Enables collaboration across organizational tenants. Administrators continue to have full configuration ability via cross-tenant access settings. Optional cross-tenant access templates allow pre-configuration of cross-tenant access settings. | +| **Primary administrator workflow** | Configure cross-tenant access to provide external users inbound access to your tenant using the credentials for their home tenant. | Add external users to resource tenant by using the B2B invitation process or build your own onboarding experience using the [B2B collaboration invitation manager](../external-identities/external-identities-overview.md#azure-ad-microsoft-graph-api-for-b2b-collaboration). | Configure the cross-tenant synchronization engine to synchronize users between multiple tenants as B2B collaboration users. | Create a multi-tenant organization, add (invite) tenants, join a multi-tenant organization. Leverage existing B2B collaboration users or use cross-tenant synchronization to provision B2B collaboration users. | +| **Trust level** | Mid trust.
B2B direct connect users are less easy to track, mandating a certain level of trust with the external organization. | Low to mid trust. User objects can be tracked easily and managed with granular controls. | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. | High trust. All tenants are part of the same organization, and users are typically granted member access to all apps/resources. | +| **Effect on users** | Users access the resource tenant using the credentials for their home tenant. User objects aren't created in the resource tenant. | External users are added to a tenant as B2B collaboration users. | Within the same organization, users are synchronized from their home tenant to the resource tenant as B2B collaboration users. | Within the same multi-tenant organization, B2B collaboration users, particularly member users, benefit from enhanced, seamless collaboration across Microsoft 365. | +| **User type** | B2B direct connect user<br/>- N/A | B2B collaboration user<br/>- External member<br/>- External guest (default) | B2B collaboration user<br/>- External member (default)<br/>- External guest | B2B collaboration user<br/>- External member (default)<br/>- External guest | -The following diagram shows how cross-tenant synchronization, B2B collaboration, and B2B direct connect could be used together. +The following diagram shows how B2B direct connect, B2B collaboration, and cross-tenant synchronization capabilities could be used together. :::image type="content" source="./media/overview/multi-tenant-capabilities.png" alt-text="Diagram that shows different multi-tenant capabilities." lightbox="./media/overview/multi-tenant-capabilities.png"::: ## Terminology -To better understand multi-tenant organizations, you can refer back to the following list of terms. 
+To better understand the Azure AD capabilities related to multi-tenant organization scenarios, you can refer back to the following list of terms. | Term | Definition | | | | | tenant | An instance of Azure Active Directory (Azure AD). | | organization | The top level of a business hierarchy. |-| multi-tenant organization | An organization that has more than one instance of Azure AD. | +| multi-tenant organization | An organization that has more than one instance of Azure AD, as well as a capability to group those instances in Azure AD. | +| creator tenant | The tenant that created the multi-tenant organization. | +| owner tenant | A tenant with the owner role. Initially, the creator tenant. | +| added tenant | A tenant that was added by an owner tenant. | +| joiner tenant | A tenant that is joining the multi-tenant organization. | +| join request | A request submitted by a joiner or added tenant to join the multi-tenant organization. | +| pending tenant | A tenant that was added by an owner but that hasn't yet joined. | +| active tenant | A tenant that created or joined the multi-tenant organization. | +| member tenant | A tenant with the member role. Most joiner tenants start as members. | +| multi-tenant organization tenant | An active tenant of the multi-tenant organization, not pending. | | cross-tenant synchronization | A one-way synchronization service in Azure AD that automates creating, updating, and deleting B2B collaboration users across tenants in an organization. |-| cross-tenant access settings | Settings to manage collaboration with external Azure AD organizations. | +| cross-tenant access settings | Settings to manage collaboration for specific Azure AD organizations. | +| cross-tenant access settings template | An optional template to preconfigure cross-tenant access settings that are applied to any partner tenant newly joining the multi-tenant organization. | | organizational settings | Cross-tenant access settings for specific Azure AD organizations. 
| | configuration | An application and underlying service principal in Azure AD that includes the settings (such as target tenant, user scope, and attribute mappings) needed for cross-tenant synchronization. | | provisioning | The process of automatically creating or synchronizing objects across a boundary. | To better understand multi-tenant organizations, you can refer back to the follo ## Next steps +- [What is a multi-tenant organization in Azure Active Directory?](multi-tenant-organization-overview.md) - [What is cross-tenant synchronization?](cross-tenant-synchronization-overview.md) |
active-directory | Azure Pim Resource Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/azure-pim-resource-rbac.md | Title: View audit report for Azure resource roles in Privileged Identity Management (PIM) -description: View activity and audit history for Azure resource roles in Azure AD Privileged Identity Management (PIM). +description: View activity and audit history for Azure resource roles in Privileged Identity Management (PIM). documentationcenter: '' -With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can view activity, activations, and audit history for Azure resources roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Azure portal that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). +With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), part of Microsoft Entra, you can view activity, activations, and audit history for Azure resource roles within your organization. This includes subscriptions, resource groups, and even virtual machines. Any resource within the Microsoft Entra admin center that leverages the Azure role-based access control functionality can take advantage of the security and lifecycle management capabilities in Privileged Identity Management. If you want to retain audit data for longer than the default retention period, you can use Azure Monitor to route it to an Azure storage account. 
For more information, see [Archive Azure AD logs to an Azure storage account](../reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md). > [!NOTE] > If your organization has outsourced management functions to a service provider who uses [Azure Lighthouse](../../lighthouse/overview.md), role assignments authorized by that service provider won't be shown here. With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), To see what actions a specific user took in various resources, you can view the Azure resource activity that's associated with a given activation period. -1. Open **Azure AD Privileged Identity Management**. +1. Open **Privileged Identity Management**. 1. Select **Azure resources**. To see what actions a specific user took in various resources, you can view the You may have a compliance requirement where you must provide a complete list of role assignments to auditors. Privileged Identity Management enables you to query role assignments at a specific resource, which includes role assignments for all child resources. Previously, it was difficult for administrators to get a complete list of role assignments for a subscription and they had to export role assignments for each specific resource. Using Privileged Identity Management, you can query for all active and eligible role assignments in a subscription including role assignments for all resource groups and resources. -1. Open **Azure AD Privileged Identity Management**. +1. Open **Privileged Identity Management**. 1. Select **Azure resources**. You may have a compliance requirement where you must provide a complete list of Resource audit gives you a view of all role activity for a resource. -1. Open **Azure AD Privileged Identity Management**. +1. Open **Privileged Identity Management**. 1. Select **Azure resources**. My audit enables you to view your personal role activity. 
[!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -1. Sign in to the [Azure portal](https://portal.azure.com) with Privileged Role administrator role permissions, and open Azure AD. -1. Select **Audit logs**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Privileged Role administrator role permissions. ++1. Browse to **Identity** > **Audit logs**. + 1. Use the **Service** filter to display only audit events for the Privileged Identity Management service. On the **Audit logs** page, you can: - See the reason for an audit event in the **Status reason** column. My audit enables you to view your personal role activity. 1. Select an audit log event to see the ticket number on the **Activity** tab of the **Details** pane. - [![Check the ticket number for the audit event](media/azure-pim-resource-rbac/audit-event-ticket-number.png "Check the ticket number for the audit event")](media/azure-pim-resource-rbac/audit-event-ticket-number.png)] + [![Check the ticket number for the audit event](media/azure-pim-resource-rbac/audit-event-ticket-number.png "Check the ticket number for the audit event")](media/azure-pim-resource-rbac/audit-event-ticket-number.png) 1. You can view the requester (person activating the role) on the **Targets** tab of the **Details** pane for an audit event. There are three target types for Azure resource roles: |
active-directory | Concept Pim For Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/concept-pim-for-groups.md | One group can be an eligible member of another group, even if one of those group If a user is an active member of Group A, and Group A is an eligible member of Group B, the user can activate their membership in Group B. This activation applies only to the user who requested it; it does not mean that the entire Group A becomes an active member of Group B. +## Privileged Identity Management and app provisioning (Public Preview) ++> [!VIDEO https://www.youtube.com/embed/9T6lKEloq0Q] ++If the group is configured for [app provisioning](../app-provisioning/index.yml), activation of group membership will trigger provisioning of group membership (and of the user account itself, if it wasn't provisioned previously) to the application using the SCIM protocol. ++The public preview adds functionality that triggers provisioning right after group membership is activated in PIM. +Provisioning configuration depends on the application. Generally, we recommend having at least two groups assigned to the application. Depending on the number of roles in your application, you may choose to define additional "privileged groups": +++|Group|Purpose|Members|Group membership|Role assigned in the application| +|--|--|--|--|--| +|All users group|Ensure that all users that need access to the application are constantly provisioned to the application.|All users that need to access the application.|Active|None, or low-privileged role| +|Privileged group|Provide just-in-time access to a privileged role in the application.|Users that need just-in-time access to a privileged role in the application.|Eligible|Privileged role| + ## Next steps - [Bring groups into Privileged Identity Management](groups-discover-groups.md) |
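The SCIM-based provisioning described in the excerpt above can be sketched concretely. The snippet below builds the kind of SCIM 2.0 `PatchOp` body (per RFC 7644) that a provisioning service sends to an application's `/Groups/{id}` endpoint after a group membership is activated; the function name, IDs, and endpoint are illustrative assumptions, not part of the documented Azure AD behavior.

```python
import json

# Placeholder IDs for illustration only -- in a real tenant these come from
# the provisioning configuration and the activated user's object ID.
GROUP_SCIM_ID = "privileged-group-scim-id"
USER_SCIM_ID = "user-scim-id"

def build_scim_add_member_patch(user_id: str) -> dict:
    """Build a SCIM 2.0 PatchOp (RFC 7644) that adds one member to a group."""
    return {
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {
                "op": "add",
                "path": "members",
                "value": [{"value": user_id}],
            }
        ],
    }

patch = build_scim_add_member_patch(USER_SCIM_ID)
print(json.dumps(patch, indent=2))
# Sending it would look roughly like this (requires a real SCIM endpoint and token):
# requests.patch(f"{base_url}/Groups/{GROUP_SCIM_ID}", json=patch,
#                headers={"Authorization": f"Bearer {token}"})
```

This is a sketch of the wire format only; the actual requests are issued by the Azure AD provisioning service, not by your code.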
active-directory | Groups Activate Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-activate-roles.md | This article is for eligible members or owners who want to activate their group ## Activate a role - When you need to take on a group membership or ownership, you can request activation by using the **My roles** navigation option in PIM. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management -> My roles -> Groups**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **My roles** > **Groups**. >[!NOTE] > You may also use this [short link](https://aka.ms/pim) to open the **My roles** page directly. |
active-directory | Groups Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-approval-workflow.md | Follow the steps in this article to approve or deny requests for group membershi As a delegated approver, you receive an email notification when an Azure resource role request is pending your approval. You can view pending requests in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management** > **Approve requests** > **Groups**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests** > **Groups**. 1. In the **Requests for role activations** section, you see a list of requests pending your approval. When you activate a role in Privileged Identity Management, the activation might If your activation is delayed: -1. Sign out of the Azure portal and then sign back in. +1. Sign out of the Microsoft Entra admin center and then sign back in. 1. In Privileged Identity Management, verify that you're listed as the member of the role. ## Next steps |
active-directory | Groups Assign Member Owner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-assign-member-owner.md | Follow these steps to make a user an eligible member or owner of a group. You will > [!NOTE] > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**. ++1. Here you can view groups that are already enabled for PIM for Groups. :::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png"::: Follow these steps to update or remove an existing role assignment. You will nee > [!NOTE] > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. -1. Sign in to the [Azure portal](https://portal.azure.com) with appropriate role permissions. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with appropriate role permissions. ++1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**. -1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups. +1. 
Here you can view groups that are already enabled for PIM for Groups. :::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png"::: |
active-directory | Groups Audit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-audit.md | Title: Audit activity history for group assignments in Privileged Identity Management -description: View activity and audit activity history for group assignments in Azure AD Privileged Identity Management (PIM). +description: View activity and audit activity history for group assignments in Privileged Identity Management (PIM). documentationcenter: '' Follow these steps to view the audit history for groups in Privileged Identity M **Resource audit** gives you a view of all activity associated with groups in PIM. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management -> Groups**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**. 1. Select the group you want to view audit history for. Follow these steps to view the audit history for groups in Privileged Identity M **My audit** enables you to view your personal role activity for groups in PIM. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management -> Groups**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**. 1. Select the group you want to view audit history for. |
active-directory | Groups Discover Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-discover-groups.md | In Azure Active Directory (Azure AD), part of Microsoft Entra, you can use Privi ## Identify groups to manage - Before you start, you need an Azure AD Security group or Microsoft 365 group. To learn more about group management in Azure AD, see [Manage Azure Active Directory groups and group membership](../fundamentals/how-to-manage-groups.md). Dynamic groups and groups synchronized from an on-premises environment cannot be managed in PIM for Groups. You need appropriate permissions to bring groups into Azure AD PIM. For role-assig > Other roles with permissions to manage groups (such as Exchange Administrators for non-role-assignable M365 groups) and administrators with assignments scoped at administrative unit level can manage groups through Groups API/UX and override changes made in Azure AD PIM. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator). ++1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**. -1. Select **Azure AD Privileged Identity Management -> Groups** and view groups that are already enabled for PIM for Groups. +1. Here you can view groups that are already enabled for PIM for Groups. :::image type="content" source="media/pim-for-groups/pim-group-1.png" alt-text="Screenshot of where to view groups that are already enabled for PIM for Groups." lightbox="media/pim-for-groups/pim-group-1.png"::: You need appropriate permissions to bring groups into Azure AD PIM. For role-assig > [!NOTE]-> Alternatively, you can use the Groups blade to bring group under Privileged Identity Management. 
+> Alternatively, you can use the Groups pane to bring a group under Privileged Identity Management. > [!NOTE] > Once a group is managed, it can't be taken out of management. This prevents another resource administrator from removing PIM settings. |
active-directory | Groups Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-renew-extend.md | After the request has been submitted, resource administrators are notified of a ### Admin approves -Resource administrators can access the renewal request from the link in the email notification or by accessing Privileged Identity Management from the Azure portal and selecting **Approve requests** from the left pane. +Resource administrators can access the renewal request from the link in the email notification or by accessing Privileged Identity Management from the Microsoft Entra admin center and selecting **Approve requests** from the left pane. When an administrator selects **Approve** or **Deny**, the details of the request are shown along with a field to provide a business justification for the audit logs. |
active-directory | Groups Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/groups-role-settings.md | Role settings are defined per role per group. All assignments for the same role ## Update role settings - To open the settings for a group role: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management** > **Groups**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Groups**. 1. Select the group for which you want to configure role settings. |
active-directory | Pim Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-approval-workflow.md | Title: Approve or deny requests for Azure AD roles in PIM -description: Learn how to approve or deny requests for Azure AD roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to approve or deny requests for Azure AD roles in Privileged Identity Management (PIM). documentationcenter: '' With Privileged Identity Management (PIM) in Azure Active Directory (Azure AD), ## View pending requests - As a delegated approver, you'll receive an email notification when an Azure AD role request is pending your approval. You can view these pending requests in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Open **Azure AD Privileged Identity Management**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Approve requests**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests**. ![Approve requests - page showing request to review Azure AD roles](./media/azure-ad-pim-approval-workflow/resources-approve-pane.png) |
active-directory | Pim Complete Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-complete-roles-and-resource-roles-review.md | Title: Complete an access review of Azure resource and Azure AD roles in PIM -description: Learn how to complete an access review of Azure resource and Azure AD roles Privileged Identity Management in Azure Active Directory. +description: Learn how to complete an access review of Azure resource and Azure AD roles in Privileged Identity Management. documentationcenter: '' Once the review has been created, follow the steps in this article to complete t ## Complete access reviews +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user that is assigned to one of the prerequisite roles. -1. Sign in to the [Azure portal](https://portal.azure.com). For **Azure resources**, navigate to **Privileged Identity Management** and select **Azure resources** under **Manage** from the dashboard. For **Azure AD roles**, select **Azure AD roles** from the same dashboard. +1. Browse to **Identity governance** > **Privileged Identity Management**. For **Azure AD roles**, select **Azure AD roles**. For **Azure resources**, select **Azure resources**. -2. For **Azure resources**, select your resource under **Azure resources** and then select **Access reviews** from the dashboard. For **Azure AD roles**, proceed directly to the **Access reviews** on the dashboard. --3. Select the access review that you want to manage. Below is a sample screenshot of the **Access Reviews** overview for both **Azure resources** and **Azure AD roles**. +1. Select the access review that you want to manage. Below is a sample screenshot of the **Access Reviews** overview for both **Azure resources** and **Azure AD roles**. 
:::image type="content" source="media/pim-complete-azure-ad-roles-and-resource-roles-review/rbac-azure-ad-roles-home-list.png" alt-text="Access reviews list showing role, owner, start date, end date, and status screenshot." lightbox="media/pim-complete-azure-ad-roles-and-resource-roles-review/rbac-azure-ad-roles-home-list.png"::: |
active-directory | Pim Create Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-create-roles-and-resource-roles-review.md | Title: Create an access review of Azure resource and Azure AD roles in PIM -description: Learn how to create an access review of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to create an access review of Azure resource and Azure AD roles in Privileged Identity Management (PIM). documentationcenter: '' For more information about licenses for PIM, refer to [License requirements to u Access Reviews for **Service Principals** requires an Entra Workload Identities Premium plan in addition to Microsoft Entra Premium P2 or Microsoft Entra ID Governance licenses. -- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.+- Workload Identities Premium licensing: You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Microsoft Entra admin center. ## Create access reviews +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as a user that is assigned to one of the prerequisite roles. -1. Sign in to the [Azure portal](https://portal.azure.com) as a user that is assigned to one of the prerequisite role(s). +1. Browse to **Identity governance** > **Privileged Identity Management**. For **Azure AD roles**, select **Azure AD roles**. For **Azure resources**, select **Azure resources**. -2. Select **Identity Governance**. - -3. For **Azure AD roles**, select **Azure AD roles** under **Privileged Identity Management**. 
For **Azure resources**, select **Azure resources** under **Privileged Identity Management**. -- :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Azure portal screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png"::: + :::image type="content" source="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png" alt-text="Select Identity Governance in the Microsoft Entra admin center screenshot." lightbox="./media/pim-create-azure-ad-roles-and-resource-roles-review/identity-governance.png"::: 4. For **Azure AD roles**, select **Azure AD roles** again under **Manage**. For **Azure resources**, select the subscription you want to manage. |
active-directory | Pim Deployment Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-deployment-plan.md | For both Azure AD and Azure resource role, make sure that you’ve users represe ### Plan rollback -If PIM fails to work as desired in the production environment, you can change the role assignment from eligible to active once again. For each role that you’ve configured, select the ellipsis (…) for all users with assignment type as **eligible**. You can then select the **Make active** option to go back and make the role assignment **active**. +If PIM fails to work as desired in the production environment, you can change the role assignment from eligible to active once again. For each role that you’ve configured, select the ellipsis **(…)** for all users with assignment type as **eligible**. You can then select the **Make active** option to go back and make the role assignment **active**. ## Plan and implement PIM for Azure AD roles |
active-directory | Pim Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-getting-started.md | Title: Start using PIM -description: Learn how to enable and get started using Azure AD Privileged Identity Management (PIM) in the Azure portal. +description: Learn how to enable and get started using Privileged Identity Management (PIM) in the Microsoft Entra admin center. documentationcenter: '' Once Privileged Identity Management is set up, you can learn your way around. [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -To make it easier to open Privileged Identity Management, add a PIM tile to your Azure portal dashboard. +To make it easier to open Privileged Identity Management, add a PIM tile to your Microsoft Entra admin center dashboard. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center all services page](https://entra.microsoft.com/#allservices/category/All). -1. Select **All services** and find the **Azure AD Privileged Identity Management** service. +1. Find the **Azure AD Privileged Identity Management** service. ![Azure AD Privileged Identity Management in All services](./media/pim-getting-started/pim-all-services-find.png) |
active-directory | Pim How To Activate Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-activate-role.md | Title: Activate Azure AD roles in PIM -description: Learn how to activate Azure AD roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to activate Azure AD roles in Privileged Identity Management (PIM). documentationcenter: '' This article is for administrators who need to activate their Azure AD role in P ## Activate a role - When you need to assume an Azure AD role, you can request activation by opening **My roles** in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). +1. Browse to **Identity governance** > **Privileged Identity Management** > **My roles**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). -1. Select **My roles**, and then select **Azure AD roles** to see a list of your eligible Azure AD roles. +1. Select **Azure AD roles** to see a list of your eligible Azure AD roles. ![My roles page showing roles you can activate](./media/pim-how-to-activate-role/my-roles.png) |
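The portal activation steps above also have a programmatic counterpart in Microsoft Graph, which the excerpt does not cover. The sketch below builds a plausible request body for `POST /roleManagement/directory/roleAssignmentScheduleRequests` with the `selfActivate` action; the helper name, GUIDs, justification, and duration are placeholders, and actually submitting the request would require an authenticated Graph client with the appropriate permissions.

```python
import json
from datetime import datetime, timezone

def build_self_activation_request(principal_id: str, role_definition_id: str,
                                  justification: str, duration: str = "PT8H") -> dict:
    """Build a candidate body for a PIM self-activation request via Microsoft Graph.

    'selfActivate' asks PIM to activate an eligible Azure AD role assignment
    for the signed-in principal. All IDs passed in here are placeholders.
    """
    return {
        "action": "selfActivate",
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "directoryScopeId": "/",  # "/" means tenant-wide scope
        "justification": justification,
        "scheduleInfo": {
            "startDateTime": datetime.now(timezone.utc).isoformat(),
            "expiration": {"type": "afterDuration", "duration": duration},
        },
    }

body = build_self_activation_request(
    principal_id="00000000-0000-0000-0000-000000000000",        # placeholder GUID
    role_definition_id="11111111-1111-1111-1111-111111111111",  # placeholder GUID
    justification="Investigate sign-in incident",
)
print(json.dumps(body, indent=2))
```

The portal flow remains the documented path; this sketch only illustrates the shape of the underlying request.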
active-directory | Pim How To Add Role To User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-add-role-to-user.md | Title: Assign Azure AD roles in PIM -description: Learn how to assign Azure AD roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to assign Azure AD roles in Privileged Identity Management (PIM). documentationcenter: '' -With Azure Active Directory (Azure AD), a Global administrator can make **permanent** Azure AD admin role assignments. These role assignments can be created using the [Azure portal](../roles/permissions-reference.md) or using [PowerShell commands](/powershell/module/azuread#directory_roles). +With Azure Active Directory (Azure AD), a Global administrator can make **permanent** Azure AD admin role assignments. These role assignments can be created using the [Microsoft Entra admin center](../roles/permissions-reference.md) or using [PowerShell commands](/powershell/module/azuread#directory_roles). The Azure AD Privileged Identity Management (PIM) service also allows Privileged role administrators to make permanent admin role assignments. Additionally, Privileged role administrators can make users **eligible** for Azure AD admin roles. An eligible administrator can activate the role when they need it, and then their permissions expire once they're done. Privileged Identity Management supports both built-in and custom Azure AD roles. ## Assign a role - Follow these steps to make a user eligible for an Azure AD admin role. -1. Sign in to the [Azure portal](https://portal.azure.com) with a user that is a member of the [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator) role. --1. Open **Azure AD Privileged Identity Management**. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator). -1. Select **Azure AD roles**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles**. 1. Select **Roles** to see the list of roles for Azure AD permissions. Follow these steps to make a user eligible for an Azure AD admin role. For certain roles, the scope of the granted permissions can be restricted to a single admin unit, service principal, or application. This procedure is an example of assigning a role that has the scope of an administrative unit. For a list of roles that support scope via administrative unit, see [Assign scoped roles to an administrative unit](../roles/admin-units-assign-roles.md). This feature is currently being rolled out to Azure AD organizations. -1. Sign in to the [Azure portal](https://portal.azure.com) with Privileged Role Administrator permissions. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator). -1. Select **Azure Active Directory** > **Roles and administrators**. +1. Browse to **Identity** > **Roles & admins** > **Roles & admins**. 1. Select the **User Administrator**. |
active-directory | Pim How To Change Default Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-change-default-settings.md | Title: Configure Azure AD role settings in PIM -description: Learn how to configure Azure AD role settings in Azure AD Privileged Identity Management (PIM). +description: Learn how to configure Azure AD role settings in Privileged Identity Management (PIM). documentationcenter: '' PIM role settings are also known as PIM policies. ## Open role settings - To open the settings for an Azure AD role: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). ++1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD roles** > **Roles**. -1. Select **Azure AD Privileged Identity Management** > **Azure AD Roles** > **Roles**. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles. +1. This page shows a list of Azure AD roles available in the tenant, including built-in and custom roles. :::image type="content" source="media/pim-how-to-change-default-settings/role-settings.png" alt-text="Screenshot that shows the list of Azure AD roles available in the tenant, including built-in and custom roles." lightbox="media/pim-how-to-change-default-settings/role-settings.png"::: 1. Select the role whose settings you want to configure. |
active-directory | Pim How To Configure Security Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts.md | Title: Security alerts for Azure AD roles in PIM -description: Configure security alerts for Azure AD roles Privileged Identity Management in Azure Active Directory. +description: Configure security alerts for Azure AD roles Privileged Identity Management. documentationcenter: '' Severity: **Low** ## Customize security alert settings - Follow these steps to configure security alerts for Azure AD roles in Privileged Identity Management: -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). --1. From the left menu, select **Azure AD Roles**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. From the left menu, select **Alerts**, and then select **Setting**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure AD Roles** > **Alerts** > **Setting**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). ![Screenshots of alerts page with the settings highlighted.](media/pim-how-to-configure-security-alerts/alert-settings.png) |
active-directory | Pim How To Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-renew-extend.md | Title: Renew Azure AD role assignments in PIM -description: Learn how to extend or renew Azure Active Directory role assignments in Azure AD Privileged Identity Management (PIM). +description: Learn how to extend or renew Azure Active Directory role assignments in Privileged Identity Management (PIM). documentationcenter: '' After the request has been submitted, administrators are notified of a pending r ### Admin approves -Azure AD administrators can access the renewal request from the link in the email notification, or by accessing Privileged Identity Management from the Azure portal and selecting **Approve requests** in PIM. +Azure AD administrators can access the renewal request from the link in the email notification, or by accessing Privileged Identity Management from the Microsoft Entra admin center and selecting **Approve requests** in PIM. ![Azure AD roles - Approve requests page listing requests and links to approve or deny](./media/pim-how-to-renew-extend/extend-admin-approve-list.png) |
active-directory | Pim How To Use Audit Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-how-to-use-audit-log.md | Follow these steps to view the audit history for Azure AD roles. Resource audit gives you a view of all activity associated with your Azure AD roles. -1. Open **Azure AD Privileged Identity Management**. +1. Open **Privileged Identity Management**. 1. Select **Azure AD roles**. Resource audit gives you a view of all activity associated with your Azure AD ro My audit enables you to view your personal role activity. -1. Open **Azure AD Privileged Identity Management**. +1. Open **Privileged Identity Management**. 1. Select **Azure AD roles**. |
active-directory | Pim Perform Roles And Resource Roles Review | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-perform-roles-and-resource-roles-review.md | Title: Perform an access review of Azure resource and Azure AD roles in PIM -description: Learn how to review access of Azure resource and Azure AD roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to review access of Azure resource and Azure AD roles in Privileged Identity Management (PIM). documentationcenter: '' -If you are assigned to an administrative role, your organization's privileged role administrator may ask you to regularly confirm that you still need that role for your job. You might get an email that includes a link, or you can go straight to the [Azure portal](https://portal.azure.com) and begin. +If you're assigned to an administrative role, your organization's privileged role administrator may ask you to regularly confirm that you still need that role for your job. You might get an email that includes a link, or you can go straight to the [Microsoft Entra admin center](https://entra.microsoft.com) and begin. If you're a privileged role administrator or global administrator interested in access reviews, get more details at [How to start an access review](./pim-create-roles-and-resource-roles-review.md). ## Approve or deny access +You can approve or deny access based on whether the user still needs access to the role. Choose **Approve** if you want them to stay in the role, or **Deny** if they don't need the access anymore. The users' assignment status won't change until the review closes and the administrator applies the results. Common scenarios in which certain denied users can't have results applied to them may include the following: -You can approve or deny access based on whether the user still needs access to the role. 
Choose **Approve** if you want them to stay in the role, or **Deny** if they do not need the access anymore. The users' assignment status will not change until the review closes and the administrator applies the results. Common scenarios in which certain denied users cannot have results applied to them may include the following: --- **Reviewing members of a synced on-premises Windows AD group**: If the group is synced from an on-premises Windows AD, the group cannot be managed in Azure AD and therefore membership cannot be changed.-- **Reviewing a role with nested groups assigned**: For users who have membership through a nested group, the access review will not remove their membership to the nested group and therefore they will retain access to the role being reviewed.+- **Reviewing members of a synced on-premises Windows AD group**: If the group is synced from an on-premises Windows AD, the group can't be managed in Azure AD, and therefore membership can't be changed. +- **Reviewing a role with nested groups assigned**: For users who have membership through a nested group, the access review won't remove their membership to the nested group and therefore they retain access to the role being reviewed. - **User not found or other errors**: These may also result in an apply result not being supported. Follow these steps to find and complete the access review: -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Select **Azure Active Directory** and open **Privileged Identity Management**. -1. Select **Review access**. If you have any pending access reviews, they will appear in the access reviews page. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). ++1. Browse to **Identity governance** > **Privileged Identity Management** > **Review access**. 
- :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png" alt-text="Screenshot of Privileged Identity Management application, with Review access blade selected for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png"::: +1. If you have any pending access reviews, they appear in the access reviews page. ++ :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png" alt-text="Screenshot of Privileged Identity Management application, with Review access pane selected for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-complete.png"::: 1. Select the review you want to complete.+ 1. Choose **Approve** or **Deny**. In the **Provide a reason box**, enter a business justification for your decision as needed. :::image type="content" source="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-completed.png" alt-text="Screenshot of Privileged Identity Management application, with the selected Access Review for Azure AD roles." lightbox="media/pim-perform-azure-ad-roles-and-resource-roles-review/rbac-access-review-azure-ad-completed.png"::: |
active-directory | Pim Resource Roles Approval Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-approval-workflow.md | Title: Approve requests for Azure resource roles in PIM -description: Learn how to approve or deny requests for Azure resource roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to approve or deny requests for Azure resource roles in Privileged Identity Management (PIM). documentationcenter: '' Follow the steps in this article to approve or deny requests for Azure resource ## View pending requests - As a delegated approver, you'll receive an email notification when an Azure resource role request is pending your approval. You can view these pending requests in Privileged Identity Management. -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Open **Azure AD Privileged Identity Management**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Approve requests**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Approve requests**. ![Approve requests - Azure resources page showing request to review](./media/pim-resource-roles-approval-workflow/resources-approve-requests.png) |
active-directory | Pim Resource Roles Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md | Title: Assign Azure resource roles in Privileged Identity Management -description: Learn how to assign Azure resource roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to assign Azure resource roles in Privileged Identity Management (PIM). documentationcenter: '' For more information, see [What is Azure attribute-based access control (Azure A Follow these steps to make a user eligible for an Azure resource role. -1. Sign in to the [Azure portal](https://portal.azure.com) with Owner or User Access Administrator role permissions. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) with Owner or User Access Administrator role permissions. -1. Open **Azure AD Privileged Identity Management**. --1. Select **Azure resources**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**. 1. Select the **Resource type** you want to manage. For example, **Resource** or **Resource group**. Then select the resource you want to manage to open its overview page. |
active-directory | Pim Resource Roles Configure Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-alerts.md | Title: Configure security alerts for Azure roles in Privileged Identity Management -description: Learn how to configure security alerts for Azure resource roles in Azure AD Privileged Identity Management (PIM). +description: Learn how to configure security alerts for Azure resource roles in Privileged Identity Management (PIM). documentationcenter: '' Alert | Severity | Trigger | Recommendation ## Configure security alert settings - Follow these steps to configure security alerts for Azure roles in Privileged Identity Management: -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Open **Azure AD Privileged Identity Management**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). --1. From the left menu, select **Azure resources**. --1. From the list of resources, select your Azure subscription. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. On the **Alerts** page, select **Settings**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**. Select your subscription, then select **Alerts** > **Settings**. For information about how to add the Privileged Identity Management tile to your dashboard, see [Start using Privileged Identity Management](pim-getting-started.md). ![Screenshot of the alerts page with settings highlighted.](media/pim-resource-roles-configure-alerts/rbac-navigate-settings.png) |
active-directory | Pim Resource Roles Configure Role Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-configure-role-settings.md | Title: Configure Azure resource role settings in PIM -description: Learn how to configure Azure resource role settings in Azure AD Privileged Identity Management (PIM). +description: Learn how to configure Azure resource role settings in Privileged Identity Management (PIM). documentationcenter: '' PIM role settings are also known as PIM policies. ## Open role settings - To open the settings for an Azure resource role: -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com). -1. Select **Azure AD Privileged Identity Management** > **Azure Resources**. This page shows a list of Azure resources discovered in Privileged Identity Management. Use the **Resource type** filter to select all required resource types. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure resources**. This page shows a list of Azure resources discovered in Privileged Identity Management. Use the **Resource type** filter to select all required resource types. :::image type="content" source="media/pim-resource-roles-configure-role-settings/resources-list.png" alt-text="Screenshot that shows the list of Azure resources discovered in Privileged Identity Management." lightbox="media/pim-resource-roles-configure-role-settings/resources-list.png"::: |
active-directory | Pim Resource Roles Discover Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-discover-resources.md | Title: Discover Azure resources to manage in PIM -description: Learn how to discover Azure resources to manage in Azure AD Privileged Identity Management (PIM). +description: Learn how to discover Azure resources to manage in Privileged Identity Management (PIM). documentationcenter: '' You can view and manage the management groups or subscriptions to which you have ## Discover resources +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Privileged role administrator](../roles/permissions-reference.md#privileged-role-administrator). -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Open **Azure AD Privileged Identity Management**. --1. Select **Azure resources**. +1. Browse to **Identity governance** > **Privileged Identity Management** > **Azure Resources**. If this is your first time using Privileged Identity Management for Azure resources, you'll see a **Discover resources** page. |
active-directory | Pim Resource Roles Renew Extend | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-resource-roles-renew-extend.md | Title: Renew Azure resource role assignments in PIM -description: Learn how to extend or renew Azure resource role assignments in Azure AD Privileged Identity Management (PIM). +description: Learn how to extend or renew Azure resource role assignments in Privileged Identity Management (PIM). documentationcenter: '' Users assigned to a role can extend expiring role assignments directly from the ![Azure resources - My roles page listing eligible roles with an Action column](media/pim-resource-roles-renew-extend/aadpim-rbac-extend-ui.png) -When the assignment end date-time is within 14 days, the link to **Extend** becomes an active in the Azure portal. In the following example, assume the current date is March 27. +When the assignment end date-time is within 14 days, the link to **Extend** becomes active in the Microsoft Entra admin center. In the following example, assume the current date is March 27. >[!Note] >For a group assigned to a role, the **Extend** link never becomes available so that a user with an inherited assignment can't extend the group assignment. |
active-directory | Pim Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/privileged-identity-management/pim-roles.md | We support all Microsoft 365 roles in the Azure AD Roles and Administrators port > [!NOTE] > - Eligible users for the SharePoint administrator role, the Device administrator role, and any roles trying to access the Microsoft Security & Compliance Center might experience delays of up to a few hours after activating their role. We are working with those teams to fix the issues.-> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-device-administrator-role). +> - For information about delays activating the Azure AD Joined Device Local Administrator role, see [How to manage the local administrators group on Azure AD joined devices](../devices/assign-local-admin.md#manage-the-azure-ad-joined-device-local-administrator-role). ## Next steps |
active-directory | Concept Activity Logs Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md | - Title: Azure Active Directory activity log integration options -description: Introduction to the options for integrating Azure Active Directory activity logs with storage and analysis tools. ------- Previously updated : 07/27/2023-----# Azure AD activity log integrations --Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs. --With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered. --## Supported reports --The following logs can be integrated with one of many endpoints: --* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. -* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors. -* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications. -* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) help you monitor changes in user risk level and remediation activity. 
-* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor users' risk detections and analyze trends in risk activity detected in your organization. --## Integration options --To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories: --- Troubleshooting-- Long-term storage-- Analysis and monitoring--### Troubleshooting --If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed. --If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options. --### Long-term storage --If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal if you don't plan on querying that data often. --If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options. --### Analysis and monitoring --If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring. --If you have a third-party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools. --If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs. 
With this integration, you can query your activity logs with Log Analytics. In addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub. --## Cost considerations --There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day. --Because the size and cost for sending logs to an endpoint is difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for a day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day. --Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles: --- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md)-- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md)-- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md)--Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost-saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md). --## Estimate your costs --To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint. 
--The following factors could affect costs for your organization: --- Audit log events use around 2 KB of data storage-- Sign-in log events use on average 11.5 KB of data storage-- A tenant of about 100,000 users could incur about 1.5 million events per day-- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame--### Daily log size --To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). --If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start: --- 1000 records-- For large tenants, 15 minutes of sign-ins-- For small to medium tenants, 1 hour of sign-ins--You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. Adjust your sample size and when you capture the sample accordingly. --With the data sample captured, multiply accordingly to find out how large the file would be for one day. --### Estimate the daily cost --To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase. --To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out. 
Having a dedicated resource group makes it easy to view the cost analysis and then delete it when you're done. --With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts. --![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png) --Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost. --## Calculate estimated costs --From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products. --- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/)-- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/)-- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/)-- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/)--Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices. 
--![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png) --## Next steps --* [Create a storage account](../../storage/common/storage-account-create.md) -* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md) -* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md) -* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md) |
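The sizing factors given above (audit events around 2 KB, sign-in events around 11.5 KB, roughly 1.5 million events per day for a tenant of about 100,000 users) can be turned into a rough GB/day figure to feed into the pricing calculator. A back-of-the-envelope sketch; the audit event count used in the example call is an assumed figure, not one from the article:

```python
AUDIT_EVENT_KB = 2.0      # approximate size of one audit log event
SIGN_IN_EVENT_KB = 11.5   # average size of one sign-in log event

def estimated_daily_gb(sign_in_events, audit_events):
    """Rough daily ingestion volume in GB from the per-event sizes above."""
    total_kb = sign_in_events * SIGN_IN_EVENT_KB + audit_events * AUDIT_EVENT_KB
    return total_kb / (1024 * 1024)  # KB -> GB

# ~1.5 million sign-in events/day for a ~100,000-user tenant (per the article);
# 100,000 audit events/day is an assumed illustrative figure.
gb_per_day = estimated_daily_gb(sign_in_events=1_500_000, audit_events=100_000)
print(f"~{gb_per_day:.1f} GB/day")  # → ~16.6 GB/day
```

A number like this is only a starting point; as the article notes, routing real logs for a day or two gives a far more accurate prediction.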
active-directory | Concept All Sign Ins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md | - Title: Sign-in logs (preview) -description: Conceptual information about sign-in logs, including new features in preview. ------- Previously updated : 03/28/2023-----# Sign-in logs in Azure Active Directory (preview) --Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs. --Two other activity logs are also available to help monitor the health of your tenant: -- **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant's resources. +- **[Provisioning](concept-provisioning-logs.md)** – Activities performed by a provisioning service, such as the creation of a group in ServiceNow or a user imported from Workday.--The classic sign-in logs in Azure AD provide you with an overview of interactive user sign-ins. Three more sign-in logs are now in preview: --- Non-interactive user sign-ins-- Service principal sign-ins-- Managed identities for Azure resource sign-ins--This article gives you an overview of the sign-in activity report with the preview of non-interactive, application, and managed identities for Azure resources sign-ins. For information about the sign-in report without the preview features, see [Sign-in logs in Azure Active Directory](concept-sign-ins.md). --## How do you access the sign-in logs? --You can always access your own sign-ins history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com). 
--To access the sign-ins log for a tenant, you must have one of the following roles: --- Global Administrator-- Security Administrator-- Security Reader-- Global Reader-- Reports Reader-->[!NOTE] ->To see Conditional Access data in the sign-ins log, you need to be a user in one of the following roles: -Company Administrator, Global Reader, Security Administrator, Security Reader, Conditional Access Administrator. --The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. See [Getting started with Azure Active Directory Premium](../fundamentals/get-started-premium.md) to upgrade your Azure Active Directory edition. After you upgrade to a premium license, it takes a couple of days for the data to show up in Graph; activities from before the upgrade aren't included. --**To access the Azure AD sign-ins log preview:** --1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role. -1. Go to **Azure Active Directory** > **Sign-ins log**. -1. Select the **Try out our new sign-ins preview** link. -- ![Screenshot of the preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-preview-link.png) -- To toggle back to the legacy view, select the **Click here to leave the preview** link. 
-- ![Screenshot of the leave preview link on the sign-in logs page.](./media/concept-all-sign-ins/sign-in-logs-leave-preview-link.png) --You can also access the sign-in logs from the following areas of Azure AD: --- Users-- Groups-- Enterprise applications--On the sign-in logs page, you can switch between: --- **Interactive user sign-ins:** Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code.--- **Non-interactive user sign-ins:** Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials.--- **Service principal sign-ins:** Sign-ins by apps and service principals that don't involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources.--- **Managed identities for Azure resources sign-ins:** Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md) --![Screenshot of the sign-in log types.](./media/concept-all-sign-ins/sign-ins-report-types.png) --## View the sign-ins log --To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down. --### Interactive user sign-ins --Interactive user sign-ins provide an authentication factor to Azure AD or interact directly with Azure AD or a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Azure AD or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Azure AD. 
--> [!NOTE] -> The interactive user sign-in log previously contained some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign-in log for increased accuracy. --**Report size:** Small <br /> -**Examples:** --- A user provides a username and password in the Azure AD sign-in screen.-- A user passes an SMS MFA challenge.-- A user provides a biometric gesture to unlock their Windows PC with Windows Hello for Business.-- A user is federated to Azure AD with an AD FS SAML assertion.--In addition to the default fields, the interactive sign-in log also shows: --- The sign-in location-- Whether Conditional Access has been applied--You can customize the list view by clicking **Columns** in the toolbar. --![Screenshot of the customize columns button.](./media/concept-all-sign-ins/sign-in-logs-columns-preview.png) --#### Considerations for MFA sign-ins --When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`. --### Non-interactive user sign-ins --Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins are performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user.
In general, the user perceives these sign-ins as happening in the background. --**Report size:** Large <br /> -**Examples:** --- A client app uses an OAuth 2.0 refresh token to get an access token.-- A client uses an OAuth 2.0 authorization code to get an access token and refresh token.-- A user performs single sign-on (SSO) to a web or Windows app on an Azure AD joined PC (without providing an authentication factor or interacting with an Azure AD prompt).-- A user signs in to a second Microsoft Office app while they have a session on a mobile device using FOCI (Family of Client IDs).--In addition to the default fields, the non-interactive sign-in log also shows: --- Resource ID-- Number of grouped sign-ins--You can't customize the fields shown in this report. --![Screenshot of the disabled columns option.](./media/concept-all-sign-ins/disabled-columns.png) --To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in. --When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps.
--Sign-ins are aggregated in the non-interactive user sign-in report when the following data matches: --- Application-- User-- IP address-- Status-- Resource ID--> [!NOTE] -> The IP address of non-interactive sign-ins performed by [confidential clients](../develop/msal-client-applications.md) doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance. --### Service principal sign-ins --Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any nonuser account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret, to authenticate or access resources. ---**Report size:** Large <br /> -**Examples:** --- A service principal uses a certificate to authenticate and access the Microsoft Graph. -- An application uses a client secret to authenticate in the OAuth Client Credentials flow. --You can't customize the fields shown in this report. --To make it easier to digest the data in the service principal sign-in logs, service principal sign-in events are grouped. Sign-ins from the same entity under the same conditions are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the service principal report when the following data matches: --- Service principal name or ID-- Status-- IP address-- Resource name or ID--### Managed identity for Azure resources sign-ins --Managed identities for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management. A VM with managed credentials uses Azure AD to get an access token.
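As context for how these sign-ins originate: a VM with a managed identity requests tokens from the local Azure Instance Metadata Service (IMDS) endpoint. The sketch below builds that request using the documented IMDS endpoint and `api-version`; treat it as illustrative, and note that the request only succeeds from inside an Azure VM that has a managed identity.

```python
# Sketch: the token request an Azure VM with a managed identity sends to IMDS.
# Endpoint and api-version follow the public IMDS documentation; the call
# only succeeds from inside an Azure VM, so this sketch just builds it.
import urllib.parse

IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource: str) -> tuple[str, dict]:
    """Return the (url, headers) pair for an IMDS managed-identity token request."""
    query = urllib.parse.urlencode({
        "api-version": "2018-02-01",
        "resource": resource,
    })
    # The Metadata header is required; IMDS rejects requests without it.
    return f"{IMDS_TOKEN_URL}?{query}", {"Metadata": "true"}

url, headers = build_imds_token_request("https://management.azure.com/")
# On a VM you would then issue, e.g.:
#   requests.get(url, headers=headers).json()["access_token"]
```

Each token obtained this way appears as one entry (or one aggregated row) in the managed identity sign-in log.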
--**Report size:** Small <br /> -**Examples:** -- You can't customize the fields shown in this report. --To make it easier to digest the data in the managed identities for Azure resources sign-in logs, sign-in events are grouped. Sign-ins from the same entity are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the managed identities report when all of the following data matches: --- Managed identity name or ID-- Status-- Resource name or ID--Select an item in the list view to display all sign-ins that are grouped under a node. Select a grouped item to see all details of the sign-in. --### Filter the results --Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential. --Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters. Take note of the **Date** range in your filter to ensure that Azure AD only returns the data you need. The filter you configure for interactive sign-ins is persisted for non-interactive sign-ins and vice versa. --Select the **Add filters** option from the top of the table to get started. --![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-all-sign-ins/sign-in-logs-filter-preview.png) --There are several filter options to choose from: --- **User:** The *user principal name* (UPN) of the user in question.-- **Status:** Options are *Success*, *Failure*, and *Interrupted*.-- **Resource:** The name of the service used for the sign-in.-- **Conditional Access:** The status of the Conditional Access policy.
Options are: - - *Not applied:* No policy applied to the user and application during sign-in. - - *Success:* One or more Conditional Access policies applied to the user and application (but not necessarily the other conditions) during sign-in. - - *Failure:* The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access. -- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is used. Currently, converting an IP address to a physical location is a best effort based on traces, registry data, reverse lookups, and other information.-The following table provides the options and descriptions for the **Client app** filter option. --> [!NOTE] -> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario. --|Name|Modern authentication|Description| -||:-:|| -|Authenticated SMTP| |Used by POP and IMAP clients to send email messages.| -|Autodiscover| |Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.| -|Exchange ActiveSync| |This filter shows all sign-in attempts where the EAS protocol has been attempted.| -|Browser|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using web browsers.| -|Exchange ActiveSync| | Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online.| -|Exchange Online PowerShell| |Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect.
For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).| -|Exchange Web Services| |A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.| -|IMAP4| |A legacy mail client using IMAP to retrieve email.| -|MAPI over HTTP| |Used by Outlook 2010 and later.| -|Mobile apps and desktop clients|![Blue checkmark.](./media/concept-all-sign-ins/check.png)|Shows all sign-in attempts from users using mobile apps and desktop clients.| -|Offline Address Book| |A copy of address list collections that are downloaded and used by Outlook.| -|Outlook Anywhere (RPC over HTTP)| |Used by Outlook 2016 and earlier.| -|Outlook Service| |Used by the Mail and Calendar app for Windows 10.| -|POP3| |A legacy mail client using POP3 to retrieve email.| -|Reporting Web Services| |Used to retrieve report data in Exchange Online.| -|Other clients| |Shows all sign-in attempts from users where the client app isn't included or unknown.| --## Analyze the sign-in logs --Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools. --### Sign-in error code --If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue. 
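If you retrieve sign-in events through the Microsoft Graph API, you can filter on the error code directly rather than scanning log details by hand. The sketch below only builds the request URL; it assumes you obtain a Graph access token elsewhere, and the `status/errorCode` filter path follows the Graph `signIn` resource schema.

```python
# Sketch (hypothetical helper): build a Microsoft Graph query for sign-ins
# that failed with a specific error code. Executing the request requires an
# access token, which this sketch deliberately leaves out.
import urllib.parse

GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def signins_by_error_code(error_code: int, top: int = 25) -> str:
    """Return a Graph URL that filters sign-ins on status/errorCode."""
    params = urllib.parse.urlencode({
        "$filter": f"status/errorCode eq {error_code}",
        "$top": top,
    })
    return f"{GRAPH_SIGNINS}?{params}"

# 50126 is the AADSTS code for an invalid username or password.
url = signins_by_error_code(50126)
```

The resulting error codes can then be pasted into the sign-in error lookup tool described above.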
--![Screenshot of a sign-in error code.](./media/concept-all-sign-ins/error-code.png) - -For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-error-codes.md) article. In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button. --![Screenshot of the error code lookup tool.](./media/concept-all-sign-ins/error-code-lookup-tool.png) --### Authentication details --The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt: --- A list of authentication policies applied, such as Conditional Access or Security Defaults.-- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.-- The sequence of authentication methods used to sign in.-- If the authentication attempt was successful and the reason why.--This information allows you to troubleshoot each step in a user’s sign-in. Use these details to track: --- The volume of sign-ins protected by MFA. -- The reason for the authentication prompt, based on the session lifetime policies.-- Usage and success rates for each authentication method.-- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.-- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.--While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab.
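Once exported, authentication details can be aggregated to compute the per-method usage and success rates mentioned above. A minimal sketch, using an illustrative record shape rather than the exact export schema:

```python
# Sketch: tally success rates per authentication method from exported
# authentication-detail records. The record fields below are illustrative,
# not the exact export schema.
from collections import defaultdict

def method_success_rates(records):
    """Return {method: fraction of attempts that succeeded}."""
    totals = defaultdict(lambda: [0, 0])  # method -> [successes, attempts]
    for rec in records:
        stats = totals[rec["authenticationMethod"]]
        stats[1] += 1
        if rec["succeeded"]:
            stats[0] += 1
    return {method: ok / n for method, (ok, n) in totals.items()}

sample = [
    {"authenticationMethod": "Password", "succeeded": True},
    {"authenticationMethod": "Password", "succeeded": False},
    {"authenticationMethod": "OATH verification code", "succeeded": True},
]
rates = method_success_rates(sample)
```

Remember that OATH hardware and software tokens both log as "OATH verification code," so this kind of tally can't distinguish them.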
--![Screenshot of the Authentication Details tab.](media/concept-all-sign-ins/authentication-details-tab.png) --When analyzing authentication details, take note of the following details: --- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).-- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include: - - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged. - - The **Primary authentication** row isn't initially logged. -- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting.--## Sign-in data used by other services --Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage. --### Risky sign-in data in Azure AD Identity Protection --Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data: --- Risky users-- Risky user sign-ins -- Risky service principals-- Risky service principal sign-ins--For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md). --![Screenshot of risky users in Identity Protection.](media/concept-all-sign-ins/id-protection-overview.png) --### Azure AD application and authentication sign-in activity --With an application-centric view of your sign-in data, you can answer questions such as: --- Who is using my applications?-- What are the top three applications in my organization?-- How is my newest application doing?--To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. 
These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md). --![Screenshot of the Azure AD application activity report.](media/concept-all-sign-ins/azure-ad-app-activity.png) --Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication. --![Screenshot of the Authentication methods report.](media/concept-all-sign-ins/azure-ad-authentication-methods.png) --### Microsoft 365 activity logs --You can view Microsoft 365 activity logs from the [Microsoft 365 admin center](/office365/admin/admin-overview/about-the-admin-center). Microsoft 365 activity and Azure AD activity logs share a significant number of directory resources. Only the Microsoft 365 admin center provides a full view of the Microsoft 365 activity logs. --You can also access the Microsoft 365 activity logs programmatically by using the [Office 365 Management APIs](/office/office-365-management-api/office-365-management-apis-overview). --## Next steps --- [Basic info in the Azure AD sign-in logs](reference-basic-info-sign-in-logs.md)--- [How to download logs in Azure Active Directory](howto-download-logs.md)--- [How to access activity logs in Azure AD](howto-access-activity-logs.md) |
active-directory | Concept Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-audit-logs.md | - Title: Audit logs in Azure Active Directory -description: Overview of the audit logs in Azure Active Directory. + Title: Learn about the audit logs in Azure Active Directory +description: Learn about the types of identity related events that are captured in Azure Active Directory audit logs. Two other activity logs are also available to help monitor the health of your te This article gives you an overview of the audit logs. -## What is it? +## What can you do with audit logs? -Audit logs in Azure AD provide access to system activity records, often needed for compliance. This log is categorized by user, group, and application management. +Audit logs in Azure AD provide access to system activity records, often needed for compliance. You can get answers to questions related to users, groups, and applications. -With a user-centric view, you can get answers to questions such as: --- What types of updates have been applied to users?+**Users:** +- What types of changes were recently applied to users? - How many users were changed?- - How many passwords were changed? -- What has an administrator done in a directory?---With a group-centric view, you can get answers to questions such as: --- What are the groups that have been added?--- Are there groups with membership changes?+**Groups:** +- What groups were recently added? - Have the owners of a group been changed?- - What licenses have been assigned to a group or a user? +**Applications:** -With an application-centric view, you can get answers to questions such as: --- What applications have been added or updated?--- What applications have been removed?-+- What applications have been added, updated, or removed?
- Has a service principal for an application changed?- - Have the names of applications been changed?--- Who gave consent to an application?- -## How do I access it? --To access the audit log for a tenant, you must have one of the following roles: --- Reports Reader-- Security Reader-- Security Administrator-- Global Reader-- Global Administrator--Sign in to the [Azure portal](https://portal.azure.com), go to **Azure AD**, and select **Audit log** from the **Monitoring** section. --The audit activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the audit log through the [Microsoft Graph API](/graph/api/resources/azure-ad-auditlog-overview). See [Getting started with Azure Active Directory Premium](../fundamentals/get-started-premium.md) to upgrade your Azure Active Directory edition. If your tenant had no activity data before the upgrade to a premium license, it takes a couple of days for the data to show up in Graph. - ## What do the logs show? Audit logs have a default list view that shows: - Category and name of the activity (*what*) - Status of the activity (success or failure) - Target-- Initiator / actor of an activity (who)+- Initiator / actor of an activity (*who*) You can customize and filter the list view by clicking the **Columns** button in the toolbar. Editing the columns enables you to add or remove fields from your view. -![Screenshot of available fields.](./media/concept-audit-logs/columnselect.png "Remove fields") - ### Filtering audit logs You can filter the audit data using the options visible in your list such as date range, service, category, and activity.
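As a sketch of the Microsoft Graph access mentioned above: the audit log's `auditLogs/directoryAudits` endpoint supports OData filters on fields such as `activityDateTime` and `category`. The helper below only constructs the query URL; executing it requires an access token, and the field names are taken from the Graph audit-log schema, so verify them against the current API reference.

```python
# Sketch (hypothetical helper): build a Microsoft Graph query for audit log
# entries in a date range and category. Running it requires a Graph token,
# which this sketch deliberately omits.
import urllib.parse

GRAPH_AUDITS = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"

def audit_query(start_iso: str, category: str) -> str:
    """Return a Graph URL filtering audit entries by start time and category."""
    flt = f"activityDateTime ge {start_iso} and category eq '{category}'"
    return f"{GRAPH_AUDITS}?{urllib.parse.urlencode({'$filter': flt})}"

url = audit_query("2023-08-01T00:00:00Z", "UserManagement")
```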
active-directory | Concept Diagnostic Settings Logs Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-diagnostic-settings-logs-options.md | ++ + Title: Logs available for streaming to endpoints from Azure Active Directory +description: Learn about the Azure Active Directory logs available for streaming to an endpoint for storage, analysis, or monitoring. +++++++ Last updated : 08/09/2023++++++# Learn about the identity logs you can stream to an endpoint ++Using Diagnostic settings in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term retention and data insights. You select the logs you want to route, then select the endpoint. ++This article describes the logs that you can route to an endpoint from Azure AD Diagnostic settings. ++## Prerequisites ++Setting up an endpoint, such as an event hub or storage account, may require different roles and licenses. To create or edit a new Diagnostic setting, you need a user who's a **Security Administrator** or **Global Administrator** for the Azure AD tenant. ++To help decide which log routing option is best for you, see [How to access activity logs](howto-access-activity-logs.md). The overall process and requirements for each endpoint type are covered in the following articles. ++- [Send logs to a Log Analytics workspace to integrate with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md) +- [Archive logs to a storage account](howto-archive-logs-to-storage-account.md) +- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md) +- [Send to a partner solution](../../partner-solutions/overview.md) ++## Activity log options ++The following logs can be sent to an endpoint. Some logs may be in public preview but still visible in the portal. ++### Audit logs ++The `AuditLogs` report captures changes to applications, groups, users, and licenses in your Azure AD tenant.
Once you've routed your audit logs, you can filter or analyze by date/time, the service that logged the event, and who made the change. For more information, see [Audit logs](concept-audit-logs.md). ++### Sign-in logs ++The `SignInLogs` send the interactive sign-in logs, which are logs generated by your users signing in. Sign-in logs are generated by users providing their username and password on an Azure AD sign-in screen or passing an MFA challenge. For more information, see [Interactive user sign-ins](concept-all-sign-ins.md#interactive-user-sign-ins). ++### Non-interactive sign-in logs ++The `NonInteractiveUserSignInLogs` are sign-ins done on behalf of a user, such as by a client app. The device or client uses a token or code to authenticate or access a resource on behalf of a user. For more information, see [Non-interactive user sign-ins](concept-all-sign-ins.md#non-interactive-user-sign-ins). ++### Service principal sign-in logs ++If you need to review sign-in activity for apps or service principals, the `ServicePrincipalSignInLogs` may be a good option. In these scenarios, certificates or client secrets are used for authentication. For more information, see [Service principal sign-ins](concept-all-sign-ins.md#service-principal-sign-ins). ++### Managed identity sign-in logs ++The `ManagedIdentitySignInLogs` provide similar insights as the service principal sign-in logs, but for managed identities, where Azure manages the secrets. For more information, see [Managed identity sign-ins](concept-all-sign-ins.md#managed-identity-for-azure-resources-sign-ins). ++### Provisioning logs ++If your organization provisions users through a third-party application such as Workday or ServiceNow, you may want to export the `ProvisioningLogs` reports. For more information, see [Provisioning logs](concept-provisioning-logs.md). ++### AD FS sign-in logs ++Sign-in activity for Active Directory Federation Services (AD FS) applications is captured in the Usage and insights reports.
You can export the `ADFSSignInLogs` report to monitor sign-in activity for AD FS applications. For more information, see [AD FS sign-in logs](concept-usage-insights-report.md#ad-fs-application-activity). ++### Risky users ++The `RiskyUsers` logs identify users who are at risk based on their sign-in activity. This report is part of Azure AD Identity Protection and uses sign-in data from Azure AD. For more information, see [What is Azure AD Identity Protection?](../identity-protection/overview-identity-protection.md). ++### User risk events ++The `UserRiskEvents` logs are part of Azure AD Identity Protection. These logs capture details about risky sign-in events. For more information, see [How to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins). ++### Risky service principals ++The `RiskyServicePrincipals` logs provide information about service principals that Azure AD Identity Protection detected as risky. Service principal risk represents the probability that an identity or account is compromised. These risks are calculated asynchronously using data and patterns from Microsoft's internal and external threat intelligence sources. These sources may include security researchers, law enforcement professionals, and security teams at Microsoft. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md). ++### Service principal risk events ++The `ServicePrincipalRiskEvents` logs provide details about the risky sign-in events for service principals. These logs may include any identified suspicious events related to the service principal accounts. For more information, see [Securing workload identities](../identity-protection/concept-workload-identity-risk.md). ++### Enriched Microsoft 365 audit logs ++The `EnrichedOffice365AuditLogs` logs are associated with the enriched logs you can enable for Microsoft Entra Internet Access.
Selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access to secure access to your Microsoft 365 traffic *and* you enabled the enriched logs. For more information, see [How to use the Global Secure Access enriched Microsoft 365 logs](../../global-secure-access/how-to-view-enriched-logs.md). ++### Microsoft Graph activity logs ++The `MicrosoftGraphActivityLogs` logs are associated with a feature that is still in preview. The logs are visible in Azure AD, but selecting this option won't add new logs to your workspace unless your organization was included in the preview. ++### Network access traffic logs ++The `NetworkAccessTrafficLogs` logs are associated with Microsoft Entra Internet Access and Microsoft Entra Private Access. The logs are visible in Azure AD, but selecting this option doesn't add new logs to your workspace unless your organization is using Microsoft Entra Internet Access and Microsoft Entra Private Access to secure access to your corporate resources. For more information, see [What is Global Secure Access?](../../global-secure-access/overview-what-is-global-secure-access.md). ++## Next steps ++- [Learn about the sign-ins logs](concept-all-sign-ins.md) +- [Explore how to access the activity logs](howto-access-activity-logs.md) |
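When a diagnostic setting is created programmatically, the category names listed above become entries in the `logs` array of the request body. A minimal, illustrative sketch of assembling that array (the payload shape mirrors the Azure Monitor diagnostic settings API, but treat it as an assumption and verify against the current API version):

```python
# Sketch: assemble the "logs" portion of a diagnostic-settings request body
# from Azure AD log category names. Illustrative payload shape only; check
# the Azure Monitor diagnostic settings API for the authoritative schema.
def diagnostic_logs_payload(enabled_categories, all_categories):
    """Return one {category, enabled} entry per known category."""
    return [
        {"category": c, "enabled": c in enabled_categories}
        for c in all_categories
    ]

categories = [
    "AuditLogs",
    "SignInLogs",
    "NonInteractiveUserSignInLogs",
    "ServicePrincipalSignInLogs",
    "ManagedIdentitySignInLogs",
]
payload = diagnostic_logs_payload({"AuditLogs", "SignInLogs"}, categories)
```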
active-directory | Concept Log Monitoring Integration Options Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-log-monitoring-integration-options-considerations.md | + + Title: Azure Active Directory activity log integration options and considerations +description: Introduction to the options and considerations for integrating Azure Active Directory activity logs with storage and analysis tools. +++++++ Last updated : 08/09/2023++++# Azure AD activity log integrations ++Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can route activity logs to several endpoints for long-term data retention and insights. You can archive logs for storage, route to Security Information and Event Management (SIEM) tools, and integrate logs with Azure Monitor logs. ++With these integrations, you can enable rich visualizations, monitoring, and alerting on the connected data. This article describes the recommended uses for each integration type or access method. Cost considerations for sending Azure AD activity logs to various endpoints are also covered. ++## Supported reports ++The following logs can be integrated with one of many endpoints: ++* The [**audit logs activity report**](concept-audit-logs.md) gives you access to the history of every task that's performed in your tenant. +* With the [**sign-in activity report**](concept-sign-ins.md), you can see when users attempt to sign in to your applications or troubleshoot sign-in errors. +* With the [**provisioning logs**](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications. +* The [**risky users logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risky-users) help you monitor changes in user risk level and remediation activity.
+* With the [**risk detections logs**](../identity-protection/howto-identity-protection-investigate-risk.md#risk-detections), you can monitor users' risk detections and analyze trends in risk activity detected in your organization. ++## Integration options ++To help choose the right method for integrating Azure AD activity logs for storage or analysis, think about the overall task you're trying to accomplish. We've grouped the options into three main categories: ++- Troubleshooting +- Long-term storage +- Analysis and monitoring ++### Troubleshooting ++If you're performing troubleshooting tasks but you don't need to retain the logs for more than 30 days, we recommend using the Azure portal or Microsoft Graph to access activity logs. You can filter the logs for your scenario and export or download them as needed. ++If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, take a look at the long-term storage options. ++### Long-term storage ++If you're performing troubleshooting tasks *and* you need to retain the logs for more than 30 days, you can export your logs to an Azure storage account. This option is ideal if you don't plan on querying that data often. ++If you need to query the data that you're retaining for more than 30 days, take a look at the analysis and monitoring options. ++### Analysis and monitoring ++If your scenario requires that you retain data for more than 30 days *and* you plan on querying that data regularly, you've got a few options to integrate your data with SIEM tools for analysis and monitoring. ++If you have a third-party SIEM tool, we recommend setting up an Event Hubs namespace and event hub that you can stream your data through. With an event hub, you can stream logs to one of the supported SIEM tools. ++If you don't plan on using a third-party SIEM tool, we recommend sending your Azure AD activity logs to Azure Monitor logs.
With this integration, you can query your activity logs with Log Analytics. In addition to Azure Monitor logs, Microsoft Sentinel provides near real-time security detection and threat hunting. If you decide to integrate with SIEM tools later, you can stream your Azure AD activity logs along with your other Azure data through an event hub. ++## Cost considerations ++There's a cost for sending data to a Log Analytics workspace, archiving data in a storage account, or streaming logs to an event hub. The amount of data and the cost incurred can vary significantly depending on the tenant size, the number of policies in use, and even the time of day. ++Because the size and cost for sending logs to an endpoint are difficult to predict, the most accurate way to determine your expected costs is to route your logs to an endpoint for a day or two. With this snapshot, you can get an accurate prediction for your expected costs. You can also get an estimate of your costs by downloading a sample of your logs and multiplying accordingly to get an estimate for one day. ++Other considerations for sending Azure AD logs to Azure Monitor logs are covered in the following Azure Monitor cost details articles: ++- [Azure Monitor logs cost calculations and options](../../azure-monitor/logs/cost-logs.md) +- [Azure Monitor cost and usage](../../azure-monitor/usage-estimated-costs.md) +- [Optimize costs in Azure Monitor](../../azure-monitor/best-practices-cost.md) ++Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost-saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md). ++## Estimate your costs ++To estimate the costs for your organization, you can estimate either the daily log size or the daily cost for integrating your logs with an endpoint.
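The sample-and-multiply approach can be reduced to a quick back-of-the-envelope calculation. Here's a minimal sketch in Python; the 180 MB / 15-minute sample is an invented figure for illustration, not a benchmark, and the result is only an input for the Azure pricing calculator:

```python
# Illustrative estimate only: scale a captured log sample up to a full day so
# the result can be fed into the Azure pricing calculator.

def estimated_daily_gb(sample_bytes, sample_minutes):
    """Extrapolate a sample of known duration to 24 hours, in GB."""
    daily_bytes = sample_bytes * (24 * 60 / sample_minutes)
    return daily_bytes / 1024 ** 3

# Example: a 15-minute sign-in sample that came to 180 MB (invented figure).
print(round(estimated_daily_gb(180 * 1024 ** 2, 15), 1))  # 16.9
```

Remember to adjust the sample for peak hours and geographic distribution before extrapolating, as described in the following sections.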
++The following factors could affect costs for your organization: ++- Audit log events use around 2 KB of data storage +- Sign-in log events use on average 11.5 KB of data storage +- A tenant of about 100,000 users could generate about 1.5 million events per day +- Events are batched into about 5-minute intervals and sent as a single message that contains all the events within that time frame ++### Daily log size ++To estimate the daily log size, gather a sample of your logs, adjust the sample to reflect your tenant size and settings, then apply that sample to the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). ++If you haven't downloaded logs from the Azure portal, review the [How to download logs in Azure AD](howto-download-logs.md) article. Depending on the size of your organization, you may need to choose a different sample size to start your estimation. The following sample sizes are a good place to start: ++- 1000 records +- For large tenants, 15 minutes of sign-ins +- For small to medium tenants, 1 hour of sign-ins ++You should also consider the geographic distribution and peak hours of your users when you capture your data sample. If your organization is based in one region, it's likely that sign-ins peak around the same time. Adjust your sample size and when you capture the sample accordingly. ++With the data sample captured, multiply accordingly to find out how large the file would be for one day. ++### Estimate the daily cost ++To get an idea of how much a log integration could cost for your organization, you can enable an integration for a day or two. Use this option if your budget allows for the temporary increase. ++To enable a log integration, follow the steps in the [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) article. If possible, create a new resource group for the logs and endpoint you want to try out.
Having a dedicated resource group makes it easy to view the cost analysis and then delete it when you're done. ++With the integration enabled, navigate to **Azure portal** > **Cost Management** > **Cost analysis**. There are several ways to analyze costs. This [Cost Management quickstart](../../cost-management-billing/costs/quick-acm-cost-analysis.md) should help you get started. The figures in the following screenshot are used for example purposes and are not intended to reflect actual amounts. ++![Screenshot of a cost analysis breakdown as a pie chart.](media/concept-activity-logs-azure-monitor/cost-analysis-breakdown.png) ++Make sure you're using your new resource group as the scope. Explore the daily costs and forecasts to get an idea of how much your log integration could cost. ++## Calculate estimated costs ++From the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) landing page, you can estimate the costs for various products. ++- [Azure Monitor](https://azure.microsoft.com/pricing/details/monitor/) +- [Azure storage](https://azure.microsoft.com/pricing/details/storage/blobs/) +- [Azure Event Hubs](https://azure.microsoft.com/pricing/details/event-hubs/) +- [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/microsoft-sentinel/) ++Once you have an estimate for the GB/day that will be sent to an endpoint, enter that value in the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/). The figures in the following screenshot are used for example purposes and are not intended to reflect actual prices.
++![Screenshot of the Azure pricing calculator, with 8 GB/Day used as an example.](media/concept-activity-logs-azure-monitor/azure-pricing-calculator-values.png) ++## Next steps ++* [Create a storage account](../../storage/common/storage-account-create.md) +* [Archive activity logs to a storage account](quickstart-azure-monitor-route-logs-to-storage-account.md) +* [Route activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md) +* [Integrate activity logs with Azure Monitor](howto-integrate-activity-logs-with-log-analytics.md) |
active-directory | Concept Provisioning Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md | Title: Provisioning logs in Azure Active Directory -description: Overview of the provisioning logs in Azure Active Directory. +description: Learn about the information included in the provisioning logs in Azure Active Directory. Application owners can view logs for their own applications. The following roles - Global Administrator - Users in a custom role with the [provisioningLogs permission](../roles/custom-enterprise-app-permissions.md#full-list-of-permissions) -To access the provisioning log data, you have the following options: +There are several ways to view or analyze the Provisioning logs: -- Select **Provisioning logs** from the **Monitoring** section of Azure AD.+- View in the Azure portal. +- Stream logs to [Azure Monitor](../app-provisioning/application-provisioning-log-analytics.md) through Diagnostic settings. +- Analyze logs through [Workbook](howto-use-workbooks.md) templates. +- Access logs programmatically through the [Microsoft Graph API](/graph/api/resources/provisioningobjectsummary). +- [Download the logs](howto-download-logs.md) as a CSV or JSON file. -- Stream the provisioning logs into [Azure Monitor](../app-provisioning/application-provisioning-log-analytics.md). This method allows for extended data retention and building custom dashboards, alerts, and queries.+To access the logs in the Azure portal: -- Query the [Microsoft Graph API](/graph/api/resources/provisioningobjectsummary) for the provisioning logs.--- Download the provisioning logs as a CSV or JSON file.+1. Sign in to the [Azure portal](https://portal.azure.com) using the Reports Reader role. +1. Browse to **Azure Active Directory** > **Monitoring** > **Provisioning logs**. 
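As a sketch of the programmatic option above, the snippet below composes a Microsoft Graph request URL for the provisioning logs (the `provisioningObjectSummary` resource linked earlier). This is illustrative only: acquiring the bearer token (for example, with MSAL) and issuing the request are out of scope, and the filter expression is a placeholder you should adapt to your scenario.

```python
# Illustrative sketch: build a Microsoft Graph request URL for the
# provisioning logs. urlencode percent-encodes "$" as "%24", which the
# Graph service accepts for OData query parameters.
from urllib.parse import urlencode

GRAPH_PROVISIONING_LOGS = "https://graph.microsoft.com/v1.0/auditLogs/provisioning"

def provisioning_logs_url(top=20, filter_expr=""):
    """Compose the request URL with optional OData query parameters."""
    params = {"$top": str(top)}
    if filter_expr:
        params["$filter"] = filter_expr
    return f"{GRAPH_PROVISIONING_LOGS}?{urlencode(params)}"

print(provisioning_logs_url())
# → https://graph.microsoft.com/v1.0/auditLogs/provisioning?%24top=20
```

The resulting URL is then called with an `Authorization: Bearer <token>` header by whatever HTTP client you use.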
## View the provisioning logs This area enables you to display more fields or remove fields that are already displayed. ## Filter the results -When you filter your provisioning data, some filter values are dynamically populated based on your tenant. For example, if you don't have any "create" events in your tenant, there won't be a **Create** filter option. +When you filter your provisioning data, some filter values are dynamically populated based on your tenant. For example, if you don't have any "create" events in your tenant, the **Create** filter option isn't available. The **Identity** filter enables you to specify the name or the identity that you care about. This identity might be a user, group, role, or other object. When you select an item in the provisioning list view, you get more details about it. ## Download logs as CSV or JSON -You can download the provisioning logs for later use by going to the logs in the Azure portal and selecting **Download**. The file will be filtered based on the filter criteria you've selected. Make the filters as specific as possible to reduce the size and time of the download. +You can download the provisioning logs for later use by going to the logs in the Azure portal and selecting **Download**. The results are filtered based on the filter criteria you've selected. Make the filters as specific as possible to reduce the size and time of the download. The CSV download includes three files: The JSON file is downloaded in minified format to reduce the size of the download. - Use [Visual Studio Code to format the JSON](https://code.visualstudio.com/docs/languages/json#_formatting). -- Use PowerShell to format the JSON. This script will output the JSON in a format that includes tabs and spaces: +- Use PowerShell to format the JSON.
This script produces a JSON output in a format that includes tabs and spaces: ` $JSONContent = Get-Content -Path "<PATH TO THE PROVISIONING LOGS FILE>" | ConvertFrom-Json; $JSONContent | ConvertTo-Json -Depth 10 | Set-Content -Path "<PATH TO THE FORMATTED FILE>"` Here are some tips and considerations for provisioning reports: - You can use the change ID attribute as a unique identifier, which can be helpful when you're interacting with product support, for example. -- You might see skipped events for users who aren't in scope. This behavior is expected, especially when the sync scope is set to all users and groups. The service will evaluate all the objects in the tenant, even the ones that are out of scope. +- You might see skipped events for users who aren't in scope. This behavior is expected, especially when the sync scope is set to all users and groups. The service evaluates all the objects in the tenant, even the ones that are out of scope. - The provisioning logs don't show role imports (applies to AWS, Salesforce, and Zendesk). You can find the logs for role imports in the audit logs. Use the following table to better understand how to resolve errors that you find in the provisioning logs. |Error code|Description| ||| |Conflict,<br>EntryConflict|Correct the conflicting attribute values in either Azure AD or the application. Or, review your matching attribute configuration if the conflicting user account was supposed to be matched and taken over. Review the [documentation](../app-provisioning/customize-application-attributes.md) for more information on configuring matching attributes.|-|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt will automatically be retired. Microsoft has also been notified of this issue.| -|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing it from working.
This attempt will automatically be retried in 40 minutes.| +|TooManyRequests|The target app rejected this attempt to update the user because it's overloaded and receiving too many requests. There's nothing to do. This attempt is automatically retried. Microsoft has also been notified of this issue.| +|InternalServerError |The target app returned an unexpected error. A service issue with the target application might be preventing it from working. This attempt is automatically retried in 40 minutes.| |InsufficientRights,<br>MethodNotAllowed,<br>NotPermitted,<br>Unauthorized| Azure AD authenticated with the target application but wasn't authorized to perform the update. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md).| |UnprocessableEntity|The target application returned an unexpected response. The configuration of the target application might not be correct, or a service issue with the target application might be preventing it from working.|-|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application. There's nothing to do. This attempt will automatically be retried in 40 minutes.| -|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning will trigger an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant will have to be evaluated again, and certain provisioning events might be dropped.| +|WebExceptionProtocolError |An HTTP protocol error occurred in connecting to the target application.
There's nothing to do. This attempt is automatically retried in 40 minutes.| +|InvalidAnchor|A user that was previously created or matched by the provisioning service no longer exists. Ensure that the user exists. To force a new matching of all users, use the Microsoft Graph API to [restart the job](/graph/api/synchronization-synchronizationjob-restart?tabs=http&view=graph-rest-beta&preserve-view=true). <br><br>Restarting provisioning triggers an initial cycle, which can take time to complete. Restarting provisioning also deletes the cache that the provisioning service uses to operate. That means all users and groups in the tenant must be evaluated again, and certain provisioning events might be dropped.| |NotImplemented | The target app returned an unexpected response. The configuration of the app might not be correct, or a service issue with the target app might be preventing it from working. Review any instructions that the target application has provided, along with the respective application [tutorial](../saas-apps/tutorial-list.md). | |MandatoryFieldsMissing,<br>MissingValues |The user couldn't be created because required values are missing. Correct the missing attribute values in the source record, or review your matching attribute configuration to ensure that the required fields aren't omitted. [Learn more](../app-provisioning/customize-application-attributes.md) about configuring matching attributes.| |SchemaAttributeNotFound |The operation couldn't be performed because an attribute was specified that doesn't exist in the target application. See the [documentation](../app-provisioning/customize-application-attributes.md) on attribute customization and ensure that your configuration is correct.|-|InternalError |An internal service error occurred within the Azure AD provisioning service. There's nothing to do. 
This attempt will automatically be retried in 40 minutes.| +|InternalError |An internal service error occurred within the Azure AD provisioning service. There's nothing to do. This attempt is automatically retried in 40 minutes.| |InvalidDomain |The operation couldn't be performed because an attribute value contains an invalid domain name. Update the domain name on the user or add it to the permitted list in the target application. |-|Timeout |The operation couldn't be completed because the target application took too long to respond. There's nothing to do. This attempt will automatically be retried in 40 minutes.| +|Timeout |The operation couldn't be completed because the target application took too long to respond. There's nothing to do. This attempt is automatically retried in 40 minutes.| |LicenseLimitExceeded|The user couldn't be created in the target application because there are no available licenses for this user. Procure more licenses for the target application. Or, review your user assignments and attribute mapping configuration to ensure that the correct users are assigned with the correct attributes.| |DuplicateTargetEntries |The operation couldn't be completed because more than one user in the target application was found with the configured matching attributes. Remove the duplicate user from the target application, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).| |DuplicateSourceEntries | The operation couldn't be completed because more than one user was found with the configured matching attributes. Remove the duplicate user, or [reconfigure your attribute mappings](../app-provisioning/customize-application-attributes.md).| |ImportSkipped | When each user is evaluated, the system tries to import the user from the source system. This error commonly occurs when the user who's being imported is missing the matching property defined in your attribute mappings.
Without a value present on the user object for the matching attribute, the system can't evaluate scoping, matching, or export changes. The presence of this error doesn't indicate that the user is in scope, because you haven't yet evaluated scoping for the user.| |EntrySynchronizationSkipped | The provisioning service has successfully queried the source system and identified the user. No further action was taken on the user and they were skipped. The user might have been out of scope, or the user might have already existed in the target system with no further changes required.|-|SystemForCrossDomainIdentity<br>ManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, you'll get this error.| +|SystemForCrossDomainIdentity<br>ManagementMultipleEntriesInResponse| A GET request to retrieve a user or group received multiple users or groups in the response. The system expects to receive only one user or group in the response. For example, if you do a [GET Group request](../app-provisioning/use-scim-to-provision-users-and-groups.md#get-group) to retrieve a group, provide a filter to exclude members, and your System for Cross-Domain Identity Management (SCIM) endpoint returns the members, this error appears.| |SystemForCrossDomainIdentity<br>ManagementServiceIncompatible|The Azure AD provisioning service is unable to parse the response from the third party application. 
Work with the application developer to ensure that the SCIM server is compatible with the [Azure AD SCIM client](../app-provisioning/use-scim-to-provision-users-and-groups.md#understand-the-azure-ad-scim-implementation).| |SchemaPropertyCanOnlyAcceptValue|The property in the target system can only accept one value, but the property in the source system has multiple. Ensure that you either map a single-valued attribute to the property that is throwing an error, update the value in the source to be single-valued, or remove the attribute from the mappings.| Use the following table to better understand how to resolve errors that you find > | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). | > | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). | > |InvitationCreationFailure| The Azure AD provisioning service attempted to invite the user in the target tenant. 
That invitation failed.| Further investigation likely requires contacting support.|-> |AzureActiveDirectoryInsufficientRights|When a B2B user in the target tenant has a role other than User, Helpdesk Admin, or User Account Admin, they cannot be deleted.| Remove the role(s) on the user in the target tenant in order to successfully delete the user in the target tenant.| > |AzureActiveDirectoryForbidden|External collaboration settings have blocked invitations.|Navigate to user settings and ensure that [external collaboration settings](../external-identities/external-collaboration-settings-configure.md) are permitted.| > |InvitationCreationFailureInvalidPropertyValue|Potential causes:<br/>* The Primary SMTP Address is an invalid value.<br/>* UserType is neither guest nor member<br/>* Group email Address is not supported | Potential solutions:<br/>* The Primary SMTP Address has an invalid value. Resolving this issue will likely require updating the mail property of the source user. For more information, see [Prepare for directory synchronization to Microsoft 365](https://aka.ms/DirectoryAttributeValidations)<br/>* Ensure that the userType property is provisioned as type guest or member. This can be fixed by checking your attribute mappings to understand how the userType attribute is mapped.<br/>* The email address of the user matches with the email address of a group in the tenant. Update the email address for one of the two objects.| > |InvitationCreationFailureAmbiguousUser| The invited user has a proxy address that matches an internal user in the target tenant. The proxy address must be unique. | To resolve this error, delete the existing internal user in the target tenant or remove this user from sync scope.| |
active-directory | Concept Sign In Log Activity Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-in-log-activity-details.md | + + Title: Learn about the sign-in log activity details +description: Learn about the information available on each of the tabs on the Azure AD sign-in log activity details. +++++++ Last updated : 08/22/2023+++++# Learn about the sign-in log activity details ++Azure AD logs all sign-ins into an Azure tenant for compliance purposes. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. ++- [Learn about the sign-in logs](concept-sign-ins.md). +- [Customize and filter the sign-in logs](howto-customize-filter-logs.md) ++This article explains the values on the Basic info tab of the sign-ins log. ++## Basic info tab ++The Basic info tab contains most of the details that are also displayed in the table. You can launch the Sign-in Diagnostic from the Basic info tab. For more information, see [How to use the Sign-in Diagnostic](howto-use-sign-in-diagnostics.md). ++### Sign-in error codes ++If a sign-in failed, you can get more information about the reason in the Basic info tab of the related log item. The error code and associated failure reason appear in the details. For more information, see [How to troubleshoot sign-in errors](howto-troubleshoot-sign-in-errors.md). ++![Screenshot of the sign-in error code on the basics tab.](media/concept-sign-in-log-activity-details/sign-in-error-code.png) ++## Location and Device info ++The **Location** and **Device info** tabs display general information about the location and IP address of the user. The Device info tab provides details on the browser and operating system used to sign in. This tab also provides details on whether the device is compliant, managed, or hybrid Azure AD joined.
++## Authentication details ++The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt: ++- A list of authentication policies applied, such as Conditional Access or Security Defaults. +- The sequence of authentication methods used to sign in. +- Whether the authentication attempt was successful and the reason why. ++This information allows you to troubleshoot each step in a user's sign-in. Use these details to track: ++- The volume of sign-ins protected by MFA. +- Usage and success rates for each authentication method. +- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business. +- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP. ++![Screenshot of the Authentication Details tab.](media/concept-sign-in-log-activity-details/sign-in-activity-details-authentication.png) ++When analyzing authentication details, take note of the following details: ++- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app). +- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include: + - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged. + - The **Primary authentication** row isn't initially logged. +- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analysis or troubleshooting. +- If Conditional Access policies for authentication or session lifetime are applied, they're listed above the sign-in attempts. If you don't see either of these, those policies aren't currently applied.
For more information, see [Conditional Access session controls](../conditional-access/concept-conditional-access-session.md). +++## Unique identifiers ++In Azure AD, a resource access has three relevant components: ++- **Who** – The identity (User) doing the sign-in. +- **How** – The client (Application) used for the access. +- **What** – The target (Resource) accessed by the identity. ++Each component has an associated unique identifier (ID). ++### Tenant ++The sign-in log tracks two tenant identifiers: ++- **Home tenant** – The tenant that owns the user identity. +- **Resource tenant** – The tenant that owns the (target) resource. ++These identifiers are relevant in cross-tenant scenarios. For example, to find out how users outside your tenant are accessing your resources, select all entries where the home tenant doesn't match the resource tenant. +For the home tenant, Azure AD tracks the ID and the name. ++### Request ID ++The request ID is an identifier that corresponds to an issued token. If you're looking for sign-ins with a specific token, you need to extract the request ID from the token first. +++### Correlation ID ++The correlation ID groups sign-ins from the same sign-in session. The identifier was implemented for convenience. Its accuracy isn't guaranteed because the value is based on parameters passed by a client. ++### Sign-in ++The sign-in identifier is a string the user provides to Azure AD to identify themselves when attempting to sign in. It's usually a user principal name (UPN), but can be another identifier such as a phone number. ++### Authentication requirement ++This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. The Graph API supports `$filter` (`eq` and `startsWith` operators only). ++### Sign-in event types ++Indicates the category of sign-in that the event represents.
For user sign-ins, the category can be `interactiveUser` or `nonInteractiveUser` and corresponds to the value for the **isInteractive** property on the sign-in resource. For managed identity sign-ins, the category is `managedIdentity`. For service principal sign-ins, the category is `servicePrincipal`. The Azure portal doesn't show this value, but the sign-in event is placed in the tab that matches its sign-in event type. Possible values are: ++- `interactiveUser` +- `nonInteractiveUser` +- `servicePrincipal` +- `managedIdentity` +- `unknownFutureValue` ++The Microsoft Graph API supports `$filter` (`eq` operator only). ++### User type ++The type of a user. Examples include `member`, `guest`, or `external`. +++### Cross-tenant access type ++This attribute describes the type of cross-tenant access used by the actor to access the resource. Possible values are: ++- `none` - A sign-in event that didn't cross an Azure AD tenant's boundaries. +- `b2bCollaboration` - A cross-tenant sign-in performed by a guest user using B2B Collaboration. +- `b2bDirectConnect` - A cross-tenant sign-in performed using B2B direct connect. +- `microsoftSupport` - A cross-tenant sign-in performed by a Microsoft support agent in a Microsoft customer tenant. +- `serviceProvider` - A cross-tenant sign-in performed by a Cloud Service Provider (CSP) or similar admin on behalf of that CSP's customer in a tenant. +- `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](/graph/best-practices-concept). ++If the sign-in didn't pass the boundaries of a tenant, the value is `none`. ++### Conditional Access evaluation ++This value shows whether continuous access evaluation (CAE) was applied to the sign-in event. There are multiple sign-in requests for each authentication. Some are shown on the interactive tab, while others are shown on the non-interactive tab.
CAE is only displayed as true for one of the requests, and it can be on the interactive tab or non-interactive tab. For more information, see [Monitor and troubleshoot sign-ins with continuous access evaluation in Azure AD](../conditional-access/howto-continuous-access-evaluation-troubleshoot.md). ++## Next steps ++* [Learn about exporting Azure AD sign-in logs](concept-activity-logs-azure-monitor.md) +* [Explore the sign-in diagnostic in Azure AD](./howto-use-sign-in-diagnostics.md) |
active-directory | Concept Sign Ins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md | Title: Sign-in logs in Azure Active Directory -description: Conceptual information about Azure AD sign-in logs. +description: Learn about the four types of sign-in logs available in Azure Active Directory Monitoring and health. -# Sign-in logs in Azure Active Directory +# What are Azure Active Directory sign-in logs? -Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure Active Directory (Azure AD) are a powerful type of [activity log](overview-reports.md) that IT administrators can analyze. This article explains how to access and utilize the sign-in logs. +Azure Active Directory (Azure AD) logs all sign-ins into an Azure tenant, which includes your internal apps and resources. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. ++Reviewing sign-in errors and patterns provides valuable insight into how your users access applications and services. The sign-in logs provided by Azure AD are a powerful type of [activity log](overview-reports.md) that you can analyze. This article explains how to access and utilize the sign-in logs. ++The preview view of the sign-in logs includes interactive and non-interactive user sign-ins as well as service principal and managed identity sign-ins. You can still view the classic sign-in logs, which only include interactive sign-ins. Two other activity logs are also available to help monitor the health of your tenant: - **[Audit](concept-audit-logs.md)** – Information about changes applied to your tenant, such as users and group management or updates applied to your tenant's resources.
Two other activity logs are also available to help monitor the health of your te ## What can you do with sign-in logs? -You can use the sign-ins log to find answers to questions like: +You can use the sign-in logs to answer questions such as: -- What is the sign-in pattern of a user?+- How many users have signed into a particular application this week? +- How many failed sign-in attempts have occurred in the last 24 hours? +- Are users signing in from specific browsers or operating systems? +- Which of my Azure resources are being accessed by managed identities and service principals? -- How many users have signed in over a week?+You can also describe the activity associated with a sign-in request by identifying the following details: -- What's the status of these sign-ins?+- **Who** – The identity (User) performing the sign-in. +- **How** – The client (Application) used for the sign-in. +- **What** – The target (Resource) accessed by the identity. -## How do you access the sign-in logs? +## What are the types of sign-in logs? -You can always access your own sign-ins history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com). +There are four types of logs in the sign-in logs preview: -To access the sign-ins log for a tenant, you must have one of the following roles: +- Interactive user sign-ins +- Non-interactive user sign-ins +- Service principal sign-ins +- Managed identity sign-ins -- Global Administrator-- Security Administrator-- Security Reader-- Global Reader-- Reports Reader+The classic sign-in logs only include interactive user sign-ins. -The sign-in activity report is available in [all editions of Azure AD](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data). If you have an Azure Active Directory P1 or P2 license, you can access the sign-in activity report through the Microsoft Graph API. 
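For tenants with a premium license, a Microsoft Graph request for sign-in events might look like the following sketch; the `$filter` value uses a hypothetical example UPN, not one taken from this article:

```http
GET https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=10&$filter=userPrincipalName eq 'alice@contoso.com'
```

The response is a collection of `signIn` objects that can be paged through with the `@odata.nextLink` URL.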
See [Getting started with Azure Active Directory Premium](../fundamentals/get-started-premium.md) to upgrade your Azure Active Directory edition. If there were no data activities before you upgraded to a premium license, it will take a couple of days for the data to show up in Graph after the upgrade. +### Interactive user sign-ins -**To access the Azure AD sign-ins log:** +Interactive user sign-ins provide an authentication factor to Azure AD. That authentication factor could also interact with a helper app, such as the Microsoft Authenticator app. Users can provide passwords, responses to MFA challenges, biometric factors, or QR codes to Azure AD or to a helper app. This log also includes federated sign-ins from identity providers that are federated to Azure AD. -1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role. -1. Go to **Azure Active Directory** > **Sign-ins log**. - ![Screenshot of the Monitoring side menu with sign-in logs highlighted.](./media/concept-sign-ins/side-menu-sign-in-logs.png) +> [!NOTE] +> The interactive user sign-in log previously contained some non-interactive sign-ins from Microsoft Exchange clients. Although those sign-ins were non-interactive, they were included in the interactive user sign-in log for additional visibility. Once the non-interactive user sign-in log entered public preview in November 2020, those non-interactive sign-in logs were moved to the non-interactive user sign-in log for increased accuracy. -You can also access the sign-in logs from the following areas of Azure AD: +**Report size:** small </br> +**Examples:** -- Users-- Groups-- Enterprise applications+- A user provides username and password in the Azure AD sign-in screen. +- A user passes an SMS MFA challenge. +- A user provides a biometric gesture to unlock their Windows PC with Windows Hello for Business. +- A user is federated to Azure AD with an AD FS SAML assertion. 
-## View the sign-ins log +In addition to the default fields, the interactive sign-in log also shows: -To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can specify what columns to include and filter the data to narrow things down. +- The sign-in location +- Whether Conditional Access has been applied -### Customize the layout +### Non-interactive user sign-ins -The sign-ins log has a default view, but you can customize the view using over 30 column options. +Like interactive user sign-ins, non-interactive sign-ins are done on behalf of a user. These sign-ins are performed by a client app or OS components on behalf of a user and don't require the user to provide an authentication factor. Instead, the device or client app uses a token or code to authenticate or access a resource on behalf of a user. In general, the user perceives these sign-ins as happening in the background. -1. Select **Columns** from the menu at the top of the log. -1. Select the columns you want to view and select the **Save** button at the bottom of the window. +![Screenshot of the non-interactive user sign-ins log.](media/concept-sign-ins/sign-in-logs-user-noninteractive.png) -![Screenshot of the sign-in logs page with the Columns option highlighted.](./media/concept-sign-ins/sign-in-logs-columns.png) +**Report size:** Large </br> +**Examples:** -### Filter the results <h3 id="filter-sign-in-activities"></h3> +- A client app uses an OAuth 2.0 refresh token to get an access token. +- A client uses an OAuth 2.0 authorization code to get an access token and refresh token. +- A user performs single sign-on (SSO) to a web or Windows app on an Azure AD joined PC (without providing an authentication factor or interacting with an Azure AD prompt). +- A user signs in to a second Microsoft Office app while they have a session on a mobile device using FOCI (Family of Client IDs). 
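Non-interactive user sign-ins can also be pulled separately through Microsoft Graph by filtering on the sign-in event type; this is a sketch that assumes the beta endpoint, not an excerpt from the commit:

```http
GET https://graph.microsoft.com/beta/auditLogs/signIns?$filter=signInEventTypes/any(t: t eq 'nonInteractiveUser')
```

Without this filter, the beta endpoint returns interactive user sign-ins only.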
-Filtering the sign-ins log is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential. +In addition to the default fields, the non-interactive sign-in log also shows: -Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters. +- Resource ID +- Number of grouped sign-ins -Select the **Add filters** option from the top of the table to get started. +You can't customize the fields shown in this report. -![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/concept-sign-ins/sign-in-logs-filter.png) +To make it easier to digest the data, non-interactive sign-in events are grouped. Clients often create many non-interactive sign-ins on behalf of the same user in a short time period. The non-interactive sign-ins share the same characteristics except for the time the sign-in was attempted. For example, a client may get an access token once per hour on behalf of a user. If the state of the user or client doesn't change, the IP address, resource, and all other information is the same for each access token request. The only state that does change is the date and time of the sign-in. -There are several filter options to choose from: -- **User:** The *user principal name* (UPN) of the user in question.-- **Status:** Options are *Success*, *Failure*, and *Interrupted*.-- **Resource:** The name of the service used for the sign-in.-- **Conditional Access:** The status of the Conditional Access policy. Options are: - - *Not applied:* No policy applied to the user and application during sign-in. - - *Success:* One or more Conditional Access policies applied to or were evaluated for the user and application (but not necessarily the other conditions) during sign-in. 
Even though a Conditional Access policy might not apply, if it was evaluated, the Conditional Access status will show 'Success'. - - *Failure:* The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access. -- **IP addresses:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting an IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information.+When Azure AD logs multiple sign-ins that are identical other than time and date, those sign-ins are from the same entity and are aggregated into a single row. A row with multiple identical sign-ins (except for date and time issued) has a value greater than 1 in the *# sign-ins* column. These aggregated sign-ins may also appear to have the same time stamps. The **Time aggregate** filter can be set to 1 hour, 6 hours, or 24 hours. You can expand the row to see all the different sign-ins and their different time stamps. -The following table provides the options and descriptions for the **Client app** filter option. --> [!NOTE] -> Due to privacy commitments, Azure AD does not populate this field to the home tenant in the case of a cross-tenant scenario. 
+Sign-ins are aggregated in the non-interactive user sign-in log when the following data matches: -|Name|Modern authentication|Description| -||:-:|| -|Authenticated SMTP| |Used by POP and IMAP clients to send email messages.| -|Autodiscover| |Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.| -|Exchange ActiveSync| |This filter shows all sign-in attempts where the EAS protocol has been attempted.| -|Browser|![Blue checkmark.](./media/concept-sign-ins/check.png)|Shows all sign-in attempts from users using web browsers| -|Exchange ActiveSync| | Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online| -|Exchange Online PowerShell| |Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).| -|Exchange Web Services| |A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.| -|IMAP4| |A legacy mail client using IMAP to retrieve email.| -|MAPI over HTTP| |Used by Outlook 2010 and later.| -|Mobile apps and desktop clients|![Blue checkmark.](./media/concept-sign-ins/check.png)|Shows all sign-in attempts from users using mobile apps and desktop clients.| -|Offline Address Book| |A copy of address list collections that are downloaded and used by Outlook.| -|Outlook Anywhere (RPC over HTTP)| |Used by Outlook 2016 and earlier.| -|Outlook Service| |Used by the Mail and Calendar app for Windows 10.| -|POP3| |A legacy mail client using POP3 to retrieve email.| -|Reporting Web Services| |Used to retrieve report data in Exchange Online.| -|Other clients| |Shows all sign-in attempts from users where the client app isn't included or 
unknown.| +- Application +- User +- IP address +- Status +- Resource ID -## Analyze the sign-in logs --Now that your sign-in logs table is formatted appropriately, you can more effectively analyze the data. Some common scenarios are described here, but they aren't the only ways to analyze sign-in data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools. +> [!NOTE] +> The IP address of non-interactive sign-ins performed by [confidential clients](../develop/msal-client-applications.md) doesn't match the actual source IP of where the refresh token request is coming from. Instead, it shows the original IP used for the original token issuance. -### Sign-in error codes +### Service principal sign-ins -If a sign-in failed, you can get more information about the reason in the **Basic info** section of the related log item. The error code and associated failure reason appear in the details. Because of the complexity of some Azure AD environments, we can't document every possible error code and resolution. Some errors may require [submitting a support request](../fundamentals/how-to-get-support.md) to resolve the issue. +Unlike interactive and non-interactive user sign-ins, service principal sign-ins don't involve a user. Instead, they're sign-ins by any nonuser account, such as apps or service principals (except managed identity sign-ins, which are included only in the managed identity sign-in log). In these sign-ins, the app or service provides its own credential, such as a certificate or app secret, to authenticate or access resources. -![Screenshot of a sign-in error code.](./media/concept-sign-ins/error-code.png) +![Screenshot of the service principal sign-ins log.](media/concept-sign-ins/sign-in-logs-service-principal.png) -For a list of error codes related to Azure AD authentication and authorization, see the [Azure AD authentication and authorization error codes](../develop/reference-error-codes.md) article. 
In some cases, the [sign-in error lookup tool](https://login.microsoftonline.com/error) may provide remediation steps. Enter the **Error code** provided in the sign-in log details into the tool and select the **Submit** button. +**Report size:** Large </br> +**Examples:** -![Screenshot of the error code lookup tool.](./media/concept-sign-ins/error-code-lookup-tool.png) +- A service principal uses a certificate to authenticate and access the Microsoft Graph. +- An application uses a client secret to authenticate in the OAuth Client Credentials flow. -### Authentication details +You can't customize the fields shown in this report. -The **Authentication Details** tab in the details of a sign-in log provides the following information for each authentication attempt: +To make it easier to digest the data in the service principal sign-in logs, service principal sign-in events are grouped. Sign-ins from the same entity under the same conditions are aggregated into a single row. You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the service principal report when the following data matches: -- A list of authentication policies applied, such as Conditional Access or Security Defaults.-- A list of session lifetime policies applied, such as Sign-in frequency or Remember MFA.-- The sequence of authentication methods used to sign-in.-- If the authentication attempt was successful and the reason why.+- Service principal name or ID +- Status +- IP address +- Resource name or ID -This information allows you to troubleshoot each step in a user's sign-in. Use these details to track: +### Managed identity sign-ins -- The volume of sign-ins protected by MFA. 
-- The reason for the authentication prompt, based on the session lifetime policies.-- Usage and success rates for each authentication method.-- Usage of passwordless authentication methods, such as Passwordless Phone Sign-in, FIDO2, and Windows Hello for Business.-- How frequently authentication requirements are satisfied by token claims, such as when users aren't interactively prompted to enter a password or enter an SMS OTP.+Managed identities for Azure resources sign-ins are sign-ins that were performed by resources that have their secrets managed by Azure to simplify credential management. A VM with managed credentials uses Azure AD to get an access token. -While viewing the sign-ins log, select a sign-in event, and then select the **Authentication Details** tab. +![Screenshot of the managed identity sign-ins log.](media/concept-sign-ins/sign-in-logs-managed-identity.png) -![Screenshot of the Authentication Details tab](media/concept-sign-ins/authentication-details-tab.png) +**Report size:** Small </br> +**Examples:** -When analyzing authentication details, take note of the following details: + You can't customize the fields shown in this report. -- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app).-- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include: - - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged. - - The **Primary authentication** row isn't initially logged. -- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analyzing or troubleshooting.+To make it easier to digest the data in the managed identities for Azure resources sign-in logs, sign-in events are grouped. Sign-ins from the same entity are aggregated into a single row. 
You can expand the row to see all the different sign-ins and their different time stamps. Sign-ins are aggregated in the managed identities report when all of the following data matches: -#### Considerations for MFA sign-ins +- Managed identity name or ID +- Status +- Resource name or ID -When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`. +Select an item in the list view to display all sign-ins that are grouped under a node. Select a grouped item to see all details of the sign-in. ## Sign-in data used by other services -Sign-in data is used by several services in Azure to monitor risky sign-ins and provide insight into application usage. +Sign-in data is used by several services in Azure to monitor risky sign-ins, provide insight into application usage, and more. -### Risky sign-in data in Azure AD Identity Protection +### Azure AD Identity Protection Sign-in log data visualization that relates to risky sign-ins is available in the **Azure AD Identity Protection** overview, which uses the following data: - Risky users - Risky user sign-ins -- Risky service principals-- Risky service principal sign-ins-- For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md). 
+- Risky workload identities -![Screenshot of risky users in Identity Protection.](media/concept-sign-ins/id-protection-overview.png) +For more information about the Azure AD Identity Protection tools, see the [Azure AD Identity Protection overview](../identity-protection/overview-identity-protection.md). -### Azure AD application and authentication sign-in activity +### Azure AD Usage and insights To view application-specific sign-in data, go to **Azure AD** and select **Usage & insights** from the Monitoring section. These reports provide a closer look at sign-ins for Azure AD application activity and AD FS application activity. For more information, see [Azure AD Usage & insights](concept-usage-insights-report.md). -![Screenshot of the Azure AD application activity report.](media/concept-sign-ins/azure-ad-app-activity.png) -Azure AD Usage & insights also provides the **Authentication methods activity** report, which breaks down authentication by the method used. Use this report to see how many of your users are set up with MFA or passwordless authentication. +There are several reports available in **Usage & insights**. Some of these reports are in preview. -![Screenshot of the Authentication methods report.](media/concept-sign-ins/azure-ad-authentication-methods.png) +- Azure AD application activity (preview) +- AD FS application activity +- Authentication methods activity +- Service principal sign-in activity (preview) +- Application credential activity (preview) ### Microsoft 365 activity logs |
active-directory | Concept Usage Insights Report | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md | Title: Usage and insights report -description: Introduction to usage and insights report in the Azure portal +description: Learn about the information you can explore using the Usage and insights report in Azure Active Directory. You can access the Usage and insights reports from the Azure portal and using Mi ### To access Usage & insights in the portal: -1. Sign in to the [Azure portal](https://portal.azure.com) using the appropriate least privileged role. -1. Go to **Azure Active Directory** > **Usage & insights**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Monitoring & health** > **Usage & insights**. The **Usage & insights** reports are also available from the **Enterprise applications** area of Azure AD. All users can access their own sign-ins at the [My Sign-Ins portal](https://mysignins.microsoft.com/security-info). For more information, see [Application sign-in in Microsoft Graph](/graph/api/re ## AD FS application activity -The **AD FS application activity** report in Usage & insights lists all Active Directory Federated Services (AD FS) applications in your organization that have had an active user login to authenticate in the last 30 days. These applications have not been migrated to Azure AD for authentication. +The **AD FS application activity** report in Usage & insights lists all Active Directory Federation Services (AD FS) applications in your organization that have had an active user sign-in to authenticate in the last 30 days. These applications haven't been migrated to Azure AD for authentication. 
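The AD FS application activity data can also be requested through Microsoft Graph; the following is a sketch against the beta endpoint, and the `period` value `D30` (a 30-day window) is an assumption for illustration:

```http
GET https://graph.microsoft.com/beta/reports/getRelyingPartyDetailedSummary(period='D30')
```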
Viewing the AD FS application activity using Microsoft Graph retrieves a list of the `relyingPartyDetailedSummary` objects, which identifies the relying party to a particular Federation Service. Are you planning on running a registration campaign to nudge users to sign up fo Looking for the details of a user and their authentication methods? Look at the **User registration details** report from the side menu and search for a name or UPN. The default MFA method and other methods registered are displayed. You can also see if the user is capable of registering for one of the authentication methods. -Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You'll be able to see the method used to attempt to register or reset an authentication method. +Looking for the status of an authentication registration or reset event of a user? Look at the **Registration and reset events** report from the side menu and then search for a name or UPN. You can see the method used to attempt to register or reset an authentication method. ## Service principal sign-in activity (preview) -The Service principal sign-in activity (preview) report provides the last activity date for every service principal. The report provides you information on the usage of the service principal - whether it was used as a client or resource app and whether it was used in an app-only or delegated context. The report shows the last time the service principal was used. +The Service principal sign-in activity (preview) report provides the last activity date for every service principal. The report provides you with information on the usage of the service principal - whether it was used as a client or resource app and whether it was used in an app-only or delegated context. The report shows the last time the service principal was used. 
[ ![Screenshot of the service principal sign-in activity report.](./media/concept-usage-insights-report/service-principal-sign-ins.png) ](./media/concept-usage-insights-report/service-principal-sign-ins.png#lightbox) Add the following query to retrieve the service principal sign-in activity, then GET https://graph.microsoft.com/beta/reports/servicePrincipalSignInActivities/{id} ``` -The following is an example of the response: +Example response: ```json { For more information, see [List service principal activity in Microsoft Graph](/ ## Application credential activity (preview) -The Application credential activity (preview) report provides the last credential activity date for every application credential. The report provides the credential type (certificate or client secret), the last used date, and the expiration date. With this report you can view the expiration dates of all your applications in one place. +The Application credential activity (preview) report provides the last credential activity date for every application credential. The report provides the credential type (certificate or client secret), the last used date, and the expiration date. With this report, you can view the expiration dates of all your applications in one place. To view the details of the application credential activity, select the **View more details** link. These details include the application object, service principal, and resource IDs. You can also see if the credential origin is the application or the service principal. To get started, follow these instructions to work with `appCredentialSignInActiv ```http GET https://graph.microsoft.com/beta/reports/appCredentialSignInActivities/{id} ```-The following is an example of the response: +Example response: ```json { |
active-directory | How To View Applied Conditional Access Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/how-to-view-applied-conditional-access-policies.md | - Title: View applied Conditional Access policies in Azure AD sign-in logs + Title: View applied Conditional Access policies in the Azure AD sign-in logs description: Learn how to view Conditional Access policies in Azure AD sign-in logs so that you can assess the effect of those policies. To see applied Conditional Access policies in the sign-in logs, administrators m The following built-in roles grant permissions to *read Conditional Access policies*: -- Global Administrator +- Security Reader - Global Reader - Security Administrator -- Security Reader - Conditional Access Administrator +- Global Administrator The following built-in roles grant permission to *view sign-in logs*: -- Global Administrator -- Security Administrator -- Security Reader -- Global Reader - Reports Reader +- Security Reader +- Global Reader +- Security Administrator +- Global Administrator ## Permissions for client apps The Azure AD Graph PowerShell module doesn't support viewing applied Conditional The activity details of sign-in logs contain several tabs. The **Conditional Access** tab lists the Conditional Access policies applied to that sign-in event. -1. Sign in to the [Azure portal](https://portal.azure.com) using the Security Reader role. -1. In the **Monitoring** section, select **Sign-in logs**. -1. Select a sign-in item from the table to open the **Activity Details: Sign-ins context** pane. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. +1. Select a sign-in item from the table to view the sign-in details pane. 1. Select the **Conditional Access** tab. 
If you don't see the Conditional Access policies, confirm you're using a role that provides access to both the sign-in logs and the Conditional Access policies. |
active-directory | Howto Access Activity Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-access-activity-logs.md | Title: Access activity logs in Azure AD -description: Learn how to choose the right method for accessing the activity logs in Azure AD. +description: Learn how to choose the right method for accessing the activity logs in Azure Active Directory. -# How To: Access activity logs in Azure AD +# How to access activity logs in Azure AD -The data in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with various options to access your activity log data. As an IT administrator, you need to understand the intended uses cases for these options, so that you can select the right access method for your scenario. +The data collected in your Azure Active Directory (Azure AD) logs enables you to assess many aspects of your Azure AD tenant. To cover a broad range of scenarios, Azure AD provides you with several options to access your activity log data. As an IT administrator, you need to understand the intended use cases for these options, so that you can select the right access method for your scenario. You can access Azure AD activity logs and reports using the following methods: Each of these methods provides you with capabilities that may align with certain ## Prerequisites -The required roles and licenses may vary based on the report. Global Administrator can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). +The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). 
| Log / Report | Roles | Licenses | |--|--|--| The required roles and licenses may vary based on the report. Global Administrat | Usage and insights | Security Reader<br>Reports Reader<br> Security Administrator | Premium P1/P2 | | Identity Protection* | Security Administrator<br>Security Operator<br>Security Reader<br>Global Reader | Azure AD Free/Microsoft 365 Apps<br>Azure AD Premium P1/P2 | -*The level of access and capabilities for Identity Protection varies with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements). +*The level of access and capabilities for Identity Protection vary with the role and license. For more information, see the [license requirements for Identity Protection](../identity-protection/overview-identity-protection.md#license-requirements). Audit logs are available for features that you've licensed. To access the sign-ins logs using the Microsoft Graph API, your tenant must have an Azure AD Premium license associated with it. The SIEM tools you can integrate with your event hub can provide analysis and mo ### Quick steps -1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator). 1. Create an Event Hubs namespace and event hub.-1. Go to **Azure AD** > **Diagnostic settings**. +1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**. 1. Choose the logs you want to stream, select the **Stream to an event hub** option, and complete the fields. 
- [Set up an Event Hubs namespace and an event hub](../../event-hubs/event-hubs-create.md) - [Learn more about streaming activity logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) The SIEM tools you can integrate with your event hub can provide analysis and mo ## Access logs with Microsoft Graph API -The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance. +The Microsoft Graph API provides a unified programmability model that you can use to access data for your Azure AD Premium tenants. It doesn't require an administrator or developer to set up extra infrastructure to support your script or app. ### Recommended uses Integrating Azure AD logs with Azure Monitor logs provides a centralized locatio ### Quick steps -1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator). 1. [Create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md).-1. Go to **Azure AD** > **Diagnostic settings**. +1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**. 1. Choose the logs you want to stream, select the **Send to Log Analytics workspace** option, and complete the fields.-1. Go to **Azure AD** > **Log Analytics** and begin querying the data. +1. Browse to **Identity** > **Monitoring & health** > **Log Analytics** and begin querying the data. 
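Once the logs arrive in the workspace, you can query them with Kusto Query Language (KQL). A minimal sketch, assuming the standard `SigninLogs` table and column names created by the diagnostic settings integration:

```kusto
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != "0"
| summarize FailedSignIns = count() by UserPrincipalName, AppDisplayName
| order by FailedSignIns desc
```

In `SigninLogs`, a `ResultType` of `"0"` indicates a successful sign-in, so this query surfaces the users and apps with the most failures in the last 24 hours.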
- [Integrate Azure AD logs with Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) - [Learn how to query using Log Analytics](howto-analyze-activity-logs-log-analytics.md) The reports available in the Azure portal provide a wide range of capabilities t Use the following basic steps to access the reports in the Azure portal. #### Azure AD activity logs -1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs** from the **Monitoring** menu. +1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**. 1. Adjust the filter according to your needs. - [Learn how to filter activity logs](quickstart-filter-audit-log.md) - [Explore the Azure AD audit log categories and activities](reference-audit-activities.md) Use the following basic steps to access the reports in the Azure portal. #### Azure AD Identity Protection reports -1. Go to **Azure AD** > **Security** > **Identity Protection**. +1. Browse to **Protection** > **Identity Protection**. 1. Explore the available reports. - [Learn more about Identity Protection](../identity-protection/overview-identity-protection.md) - [Learn how to investigate risk](../identity-protection/howto-identity-protection-investigate-risk.md) #### Usage and insights reports -1. Go to **Azure AD** and select **Usage and insights** from the **Monitoring** menu. +1. Browse to **Identity** > **Monitoring & health** > **Usage and insights**. 1. Explore the available reports. - [Learn more about the Usage and insights report](concept-usage-insights-report.md) We recommend manually downloading and storing your activity logs if you have bud Use the following basic steps to archive or download your activity logs. -### Archive activity logs to a storage account +#### Archive activity logs to a storage account -1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. +1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator). 1. Create a storage account.-1. Go to **Azure AD** > **Diagnostic settings**. +1. Browse to **Identity** > **Monitoring & health** > **Diagnostic settings**. 1. Choose the logs you want to stream, select the **Archive to a storage account** option, and complete the fields. - [Review the data retention policies](reference-reports-data-retention.md) #### Manually download activity logs -1. Sign in to the [Azure portal](https://portal.azure.com) using one of the required roles. -1. Go to **Azure AD** and select **Audit logs**, **Sign-in logs**, or **Provisioning logs** from the **Monitoring** menu. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**. 1. Select **Download**. - [Learn more about how to download logs](howto-download-logs.md). |
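The archive option writes activity logs to the storage account as hourly JSON blobs. As a rough sketch only (the `insights-logs-<category>` container name and `y=/m=/d=` path convention are assumptions about the typical diagnostic-settings layout, and real paths also embed a resource ID segment that is omitted here), you can compute where a given hour's logs would land:

```python
from datetime import datetime, timezone

def archived_blob_path(category: str, when: datetime) -> str:
    """Build the blob path where a diagnostic-settings archive is typically
    written for a given hour. Naming is an assumption; verify it against
    your own storage account before relying on it."""
    container = f"insights-logs-{category.lower()}"
    p = when.astimezone(timezone.utc)
    # One PT1H.json blob per hour; the minute segment is fixed at m=00.
    return (f"{container}/y={p.year}/m={p.month:02d}/d={p.day:02d}"
            f"/h={p.hour:02d}/m=00/PT1H.json")

print(archived_blob_path("SignInLogs", datetime(2023, 8, 24, 10, 30, tzinfo=timezone.utc)))
# insights-logs-signinlogs/y=2023/m=08/d=24/h=10/m=00/PT1H.json
```

Listing the blob at that path with your preferred storage SDK then gives you that hour's log records.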
active-directory | Howto Analyze Activity Logs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md | Title: Analyze activity logs using Log Analytics -description: Learn how to analyze Azure Active Directory activity logs using Log Analytics +description: Learn how to analyze audit, sign-in, and provisioning logs in Azure Active Directory using Log Analytics queries. This article describes how to analyze the Azure AD activity logs in your Log Analyti ## Roles and licenses -To analyze Azure AD logs with Azure Monitor, you need the following roles and licenses: +To analyze activity logs with Log Analytics, you need: ++- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md) +- A Log Analytics workspace *and* access to that workspace +- The appropriate roles for Azure Monitor *and* Azure AD ++### Log Analytics workspace ++You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). A combination of factors determines access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data. ++For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md). ++### Azure Monitor roles ++Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access. ++- **View**: + - Monitoring Reader + - Log Analytics Reader ++- **View and modify settings**: + - Monitoring Contributor + - Log Analytics Contributor ++For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader).
++For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor). ++### Azure AD roles -* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). +Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace. -* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. +- **Read**: + - Reports Reader + - Security Reader + - Global Reader -* **Reports Reader**, **Security Reader**, or **Security Administrator** access for the Azure AD tenant: These roles are required to view Log Analytics through the Azure AD portal. +- **Update**: + - Security Administrator -* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. +For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md). ## Access Log Analytics To view the Azure AD Log Analytics, you must already be sending your activity lo [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -1. Sign in to the [Azure portal](https://portal.azure.com). +1. 
Browse to **Identity** > **Monitoring & health** > **Log Analytics**. A default search query runs. ![Default query](./media/howto-analyze-activity-logs-log-analytics/defaultquery.png) |
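Once activity logs are flowing into the workspace, you query them with Kusto Query Language (KQL). The sketch below assembles a hypothetical starter query against the `SigninLogs` table; the query shape and the `ResultType` convention (the string `"0"` indicating success) should be verified against your own workspace schema before use:

```python
def failed_signin_query(days: int = 7, top: int = 50) -> str:
    """Assemble a starter KQL query for the SigninLogs table that
    summarizes failed sign-ins per user over a lookback window."""
    return "\n".join([
        "SigninLogs",
        f"| where TimeGenerated > ago({days}d)",
        '| where ResultType != "0"',  # non-zero ResultType = failed sign-in (assumption)
        "| summarize failures = count() by UserPrincipalName, ResultType",
        f"| top {top} by failures",
    ])

print(failed_signin_query())
```

Paste the resulting text into the Log Analytics query window, or pass it to a query client, to replace the default search query.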
active-directory | Howto Archive Logs To Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-archive-logs-to-storage-account.md | + + Title: How to archive activity logs to a storage account +description: Learn how to archive Azure Active Directory activity logs to a storage account through Diagnostic settings. +++++++ Last updated : 08/24/2023++++# Customer intent: As an IT administrator, I want to learn how to archive Azure AD logs to an Azure storage account so I can retain it for longer than the default retention period. +++# How to archive Azure AD logs to an Azure storage account ++If you need to store Azure Active Directory (Azure AD) activity logs for longer than the [default retention period](reference-reports-data-retention.md), you can archive your logs to a storage account. ++## Prerequisites ++To use this feature, you need: ++* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). +* An Azure storage account. +* A user who's a *Security Administrator* or *Global Administrator* for the Azure AD tenant. ++## Archive logs to an Azure storage account +++6. Under **Destination Details** select the **Archive to a storage account** check box. ++7. Select the appropriate **Subscription** and **Storage account** from the menus. ++ ![Diagnostics settings](media/howto-archive-logs-to-storage-account/diagnostic-settings-storage.png) ++8. After the categories have been selected, in the **Retention days** field, type in the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. ++ > [!NOTE] + > The Diagnostic settings storage retention feature is being deprecated. 
For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md). + +9. Select **Save** to save the setting. ++10. Close the window to return to the Diagnostic settings pane. ++## Next steps ++- [Learn about other ways to access activity logs](howto-access-activity-logs.md) +- [Manually download activity logs](howto-download-logs.md) +- [Integrate activity logs with Azure Monitor logs](howto-integrate-activity-logs-with-azure-monitor-logs.md) +- [Stream logs to an event hub](howto-stream-logs-to-event-hub.md) |
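The retention behavior described in the note above reduces to a simple cutoff rule: a value of *0* keeps logs indefinitely, while any other value makes entries older than that window eligible for cleanup. A minimal illustration:

```python
from datetime import datetime, timedelta, timezone

def is_expired(log_written: datetime, retention_days: int, now: datetime) -> bool:
    """Retention of 0 means keep indefinitely; otherwise entries older
    than the retention window are eligible for automatic cleanup."""
    if retention_days == 0:
        return False
    return now - log_written > timedelta(days=retention_days)

now = datetime(2023, 9, 1, tzinfo=timezone.utc)
print(is_expired(datetime(2023, 8, 1, tzinfo=timezone.utc), 30, now))  # True: 31 days old
print(is_expired(datetime(2023, 8, 1, tzinfo=timezone.utc), 0, now))   # False: kept indefinitely
```

With the storage retention setting deprecated, the same cutoff is expressed instead as an Azure Storage lifecycle management rule on the archive container.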
active-directory | Howto Configure Prerequisites For Reporting Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md | Title: Prerequisites for Azure Active Directory reporting API -description: Learn about the prerequisites to access the Azure AD reporting API +description: Learn how to configure the prerequisites that are required to access the Microsoft Graph reporting API. -The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. +The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs. The Microsoft Graph API is **not** designed for pulling large amounts of activity data. Pulling large amounts of activity data using the API may lead to issues with pagination and performance. This article describes how to enable Microsoft Graph to access the Azure AD reporting APIs in the Azure portal and through PowerShell. To get access to the reporting data through the API, you need to have one of the following roles: - Security Administrator - Global Administrator
+In order to access the sign-in reports for a tenant, an Azure AD tenant must have an associated Azure AD Premium P1 or P2 license. If the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any other license requirement. Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) in one of the required roles. > ## Enable the Microsoft Graph API through the Azure portal -To enable your application to access Microsoft Graph without user intervention, you'll need to register your application with Azure AD, then grant permissions to the Microsoft Graph API. This article covers the steps to follow in the Azure portal. +To enable your application to access Microsoft Graph without user intervention, you need to register your application with Azure AD, then grant permissions to the Microsoft Graph API. This article covers the steps to follow in the Azure portal. ### Register an Azure AD application [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -1. Sign in to the [Azure portal](https://portal.azure.com). --1. Go to **Azure Active Directory** > **App registrations**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Security Reader](../roles/permissions-reference.md#security-reader). +1. Browse to **Identity** > **Applications** > **App registrations**. 1. Select **New registration**.
To access the Azure AD reporting API, you must grant your app *Read directory data* and *Read all audit log data* permissions for the Microsoft Graph API. -1. **Azure Active Directory** > **App Registrations**> **API permissions** and select **Add a permission**. +1. Browse to **Identity** > **Applications** > **App Registrations** and select your application. +1. Select **API permissions** > **Add a permission**. ![Screenshot of the API permissions menu option and Add permissions button.](./media/howto-configure-prerequisites-for-reporting-api/api-permissions-new-permission.png) To access the Azure AD reporting API, you must grant your app *Read directory da Once you have the app registration configured, you can run activity log queries in Microsoft Graph. -1. Sign in to https://graph.microsoft.com using the **Security Reader** role. You may need to confirm that you're signed into the appropriate role. Select your profile icon in the upper-right corner of Microsoft Graph. +1. Sign in to https://graph.microsoft.com using the **Security Reader** role. + - You may need to confirm that you're signed into the appropriate role. + - Select your profile icon in the upper-right corner of Microsoft Graph. 1. Use one of the following queries to start using Microsoft Graph for accessing activity logs: - GET `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits` - GET `https://graph.microsoft.com/v1.0/auditLogs/signIns` Once you have the app registration configured, you can run activity log queries ## Access reports using Microsoft Graph PowerShell -To use PowerShell to access the Azure AD reporting API, you'll need to gather a few configuration settings. These settings were created as a part of the [app registration process](#register-an-azure-ad-application). +To use PowerShell to access the Azure AD reporting API, you need to gather a few configuration settings.
These settings were created as a part of the [app registration process](#register-an-azure-ad-application). - Tenant ID - Client app ID To use PowerShell to access the Azure AD reporting API, you'll need to gather a You need these values when configuring calls to the reporting API. We recommend using a certificate because it's more secure. -1. Go to **Azure Active Directory** > **App Registrations**. +1. Browse to **Identity** > **Applications** > **App Registrations**. +1. Open the application you created. 1. Copy the **Directory (tenant) ID**. 1. Copy the **Application (client) ID**.-1. Go to **App Registration** > Select your application > **Certificates & secrets** > **Certificates** > **Upload certificate** and upload your certificate's public key file. +1. Browse to **Certificates & secrets** > **Certificates** > **Upload certificate** and upload your certificate's public key file. - If you don't have a certificate to upload, follow the steps outlined in the [Create a self-signed certificate to authenticate your application](../develop/howto-create-self-signed-certificate.md) article. -Next you'll authenticate with the configuration settings you just gathered. Open PowerShell and run the following command, replacing the placeholders with your information. +Next you need to authenticate with the configuration settings you just gathered. Open PowerShell and run the following command, replacing the placeholders with your information. ```powershell Connect-MgGraph -ClientID YOUR_APP_ID -TenantId YOUR_TENANT_ID -CertificateName YOUR_CERT_SUBJECT ## Or -CertificateThumbprint instead of -CertificateName |
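The activity-log endpoints shown above return results in pages: each response carries an `@odata.nextLink` URL until the last page, which is one reason pulling large amounts of data through the API can run into pagination and performance issues. The sketch below illustrates the pagination loop; mocked responses stand in for authenticated HTTP calls:

```python
def fetch_all(get_page, first_url):
    """Follow @odata.nextLink until a response no longer includes one.
    `get_page` stands in for an authenticated HTTP GET returning parsed JSON."""
    url, items = first_url, []
    while url:
        page = get_page(url)
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")  # absent on the last page
    return items

# Mocked responses shaped like /auditLogs/signIns pages.
pages = {
    "https://graph.microsoft.com/v1.0/auditLogs/signIns": {
        "value": [{"id": "1"}, {"id": "2"}],
        "@odata.nextLink": "https://graph.microsoft.com/v1.0/auditLogs/signIns?$skiptoken=abc",
    },
    "https://graph.microsoft.com/v1.0/auditLogs/signIns?$skiptoken=abc": {
        "value": [{"id": "3"}],
    },
}
result = fetch_all(pages.__getitem__, "https://graph.microsoft.com/v1.0/auditLogs/signIns")
print(len(result))  # 3
```

In a real client the `get_page` callable would attach the bearer token obtained during app registration; for bulk export, prefer streaming the logs to an event hub or Log Analytics instead.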
active-directory | Howto Customize Filter Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-customize-filter-logs.md | + + Title: Customize and filter the activity logs in Azure AD +description: Learn how to customize the columns and filter of the Azure Active Directory activity logs so you can analyze the results. +++++++ Last updated : 08/22/2023+++++# How to customize and filter identity activity logs ++Sign-in logs are a commonly used tool to troubleshoot user access issues and investigate risky sign-in activity. Audit logs collect every logged event in Azure Active Directory (Azure AD) and can be used to investigate changes to your environment. There are over 30 columns you can choose from to customize your view of the sign-in logs in the Azure AD portal. Audit logs and Provisioning logs can also be customized and filtered for your needs. ++This article shows you how to customize the columns and then filter the logs to find the information you need more efficiently. ++## Prerequisites ++The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). 
++| Log / Report | Roles | Licenses | +|--|--|--| +| Audit | Reports Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD | +| Sign-ins | Reports Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD | +| Provisioning | Same as audit and sign-ins, plus<br>Security Operator<br>Application Administrator<br>Cloud App Administrator<br>A custom role with `provisioningLogs` permission | Premium P1/P2 | +| Conditional Access data in the sign-in logs | Company Administrator<br>Global Reader<br>Security Administrator<br>Security Reader<br>Conditional Access Administrator | Premium P1/P2 | ++## How to access the activity logs in the Azure portal ++You can always access your own sign-in history at [https://mysignins.microsoft.com](https://mysignins.microsoft.com). You can also access the sign-in logs from **Users** and **Enterprise applications** in Azure AD. +++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**. ++## Audit logs ++With the information in the Azure AD audit logs, you can access all records of system activities for compliance purposes. Audit logs can be accessed from the **Monitoring and health** section of Azure AD, where you can sort and filter on every category and activity. You can also access audit logs in the area of the portal for the service you're investigating. ++![Screenshot of the audit logs option on the side menu.](media/howto-customize-filter-logs/audit-logs-navigation.png) ++For example, if you're looking into changes to Azure AD groups, you can access the Audit logs from **Azure AD** > **Groups**. When you access the audit logs from the service, the filter is automatically adjusted according to the service.
++![Screenshot of the audit logs option from the Groups menu.](media/howto-customize-filter-logs/audit-logs-groups.png) ++### Customize the layout of the audit logs ++Audit logs can be customized like the sign-in logs. There aren't as many column options, but it's as important to make sure you're seeing the columns you need. The **Service**, **Category** and **Activity** columns are related to each other, so these columns should always be visible. ++### Filter the audit logs ++When you filter the logs by **Service**, the **Category** and **Activity** details automatically change. In some cases, there may only be one Category or Activity. For a detailed table of all potential combinations of these details, see [Audit activities](reference-audit-activities.md). +++## Sign-in logs ++On the sign-in logs page, you can switch between four sign-in log types. For more information on the logs, see [What are Azure AD sign-in logs?](concept-sign-ins.md). +++- **Interactive user sign-ins:** Sign-ins where a user provides an authentication factor, such as a password, a response through an MFA app, a biometric factor, or a QR code. ++- **Non-interactive user sign-ins:** Sign-ins performed by a client on behalf of a user. These sign-ins don't require any interaction or authentication factor from the user. For example, authentication and authorization using refresh and access tokens that don't require a user to enter credentials. ++- **Service principal sign-ins:** Sign-ins by apps and service principals that don't involve any user. In these sign-ins, the app or service provides a credential on its own behalf to authenticate or access resources. ++- **Managed identities for Azure resources sign-ins:** Sign-ins by Azure resources that have secrets managed by Azure. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md). 
++### Customize the layout of the sign-in logs ++To more effectively view the sign-ins log, spend a few moments customizing the view for your needs. You can only customize the column for the interactive user sign-in log. The sign-ins log has a default view, but you can customize the view using over 30 column options. ++1. Select **Columns** from the menu at the top of the log. +1. Select the columns you want to view and select the **Save** button at the bottom of the window. ++![Screenshot of the sign-in logs page with the Columns option highlighted.](./media/howto-customize-filter-logs/sign-in-logs-columns.png) ++### Filter the sign-in logs <h3 id="filter-sign-in-activities"></h3> ++Filtering the sign-in logs is a helpful way to quickly find logs that match a specific scenario. For example, you could filter the list to only view sign-ins that occurred in a specific geographic location, from a specific operating system, or from a specific type of credential. ++Some filter options prompt you to select more options. Follow the prompts to make the selection you need for the filter. You can add multiple filters. ++Select the **Add filters** option from the top of the table to get started. ++![Screenshot of the sign-in logs page with the Add filters option highlighted.](./media/howto-customize-filter-logs/sign-in-logs-add-filters.png) ++Once you apply a filter, you may either enter a specific detail - such as a Request ID - or select another filter option. ++![Screenshot of the filter options with a field to enter filter details open.](./media/howto-customize-filter-logs/sign-in-logs-filter-options.png) ++You can filter on several details. The following table describes some commonly used filters. Not all filter options are described. 
++| Filter | Description | +| | | +| Request ID | Unique identifier for a sign-in request | +| Correlation ID | Unique identifier for all sign-in requests that are part of a single sign-in attempt | +| User | The *user principal name* (UPN) of the user | +| Application | The application targeted by the sign-in request | +| Status | Options are *Success*, *Failure*, and *Interrupted* | +| Resource | The name of the service used for the sign-in | +| IP address | The IP address of the client used for the sign-in | +| Conditional Access | Options are *Not applied*, *Success*, and *Failure* | ++Now that your sign-in logs table is formatted for your needs, you can more effectively analyze the data. Further analysis and retention of sign-in data can be accomplished by exporting the logs to other tools. ++Customizing the columns and adjusting the filter helps to look at logs with similar characteristics. To look at the details of a sign-in, select a row in the table to open the **Activity Details** panel. There are several tabs in the panel to explore. For more information, see [Sign-in log activity details](concept-sign-in-log-activity-details.md). +++### Considerations for sign-in logs ++- **IP address and location:** There's no definitive connection between an IP address and where the computer with that address is physically located. Mobile providers and VPNs issue IP addresses from central pools that are often far from where the client device is actually used. Currently, converting IP address to a physical location is a best effort based on traces, registry data, reverse lookups and other information. ++- **Conditional Access:** + - *Not applied:* No policy applied to the user and application during sign-in. + - *Success:* One or more Conditional Access policies applied to or were evaluated for the user and application (but not necessarily the other conditions) during sign-in. 
Even though a Conditional Access policy might not apply, if it was evaluated, the Conditional Access status shows *Success*. + - *Failure:* The sign-in satisfied the user and application condition of at least one Conditional Access policy and grant controls are either not satisfied or set to block access. ++- **Home tenant name:** Due to privacy commitments, Azure AD doesn't populate the home tenant name field during cross-tenant scenarios. ++- **Multifactor authentication:** When a user signs in with MFA, several separate MFA events are actually taking place. For example, if a user enters the wrong validation code or doesn't respond in time, additional MFA events are sent to reflect the latest status of the sign-in attempt. These sign-in events appear as one line item in the Azure AD sign-in logs. That same sign-in event in Azure Monitor, however, appears as multiple line items. These events all have the same `correlationId`. ++- **Client app:** The **Client app** filter option has two subcategories: **Modern authentication clients** and **Legacy authentication clients**. + - *Browser* and *Mobile apps and desktop clients* are the two options in the Modern authentication clients category. + - Review the following table for the *Legacy authentication client* details. ++|Name|Description| +||| +|Authenticated SMTP|Used by POP and IMAP clients to send email messages.| +|Autodiscover|Used by Outlook and EAS clients to find and connect to mailboxes in Exchange Online.| +|Exchange ActiveSync|This filter shows all sign-in attempts where the EAS protocol has been attempted.| +|Exchange ActiveSync| Shows all sign-in attempts from users with client apps using Exchange ActiveSync to connect to Exchange Online| +|Exchange Online PowerShell|Used to connect to Exchange Online with remote PowerShell. If you block basic authentication for Exchange Online PowerShell, you need to use the Exchange Online PowerShell module to connect. 
For instructions, see [Connect to Exchange Online PowerShell using multi-factor authentication](/powershell/exchange/exchange-online/connect-to-exchange-online-powershell/mfa-connect-to-exchange-online-powershell).| +|Exchange Web Services|A programming interface that's used by Outlook, Outlook for Mac, and third-party apps.| +|IMAP4|A legacy mail client using IMAP to retrieve email.| +|MAPI over HTTP|Used by Outlook 2010 and later.| +|Offline Address Book|A copy of address list collections that are downloaded and used by Outlook.| +|Outlook Anywhere (RPC over HTTP)|Used by Outlook 2016 and earlier.| +|Outlook Service|Used by the Mail and Calendar app for Windows 10.| +|POP3|A legacy mail client using POP3 to retrieve email.| +|Reporting Web Services|Used to retrieve report data in Exchange Online.| +|Other clients|Shows all sign-in attempts from users where the client app isn't included or unknown.| ++## Next steps ++- [Analyze a sign-in error](quickstart-analyze-sign-in.md) +- [Troubleshoot sign-in errors](howto-troubleshoot-sign-in-errors.md) +- [Explore all audit log categories and activities](reference-audit-activities.md) |
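The considerations above note that one MFA sign-in attempt can surface as several events in Azure Monitor that share a `correlationId`. When post-processing exported events, you can collapse them back into attempts; a minimal sketch (the event shapes are hypothetical):

```python
from collections import defaultdict

def group_by_correlation(events):
    """Collapse exported sign-in events into attempts keyed by correlationId."""
    attempts = defaultdict(list)
    for event in events:
        attempts[event["correlationId"]].append(event)
    return dict(attempts)

# Hypothetical exported events: one MFA attempt emitting two records.
events = [
    {"correlationId": "c-1", "status": "MFA required"},
    {"correlationId": "c-1", "status": "MFA completed"},
    {"correlationId": "c-2", "status": "Success"},
]
attempts = group_by_correlation(events)
print(len(attempts))  # 2 distinct sign-in attempts
```

The last event in each group reflects the latest status of that sign-in attempt.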
active-directory | Howto Download Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-download-logs.md | - Title: How to download logs in Azure Active Directory -description: Learn how to download activity logs in Azure Active Directory. +description: Learn how to download audit, sign-in, and provisioning log data for storage in Azure Active Directory. -# How to: Download logs in Azure Active Directory +# How to download logs in Azure Active Directory The Azure Active Directory (Azure AD) portal gives you access to three types of activity logs: Azure AD stores the data in these logs for a limited amount of time. As an IT ad The option to download the data of an activity log is available in all editions of Azure AD. You can also download activity logs using Microsoft Graph; however, downloading logs programmatically requires a premium license. -The following roles provide read access to audit logs. Always use the least privileged role, according to Microsoft [Zero Trust guidance](/security/zero-trust/zero-trust-overview). -- Reports Reader-- Security Reader-- Security Administrator-- Global Reader (sign-in logs only)-- Global Administrator+The required roles and licenses may vary based on the report. Global Administrators can access all reports, but we recommend using a role with least privilege access to align with the [Zero Trust guidance](/security/zero-trust/zero-trust-overview). 
++| Log / Report | Roles | Licenses | +|--|--|--| +| Audit | Reports Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD | +| Sign-ins | Reports Reader<br>Security Reader<br>Security Administrator<br>Global Reader | All editions of Azure AD | +| Provisioning | Same as audit and sign-ins, plus<br>Security Operator<br>Application Administrator<br>Cloud App Administrator<br>A custom role with `provisioningLogs` permission | Premium P1/P2 | ## Log download details Azure AD stores activity logs for a specific period. For more information, see [ ## How to download activity logs -You can access the activity logs from the **Monitoring** section of Azure AD or from the **Users** page of Azure AD. If you view the audit logs from the **Users** page, the filter category will be set to **UserManagement**. Similarly, if you view the audit logs from the **Groups** page, the filter category will be set to **GroupManagement**. Regardless of how you access the activity logs, your download is based on the filter you've set. +You can access the activity logs from the **Monitoring** section of Azure AD or from the **Users** page of Azure AD. If you view the audit logs from the **Users** page, the filter category is set to **UserManagement**. Similarly, if you view the audit logs from the **Groups** page, the filter category is set to **GroupManagement**. Regardless of how you access the activity logs, your download is based on the filter you've set. -1. Navigate to the activity log you need to download. -1. Adjust the filter for your needs. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Monitoring & health** > **Audit logs**/**Sign-in logs**/**Provisioning logs**. 1. Select **Download**.- - For audit and sign-in logs, a window appears where you'll select the download format (CSV or JSON). 
- - For provisioning logs, you'll select the download format (CSV of JSON) from the Download button. + - For audit and sign-in logs, a window appears where you select the download format (CSV or JSON). + - For provisioning logs, you select the download format (CSV or JSON) from the Download button. - You can change the File Name of the download. - Select the **Download** button. 1. The download processes and sends the file to your default download location. |
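The downloaded file is a flat export of whatever the active filter matched. As a hedged illustration of working with such a download, this Python sketch loads a JSON export and summarizes it by the same **category** field the portal filter uses; the record shape is trimmed and all values are made up, so treat the field names as assumptions about the export format rather than a definitive schema.

```python
import json
from collections import Counter

# Illustrative shape of a downloaded audit-log JSON export; real exports
# carry many more fields (for example, initiatedBy and targetResources).
sample_export = json.dumps([
    {"activityDisplayName": "Update user", "category": "UserManagement", "result": "success"},
    {"activityDisplayName": "Add member to group", "category": "GroupManagement", "result": "success"},
    {"activityDisplayName": "Update user", "category": "UserManagement", "result": "failure"},
])

records = json.loads(sample_export)

# Group the export by the same category field the portal filter uses.
by_category = Counter(r["category"] for r in records)
failures = [r for r in records if r["result"] != "success"]

print(by_category)
print(len(failures))
```

The same grouping works on a CSV download by swapping `json.loads` for `csv.DictReader`.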
active-directory | Howto Integrate Activity Logs With Arcsight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-arcsight.md | - Title: Integrate logs with ArcSight using Azure Monitor -description: Learn how to integrate Azure Active Directory logs with ArcSight using Azure Monitor ------- Previously updated : 10/31/2022-------# Integrate Azure Active Directory logs with ArcSight using Azure Monitor --[Micro Focus ArcSight](https://software.microfocus.com/products/siem-security-information-event-management/overview) is a security information and event management (SIEM) solution that helps you detect and respond to security threats in your platform. You can now route Azure Active Directory (Azure AD) logs to ArcSight using Azure Monitor using the ArcSight connector for Azure AD. This feature allows you to monitor your tenant for security compromise using ArcSight. --In this article, you learn how to route Azure AD logs to ArcSight using Azure Monitor. --## Prerequisites --To use this feature, you need: -* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -* A configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer. --Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor. --## Integrate Azure AD logs with ArcSight --1. First, complete the steps in the **Prerequisites** section of the configuration guide. 
This section includes the following steps: - * Set user permissions in Azure, to ensure there's a user with the **owner** role to deploy and configure the connector. - * Open ports on the server with Syslog NG Daemon SmartConnector, so it's accessible from Azure. - * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector. --2. Follow the steps in the **Deploying the Connector** section of configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties and run the deployment script from the extracted folder. --3. Use the steps in the **Verifying the Deployment in Azure** to make sure the connector is set up and functions correctly. Verify the following prerequisites: - * The requisite Azure functions are created in your Azure subscription. - * The Azure AD logs are streamed to the correct destination. - * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps. - * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format. --4. Finally, complete the post-deployment steps in the **Post-Deployment Configurations** of the configuration guide. This section explains how to perform another configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account. --5. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. 
There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle. --## Next steps --[Configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://community.microfocus.com/t5/ArcSight-Connectors/SmartConnector-for-Microsoft-Azure-Monitor-Event-Hub/ta-p/1671292) |
active-directory | Howto Integrate Activity Logs With Azure Monitor Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-azure-monitor-logs.md | + + Title: Integrate Azure Active Directory logs with Azure Monitor logs +description: Learn how to integrate Azure Active Directory logs with Azure Monitor logs for querying and analysis. +++++++ Last updated : 08/08/2023+++++# Integrate Azure AD logs with Azure Monitor logs ++Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so your sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. ++This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor. ++Use the integration of Azure AD activity logs and Azure Monitor to perform the following tasks: ++- Compare your Azure AD sign-in logs against security logs published by Microsoft Defender for Cloud. +- Troubleshoot performance bottlenecks on your application's sign-in page by correlating application performance data from Azure Application Insights. +- Analyze the Identity Protection risky users and risk detections logs to detect threats in your environment. +- Identify sign-ins from applications still using the Active Directory Authentication Library (ADAL) for authentication. [Learn about the ADAL end-of-support plan.](../develop/msal-migration.md) ++> [!NOTE] +> Integrating Azure Active Directory logs with Azure Monitor automatically enables the Azure Active Directory data connector within Microsoft Sentinel. ++## How do I access it? ++To use this feature, you need: ++* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). +* An Azure AD Premium P1 or P2 tenant.
+* **Global Administrator** or **Security Administrator** access for the Azure AD tenant. +* A **Log Analytics workspace** in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). +* Permission to access data in a Log Analytics workspace. See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. ++## Create a Log Analytics workspace ++A Log Analytics workspace allows you to collect data based on a variety of requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). ++Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. ++## Send logs to Azure Monitor ++Follow the steps below to send logs from Azure Active Directory to Azure Monitor logs. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. +++6. Under **Destination Details**, select the **Send to Log Analytics workspace** check box. ++7. Select the appropriate **Subscription** and **Log Analytics workspace** from the menus. ++8. Select the **Save** button. ++ ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-azure-monitor-logs/diagnostic-settings-log-analytics-workspace.png) ++If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs.
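Under the hood, saving a diagnostic setting amounts to submitting a diagnostic-settings object with a destination and a list of enabled log categories. This Python sketch only builds such a request body; it's a minimal sketch assuming the general Azure Monitor diagnostic-settings shape, the subscription, resource group, and workspace IDs are placeholders, and no call is made to any API.

```python
import json

# Placeholder IDs -- substitute your own subscription, resource group, and workspace.
workspace_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/example-rg/providers/Microsoft.OperationalInsights"
    "/workspaces/example-workspace"
)

# Assumed body shape for a diagnostic setting; the category names match the
# check boxes shown in the Diagnostic settings pane.
setting = {
    "properties": {
        "workspaceId": workspace_id,
        "logs": [
            {"category": category, "enabled": True}
            for category in ("AuditLogs", "SignInLogs")
        ],
    }
}

print(json.dumps(setting, indent=2))
```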
++## Next steps ++* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) +* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md) +* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md) |
active-directory | Howto Integrate Activity Logs With Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | - Title: Integrate Azure Active Directory logs with Azure Monitor | Microsoft Docs -description: Learn how to integrate Azure Active Directory logs with Azure Monitor ------- Previously updated : 06/26/2023------# How to integrate Azure AD logs with Azure Monitor logs --Using **Diagnostic settings** in Azure Active Directory (Azure AD), you can integrate logs with Azure Monitor so sign-in activity and the audit trail of changes within your tenant can be analyzed along with other Azure data. Integrating Azure AD logs with Azure Monitor logs enables rich visualizations, monitoring, and alerting on the connected data. --This article provides the steps to integrate Azure Active Directory (Azure AD) logs with Azure Monitor Logs. --## Roles and licenses --To integrate Azure AD logs with Azure Monitor, you need the following roles and licenses: --* **An Azure subscription:** If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). --* **An Azure AD Premium P1 or P2 tenant:** You can find the license type of your tenant on the [Overview](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) page in Azure AD. --* **Security Administrator access for the Azure AD tenant:** This role is required to set up the Diagnostics settings. --* **Permission to access data in a Log Analytics workspace:** See [Manage access to log data and workspaces in Azure Monitor](../../azure-monitor/logs/manage-access.md) for information on the different permission options and how to configure permissions. --## Integrate logs with Azure Monitor logs --To send Azure AD logs to Azure Monitor Logs you must first have a [Log Analytics workspace](../../azure-monitor/logs/log-analytics-overview.md). 
Then you can set up the Diagnostics settings in Azure AD to send your activity logs to that workspace. --### Create a Log Analytics workspace --A Log Analytics workspace allows you to collect data based on a variety or requirements, such as geographic location of the data, subscription boundaries, or access to resources. Learn how to [create a Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). --Looking for how to set up a Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. --### Set up Diagnostics settings --Once you have a Log Analytics workspace created, follow the steps below to send logs from Azure Active Directory to that workspace. ---Follow the steps below to send logs from Azure Active Directory to Azure Monitor. Looking for how to set up Log Analytics workspace for Azure resources outside of Azure AD? Check out the [Collect and view resource logs for Azure Monitor](../../azure-monitor/essentials/diagnostic-settings.md) article. --1. Sign in to the [Azure portal](https://portal.azure.com) as a **Security Administrator**. --1. Go to **Azure Active Directory** > **Diagnostic settings**. You can also select **Export Settings** from the Audit logs or Sign-in logs. --1. Select **+ Add diagnostic setting** to create a new integration or select **Edit setting** to change an existing integration. --1. Enter a **Diagnostic setting name**. If you're editing an existing integration, you can't change the name. --1. Any or all of the following logs can be sent to the Log Analytics workspace. Some logs may be in public preview but still visible in the portal. 
- * `AuditLogs` - * `SignInLogs` - * `NonInteractiveUserSignInLogs` - * `ServicePrincipalSignInLogs` - * `ManagedIdentitySignInLogs` - * `ProvisioningLogs` - * `ADFSSignInLogs` Active Directory Federation Services (ADFS) - * `RiskyServicePrincipals` - * `RiskyUsers` - * `ServicePrincipalRiskEvents` - * `UserRiskEvents` --1. The following logs are in preview but still visible in Azure AD. At this time, selecting these options will not add new logs to your workspace unless your organization was included in the preview. - * `EnrichedOffice365AuditLogs` - * `MicrosoftGraphActivityLogs` - * `NetworkAccessTrafficLogs` --1. In the **Destination details**, select **Send to Log Analytics workspace** and choose the appropriate details from the menus that appear. - * You can also send logs to any or all of the following destinations. Additional fields appear, depending on your selection. - * **Archive to a storage account:** Provide the number of days you'd like to retain the data in the **Retention days** boxes that appear next to the log categories. Select the appropriate details from the menus that appear. - * **Stream to an event hub:** Select the appropriate details from the menus that appear. - * **Send to partner solution:** Select the appropriate details from the menus that appear. --1. Select **Save** to save the setting. -- ![Screenshot of the Diagnostics settings with some destination details shown.](./media/howto-integrate-activity-logs-with-log-analytics/Configure.png) --If you do not see logs appearing in the selected destination after 15 minutes, sign out and back into Azure to refresh the logs. --> [!NOTE] -> Integrating Azure Active Directory logs with Azure Monitor will automatically enable the Azure Active Directory data connector within Microsoft Sentinel. 
--## Next steps --* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) -* [Learn about the data sources you can analyze with Azure Monitor](../../azure-monitor/data-sources.md) -* [Automate creating diagnostic settings with Azure Policy](../../azure-monitor/essentials/diagnostic-settings-policy.md) |
active-directory | Howto Integrate Activity Logs With Splunk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-splunk.md | - Title: Integrate Splunk using Azure Monitor -description: Learn how to integrate Azure Active Directory logs with Splunk using Azure Monitor. ------- Previously updated : 10/31/2022-------# How to: Integrate Azure Active Directory logs with Splunk using Azure Monitor --In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with Splunk by using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with Splunk. --## Prerequisites --To use this feature, you need: --- An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). --- The [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details). --## Integrate Azure Active Directory logs --1. Open your Splunk instance, and select **Data Summary**. -- ![The "Data Summary" button](./media/howto-integrate-activity-logs-with-splunk/DataSummary.png) --2. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub** -- ![The Data Summary Sourcetypes tab](./media/howto-integrate-activity-logs-with-splunk/source-eventhub.png) --Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure: -- ![Activity logs](./media/howto-integrate-activity-logs-with-splunk/activity-logs.png) --> [!NOTE] -> If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub. 
-> --## Next steps --* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) -* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Howto Integrate Activity Logs With Sumologic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-sumologic.md | - Title: Stream logs to SumoLogic using Azure Monitor -description: Learn how to integrate Azure Active Directory logs with SumoLogic using Azure Monitor. ------- Previously updated : 10/31/2022-------# Integrate Azure Active Directory logs with SumoLogic using Azure Monitor --In this article, you learn how to integrate Azure Active Directory (Azure AD) logs with SumoLogic using Azure Monitor. You first route the logs to an Azure event hub, and then you integrate the event hub with SumoLogic. --## Prerequisites --To use this feature, you need: -* An Azure event hub that contains Azure AD activity logs. Learn how to [stream your activity logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -* A SumoLogic single sign-on enabled subscription. --## Steps to integrate Azure AD logs with SumoLogic --1. First, [stream the Azure AD logs to an Azure event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). -2. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory). -3. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment. -- ![Dashboard](./media/howto-integrate-activity-logs-with-sumologic/overview-dashboard.png) --## Next steps --* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) -* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
active-directory | Howto Manage Inactive User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-manage-inactive-user-accounts.md | Title: How to manage inactive user accounts -description: Learn how to detect and resolve user accounts that have become obsolete +description: Learn how to detect and resolve Azure Active Directory user accounts that have become inactive or obsolete. + Previously updated : 05/02/2023 Last updated : 08/24/2023 -- # How To: Manage inactive user accounts The following details relate to the `lastSignInDateTime` property. If you need to view the latest sign-in activity for a user, you can view the user's sign-in details in Azure AD. You can also use the Microsoft Graph **users by name** scenario described in the [previous section](#detect-inactive-user-accounts-with-microsoft-graph). -1. Sign in to the [Azure portal](https://portal.azure.com). -1. Go to **Azure AD** > **Users** > select a user from the list. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Users** > **All users**. +1. Select a user from the list. 1. In the **My Feed** area of the user's Overview, locate the **Sign-ins** tile. ![Screenshot of the user overview page with the sign-in activity tile highlighted.](media/howto-manage-inactive-user-accounts/last-sign-activity-tile.png) |
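The `lastSignInDateTime` check described above is easy to script once the values are in hand. A minimal Python sketch, assuming timestamps shaped like the Microsoft Graph `signInActivity.lastSignInDateTime` property; the users, dates, and the 90-day threshold are all made up for illustration.

```python
from datetime import datetime, timedelta, timezone

# Sample values shaped like signInActivity.lastSignInDateTime; all made up.
last_sign_ins = {
    "alice@contoso.example": "2023-08-20T09:15:00Z",
    "bob@contoso.example": "2023-01-02T11:00:00Z",
    "carol@contoso.example": None,  # no recorded sign-in
}

now = datetime(2023, 8, 24, tzinfo=timezone.utc)  # fixed "today" for the example
threshold = timedelta(days=90)

def is_inactive(last_sign_in):
    """Treat a user as inactive if the last sign-in is missing or too old."""
    if last_sign_in is None:
        return True
    last = datetime.fromisoformat(last_sign_in.replace("Z", "+00:00"))
    return now - last > threshold

inactive = sorted(u for u, ts in last_sign_ins.items() if is_inactive(ts))
print(inactive)
```

Accounts with no recorded value at all need the same attention as stale ones, which is why `None` is treated as inactive here.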
active-directory | Howto Stream Logs To Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-stream-logs-to-event-hub.md | + + Title: Stream Azure Active Directory logs to an event hub +description: Learn how to stream Azure Active Directory activity logs to an event hub for SIEM tool integration and analysis. +++++++ Last updated : 08/24/2023++++# How to stream activity logs to an event hub ++Your Azure Active Directory (Azure AD) tenant produces large amounts of data every second. Sign-in activity and logs of changes made in your tenant add up to a lot of data that can be hard to analyze. Integrating with Security Information and Event Management (SIEM) tools can help you gain insights into your environment. ++This article shows how you can stream your logs to an event hub, to integrate with one of several SIEM tools. ++## Prerequisites ++To stream logs to a SIEM tool, you first need to create an **Azure event hub**. ++Once you have an event hub that contains Azure AD activity logs, you can set up the SIEM tool integration using the **Azure AD Diagnostics Settings**. ++## Stream logs to an event hub +++6. Select the **Stream to an event hub** check box. ++7. Select the Azure subscription, Event Hubs namespace, and optional event hub where you want to route the logs. ++The subscription and Event Hubs namespace must both be associated with the Azure AD tenant from where you're streaming the logs. ++Once you have the Azure event hub ready, navigate to the SIEM tool you want to integrate with the activity logs. You'll finish the process in the SIEM tool. ++We currently support Splunk, SumoLogic, and ArcSight. Select a tab below to get started. Refer to the tool's documentation. ++# [Splunk](#tab/splunk) ++To use this feature, you need the [Splunk Add-on for Microsoft Cloud Services](https://splunkbase.splunk.com/app/3110/#/details). ++### Integrate Azure AD logs with Splunk ++1. 
Open your Splunk instance and select **Data Summary**. ++ ![The "Data Summary" button](./media/howto-stream-logs-to-event-hub/datasummary.png) ++1. Select the **Sourcetypes** tab, and then select **mscs:azure:eventhub** ++ ![The Data Summary Sourcetypes tab](./media/howto-stream-logs-to-event-hub/source-eventhub.png) ++Append **body.records.category=AuditLogs** to the search. The Azure AD activity logs are shown in the following figure: ++ ![Activity logs](./media/howto-stream-logs-to-event-hub/activity-logs.png) ++If you cannot install an add-on in your Splunk instance (for example, if you're using a proxy or running on Splunk Cloud), you can forward these events to the Splunk HTTP Event Collector. To do so, use this [Azure function](https://github.com/splunk/azure-functions-splunk), which is triggered by new messages in the event hub. ++# [SumoLogic](#tab/SumoLogic) ++To use this feature, you need a SumoLogic single sign-on enabled subscription. ++### Integrate Azure AD logs with SumoLogic ++1. Configure your SumoLogic instance to [collect logs for Azure Active Directory](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#collecting-logs-for-azure-active-directory). ++1. [Install the Azure AD SumoLogic app](https://help.sumologic.com/docs/integrations/microsoft-azure/active-directory-azure#viewing-azure-active-directory-dashboards) to use the pre-configured dashboards that provide real-time analysis of your environment. ++ ![Dashboard](./media/howto-stream-logs-to-event-hub/overview-dashboard.png) ++# [ArcSight](#tab/ArcSight) ++To use this feature, you need a configured instance of ArcSight Syslog NG Daemon SmartConnector (SmartConnector) or ArcSight Load Balancer. If the events are sent to ArcSight Load Balancer, they're sent to the SmartConnector by the Load Balancer. 
++Download and open the [configuration guide for ArcSight SmartConnector for Azure Monitor Event Hubs](https://software.microfocus.com/products/siem-security-information-event-management/overview). This guide contains the steps you need to install and configure the ArcSight SmartConnector for Azure Monitor. ++## Integrate Azure AD logs with ArcSight ++1. Complete the steps in the **Prerequisites** section of the ArcSight configuration guide. This section includes the following steps: + * Set user permissions in Azure to ensure there's a user with the **owner** role to deploy and configure the connector. + * Open ports on the server with Syslog NG Daemon SmartConnector so it's accessible from Azure. + * The deployment runs a Windows PowerShell script, so you must enable PowerShell to run scripts on the machine where you want to deploy the connector. ++1. Follow the steps in the **Deploying the Connector** section of the ArcSight configuration guide to deploy the connector. This section walks you through how to download and extract the connector, configure application properties, and run the deployment script from the extracted folder. ++1. Use the steps in the **Verifying the Deployment in Azure** section to make sure the connector is set up and functions correctly. Verify the following prerequisites: + * The requisite Azure functions are created in your Azure subscription. + * The Azure AD logs are streamed to the correct destination. + * The application settings from your deployment are persisted in the Application Settings in Azure Function Apps. + * A new resource group for ArcSight is created in Azure, with an Azure AD application for the ArcSight connector and storage accounts containing the mapped files in CEF format. ++1. Complete the post-deployment steps in the **Post-Deployment Configurations** section of the ArcSight configuration guide.
This section explains how to perform another configuration if you are on an App Service Plan to prevent the function apps from going idle after a timeout period, configure streaming of resource logs from the event hub, and update the SysLog NG Daemon SmartConnector keystore certificate to associate it with the newly created storage account. ++1. The configuration guide also explains how to customize the connector properties in Azure, and how to upgrade and uninstall the connector. There's also a section on performance improvements, including upgrading to an [Azure Consumption plan](https://azure.microsoft.com/pricing/details/functions) and configuring an ArcSight Load Balancer if the event load is greater than what a single Syslog NG Daemon SmartConnector can handle. ++++## Activity log integration options and considerations ++If your current SIEM isn't supported in Azure Monitor diagnostics yet, you can set up **custom tooling** by using the Event Hubs API. To learn more, see the [Getting started receiving messages from an event hub](../../event-hubs/event-hubs-dotnet-standard-getstarted-send.md). ++**IBM QRadar** is another option for integrating with Azure AD activity logs. The DSM and Azure Event Hubs Protocol are available for download at [IBM support](https://www.ibm.com/support). For more information about integration with Azure, go to the [IBM QRadar Security Intelligence Platform 7.3.0](https://www.ibm.com/support/knowledgecenter/SS42VS_DSM/c_dsm_guide_microsoft_azure_overview.html?cp=SS42VS_7.3.0) site. ++Some sign-in categories contain large amounts of log data, depending on your tenant's configuration. In general, the non-interactive user sign-ins and service principal sign-ins can be 5 to 10 times larger than the interactive user sign-ins.
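When building custom tooling against the Event Hubs API, each message from Azure Monitor wraps the log entries in a `records` array, and counting records per category is a quick way to observe the volume skew described above. A hedged Python sketch with an illustrative message body; real records carry many more fields than shown here.

```python
import json
from collections import Counter

# Illustrative event hub message body; Azure Monitor wraps log entries
# in a "records" array. Categories and counts here are made up.
message_body = json.dumps({
    "records": [
        {"category": "SignInLogs", "operationName": "Sign-in activity"},
        {"category": "NonInteractiveUserSignInLogs", "operationName": "Sign-in activity"},
        {"category": "NonInteractiveUserSignInLogs", "operationName": "Sign-in activity"},
        {"category": "AuditLogs", "operationName": "Update user"},
    ]
})

# Tally records per category to see which streams dominate the feed.
by_category = Counter(r["category"] for r in json.loads(message_body)["records"])
print(by_category.most_common())
```

Filtering on the same `category` field is also how a downstream SIEM search (for example, Splunk's `body.records.category=AuditLogs`) narrows the stream.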
++## Next steps ++- [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md) +- [Use Microsoft Graph to access Azure AD activity logs](quickstart-access-log-with-graph-api.md) |
active-directory | Howto Troubleshoot Sign In Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-troubleshoot-sign-in-errors.md | You need: [!INCLUDE [portal updates](~/articles/active-directory/includes/portal-update.md)] -1. Sign in to the [Azure portal](https://portal.azure.com) using a role of least privilege access. -1. Go to **Azure AD** > **Sign-ins**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. 1. Use the filters to narrow down the results - Search by username if you're troubleshooting a specific user. - Search by application if you're troubleshooting issues with a specific app. |
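The same narrowing the portal filters perform can be applied offline to an export of the sign-in logs. A minimal Python sketch, assuming records with `userPrincipalName`, `appDisplayName`, and `errorCode` fields like the sign-in log columns; every value below is invented for illustration.

```python
# Sample records shaped like sign-in log entries; all values are illustrative.
sign_ins = [
    {"userPrincipalName": "alice@contoso.example", "appDisplayName": "Office 365", "errorCode": 0},
    {"userPrincipalName": "bob@contoso.example", "appDisplayName": "Custom App", "errorCode": 50126},
    {"userPrincipalName": "bob@contoso.example", "appDisplayName": "Office 365", "errorCode": 0},
]

def narrow(records, user=None, app=None, failures_only=False):
    """Mimic the portal filters: by user, by application, and by failure."""
    out = records
    if user:
        out = [r for r in out if r["userPrincipalName"] == user]
    if app:
        out = [r for r in out if r["appDisplayName"] == app]
    if failures_only:
        out = [r for r in out if r["errorCode"] != 0]
    return out

print(narrow(sign_ins, user="bob@contoso.example", failures_only=True))
```

An `errorCode` of `0` indicates success, so filtering on nonzero codes surfaces only the failed attempts to investigate.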
active-directory | Howto Use Azure Monitor Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-azure-monitor-workbooks.md | - Title: Azure Monitor workbooks for Azure Active Directory -description: Learn how to use Azure Monitor workbooks for Azure Active Directory reports. ------- Previously updated : 07/28/2023----# How to use Azure Active Directory Workbooks --Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks, however, workbooks for Azure Active Directory (AD) cover only those identity management scenarios that are associated with Azure AD. --When using workbooks, you can either start with an empty workbook, or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch. --- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks.-- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant.--## Prerequisites --To use Azure Workbooks for Azure AD, you need: --- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md)-- A Log Analytics workspace *and* access to that workspace-- The appropriate roles for Azure Monitor *and* Azure AD--### Log Analytics workspace --You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. There are a combination of factors that determine access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data. 
--For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md). --### Azure Monitor roles --Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access. --- **View**:- - Monitoring Reader - - Log Analytics Reader --- **View and modify settings**:- - Monitoring Contributor - - Log Analytics Contributor --For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader). --For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor) --### Azure AD roles --Read only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace. --- **Read**:- - Reports Reader - - Security Reader - - Global Reader --- **Update**:- - Security Administrator --For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md). --## How to access Azure Workbooks for Azure AD ---1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to **Azure Active Directory** > **Monitoring** > **Workbooks**. - - **Workbooks**: All workbooks created in your tenant - - **Public Templates**: Prebuilt workbooks for common or high priority scenarios - - **My Templates**: Templates you've created -1. Select a report or template from the list. Workbooks may take a few moments to populate. - - Search for a template by name. 
- - Select **Browse across galleries** to view templates that aren't specific to Azure AD. -- ![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png) --## Create a new workbook --Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json). --**To create a new workbook from scratch**: -1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**. -1. Select **+ New**. -1. Select an element from the **+ Add** menu. -- For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md). -- ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png) --**To create a new workbook from a template**: -1. Navigate to **Azure AD** > **Monitoring** > **Workbooks**. -1. Select a workbook template from the Gallery. -1. Select **Edit** from the top of the page. - - Each element of the workbook has its own **Edit** button. - - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md). --1. Select the **Edit** button for any element. Make your changes and select **Done editing**. - ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png) -1. When you're done editing the workbook, select **Save As** to save your workbook with a new name. -1. 
In the **Save As** window: - - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**. - - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md). -1. Select the **Apply** button. --## Next steps --* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md). -* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md). |
active-directory | Howto Use Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-recommendations.md | Title: How to use Azure Active Directory recommendations -description: Learn how to use Azure Active Directory recommendations. +description: Learn how to use Azure Active Directory recommendations to monitor and improve the health of your tenant. -# How to: Use Azure AD recommendations +# How to use Azure Active Directory Recommendations The Azure Active Directory (Azure AD) recommendations feature provides you with personalized insights with actionable guidance to: Some recommendations may require a P2 or other license. For more information, se To view the details of a recommendation: -1. Sign in to Azure using the appropriate least-privilege role. -1. Go to **Azure AD** > **Recommendations** and select a recommendation from the list. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Overview** > **Recommendations tab** +1. Select a recommendation from the list. ![Screenshot of the list of recommendations.](./media/howto-use-recommendations/recommendations-list.png) |
active-directory | Howto Use Sign In Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-sign-in-diagnostics.md | - Title: How to use the Sign-in diagnostic -description: Information on how to use the Sign-in diagnostic in Azure Active Directory. + Title: How to use Azure Active Directory Sign-in diagnostics +description: How to use the Sign-in diagnostic tool in Azure Active Directory to troubleshoot sign-in related scenarios. Determining the reason for a failed sign-in can quickly become a challenging tas This article gives you an overview of what the Sign-in diagnostic is and how you can use it to troubleshoot sign-in related errors. -## How it works +## Prerequisites ++To use the Sign-in diagnostic: +- You must be signed in as at least a **Global Reader**. +- With the correct access level, you can start the Sign-in diagnostic from more than one place. +- Flagged sign-in events can also be reviewed from the Sign-in diagnostic. + - Flagged sign-in events are captured *after* a user has enabled flagging during their sign-in experience. + - For more information, see [flagged sign-ins](overview-flagged-sign-ins.md). ++## How does it work? In Azure AD, sign-in attempts are controlled by: Due to the greater flexibility of the system to respond to a sign-in attempt, yo - Displaying information about what happened. - Providing recommendations to resolve problems. -## How to access it --To use the Sign-in diagnostic, you must be signed into the tenant as a **Global Reader** or **Global Administrator**. With the correct access level, you can start the Sign-in diagnostic from more than one place. --Flagged sign-in events can also be reviewed from the Sign-in diagnostic. Flagged sign-in events are captured *after* a user has enabled flagging during their sign-in experience. For more information, see [flagged sign-ins](overview-flagged-sign-ins.md). 
- ### From Diagnose and Solve Problems You can start the Sign-in diagnostic from the **Diagnose and Solve Problems** area of Azure AD. From Diagnose and Solve Problems you can review any flagged sign-in events or search for a specific sign-in event. You can also start this process from the Conditional Access Diagnose and Solve Problems area. **To search for sign-in events**:-1. Go to **Azure AD** or **Azure AD Conditional Access** > **Diagnose and Solve Problems**. +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). +1. Browse to **Learn & support** > **Diagnose & solve problems** or **Protection** > **Conditional Access** > **Diagnose and Solve Problems**. +1. Select the **Troubleshoot** link on the **Sign-in Diagnostic** tile. 1. Select the **All Sign-In Events** tab to start a search. 1. Enter as many details as possible into the search fields. - **User**: Provide the name or email address of who made the sign-in attempt. You can start the Sign-in diagnostic from the **Diagnose and Solve Problems** ar You can start the Sign-in diagnostic from a specific sign-in event in the Sign-in logs. When you start the process from a specific sign-in event, the diagnostics start right away. You aren't prompted to enter details first. -1. Go to **Azure AD** > **Sign-in logs** and select a sign-in event. +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs** and select a sign-in event. - You can filter your list to make it easier to find specific sign-in events. 1. From the Activity Details window that opens, select the **Launch the Sign-in diagnostic** link. You can start the Sign-in diagnostic from a specific sign-in event in the Sign-i If you're in the middle of creating a support request *and* the options you selected are related to sign-in activity, you'll be prompted to run the Sign-in diagnostics during the support request process. -1. 
Go to **Azure AD** > **Diagnose and Solve Problems**. +1. Browse to **Diagnose and Solve Problems**. 1. Select the appropriate fields as necessary. For example: - **Service type**: Azure Active Directory Sign-in and Multi-Factor Authentication - **Problem type**: Multi-Factor Authentication If you're in the middle of creating a support request *and* the options you sele After the Sign-in diagnostic completes its search, a few things appear on the screen: -- The **Authentication Summary** lists all of the events that match the details you provided.+- The **Authentication summary** lists all of the events that match the details you provided. - Select the **View Columns** option in the upper-right corner of the summary to change the columns that appear.-- The **diagnostic Results** describe what happened during the sign-in events.+- The **Diagnostic results** describe what happened during the sign-in events. - Scenarios could include MFA requirements from a Conditional Access policy, sign-in events that may need to have a Conditional Access policy applied, or a large number of failed sign-in attempts over the past 48 hours. - Related content and links to troubleshooting tools may be provided. - Read through the results to identify any actions that you can take. - Because it's not always possible to resolve issues without more help, a recommended step might be to open a support ticket. - ![Screenshot of the diagnostic Results for a scenario.](media/howto-use-sign-in-diagnostics/diagnostic-result-mfa-proofup.png) + ![Screenshot of the Diagnostic results for a scenario.](media/howto-use-sign-in-diagnostics/diagnostic-result-mfa-proofup.png) - Provide feedback on the results to help improve the feature. |
active-directory | Howto Use Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-use-workbooks.md | + + Title: Azure Monitor workbooks for Azure Active Directory +description: Learn how to use Azure Monitor workbooks for analyzing identity logs in Azure Active Directory reports. +++++++ Last updated : 08/24/2023+++++# How to use Azure Active Directory Workbooks ++Workbooks are found in Azure AD and in Azure Monitor. The concepts, processes, and best practices are the same for both types of workbooks; however, workbooks for Azure Active Directory (AD) cover only those identity management scenarios that are associated with Azure AD. ++When using workbooks, you can either start with an empty workbook or use an existing template. Workbook templates enable you to quickly get started using workbooks without needing to build from scratch. ++- **Public templates** published to a [gallery](../../azure-monitor/visualize/workbooks-overview.md#the-gallery) are a good starting point when you're just getting started with workbooks. +- **Private templates** are helpful when you start building your own workbooks and want to save one as a template to serve as the foundation for multiple workbooks in your tenant. ++## Prerequisites ++To use Azure Workbooks for Azure AD, you need: ++- An Azure AD tenant with a [Premium P1 license](../fundamentals/get-started-premium.md) +- A Log Analytics workspace *and* access to that workspace +- The appropriate roles for Azure Monitor *and* Azure AD ++### Log Analytics workspace ++You must create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md) *before* you can use Azure AD Workbooks. A combination of factors determines access to Log Analytics workspaces. You need the right roles for the workspace *and* the resources sending the data. 
++For more information, see [Manage access to Log Analytics workspaces](../../azure-monitor/logs/manage-access.md). ++### Azure Monitor roles ++Azure Monitor provides [two built-in roles](../../azure-monitor/roles-permissions-security.md#monitoring-reader) for viewing monitoring data and editing monitoring settings. Azure role-based access control (RBAC) also provides two Log Analytics built-in roles that grant similar access. ++- **View**: + - Monitoring Reader + - Log Analytics Reader ++- **View and modify settings**: + - Monitoring Contributor + - Log Analytics Contributor ++For more information on the Azure Monitor built-in roles, see [Roles, permissions, and security in Azure Monitor](../../azure-monitor/roles-permissions-security.md#monitoring-reader). ++For more information on the Log Analytics RBAC roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md#log-analytics-contributor). ++### Azure AD roles ++Read-only access allows you to view Azure AD log data inside a workbook, query data from Log Analytics, or read logs in the Azure AD portal. Update access adds the ability to create and edit diagnostic settings to send Azure AD data to a Log Analytics workspace. ++- **Read**: + - Reports Reader + - Security Reader + - Global Reader ++- **Update**: + - Security Administrator ++For more information on Azure AD built-in roles, see [Azure AD built-in roles](../roles/permissions-reference.md). ++## How to access Azure Workbooks for Azure AD +++1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Reports Reader](../roles/permissions-reference.md#reports-reader). +1. Browse to **Identity** > **Monitoring & health** > **Workbooks**. + - **Workbooks**: All workbooks created in your tenant + - **Public Templates**: Prebuilt workbooks for common or high priority scenarios + - **My Templates**: Templates you've created +1. Select a report or template from the list. 
Workbooks may take a few moments to populate. + - Search for a template by name. + - Select **Browse across galleries** to view templates that aren't specific to Azure AD. ++ ![Find the Azure Monitor workbooks in Azure AD](./media/howto-use-azure-monitor-workbooks/azure-monitor-workbooks-in-azure-ad.png) ++## Create a new workbook ++Workbooks can be created from scratch or from a template. When creating a new workbook, you can add elements as you go or use the **Advanced Editor** option to paste in the JSON representation of a workbook, copied from the [workbooks GitHub repository](https://github.com/Microsoft/Application-Insights-Workbooks/blob/master/schema/workbook.json). ++**To create a new workbook from scratch**: +1. Browse to **Identity** > **Monitoring & health** > **Workbooks**. +1. Select **+ New**. +1. Select an element from the **+ Add** menu. ++ For more information on the available elements, see [Creating an Azure Workbook](../../azure-monitor/visualize/workbooks-create-workbook.md). ++ ![Screenshot of the Azure Workbooks +Add menu options.](./media/howto-use-azure-monitor-workbooks/create-new-workbook-elements.png) ++**To create a new workbook from a template**: +1. Browse to **Identity** > **Monitoring & health** > **Workbooks**. +1. Select a workbook template from the Gallery. +1. Select **Edit** from the top of the page. + - Each element of the workbook has its own **Edit** button. + - For more information on editing workbook elements, see [Azure Workbooks Templates](../../azure-monitor/visualize/workbooks-templates.md). ++1. Select the **Edit** button for any element. Make your changes and select **Done editing**. + ![Screenshot of a workbook in Edit mode, with the Edit and Done Editing buttons highlighted.](./media/howto-use-azure-monitor-workbooks/edit-buttons.png) +1. When you're done editing the workbook, select **Save As** to save your workbook with a new name. +1. 
In the **Save As** window: + - Provide a **Title**, **Subscription**, **Resource Group** (you must have the ability to save a workbook for the selected Resource Group), and **Location**. + - Optionally choose to save your workbook content to an [Azure Storage Account](../../azure-monitor/visualize/workbooks-bring-your-own-storage.md). +1. Select the **Apply** button. ++## Next steps ++* [Create interactive reports by using Monitor workbooks](../../azure-monitor/visualize/workbooks-overview.md). +* [Create custom Azure Monitor queries using Azure PowerShell](../governance/entitlement-management-logs-and-reporting.md). |
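The **Advanced Editor** step above accepts a workbook's JSON serialization directly. As a rough, hypothetical sketch of that shape (the item types and property names mirror what gallery templates typically contain, but treat the values as illustrative and rely on the schema linked in the GitHub repository as the authoritative definition):

```json
{
  "version": "Notebook/1.0",
  "items": [
    {
      "type": 1,
      "name": "text - intro",
      "content": { "json": "## Sign-in overview" }
    },
    {
      "type": 3,
      "name": "query - sign-ins by app",
      "content": {
        "version": "KqlItem/1.0",
        "query": "SigninLogs | summarize count() by AppDisplayName | top 10 by count_",
        "queryType": 0,
        "resourceType": "microsoft.operationalinsights/workspaces"
      }
    }
  ]
}
```

In this sketch, a text element (`type: 1`) renders markdown, while a query element (`type: 3`) runs the embedded query against the selected Log Analytics workspace.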
active-directory | Overview Flagged Sign Ins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-flagged-sign-ins.md | Azure AD sign-in events are critical to understanding what went right or wrong w Flagged Sign-ins is a feature intended to increase the signal-to-noise ratio for user sign-ins requiring help. The functionality is intended to empower users to raise awareness about sign-in errors they want help with. Admins and help desk workers also benefit from finding the right events more efficiently. Flagged Sign-in events contain the same information as other sign-in events, with one addition: they also indicate that a user flagged the event for review by admins. -Flagged sign-ins gives the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event will then appear as “Flagged for Review” in the Azure AD sign-ins log. +Flagged sign-ins give the user the ability to enable flagging when an error is seen on a sign-in page and then reproduce that error. The error event then appears as “Flagged for Review” in the Azure AD sign-ins log. In summary, you can use flagged sign-ins to: Flagged sign-ins gives you the ability to enable flagging when signing in using 5. Open a new browser window (in the same browser application) and attempt the same sign-in that failed. 6. Reproduce the sign-in error that was seen before. -With flagging enabled, the same browser application and client must be used or the events won't be flagged. +With flagging enabled, the same browser application and client must be used or the events aren't flagged. ### Admin: Find flagged events in reports -1. 
Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as at least a [Global Reader](../roles/permissions-reference.md#global-reader). +1. Browse to **Identity** > **Monitoring & health** > **Sign-in logs**. +1. Open the **Add filters** menu and select **Flagged for review**. All events that were flagged by users are shown. 1. If needed, apply more filters to refine the event view. 1. Select the event to review what happened. Any user signing into Azure AD via a web page can use flagged sign-ins for review. Me ## Who can review flagged sign-ins? -Reviewing flagged sign-in events requires permissions to read the sign-in report events in the Azure portal. For more information, see [who can access it?](concept-sign-ins.md#how-do-you-access-the-sign-in-logs) +Reviewing flagged sign-in events requires permissions to read the sign-in report events in the Azure portal. For more information, see [How to access activity logs](howto-access-activity-logs.md#prerequisites). To flag sign-in failures, you don't need extra permissions. |
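Since flagged events are ordinary sign-in records with an extra marker, the portal filter above can be approximated offline against an exported log. This is a hypothetical sketch: the `flaggedForReview` field name follows the Microsoft Graph `signIn` resource, but verify it against your own export before relying on it.

```python
# Sketch: reproduce the "Flagged for review" portal filter against an
# exported sign-in log. The "flaggedForReview" field name follows the
# Microsoft Graph signIn resource -- verify it against your export.
import json

def flagged_events(records):
    """Return only the sign-in records that a user flagged for review."""
    return [r for r in records if r.get("flaggedForReview")]

sample = json.loads("""[
  {"userPrincipalName": "a@contoso.com", "flaggedForReview": true},
  {"userPrincipalName": "b@contoso.com", "flaggedForReview": false}
]""")

for event in flagged_events(sample):
    print(event["userPrincipalName"])
```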
active-directory | Overview Monitoring Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring-health.md | + + Title: What is Azure Active Directory monitoring and health? +description: Provides a general overview of Azure Active Directory monitoring and health. +++++++ Last updated : 08/15/2023++++++# What is Azure Active Directory monitoring and health? ++The features of Azure Active Directory (Azure AD) Monitoring and health provide a comprehensive view of identity-related activity in your environment. This data enables you to: ++- Determine how your users utilize your apps and services. +- Detect potential risks affecting the health of your environment. +- Troubleshoot issues preventing your users from getting their work done. ++Sign-in and audit logs comprise the activity logs behind many Azure AD reports, which can be used to analyze, monitor, and troubleshoot activity in your tenant. Routing your activity logs to an analysis and monitoring solution provides greater insights into your tenant's health and security. ++This article describes the types of activity logs available in Azure AD, the reports that use the logs, and the monitoring services available to help you analyze the data. ++## Identity activity logs ++Activity logs help you understand the behavior of users in your organization. There are three types of activity logs in Azure AD: ++- [**Audit logs**](concept-audit-logs.md) include the history of every task performed in your tenant. ++- [**Sign-in logs**](concept-all-sign-ins.md) capture the sign-in attempts of your users and client applications. ++- [**Provisioning logs**](concept-provisioning-logs.md) provide information about users provisioned in your tenant through a third-party service. ++The activity logs can be viewed in the Azure portal or using the Microsoft Graph API. Activity logs can also be routed to various endpoints for storage or analysis. 
To learn about all of the options for viewing the activity logs, see [How to access activity logs](howto-access-activity-logs.md). ++### Audit logs ++Audit logs provide you with records of system activities for compliance. This data enables you to address common scenarios such as: ++- Someone in my tenant got access to an admin group. Who gave them access? +- I want to know the list of users signing into a specific app because I recently onboarded the app and want to know if it's doing well. +- I want to know how many password resets are happening in my tenant. ++### Sign-in logs ++The sign-ins logs enable you to find answers to questions such as: ++- What is the sign-in pattern of a user? +- How many users have signed in over a week? +- What's the status of these sign-ins? ++### Provisioning logs ++You can use the provisioning logs to find answers to questions like: ++- What groups were successfully created in ServiceNow? +- What users were successfully removed from Adobe? +- What users from Workday were successfully created in Active Directory? ++## Identity reports ++Reviewing the data in the Azure AD activity logs can provide helpful information for IT administrators. To streamline the process of reviewing data on key scenarios, we've created several reports on common scenarios that use the activity logs. ++- [Identity Protection](../identity-protection/overview-identity-protection.md) uses sign-in data to create reports on risky users and sign-in activities. +- Activity related to your applications, such as service principal and app credential activity, is used to create reports in [Usage and insights](concept-usage-insights-report.md). +- [Azure AD workbooks](overview-workbooks.md) provide a customizable way to view and analyze the activity logs. 
+- [Monitor the status of Azure AD recommendations to improve your tenant's security.](overview-recommendations.md) ++## Identity monitoring and tenant health ++Reviewing Azure AD activity logs is the first step in maintaining and improving the health and security of your tenant. You need to analyze the data, monitor for risky scenarios, and determine where you can make improvements. Azure AD monitoring provides the necessary tools to help you make informed decisions. ++Monitoring Azure AD activity logs requires routing the log data to a monitoring and analysis solution. Endpoints include Azure Monitor logs, Microsoft Sentinel, or a third-party Security Information and Event Management (SIEM) tool. ++- [Stream logs to an event hub to integrate with third-party SIEM tools.](howto-stream-logs-to-event-hub.md) +- [Integrate logs with Azure Monitor logs.](howto-integrate-activity-logs-with-log-analytics.md) +- [Analyze logs with Azure Monitor logs and Log Analytics.](howto-analyze-activity-logs-log-analytics.md) +++For an overview of how to access, store, and analyze activity logs, see [How to access activity logs](howto-access-activity-logs.md). +++## Next steps ++- [Learn about the sign-ins logs](concept-all-sign-ins.md) +- [Learn about the audit logs](concept-audit-logs.md) +- [Use Microsoft Graph to access activity logs](quickstart-access-log-with-graph-api.md) +- [Integrate activity logs with SIEM tools](howto-stream-logs-to-event-hub.md) |
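Once the logs are routed to an analysis solution, questions like "How many users have signed in over a week?" become simple aggregations. A minimal sketch in Python over exported sign-in records (the `createdDateTime` and `userPrincipalName` field names follow the Microsoft Graph `signIn` resource; adjust for your export format):

```python
# Sketch: count distinct users who signed in within a recent window,
# given exported sign-in log records. Field names are assumptions
# based on the Microsoft Graph signIn resource.
from datetime import datetime, timedelta, timezone

def users_in_window(records, days=7, now=None):
    """Return the set of distinct users with a sign-in inside the window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return {
        r["userPrincipalName"]
        for r in records
        if datetime.fromisoformat(r["createdDateTime"]) >= cutoff
    }

sample = [
    {"userPrincipalName": "a@contoso.com", "createdDateTime": "2023-08-20T10:00:00+00:00"},
    {"userPrincipalName": "a@contoso.com", "createdDateTime": "2023-08-21T09:00:00+00:00"},
    {"userPrincipalName": "b@contoso.com", "createdDateTime": "2023-08-01T09:00:00+00:00"},
]

now = datetime(2023, 8, 24, tzinfo=timezone.utc)
print(len(users_in_window(sample, days=7, now=now)))  # one distinct user in window
```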
active-directory | Overview Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-monitoring.md | -- Title: What is Azure Active Directory monitoring? -description: Provides a general overview of Azure Active Directory monitoring. ------- Previously updated : 11/01/2022----# Customer intent: As an Azure AD administrator, I want to understand what monitoring solutions are available for Azure AD activity data and how they can help me manage my tenant. ----# What is Azure Active Directory monitoring? --With Azure Active Directory (Azure AD) monitoring, you can now route your Azure AD activity logs to different endpoints. You can then either retain them for long-term use or integrate them with third-party Security Information and Event Management (SIEM) tools to gain insights into your environment. --Currently, you can route the logs to: --- An Azure storage account.-- An Azure event hub, so you can integrate with your Splunk and Sumologic instances.-- An Azure Log Analytics workspace, where you can analyze the data, create dashboards, and alert on specific events--**Prerequisite role**: Global Administrator --> [!VIDEO https://www.youtube.com/embed/syT-9KNfug8] ---## Licensing and prerequisites for Azure AD reporting and monitoring --You'll need an Azure AD Premium license to access the Azure AD sign-in logs. --For detailed feature and licensing information, see the [Azure Active Directory pricing guide](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). --To deploy Azure AD monitoring and reporting, you'll need a user who is a global administrator or security administrator for the Azure AD tenant. --Depending on the final destination of your log data, you'll need one of the following: --* An Azure storage account that you have ListKeys permissions for. We recommend that you use a general storage account and not a Blob storage account. 
For storage pricing information, see the [Azure Storage pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=storage). --* An Azure Event Hubs namespace to integrate with third-party SIEM solutions. --* An Azure Log Analytics workspace to send logs to Azure Monitor logs. --## Diagnostic settings configuration --To configure monitoring settings for Azure AD activity logs, first sign in to the [Azure portal](https://portal.azure.com), then select **Azure Active Directory**. From here, you can access the diagnostic settings configuration page in two ways: --* Select **Diagnostic settings** from the **Monitoring** section. -- ![Diagnostics settings](./media/overview-monitoring/diagnostic-settings.png) - -* Select **Audit Logs** or **Sign-ins**, then select **Export settings**. -- ![Export settings](./media/overview-monitoring/export-settings.png) ---## Route logs to storage account --By routing logs to an Azure storage account, you can retain them for longer than the default retention period outlined in our [retention policies](reference-reports-data-retention.md). Learn how to [route data to your storage account](quickstart-azure-monitor-route-logs-to-storage-account.md). --## Stream logs to event hub --Routing logs to an Azure event hub allows you to integrate with third-party SIEM tools like Sumologic and Splunk. This integration allows you to combine Azure AD activity log data with other data managed by your SIEM to provide richer insights into your environment. Learn how to [stream logs to an event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md). --## Send logs to Azure Monitor logs --[Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) is a solution that consolidates monitoring data from different sources and provides a query language and analytics engine that gives you insights into the operation of your applications and resources. 
By sending Azure AD activity logs to Azure Monitor logs, you can quickly retrieve, monitor and alert on collected data. Learn how to [send data to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md). --You can also install the pre-built views for Azure AD activity logs to monitor common scenarios involving sign-ins and audit events. Learn how to [install and use log analytics views for Azure AD activity logs](../../azure-monitor/visualize/workbooks-view-designer-conversion-overview.md). --## Next steps --* [Activity logs in Azure Monitor](concept-activity-logs-azure-monitor.md) -* [Stream logs to event hub](tutorial-azure-monitor-stream-logs-to-event-hub.md) -* [Send logs to Azure Monitor logs](howto-integrate-activity-logs-with-log-analytics.md) |
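The diagnostic settings described above ultimately resolve to an Azure Monitor `diagnosticSettings` resource. A rough sketch of the payload for the Log Analytics route follows; the `AuditLogs` and `SignInLogs` categories are standard Azure AD log categories, but the placeholder IDs are illustrative and the exact property set should be checked against the current API version:

```json
{
  "properties": {
    "workspaceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>",
    "logs": [
      { "category": "AuditLogs", "enabled": true },
      { "category": "SignInLogs", "enabled": true }
    ]
  }
}
```

Swapping `workspaceId` for a `storageAccountId` or an event hub authorization rule corresponds to the storage-account and event-hub routes covered above.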
active-directory | Overview Reports | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-reports.md | -- Title: What are Azure Active Directory reports? -description: Provides a general overview of Azure Active Directory reports. ------- Previously updated : 02/03/2023----# Customer intent: As an Azure AD administrator, I want to understand what Azure AD reports are available and how I can use them to gain insights into my environment. ----# What are Azure Active Directory reports? --Azure Active Directory (Azure AD) reports provide a comprehensive view of activity in your environment. The provided data enables you to: --- Determine how your apps and services are utilized by your users-- Detect potential risks affecting the health of your environment-- Troubleshoot issues preventing your users from getting their work done --## Activity reports --Activity reports help you understand the behavior of users in your organization. There are two types of activity reports in Azure AD: --- **Audit logs** - The [audit logs activity report](concept-audit-logs.md) provides you with access to the history of every task performed in your tenant.--- **Sign-ins** - With the [sign-ins activity report](concept-sign-ins.md), you can determine who performed the tasks reported by the audit logs report.----> [!VIDEO https://www.youtube.com/embed/ACVpH6C_NL8] --### Audit logs report --The [audit logs report](concept-audit-logs.md) provides you with records of system activities for compliance. This data enables you to address common scenarios such as: --- Someone in my tenant got access to an admin group. Who gave them access? --- I want to know the list of users signing into a specific app since I recently onboarded the app and want to know if it's doing well--- I want to know how many password resets are happening in my tenant---#### What Azure AD license do you need to access the audit logs report? 
--The audit logs report is available for features for which you have licenses. If you have a license for a specific feature, you also have access to the audit log information for it. A detailed feature comparison across the [different types of licenses](../fundamentals/whatis.md#what-are-the-azure-ad-licenses) is available on the [Azure Active Directory pricing page](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). For more information, see [Azure Active Directory features and capabilities](../fundamentals/whatis.md#which-features-work-in-azure-ad). --### Sign-ins report --The [sign-ins report](concept-sign-ins.md) enables you to find answers to questions such as: --- What is the sign-in pattern of a user?-- How many users have signed in over a week?-- What's the status of these sign-ins?--#### What Azure AD license do you need to access the sign-ins activity report? --To access the sign-ins activity report, your tenant must have an Azure AD Premium license associated with it. --## Programmatic access --In addition to the user interface, Azure AD also provides you with [programmatic access](./howto-configure-prerequisites-for-reporting-api.md) to the reports data through a set of REST-based APIs. You can call these APIs from various programming languages and tools. --## Next steps --- [Risky sign-ins report](../identity-protection/howto-identity-protection-investigate-risk.md#risky-sign-ins)-- [Audit logs report](concept-audit-logs.md)-- [Sign-ins logs report](concept-sign-ins.md) |
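The REST-based APIs mentioned under "Programmatic access" are exposed through Microsoft Graph; for example, sign-in events are served from the `auditLogs/signIns` endpoint. A small sketch that assembles a filtered request URL (the filter values are illustrative, and authentication with a bearer token is omitted):

```python
# Sketch: build a filtered Microsoft Graph request URL for sign-in events.
# /v1.0/auditLogs/signIns is the documented endpoint; calling it requires
# an OAuth bearer token, which is not shown here.
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def signins_url(since_iso, upn=None):
    """Return a Graph URL for sign-ins after `since_iso`, optionally for one user."""
    filters = [f"createdDateTime ge {since_iso}"]
    if upn:
        filters.append(f"userPrincipalName eq '{upn}'")
    return f"{GRAPH}/auditLogs/signIns?$filter={quote(' and '.join(filters))}"

print(signins_url("2023-08-17T00:00:00Z", upn="a@contoso.com"))
```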
active-directory | Overview Service Health Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/overview-service-health-notifications.md | - Title: What are Service Health notifications in Azure Active Directory? -description: Learn how Service Health notifications provide you with a customizable dashboard that tracks the health of your Azure services in the regions where you use them. ------- Previously updated : 11/01/2022------# What are Service Health notifications in Azure Active Directory? --Azure Service Health has been updated to provide notifications to tenant admins within the Azure portal when there are Service Health events for Azure Active Directory services. Due to the criticality of these events, an alert card in the Azure AD overview page will also be provided to support the discovery of these notifications. --## How it works --When there's a Service Health notification for an Azure Active Directory service, it's posted to the Service Health page within the Azure portal. Previously, these were subscription events that were posted to all the subscription owners/readers of subscriptions within the tenant that had an issue. To improve the targeting of these notifications, they'll now be available as tenant events to the tenant admins of the impacted tenant. For a transition period, these service events will be available as both tenant events and subscription events. --Now that they're available as tenant events, they appear on the Azure AD overview page as alert cards. Any Service Health notification that has been updated within the last three days will be shown in one of the cards. -- -![Screenshot of the alert cards on the Azure AD overview page.](./media/overview-service-health-notifications/service-health-overview.png) ----Each card: --- Represents a currently active event or a resolved one, distinguished by the icon in the card.
-- Has a link to the event. You can review the event on the Azure Service Health pages. -- -![Screenshot of the event on the Azure Service Health page.](./media/overview-service-health-notifications/service-health-issues.png) --- --For more information on the new Azure Service Health tenant events, see [Azure Service Health portal updates](../../service-health/service-health-portal-update.md). --## Who will see the notifications --Most of the built-in admin roles will have access to see these notifications. For the complete list of all authorized roles, see [Azure Service Health Tenant Admin authorized roles](../../service-health/admin-access-reference.md). Currently, custom roles aren't supported. --## What you should know --Service Health lets you apply alerts and notifications to subscription events. This feature isn't yet supported with tenant events, but will be coming soon. --- ----## Next steps --- [Service Health overview](../../service-health/service-health-overview.md) |
active-directory | Quickstart Access Log With Graph Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md | Title: Access Azure AD logs with the Microsoft Graph API -description: In this quickstart, you learn how you can access the sign-ins log using the Graph API. + Title: Analyze Azure AD sign-in logs with the Microsoft Graph API +description: Learn how to access the sign-ins log and analyze a single sign-in attempt using the Microsoft Graph API. Previously updated : 11/01/2022 Last updated : 08/25/2023 --+ #Customer intent: As an IT admin, you need to know how to use the Graph API to access the log files so that you can fix issues. # Quickstart: Access Azure AD logs with the Microsoft Graph API -With the information in the Azure Active Directory (Azure AD) sign-in logs, you can figure out what happened if a sign-in of a user failed. This quickstart shows you how to access the sign-ins log using the Graph API. +With the information in the Azure Active Directory (Azure AD) sign-in logs, you can figure out what happened if a sign-in of a user failed. This quickstart shows you how to access the sign-ins log using the Microsoft Graph API. ## Prerequisites To complete the scenario in this quickstart, you need: - **Access to an Azure AD tenant**: If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- **A test account called Isabella Simonsen**: If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user).-- **Access to the reporting API**: If you haven't configured access yet, see [How to configure the prerequisites for the reporting API](howto-configure-prerequisites-for-reporting-api.md).+- **Access to the Microsoft Graph API**: If you haven't configured access yet, see [How to configure the prerequisites for the reporting API](howto-configure-prerequisites-for-reporting-api.md). ## Perform a failed sign-in To complete the scenario in this quickstart, you need: The goal of this step is to create a record of a failed sign-in in the Azure AD sign-ins log. -**To complete this step:** --1. Sign in to the [Azure portal](https://portal.azure.com) as Isabella Simonsen using an incorrect password. --2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. For more information, see [Activity reports](./overview-reports.md#activity-reports). +1. Sign in to the [Microsoft Entra admin center](https://entra.microsoft.com) as Isabella Simonsen using an incorrect password. +2. Wait for 5 minutes to ensure that you can find a record of the sign-in in the sign-ins log. ## Find the failed sign-in -This section provides you with the steps to get information about your sign-in using the Graph API. +This section provides the steps to locate the failed sign-in attempt using the Microsoft Graph API. ![Microsoft Graph Explorer query](./media/quickstart-access-log-with-graph-api/graph-explorer-query.png) -**To review the failed sign-in:** - 1. Navigate to [Microsoft Graph Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer). -2. Sign-in to your tenant as global administrator. +2. Follow the prompts to authenticate into your tenant. ![Microsoft Graph Explorer authentication](./media/quickstart-access-log-with-graph-api/graph-explorer-authentication.png) |
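The quickstart above locates the failed sign-in in Graph Explorer; the same lookup can be sketched in code. This sketch assumes the Graph v1.0 `signIn` resource shape (`status.errorCode`, where `0` means success); the sample records and the error code `50126` (invalid username or password) are illustrative, not output from a real tenant.

```python
# Sketch: find a user's failed sign-ins in a page of Graph /auditLogs/signIns
# results. Field names follow the Graph signIn resource.
from urllib.parse import quote

def sign_ins_url(user_display_name: str) -> str:
    """Graph query for one user's sign-in events."""
    f = f"userDisplayName eq '{user_display_name}'"
    return f"https://graph.microsoft.com/v1.0/auditLogs/signIns?$filter={quote(f)}"

def failed_sign_ins(page: list[dict]) -> list[dict]:
    """Keep only records whose status carries a nonzero error code."""
    return [r for r in page if r.get("status", {}).get("errorCode", 0) != 0]

# Illustrative page, shaped like the API's "value" array (abbreviated):
page = [
    {"userDisplayName": "Isabella Simonsen", "status": {"errorCode": 50126}},
    {"userDisplayName": "Isabella Simonsen", "status": {"errorCode": 0}},
]
print(failed_sign_ins(page))
```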
active-directory | Quickstart Analyze Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-analyze-sign-in.md | Title: Analyze sign-ins with the Azure AD sign-ins log + Title: Quickstart guide to analyze a failed Azure AD sign-in description: In this quickstart, you learn how you can use the sign-ins log to determine the reason for a failed sign-in to Azure AD. Previously updated : 11/01/2022 Last updated : 08/21/2023 --+ #Customer intent: As an IT admin, you need to know how to use the sign-ins log so that you can fix sign-in issues. # Quickstart: Analyze sign-ins with the Azure AD sign-ins log The goal of this step is to create a record of a failed sign-in in the Azure AD This section provides you with the steps to analyze a failed sign-in: - **Filter sign-ins**: Remove all records that aren't relevant to your analysis. For example, set a filter to display only the records of a specific user.-- **Lookup additional error information**: In addition to the information you can find in the sign-ins log, you can also look up the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error. +- **Lookup additional error information**: In addition to the information you can find in the sign-ins log, you can also look up the error using the [sign-in error lookup tool](https://login.microsoftonline.com/error). This tool might provide you with additional information for a sign-in error. **To review the failed sign-in:** This section provides you with the steps to analyze a failed sign-in: Review the outcome of the tool and determine whether it provides you with additional information.
-![Error code lookup tool](./media/concept-all-sign-ins/error-code-lookup-tool.png) -- ## More tests Now that you know how to find an entry in the sign-in log by name, you should also try to find the record using the following filters: Now that you know how to find an entry in the sign-in log by name, you should a ![Status failure](./media/quickstart-analyze-sign-in/status-failure.png) -- ## Clean up resources When no longer needed, delete the test user. If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users.md#delete-a-user). ## Next steps -> [!div class="nextstepaction"] -> [What are Azure Active Directory reports?](overview-reports.md) +- [Learn how to use the sign-in diagnostic](howto-use-sign-in-diagnostics.md) +- [Analyze sign-in logs with Azure Monitor Log Analytics](howto-analyze-activity-logs-log-analytics.md) |
active-directory | Quickstart Azure Monitor Route Logs To Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-azure-monitor-route-logs-to-storage-account.md | - Title: Tutorial - Archive Azure Active Directory logs to a storage account -description: Learn how to route Azure Active Directory logs to a storage account ------- Previously updated : 07/14/2023----# Customer intent: As an IT administrator, I want to learn how to route Azure AD logs to an Azure storage account so I can retain them for longer than the default retention period. ----# Tutorial: Archive Azure AD logs to an Azure storage account --In this tutorial, you learn how to set up Azure Monitor diagnostics settings to route Azure Active Directory (Azure AD) logs to an Azure storage account. --## Prerequisites --To use this feature, you need: --* An Azure subscription with an Azure storage account. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). -* An Azure AD tenant. -* A user who's a *Global Administrator* or *Security Administrator* for the Azure AD tenant. -* To export sign-in data, you must have an Azure AD P1 or P2 license. --## Archive logs to an Azure storage account ---1. Sign in to the [Azure portal](https://portal.azure.com). --1. Select **Azure Active Directory** > **Monitoring** > **Audit logs**. --1. Select **Export Data Settings**. --1. You can either create a new setting (up to three settings are allowed) or edit an existing setting. - - To change an existing setting, select **Edit setting** next to the diagnostic setting you want to update. - - To add new settings, select **Add diagnostic setting**. -- ![Export settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/ExportSettings.png) --1.
Once in the **Diagnostic setting** pane, if you're creating a new setting, enter a name for the setting to remind you of its purpose (for example, *Send to Azure storage account*). You can't change the name of an existing setting. --1. Under **Destination Details**, select the **Archive to a storage account** check box. Text fields for the retention period appear next to each log category. --1. Select the Azure subscription and storage account to which you want to route the logs. --1. Select all the relevant categories under **Category details**: -- ![Diagnostics settings](./media/quickstart-azure-monitor-route-logs-to-storage-account/DiagnosticSettings.png) --1. In the **Retention days** field, enter the number of days of retention you need of your log data. By default, this value is *0*, which means that logs are retained in the storage account indefinitely. If you set a different value, events older than the number of days selected are automatically cleaned up. -- > [!NOTE] - > The Diagnostic settings storage retention feature is being deprecated. For details on this change, see [**Migrate from diagnostic settings storage retention to Azure Storage lifecycle management**](../../azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md). --1. Select **Save** to save the setting. --1. Close the window to return to the Diagnostic settings pane.
--## Next steps --* [Tutorial: Configure a log analytics workspace](tutorial-log-analytics-wizard.md) -* [Interpret audit logs schema in Azure Monitor](./overview-reports.md) -* [Interpret sign-in logs schema in Azure Monitor](reference-azure-monitor-sign-ins-log-schema.md) |
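The portal steps in the tutorial above correspond to a diagnostic-settings payload that can also be sent to the Azure Monitor REST API. This sketch only builds the JSON body; the overall shape follows the Azure Monitor `diagnosticSettings` schema as an assumption to verify against the current API version, the category names are as the article shows them, and the truncated storage account ID is a placeholder.

```python
# Sketch: the JSON body behind the "Archive to a storage account" steps.
# A retention of 0 days keeps logs in the storage account indefinitely.
def archive_setting(storage_account_id: str, retention_days: int = 0) -> dict:
    categories = ("AuditLogs", "SignInLogs")  # names as shown in the portal
    return {
        "properties": {
            "storageAccountId": storage_account_id,
            "logs": [
                {
                    "category": c,
                    "enabled": True,
                    # Retention only applies when a nonzero value is set.
                    "retentionPolicy": {"enabled": retention_days > 0,
                                        "days": retention_days},
                }
                for c in categories
            ],
        }
    }

body = archive_setting("/subscriptions/.../mystorageaccount", retention_days=30)
```

Note that, as the article's deprecation note says, per-setting storage retention is being replaced by Azure Storage lifecycle management, so new configurations should prefer `retention_days=0` plus a lifecycle policy.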
active-directory | Quickstart Filter Audit Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-filter-audit-log.md | - Title: Filter your Azure AD audit log -description: In this quickstart, you learn how you can filter entries in your Azure AD audit log. ---- Previously updated : 11/01/2022-------#Customer intent: As an IT admin, you need to know how to filter your audit log so that you can analyze management activities. --# Quickstart: Filter your Azure AD audit log --With the information in the Azure AD audit log, you get access to records of system activities for compliance. -This quickstart shows how you can locate a newly created user account in your audit log. ---## Prerequisites --To complete the scenario in this quickstart, you need: --- **Access to an Azure AD tenant** - If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- **A test account called Isabella Simonsen** - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users.md#add-a-new-user).--## Find the new user account --This section provides you with the steps to filter your audit log. ---**To find the new user:** --1. Navigate to the [audit log](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Audit). --2. To list only records for Isabella Simonsen: -- a. In the toolbar, select **Add filters**. - - ![Add user filter](./media/quickstart-analyze-sign-in/add-filters.png) -- b. In the **Pick a field** list, select **Target**, and then select **Apply**. -- c. In the **Target** textbox, type the **User Principal Name** of **Isabella Simonsen**, and then select **Apply**. --3. Select the filtered item. -- ![Filtered items](./media/quickstart-filter-audit-log/audit-log-list.png) --4. Review the **Audit Log Details**.
- - ![Audit log details](./media/quickstart-filter-audit-log/audit-log-details.png) - - --## Clean up resources --When no longer needed, delete the test user. If you don't know how to delete an Azure AD user, see [Delete users from Azure AD](../fundamentals/add-users.md#delete-a-user). --## Next steps --> [!div class="nextstepaction"] -> [What are Azure Active Directory reports?](overview-reports.md) |
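The portal's **Target** filter from the quickstart above can be mirrored in code against exported audit records. This sketch assumes the Graph `directoryAudit` shape, where `targetResources` is a list of objects that may carry a `userPrincipalName`; the sample records and the `isabella@contoso.com` UPN are hypothetical.

```python
# Sketch: keep only audit records whose target matches a given UPN,
# like the portal's Target filter.
def by_target_upn(records: list[dict], upn: str) -> list[dict]:
    return [
        r for r in records
        if any(t.get("userPrincipalName") == upn for t in r.get("targetResources", []))
    ]

# Hypothetical records shaped like Graph directoryAudit entries:
records = [
    {"activityDisplayName": "Add user",
     "targetResources": [{"userPrincipalName": "isabella@contoso.com"}]},
    {"activityDisplayName": "Update policy", "targetResources": []},
]
print(by_target_upn(records, "isabella@contoso.com"))
```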
active-directory | Recommendation Migrate From Adal To Msal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/recommendation-migrate-from-adal-to-msal.md | Title: Azure Active Directory recommendation - Migrate from ADAL to MSAL | Microsoft Docs + Title: Migrate from ADAL to MSAL recommendation description: Learn why you should migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries. -+ Previously updated : 08/10/2023 Last updated : 08/15/2023 -- # Azure AD recommendation: Migrate from the Azure Active Directory Library to the Microsoft Authentication Libraries Existing apps that use ADAL will continue to work after the end-of-support date. ## Action plan -The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. You can identify your apps in the Azure portal or programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. --### [Azure portal](#tab/Azure-portal) --There are four steps to identifying and updating your apps in the Azure portal. The following steps are covered in detail in the [List all apps using ADAL](../develop/howto-get-list-of-all-auth-library-apps.md) article. --1. Send Azure AD sign-in event to Azure Monitor. -1. [Access the sign-ins workbook in Azure AD.](../develop/howto-get-list-of-all-auth-library-apps.md) -1. Identify the apps that use ADAL. -1. Update your code. - - The steps to update your code vary depending on the type of application. - - For example, the steps for .NET and Python applications have separate instructions. - - For a full list of instructions for each scenario, see [How to migrate to MSAL](../develop/msal-migration.md#how-to-migrate-to-msal). +The first step to migrating your apps from ADAL to MSAL is to identify all applications in your tenant that are currently using ADAL. 
You can identify your apps programmatically with the Microsoft Graph API or the Microsoft Graph PowerShell SDK. The steps for the Microsoft Graph PowerShell SDK are provided in the Recommendation details in the Azure Active Directory portal. ### [Microsoft Graph API](#tab/Microsoft-Graph-API) You can use Microsoft Graph to identify apps that need to be migrated to MSAL. To get started, see [How to use Microsoft Graph with Azure AD recommendations](howto-use-recommendations.md#how-to-use-microsoft-graph-with-azure-active-directory-recommendations). -Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant. +1. Sign in to [Graph Explorer](https://aka.ms/ge). +1. Select **GET** as the HTTP method from the dropdown. +1. Set the API version to **beta**. +1. Run the following query in Microsoft Graph, replacing the `<TENANT_ID>` placeholder with your tenant ID. This query returns a list of the impacted resources in your tenant. ```http https://graph.microsoft.com/beta/directory/recommendations/<TENANT_ID>_Microsoft.Identity.IAM.Insights.AdalToMsalMigration/impactedResources You can run the following set of commands in Windows PowerShell. These commands + ## Frequently asked questions ### Why does it take 30 days to change the status to completed? To reduce false positives, the service uses a 30 day window for ADAL requests. T ### How were ADAL applications identified before the recommendation was released? -The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) is an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook does not capture Service Principal sign-ins, while the recommendation does. 
+The [Azure AD sign-ins workbook](../develop/howto-get-list-of-all-auth-library-apps.md) was an alternative method to identify these apps. The workbook is still available to you, but using the workbook requires streaming sign-in logs to Azure Monitor first. The ADAL to MSAL recommendation works out of the box. Plus, the sign-ins workbook doesn't capture Service Principal sign-ins, while the recommendation does. ### Why is the number of ADAL applications different in the workbook and the recommendation? |
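The beta Graph query quoted in the row above can be assembled programmatically for a given tenant. This sketch only constructs the URL, keeping the article's convention that the recommendation ID is the tenant ID followed by the insight name; it does not fill in a real tenant ID.

```python
# Sketch: build the impactedResources query for the ADAL-to-MSAL
# recommendation, as quoted in the article.
def adal_recommendation_url(tenant_id: str) -> str:
    rec_id = f"{tenant_id}_Microsoft.Identity.IAM.Insights.AdalToMsalMigration"
    return (
        "https://graph.microsoft.com/beta/directory/recommendations/"
        f"{rec_id}/impactedResources"
    )

print(adal_recommendation_url("<TENANT_ID>"))  # replace with your tenant ID
```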
active-directory | Reference Audit Activities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-audit-activities.md | Title: Azure Active Directory (Azure AD) audit activity reference -description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory (Azure AD). +description: Get an overview of the audit activities that can be logged in your audit logs in Azure Active Directory. Azure Active Directory (Azure AD) audit logs collect all traceable activities wi This article provides a comprehensive list of the audit categories and their related activities. Use the "In this article" section to jump to a specific audit category. -Audit log activities and categories change periodically. The tables are updated regularly, but may not be in sync with what is available in Azure AD. Provide us feedback if you think there's a missing audit category or activity. +Audit log activities and categories change periodically. The tables are updated regularly, but may not be in sync with what is available in Azure AD. Provide us with feedback if you think there's a missing audit category or activity. -1. Sign in to the **Azure portal** using one of the [required roles](concept-audit-logs.md#how-do-i-access-it). +1. Sign in to the **Azure portal** using one of the [required roles](concept-audit-logs.md). 1. Browse to **Azure Active Directory** > **Audit logs**. 1. Adjust the filters accordingly. 1. Select a row from the resulting table to view the details. With [Azure AD Identity Governance access reviews](../governance/manage-user-acc ## Account provisioning -Each time an account is provisioned in your Azure AD tenant, a log for that account is captured. Automated provisioning, such as with [Azure AD Connect cloud sync](../hybrid/cloud-sync/what-is-cloud-sync.md), will be found in this log. The Account provisioning service only has one audit category in the logs. 
+Each time an account is provisioned in your Azure AD tenant, a log for that account is captured. Automated provisioning, such as with [Azure AD Connect cloud sync](../hybrid/cloud-sync/what-is-cloud-sync.md), is found in this log. The Account provisioning service only has one audit category in the logs. |Audit Category|Activity| ||| This set of audit logs is related to [B2C](../../active-directory-b2c/overview.m |ApplicationManagement|Retrieve V2 application service principals| |ApplicationManagement|Update V2 application| |ApplicationManagement|Update V2 application permission grant|-|Authentication|A self-service sign up request was completed| +|Authentication|A self-service sign-up request was completed| |Authentication|An API was called as part of a user flow| |Authentication|Delete all available strong authentication devices| |Authentication|Evaluate Conditional Access policies| |
active-directory | Reference Basic Info Sign In Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-basic-info-sign-in-logs.md | - Title: Basic info in the Azure AD sign-in logs -description: Learn what the basic info in the sign-in logs is about. ------- Previously updated : 10/28/2022-------# Basic info in the Azure AD sign-in logs --Azure AD logs all sign-ins into an Azure tenant for compliance. As an IT administrator, you need to know what the values in the sign-in logs mean, so that you can interpret the log values correctly. [Learn how to access, view, and analyze Azure AD sign-in logs](concept-sign-ins.md) --This article explains the values on the Basic info tab of the sign-ins log. --## Unique identifiers --In Azure AD, a resource access has three relevant components: --- **Who** – The identity (User) doing the sign-in. -- **How** – The client (Application) used for the access. -- **What** – The target (Resource) accessed by the identity.--Each component has an associated unique identifier (ID). Below is an example of a user using the Microsoft Azure classic deployment model to access the Azure portal. --![Open audit logs](./media/reference-basic-info-sign-in-logs/sign-in-details-basic-info.png) --### Tenant --The sign-in log tracks two tenant identifiers: --- **Home tenant** – The tenant that owns the user identity. -- **Resource tenant** – The tenant that owns the (target) resource.--These identifiers are relevant in cross-tenant scenarios. For example, to find out how users outside your tenant are accessing your resources, select all entries where the home tenant doesn't match the resource tenant. -For the home tenant, Azure AD tracks the ID and the name. --### Request ID --The request ID is an identifier that corresponds to an issued token. If you are looking for sign-ins with a specific token, you need to extract the request ID from the token first.
---### Correlation ID --The correlation ID groups sign-ins from the same sign-in session. The identifier was implemented for convenience. Its accuracy is not guaranteed because the value is based on parameters passed by a client. --### Sign-in --The sign-in identifier is a string the user provides to Azure AD to identify themselves when attempting to sign in. It's usually a user principal name (UPN), but can be another identifier such as a phone number. --### Authentication requirement --This attribute shows the highest level of authentication needed through all the sign-in steps for the sign-in to succeed. Graph API supports `$filter` (`eq` and `startsWith` operators only). --### Sign-in event types --Indicates the category of sign-in the event represents. For user sign-ins, the category can be `interactiveUser` or `nonInteractiveUser` and corresponds to the value for the **isInteractive** property on the sign-in resource. For managed identity sign-ins, the category is `managedIdentity`. For service principal sign-ins, the category is `servicePrincipal`. The Azure portal doesn't show this value, but the sign-in event is placed in the tab that matches its sign-in event type. Possible values are: --- `interactiveUser`-- `nonInteractiveUser`-- `servicePrincipal`-- `managedIdentity`-- `unknownFutureValue`--The Microsoft Graph API supports `$filter` (`eq` operator only). --### User type --The type of a user. Examples include `member`, `guest`, or `external`. ---### Cross-tenant access type --This attribute describes the type of cross-tenant access used by the actor to access the resource.
Possible values are: --- `none` - A sign-in event that did not cross an Azure AD tenant's boundaries.-- `b2bCollaboration` - A cross-tenant sign-in performed by a guest user using B2B Collaboration.-- `b2bDirectConnect` - A cross-tenant sign-in performed by a B2B direct connect user.-- `microsoftSupport` - A cross-tenant sign-in performed by a Microsoft support agent in a Microsoft customer tenant.-- `serviceProvider` - A cross-tenant sign-in performed by a Cloud Service Provider (CSP) or similar admin on behalf of that CSP's customer in a tenant.-- `unknownFutureValue` - A sentinel value used by MS Graph to help clients handle changes in enum lists. For more information, see [Best practices for working with Microsoft Graph](/graph/best-practices-concept).--If the sign-in did not pass the boundaries of a tenant, the value is `none`. --### Conditional Access evaluation --This value shows whether continuous access evaluation (CAE) was applied to the sign-in event. There are multiple sign-in requests for each authentication. Some are shown on the interactive tab, while others are shown on the non-interactive tab. CAE is only displayed as true for one of the requests, and it can be on the interactive tab or non-interactive tab. For more information, see [Monitor and troubleshoot sign-ins with continuous access evaluation in Azure AD](../conditional-access/howto-continuous-access-evaluation-troubleshoot.md). --## Next steps --* [Learn about exporting Azure AD sign-in logs](concept-activity-logs-azure-monitor.md) -* [Explore the sign-in diagnostic in Azure AD](./howto-use-sign-in-diagnostics.md) |
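The cross-tenant comparison described in the Tenant section above can be sketched as a simple check over exported sign-in records. Property names follow the Microsoft Graph `signIn` resource (`homeTenantId`, `resourceTenantId`); the sample tenant IDs are placeholders.

```python
# Sketch: flag cross-tenant sign-ins by comparing the two tenant IDs
# the sign-in log tracks.
def is_cross_tenant(record: dict) -> bool:
    """True when the user's home tenant differs from the resource tenant."""
    return record.get("homeTenantId") != record.get("resourceTenantId")

events = [
    {"homeTenantId": "tenant-a", "resourceTenantId": "tenant-a"},
    {"homeTenantId": "tenant-b", "resourceTenantId": "tenant-a"},
]
print([is_cross_tenant(e) for e in events])  # [False, True]
```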
active-directory | Reference Powershell Reporting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-powershell-reporting.md | |
active-directory | Tutorial Configure Log Analytics Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-configure-log-analytics-workspace.md | + + Title: Configure a log analytics workspace in Azure AD +description: Learn how to configure an Azure AD Log Analytics workspace and run Kusto queries on your identity data. ++++ Last updated : 08/25/2023++++++#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment. +++# Tutorial: Configure a log analytics workspace +++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Configure a log analytics workspace for your audit and sign-in logs +> * Run queries using the Kusto Query Language (KQL) +> * Create an alert rule that sends alerts when a specific account is used +> * Create a custom workbook using the quickstart template +> * Add a query to an existing workbook template ++## Prerequisites ++- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/). ++- An Azure Active Directory (Azure AD) tenant. ++- A user who's at least a **Security Administrator** for the Azure AD tenant. +++Familiarize yourself with these articles: ++- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md) ++- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md) ++- [Manage emergency access account in Azure AD](../roles/security-emergency-access.md) ++- [KQL quick reference](/azure/data-explorer/kql-quick-reference) ++- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md) ++++## Configure a workspace +++++This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs. 
+Configuring a log analytics workspace consists of two main steps: + +1. Creating a log analytics workspace +2. Configuring diagnostic settings ++**To configure a workspace:** ++1. Sign in to the [Azure portal](https://portal.azure.com) as at least a [Security Administrator](../roles/permissions-reference.md#security-administrator). ++2. Browse to **Log Analytics workspaces**. ++ ![Search resources services and docs](./media/tutorial-log-analytics-wizard/search-services.png) ++3. Select **Create**. ++ ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png) ++4. On the **Create Log Analytics workspace** page, perform the following steps: ++ ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png) ++ 1. Select your subscription. ++ 2. Select a resource group. + + 3. In the **Name** textbox, type a name (for example, *MytestWorkspace1*). ++ 4. Select your region. ++5. Click **Review + Create**. ++ ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png) ++6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace. ++ ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png) ++7. Search for **Azure Active Directory**. ++ ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png) ++8. In the **Monitoring** section, click **Diagnostic setting**. ++ ![Screenshot shows Diagnostic settings selected from Monitoring.](./media/tutorial-log-analytics-wizard/diagnostic-settings.png) ++9. On the **Diagnostic settings** page, click **Add diagnostic setting**. ++ ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png) ++10. On the **Diagnostic setting** page, perform the following steps: ++ ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png) ++ 1.
Under **Category details**, select **AuditLogs** and **SigninLogs**. ++ 2. Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace. + + 3. Click **Save**. ++## Run queries ++This procedure shows how to run queries using the **Kusto Query Language (KQL)**. +++**To run a query:** +++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++3. In the **Monitoring** section, click **Logs**. ++4. On the **Logs** page, click **Get Started**. ++5. In the **Search** textbox, type your query. ++6. Click **Run**. +++### KQL query examples ++Take 10 random entries from the input data: ++`SigninLogs | take 10` ++Look at the sign-ins where Conditional Access was a success: ++`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus` +++Count how many successes there have been: ++`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count` +++Aggregate the count of successful sign-ins by user by day: ++`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSignins = count() by UserDisplayName, bin(TimeGenerated, 1d)` +++View how many times a user performs a certain operation in a specific time period: ++`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity` +++Pivot the results on operation name: ++`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)` +++Merge the audit and sign-in logs by using an inner join: ++`AuditLogs | where OperationName contains "Add User" | extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) | project TimeGenerated, UserPrincipalName | join kind = inner (SigninLogs) on UserPrincipalName 
| summarize arg_min(TimeGenerated, *) by UserPrincipalName | extend SigninDate = TimeGenerated` +++View the number of sign-ins by client app type: ++`SigninLogs | summarize count() by ClientAppUsed` ++Count the sign-ins by day: ++`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)` ++Take 5 random entries and project the columns you wish to see in the results: ++`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated` +++Take the top 5 in descending order of time and project the columns you wish to see: ++`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated` ++Create a new column by combining the values of two other columns: ++`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed` ++## Create an alert rule ++This procedure shows how to send alerts when the breakglass account is used. ++**To create an alert rule:** ++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++3. In the **Monitoring** section, click **Logs**. ++4. On the **Logs** page, click **Get Started**. ++5. In the **Search** textbox, type: `SigninLogs | where UserDisplayName contains "BreakGlass" | project UserDisplayName` ++6. Click **Run**. ++7. In the toolbar, click **New alert rule**. ++ ![New alert rule](./media/tutorial-log-analytics-wizard/new-alert-rule.png) ++8. On the **Create alert rule** page, verify that the scope is correct. ++9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count** ++ ![Default condition](./media/tutorial-log-analytics-wizard/default-condition.png) ++10. On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps: ++ ![Alert logic](./media/tutorial-log-analytics-wizard/alert-logic.png) ++ 1. As **Based on**, select **Number of results**. 
++ 2. As **Operator**, select **Greater than**. ++ 3. As **Threshold value**, select **0**. ++11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps: ++ ![Evaluated based on](./media/tutorial-log-analytics-wizard/evaluated-based-on.png) ++ 1. As **Period (in minutes)**, select **5**. ++ 2. As **Frequency (in minutes)**, select **5**. ++ 3. Click **Done**. ++12. Under **Action group**, click **Select action group**. ++ ![Action group](./media/tutorial-log-analytics-wizard/action-group.png) ++13. On the **Select an action group to attach to this alert rule**, click **Create action group**. ++ ![Create action group](./media/tutorial-log-analytics-wizard/create-action-group.png) ++14. On the **Create action group** page, perform the following steps: ++ ![Instance details](./media/tutorial-log-analytics-wizard/instance-details.png) ++ 1. In the **Action group name** textbox, type **My action group**. ++ 2. In the **Display name** textbox, type **My action**. ++ 3. Click **Review + create**. ++ 4. Click **Create**. +++15. Under **Customize action**, perform the following steps: ++ ![Customize actions](./media/tutorial-log-analytics-wizard/customize-actions.png) ++ 1. Select **Email subject**. ++ 2. In the **Subject line** textbox, type: `Breakglass account has been used` ++16. Under **Alert rule details**, perform the following steps: ++ ![Alert rule details](./media/tutorial-log-analytics-wizard/alert-rule-details.png) ++ 1. In the **Alert rule name** textbox, type: `Breakglass account` ++ 2. In the **Description** textbox, type: `Your emergency access account has been used` ++17. Click **Create alert rule**. +++## Create a custom workbook ++This procedure shows how to create a new workbook using the quickstart template. ++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++3. In the **Monitoring** section, click **Workbooks**. 
++ ![Screenshot shows Monitoring in the Azure portal menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png) ++4. In the **Quickstart** section, click **Empty**. ++ ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png) ++5. Click **Add**. ++ ![Add workbook](./media/tutorial-log-analytics-wizard/add-workbook.png) ++6. Click **Add text**. ++ ![Add text](./media/tutorial-log-analytics-wizard/add-text.png) +++7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**. ++ ![Workbook text](./media/tutorial-log-analytics-wizard/workbook-text.png) ++8. In the new workbook, click **Add**, and then click **Add query**. ++ ![Add query](./media/tutorial-log-analytics-wizard/add-query.png) ++9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed` ++10. Click **Run Query**. ++ ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png) ++11. In the toolbar, under **Visualization**, click **Pie chart**. ++ ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png) ++12. Click **Done Editing**. ++ ![Done editing](./media/tutorial-log-analytics-wizard/done-workbook-editing.png) ++++## Add a query to a workbook template ++This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of Conditional Access successes to failures. +++1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. ++2. Search for **Azure Active Directory**. ++3. In the **Monitoring** section, click **Workbooks**. ++ ![Screenshot shows Monitoring in the menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png) ++4. In the **Conditional Access** section, click **Conditional Access Insights and Reporting**. 
++ ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png) ++5. In the toolbar, click **Edit**. ++ ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png) ++6. In the toolbar, click the three dots, then **Add**, and then **Add query**. ++ ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png) ++7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus` ++8. Click **Run Query**. ++ ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png) ++9. Click **Time Range**, and then select **Set in query**. ++10. Click **Visualization**, and then select **Bar chart**. ++11. Click **Advanced Settings**, and as the chart title, type `Conditional Access status over the last 20 days`, and then click **Done Editing**. ++ ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png) +++++++++## Next steps ++Advance to the next article to learn more about Azure AD monitoring. +> [!div class="nextstepaction"] +> [Monitoring](overview-monitoring.md) |
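The `where`, `summarize`, and `count` operators in the KQL examples above boil down to filter-then-group-and-count operations over log rows. As a rough, non-authoritative analogy (the sample rows below are made up and only approximate the real SigninLogs schema), the same computations in Python:

```python
from collections import Counter

# Hypothetical sample rows standing in for SigninLogs records; the real
# SigninLogs table has many more columns than shown here.
signin_logs = [
    {"UserDisplayName": "Alice", "ClientAppUsed": "Browser", "ConditionalAccessStatus": "success"},
    {"UserDisplayName": "Bob", "ClientAppUsed": "Mobile Apps and Desktop clients", "ConditionalAccessStatus": "failure"},
    {"UserDisplayName": "Alice", "ClientAppUsed": "Browser", "ConditionalAccessStatus": "success"},
]

# KQL: SigninLogs | summarize count() by ClientAppUsed
by_client_app = Counter(row["ClientAppUsed"] for row in signin_logs)

# KQL: SigninLogs | where ConditionalAccessStatus == "success" | count
success_count = sum(1 for row in signin_logs if row["ConditionalAccessStatus"] == "success")

print(by_client_app["Browser"], success_count)  # 2 2
```

The same mental model applies to `bin(TimeGenerated, 1d)`: it is just an extra grouping key (the row's day) added to the counter.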
active-directory | Tutorial Log Analytics Wizard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-log-analytics-wizard.md | - Title: Configure a log analytics workspace in Azure AD -description: Learn how to configure log analytics. ----- Previously updated : 10/31/2022-------#Customer intent: As an IT admin, I want to set up log analytics so I can analyze the health of my environment. ----# Tutorial: Configure a log analytics workspace ---In this tutorial, you learn how to: --> [!div class="checklist"] -> * Configure a log analytics workspace for your audit and sign-in logs -> * Run queries using the Kusto Query Language (KQL) -> * Create an alert rule that sends alerts when a specific account is used -> * Create a custom workbook using the quickstart template -> * Add a query to an existing workbook template --## Prerequisites --- An Azure subscription with at least one P1 licensed admin. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).--- An Azure Active Directory (Azure AD) tenant.--- A user who's a Global Administrator or Security Administrator for the Azure AD tenant.---Familiarize yourself with these articles: --- [Tutorial: Collect and analyze resource logs from an Azure resource](../../azure-monitor/essentials/tutorial-resource-logs.md)--- [How to integrate activity logs with Log Analytics](./howto-integrate-activity-logs-with-log-analytics.md)--- [Manage emergency access account in Azure AD](../roles/security-emergency-access.md)--- [KQL quick reference](/azure/data-explorer/kql-quick-reference)--- [Azure Monitor Workbooks](../../azure-monitor/visualize/workbooks-overview.md)----## Configure a workspace ---This procedure outlines how to configure a log analytics workspace for your audit and sign-in logs. -Configuring a log analytics workspace consists of two main steps: - -1. Creating a log analytics workspace -2. 
Setting diagnostic settings --**To configure a workspace:** ---1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. --2. Search for **log analytics workspaces**. -- ![Search resources services and docs](./media/tutorial-log-analytics-wizard/search-services.png) --3. On the log analytics workspaces page, click **Add**. -- ![Screenshot shows the Add button in the log analytics workspaces page.](./media/tutorial-log-analytics-wizard/add.png) --4. On the **Create Log Analytics workspace** page, perform the following steps: -- ![Create log analytics workspace](./media/tutorial-log-analytics-wizard/create-log-analytics-workspace.png) -- 1. Select your subscription. -- 2. Select a resource group. - - 3. In the **Name** textbox, type a name (for example, MytestWorkspace1). -- 4. Select your region. --5. Click **Review + Create**. -- ![Review and create](./media/tutorial-log-analytics-wizard/review-create.png) --6. Click **Create** and wait for the deployment to succeed. You may need to refresh the page to see the new workspace. -- ![Create](./media/tutorial-log-analytics-wizard/create-workspace.png) --7. Search for **Azure Active Directory**. -- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png) --8. In the **Monitoring** section, click **Diagnostic settings**. -- ![Screenshot shows Diagnostic settings selected from Monitoring.](./media/tutorial-log-analytics-wizard/diagnostic-settings.png) --9. On the **Diagnostic settings** page, click **Add diagnostic setting**. -- ![Add diagnostic setting](./media/tutorial-log-analytics-wizard/add-diagnostic-setting.png) --10. On the **Diagnostic setting** page, perform the following steps: -- ![Select diagnostics settings](./media/tutorial-log-analytics-wizard/select-diagnostics-settings.png) -- 1. Under **Category details**, select **AuditLogs** and **SigninLogs**. -- 2. 
Under **Destination details**, select **Send to Log Analytics**, and then select your new log analytics workspace. - - 3. Click **Save**. --## Run queries --This procedure shows how to run queries using the **Kusto Query Language (KQL)**. ---**To run a query:** ---1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. --2. Search for **Azure Active Directory**. -- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png) --3. In the **Monitoring** section, click **Logs**. --4. On the **Logs** page, click **Get Started**. --5. In the **Search** textbox, type your query. --6. Click **Run**. ---### KQL query examples --Take 10 random entries from the input data: --`SigninLogs | take 10` --Look at the sign-ins where Conditional Access was a success: --`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus` ---Count how many successes there have been: --`SigninLogs | where ConditionalAccessStatus == "success" | project UserDisplayName, ConditionalAccessStatus | count` ---Aggregate the count of successful sign-ins by user by day: --`SigninLogs | where ConditionalAccessStatus == "success" | summarize SuccessfulSignins = count() by UserDisplayName, bin(TimeGenerated, 1d)` ---View how many times a user performs a certain operation in a specific time period: --`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | summarize count() by OperationName, Identity` ---Pivot the results on operation name: --`AuditLogs | where TimeGenerated > ago(30d) | where OperationName contains "Add member to role" | project OperationName, Identity | evaluate pivot(OperationName)` ---Merge the audit and sign-in logs by using an inner join: --`AuditLogs |where OperationName contains "Add User" |extend UserPrincipalName = tostring(TargetResources[0].userPrincipalName) |project TimeGenerated, UserPrincipalName |join 
kind = inner (SigninLogs) on UserPrincipalName |summarize arg_min(TimeGenerated, *) by UserPrincipalName |extend SigninDate = TimeGenerated` ---View the number of sign-ins by client app type: --`SigninLogs | summarize count() by ClientAppUsed` --Count the sign-ins by day: --`SigninLogs | summarize NumberOfEntries=count() by bin(TimeGenerated, 1d)` --Take 5 random entries and project the columns you wish to see in the results: --`SigninLogs | take 5 | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated` ---Take the top 5 in descending order of time and project the columns you wish to see: --`SigninLogs | top 5 by TimeGenerated desc | project ClientAppUsed, Identity, ConditionalAccessStatus, Status, TimeGenerated` --Create a new column by combining the values of two other columns: --`SigninLogs | limit 10 | extend RiskUser = strcat(RiskDetail, "-", Identity) | project RiskUser, ClientAppUsed` --## Create an alert rule --This procedure shows how to send alerts when the breakglass account is used. --**To create an alert rule:** --1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. --2. Search for **Azure Active Directory**. -- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png) --3. In the **Monitoring** section, click **Logs**. --4. On the **Logs** page, click **Get Started**. --5. In the **Search** textbox, type: `SigninLogs |where UserDisplayName contains "BreakGlass" | project UserDisplayName` --6. Click **Run**. --7. In the toolbar, click **New alert rule**. -- ![New alert rule](./media/tutorial-log-analytics-wizard/new-alert-rule.png) --8. On the **Create alert rule** page, verify that the scope is correct. --9. Under **Condition**, click: **Whenever the average custom log search is greater than `logic undefined` count** -- ![Default condition](./media/tutorial-log-analytics-wizard/default-condition.png) --10. 
On the **Configure signal logic** page, in the **Alert logic** section, perform the following steps: -- ![Alert logic](./media/tutorial-log-analytics-wizard/alert-logic.png) -- 1. As **Based on**, select **Number of results**. -- 2. As **Operator**, select **Greater than**. -- 3. As **Threshold value**, select **0**. --11. On the **Configure signal logic** page, in the **Evaluated based on** section, perform the following steps: -- ![Evaluated based on](./media/tutorial-log-analytics-wizard/evaluated-based-on.png) -- 1. As **Period (in minutes)**, select **5**. -- 2. As **Frequency (in minutes)**, select **5**. -- 3. Click **Done**. --12. Under **Action group**, click **Select action group**. -- ![Action group](./media/tutorial-log-analytics-wizard/action-group.png) --13. On the **Select an action group to attach to this alert rule**, click **Create action group**. -- ![Create action group](./media/tutorial-log-analytics-wizard/create-action-group.png) --14. On the **Create action group** page, perform the following steps: -- ![Instance details](./media/tutorial-log-analytics-wizard/instance-details.png) -- 1. In the **Action group name** textbox, type **My action group**. -- 2. In the **Display name** textbox, type **My action**. -- 3. Click **Review + create**. -- 4. Click **Create**. ---15. Under **Customize action**, perform the following steps: -- ![Customize actions](./media/tutorial-log-analytics-wizard/customize-actions.png) -- 1. Select **Email subject**. -- 2. In the **Subject line** textbox, type: `Breakglass account has been used` --16. Under **Alert rule details**, perform the following steps: -- ![Alert rule details](./media/tutorial-log-analytics-wizard/alert-rule-details.png) -- 1. In the **Alert rule name** textbox, type: `Breakglass account` -- 2. In the **Description** textbox, type: `Your emergency access account has been used` --17. Click **Create alert rule**. 
---## Create a custom workbook --This procedure shows how to create a new workbook using the quickstart template. -----1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. --2. Search for **Azure Active Directory**. -- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png) --3. In the **Monitoring** section, click **Workbooks**. -- ![Screenshot shows Monitoring in the Azure portal menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png) --4. In the **Quickstart** section, click **Empty**. -- ![Quick start](./media/tutorial-log-analytics-wizard/quick-start.png) --5. Click **Add**. -- ![Add workbook](./media/tutorial-log-analytics-wizard/add-workbook.png) --6. Click **Add text**. -- ![Add text](./media/tutorial-log-analytics-wizard/add-text.png) ---7. In the textbox, type: `# Client apps used in the past week`, and then click **Done Editing**. -- ![Workbook text](./media/tutorial-log-analytics-wizard/workbook-text.png) --8. In the new workbook, click **Add**, and then click **Add query**. -- ![Add query](./media/tutorial-log-analytics-wizard/add-query.png) --9. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(7d) | project TimeGenerated, UserDisplayName, ClientAppUsed | summarize count() by ClientAppUsed` --10. Click **Run Query**. -- ![Screenshot shows the Run Query button.](./media/tutorial-log-analytics-wizard/run-workbook-query.png) --11. In the toolbar, under **Visualization**, click **Pie chart**. -- ![Pie chart](./media/tutorial-log-analytics-wizard/pie-chart.png) --12. Click **Done Editing**. -- ![Done editing](./media/tutorial-log-analytics-wizard/done-workbook-editing.png) ----## Add a query to a workbook template --This procedure shows how to add a query to an existing workbook template. The example is based on a query that shows the distribution of Conditional Access success to failures. ---1. 
Sign in to the [Azure portal](https://portal.azure.com) as a global administrator. --2. Search for **Azure Active Directory**. -- ![Screenshot shows Azure Active Directory in Azure search.](./media/tutorial-log-analytics-wizard/search-azure-ad.png) --3. In the **Monitoring** section, click **Workbooks**. -- ![Screenshot shows Monitoring in the menu with Workbooks selected.](./media/tutorial-log-analytics-wizard/workbooks.png) --4. In the **Conditional Access** section, click **Conditional Access Insights and Reporting**. -- ![Screenshot shows the Conditional Access Insights and Reporting option.](./media/tutorial-log-analytics-wizard/conditional-access-template.png) --5. In the toolbar, click **Edit**. -- ![Screenshot shows the Edit button.](./media/tutorial-log-analytics-wizard/edit-workbook-template.png) --6. In the toolbar, click the three dots, then **Add**, and then **Add query**. -- ![Add workbook query](./media/tutorial-log-analytics-wizard/add-custom-workbook-query.png) --7. In the query textbox, type: `SigninLogs | where TimeGenerated > ago(20d) | where ConditionalAccessPolicies != "[]" | summarize dcount(UserDisplayName) by bin(TimeGenerated, 1d), ConditionalAccessStatus` --8. Click **Run Query**. -- ![Screenshot shows the Run Query button to run this query.](./media/tutorial-log-analytics-wizard/run-workbook-insights-query.png) --9. Click **Time Range**, and then select **Set in query**. --10. Click **Visualization**, and then select **Bar chart**. --11. Click **Advanced Settings**, as chart title, type `Conditional Access status over the last 20 days`, and then click **Done Editing**. -- ![Set chart title](./media/tutorial-log-analytics-wizard/set-chart-title.png) ---------## Next steps --Advance to the next article to learn how to manage device identities by using the Azure portal. -> [!div class="nextstepaction"] -> [Monitoring](overview-monitoring.md) |
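The alert rule in the tutorial above fires when the breakglass query returns more results than the threshold (0) within each 5-minute evaluation window. A minimal sketch of that evaluation logic (the function and variable names here are illustrative, not the Azure Monitor API):

```python
# Illustrative mirror of the configured alert condition:
# "Based on: Number of results, Operator: Greater than, Threshold: 0".
def alert_fires(result_rows, threshold=0):
    """Return True when the query returned more rows than the threshold."""
    return len(result_rows) > threshold

# Hypothetical query results for one 5-minute evaluation window.
window_results = [{"UserDisplayName": "BreakGlass-emergency"}]

print(alert_fires(window_results))  # True
print(alert_fires([]))              # False
```

With **Frequency** and **Period** both set to 5 minutes, this check runs every 5 minutes over the rows from the preceding 5 minutes, so any breakglass sign-in in a window triggers the action group.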
active-directory | Admin Units Assign Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md | |
active-directory | Admin Units Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-manage.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Administrative units let you subdivide your organization into any unit that you want, and then assign specific administrators that can manage only the members of that unit. For example, you could use administrative units to delegate permissions to administrators of each school at a large university, so they could control access, manage users, and set policies only in the School of Engineering. |
active-directory | Admin Units Members Dynamic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-dynamic.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. You can add or remove users or devices for administrative units manually. With this preview, you can add or remove users or devices for administrative units dynamically using rules. This article describes how to create administrative units with dynamic membership rules using the Azure portal, PowerShell, or Microsoft Graph API. |
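The dynamic membership rules mentioned in the row above use the same rule syntax as dynamic group membership. As a sketch only (the property values here are hypothetical), a rule that keeps an administrative unit populated with a department's users might look like:

```
(user.department -eq "Engineering") -and (user.country -eq "United States")
```

Users whose attributes match the expression are added to the administrative unit automatically, and removed when they stop matching.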
active-directory | Admin Units Members List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-list.md | |
active-directory | Admin Units Members Remove | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-remove.md | |
active-directory | Admin Units Restricted Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-restricted-management.md | -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> See the [Product Terms](https://aka.ms/EntraPreviewsTermsOfUse) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. Restricted management administrative units allow you to protect specific objects in your tenant from modification by anyone other than a specific set of administrators that you designate. This allows you to meet security or compliance requirements without having to remove tenant-level role assignments from your administrators. |
active-directory | Assign Roles Different Scopes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/assign-roles-different-scopes.md | |
active-directory | Concept Understand Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/concept-understand-roles.md | |
active-directory | Custom Assign Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-assign-powershell.md | |
active-directory | Custom Available Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-available-permissions.md | |
active-directory | Custom Consent Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-consent-permissions.md | |
active-directory | Custom Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-create.md | |
active-directory | Custom Enterprise App Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-app-permissions.md | |
active-directory | Custom Enterprise Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-enterprise-apps.md | |
active-directory | Custom Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/custom-overview.md | |
active-directory | Groups Assign Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-assign-role.md | |
active-directory | Groups Create Eligible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-create-eligible.md | |
active-directory | Groups Remove Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-remove-assignment.md | |
active-directory | Groups View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/groups-view-assignments.md | |
active-directory | Manage Roles Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/manage-roles-portal.md | |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | This article lists the Azure AD built-in roles you can assign to allow managemen > | [Fabric Administrator](#fabric-administrator) | Can manage all aspects of the Fabric and Power BI products. | a9ea8996-122f-4c74-9520-8edcd192826c | > | [Global Administrator](#global-administrator) | Can manage all aspects of Azure AD and Microsoft services that use Azure AD identities. | 62e90394-69f5-4237-9190-012177145e10 | > | [Global Reader](#global-reader) | Can read everything that a Global Administrator can, but not update anything. | f2ef992c-3afb-46b9-b7cf-a126ee74c451 |-> | [Global Secure Access Administrator](#global-secure-access-administrator) | Create and manage all aspects of Microsoft Entra Internet Access and Microsoft Entra Private Access, including managing access to public and private endpoints. | ac434307-12b9-4fa1-a708-88bf58caabc1 | +> | [Global Secure Access Administrator](#global-secure-access-administrator) | Create and manage all aspects of Microsoft Entra Internet Access and Microsoft Entra Private Access, including managing access to public and private endpoints. | ac434307-12b9-4fa1-a708-88bf58caabc1 | > | [Groups Administrator](#groups-administrator) | Members of this role can create/manage groups, create/manage groups settings like naming and expiration policies, and view groups activity and audit reports. | fdd7a751-b60b-444a-984c-02652fe8fa1c | > | [Guest Inviter](#guest-inviter) | Can invite guest users independent of the 'members can invite guests' setting. | 95e79109-95c0-4d8e-aee3-d01accf2d47b | > | [Helpdesk Administrator](#helpdesk-administrator) | Can reset passwords for non-administrators and Helpdesk Administrators. 
| 729827e3-9c14-49f7-bb1b-9608f156bbb8 | Users with this role have the ability to manage Azure Active Directory Condition > | microsoft.directory/namedLocations/delete | Delete custom rules that define network locations | > | microsoft.directory/namedLocations/standard/read | Read basic properties of custom rules that define network locations | > | microsoft.directory/namedLocations/basic/update | Update basic properties of custom rules that define network locations |-> | microsoft.directory/conditionalAccessPolicies/create | Create Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/delete | Delete Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies | -> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for Conditional Access policies | +> | microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies | +> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies | +> | 
microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies | > | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions | ## Customer LockBox Access Approver Users with this role have access to all administrative features in Azure Active > | microsoft.directory/organization/allProperties/allTasks | Read and update all properties for an organization | > | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD | > | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties |-> | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of Conditional Access policies | +> | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of conditional access policies | > | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy | Users with this role have access to all administrative features in Azure Active > | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners | > | 
microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/create | Create cross-tenant sync policy for partners | +> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/basic/update | Update basic settings of cross-tenant sync policy | +> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions | Users with this role have access to all administrative features in Azure Active > | microsoft.commerce.billing/purchases/standard/read | Read purchase services in M365 Admin Center. 
| > | microsoft.dynamics365/allEntities/allTasks | Manage all aspects of Dynamics 365 | > | microsoft.edge/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Edge |+> | microsoft.networkAccess/allEntities/allProperties/allTasks | Manage all aspects of Entra Network Access | > | microsoft.flow/allEntities/allTasks | Manage all aspects of Microsoft Power Automate | > | microsoft.hardware.support/shippingAddress/allProperties/allTasks | Create, read, update, and delete shipping addresses for Microsoft hardware warranty claims, including shipping addresses created by others | > | microsoft.hardware.support/shippingStatus/allProperties/read | Read shipping status for open Microsoft hardware warranty claims | Users with this role have access to all administrative features in Azure Active > | microsoft.powerApps.powerBI/allEntities/allTasks | Manage all aspects of Fabric and Power BI | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams | > | microsoft.virtualVisits/allEntities/allProperties/allTasks | Manage and share Virtual Visits information and metrics from admin centers or the Virtual Visits app |+> | microsoft.viva.goals/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Goals | +> | microsoft.viva.pulse/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Pulse | > | microsoft.windows.defenderAdvancedThreatProtection/allEntities/allTasks | Manage all aspects of Microsoft Defender for Endpoint | > | microsoft.windows.updatesDeployments/allEntities/allProperties/allTasks | Read and configure all aspects of Windows Update Service | Users with this role **cannot** do the following: > | microsoft.directory/pendingExternalUserProfiles/standard/read | Read standard properties of external user profiles in the extended directory for Teams | > | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | > | 
microsoft.directory/policies/allProperties/read | Read all properties of policies |-> | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of Conditional Access policies | +> | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies | > | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy | > | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy | > | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | Users with this role **cannot** do the following: > | microsoft.commerce.billing/allEntities/allProperties/read | Read all resources of Office 365 billing | > | microsoft.commerce.billing/purchases/standard/read | Read purchase services in M365 Admin Center. 
| > | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge |+> | microsoft.networkAccess/allEntities/allProperties/read | Read all aspects of Entra Network Access | > | microsoft.hardware.support/shippingAddress/allProperties/read | Read shipping addresses for Microsoft hardware warranty claims, including existing shipping addresses created by others | > | microsoft.hardware.support/shippingStatus/allProperties/read | Read shipping status for open Microsoft hardware warranty claims | > | microsoft.hardware.support/warrantyClaims/allProperties/read | Read Microsoft hardware warranty claims | Users with this role **cannot** do the following: > | microsoft.permissionsManagement/allEntities/allProperties/read | Read all aspects of Entra Permissions Management | > | microsoft.teams/allEntities/allProperties/read | Read all properties of Microsoft Teams | > | microsoft.virtualVisits/allEntities/allProperties/read | Read all aspects of Virtual Visits |+> | microsoft.viva.goals/allEntities/allProperties/read | Read all aspects of Microsoft Viva Goals | +> | microsoft.viva.pulse/allEntities/allProperties/read | Read all aspects of Microsoft Viva Pulse | > | microsoft.windows.updatesDeployments/allEntities/allProperties/read | Read all aspects of Windows Update Service | ## Global Secure Access Administrator Users with this role **cannot** do the following: > | microsoft.directory/applications/policies/read | Read policies of applications | > | microsoft.directory/applications/standard/read | Read standard properties of applications | > | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, excluding custom security attributes audit logs |-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies | +> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies | > | microsoft.directory/connectorGroups/allProperties/read | Read 
all properties of application proxy connector groups | > | microsoft.directory/connectors/allProperties/read | Read all properties of application proxy connectors | > | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy | Users in this role can create, manage and deploy provisioning configuration setu > | microsoft.directory/servicePrincipals/synchronizationCredentials/manage | Manage application provisioning secrets and credentials | > | microsoft.directory/servicePrincipals/synchronizationJobs/manage | Start, restart, and pause application provisioning synchronization jobs | > | microsoft.directory/servicePrincipals/synchronizationSchema/manage | Create and manage application provisioning synchronization jobs and schema |+> | microsoft.directory/servicePrincipals/appRoleAssignedTo/update | Update service principal role assignments | > | microsoft.directory/servicePrincipals/audience/update | Update audience properties on service principals | > | microsoft.directory/servicePrincipals/authentication/update | Update authentication properties on service principals | > | microsoft.directory/servicePrincipals/basic/update | Update basic properties on service principals | Azure Advanced Threat Protection | Monitor and respond to suspicious security ac > | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners | > | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |+> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/create | Create cross-tenant sync policy for partners | +> | 
microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/basic/update | Update basic settings of cross-tenant sync policy | +> | microsoft.directory/crossTenantAccessPolicy/partners/identitySynchronization/standard/read | Read basic properties of cross-tenant sync policy | > | microsoft.directory/deviceLocalCredentials/standard/read | Read all properties of the backed up local administrator account credentials for Azure AD joined devices, except the password | > | microsoft.directory/domains/federation/update | Update federation property of domains | > | microsoft.directory/domains/federationConfiguration/standard/read | Read standard properties of federation configuration for domains | Azure Advanced Threat Protection | Monitor and respond to suspicious security ac > | microsoft.directory/policies/basic/update | Update basic properties on policies | > | microsoft.directory/policies/owners/update | Update owners of policies | > | microsoft.directory/policies/tenantDefault/update | Update default organization policies |-> | microsoft.directory/conditionalAccessPolicies/create | Create Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/delete | Delete Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies | -> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for Conditional Access policies | +> | 
microsoft.directory/conditionalAccessPolicies/create | Create conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/delete | Delete conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies | +> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/basic/update | Update basic properties for conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/owners/update | Update owners for conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/tenantDefault/update | Update the default tenant for conditional access policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/resourceNamespaces/resourceActions/authenticationContext/update | Update Conditional Access authentication context of Microsoft 365 role-based access control (RBAC) resource actions | Azure Advanced Threat Protection | Monitor and respond to suspicious security ac > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |+> | microsoft.networkAccess/allEntities/allProperties/allTasks | Manage all aspects of Entra Network Access | > | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources in 
the Security and Compliance centers | > | microsoft.office365.protectionCenter/allEntities/basic/update | Update basic properties of all resources in the Security and Compliance centers | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/allTasks | Create and manage attack payloads in Attack Simulator | In | Can do > | microsoft.directory/policies/standard/read | Read basic properties on policies | > | microsoft.directory/policies/owners/read | Read owners of policies | > | microsoft.directory/policies/policyAppliedTo/read | Read policies.policyAppliedTo property |-> | microsoft.directory/conditionalAccessPolicies/standard/read | Read Conditional Access for policies | -> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of Conditional Access policies | -> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for Conditional Access policies | +> | microsoft.directory/conditionalAccessPolicies/standard/read | Read conditional access for policies | +> | microsoft.directory/conditionalAccessPolicies/owners/read | Read the owners of conditional access policies | +> | microsoft.directory/conditionalAccessPolicies/policyAppliedTo/read | Read the "applied to" property for conditional access policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/signInReports/allProperties/read | Read all properties on sign-in reports, including privileged properties | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health |+> | microsoft.networkAccess/allEntities/allProperties/read | Read all aspects of Entra Network Access | > | microsoft.office365.protectionCenter/allEntities/standard/read | Read standard properties of all resources 
in the Security and Compliance centers | > | microsoft.office365.protectionCenter/attackSimulator/payload/allProperties/read | Read all properties of attack payloads in Attack Simulator | > | microsoft.office365.protectionCenter/attackSimulator/reports/allProperties/read | Read reports of attack simulation, responses, and associated training | For more information, see [Roles and permissions in Viva Goals](/viva/goals/role > | | | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |+> | microsoft.viva.goals/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Viva Goals | ## Viva Pulse Administrator The following roles should not be used. They have been deprecated and will be re Not every role returned by PowerShell or MS Graph API is visible in Azure portal. The following table organizes those differences. 
-API name | Azure portal name | Notes | - | --Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles) -Device Managers | Deprecated | [Deprecated roles documentation](#deprecated-roles) -Device Users | Deprecated | [Deprecated roles documentation](#deprecated-roles) -Directory Synchronization Accounts | Not shown because it shouldn't be used | [Directory Synchronization Accounts documentation](#directory-synchronization-accounts) -Guest User | Not shown because it can't be used | NA -Partner Tier 1 Support | Not shown because it shouldn't be used | [Partner Tier1 Support documentation](#partner-tier1-support) -Partner Tier 2 Support | Not shown because it shouldn't be used | [Partner Tier2 Support documentation](#partner-tier2-support) -Restricted Guest User | Not shown because it can't be used | NA -User | Not shown because it can't be used | NA -Workplace Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles) +| API name | Azure portal name | Notes | +| | | | +| Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles) | +| Device Managers | Deprecated | [Deprecated roles documentation](#deprecated-roles) | +| Device Users | Deprecated | [Deprecated roles documentation](#deprecated-roles) | +| Directory Synchronization Accounts | Not shown because it shouldn't be used | [Directory Synchronization Accounts documentation](#directory-synchronization-accounts) | +| Guest User | Not shown because it can't be used | NA | +| Partner Tier 1 Support | Not shown because it shouldn't be used | [Partner Tier1 Support documentation](#partner-tier1-support) | +| Partner Tier 2 Support | Not shown because it shouldn't be used | [Partner Tier2 Support documentation](#partner-tier2-support) | +| Restricted Guest User | Not shown because it can't be used | NA | +| User | Not shown because it can't be used | NA | +| Workplace Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles) | ## Who 
can reset passwords In the following table, the columns list the roles that can reset passwords and The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope). -Role that password can be reset | Password Admin | Helpdesk Admin | Auth Admin | User Admin | Privileged Auth Admin | Global Admin - | | | | | | -Auth Admin | | | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: -Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Global Admin | | | | | :heavy_check_mark: | :heavy_check_mark:\* -Groups Admin | | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Helpdesk Admin | | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Message Center Reader | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Privileged Auth Admin | | | | | :heavy_check_mark: | :heavy_check_mark: -Privileged Role Admin | | | | | :heavy_check_mark: | :heavy_check_mark: -Reports Reader | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | | | | | :heavy_check_mark: | :heavy_check_mark: -User with a role scoped to a [restricted 
management administrative unit](./admin-units-restricted-management.md) | | | | | :heavy_check_mark: | :heavy_check_mark: -User Admin | | | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Usage Summary Reports Reader | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -All custom roles | | | | | :heavy_check_mark: | :heavy_check_mark: +| Role that password can be reset | Password Admin | Helpdesk Admin | Auth Admin | User Admin | Privileged Auth Admin | Global Admin | +| | | | | | | | +| Auth Admin | | | :white_check_mark: | | :white_check_mark: | :white_check_mark: | +| Directory Readers | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Global Admin | | | | | :white_check_mark: | :white_check_mark:\* | +| Groups Admin | | | | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Guest Inviter | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Helpdesk Admin | | :white_check_mark: | | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Message Center Reader | | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Password Admin | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Privileged Auth Admin | | | | | :white_check_mark: | :white_check_mark: | +| Privileged Role Admin | | | | | :white_check_mark: | :white_check_mark: | +| Reports Reader | | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| User<br/>(no admin role) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| User<br/>(no admin role, but member or owner of a [role-assignable 
group](groups-concept.md)) | | | | | :white_check_mark: | :white_check_mark: | +| User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | | | | | :white_check_mark: | :white_check_mark: | +| User Admin | | | | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Usage Summary Reports Reader | | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| All custom roles | | | | | :white_check_mark: | :white_check_mark: | > [!IMPORTANT] > The [Partner Tier2 Support](#partner-tier2-support) role can reset passwords and invalidate refresh tokens for all non-administrators and administrators (including Global Administrators). The [Partner Tier1 Support](#partner-tier1-support) role can reset passwords and invalidate refresh tokens for only non-administrators. These roles should not be used because they are deprecated. In the following table, the columns list the roles that can perform sensitive ac The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope). 
-Role that sensitive action can be performed upon | Auth Admin | User Admin | Privileged Auth Admin | Global Admin - | | | | -Auth Admin | :heavy_check_mark: | | :heavy_check_mark: | :heavy_check_mark: -Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Global Admin | | | :heavy_check_mark: | :heavy_check_mark: -Groups Admin | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Helpdesk Admin | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Message Center Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Privileged Auth Admin | | | :heavy_check_mark: | :heavy_check_mark: -Privileged Role Admin | | | :heavy_check_mark: | :heavy_check_mark: -Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | | | :heavy_check_mark: | :heavy_check_mark: -User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | | | :heavy_check_mark: | :heavy_check_mark: -User Admin | | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: -All custom roles | | | :heavy_check_mark: | :heavy_check_mark: +| Role that sensitive action can be performed upon | Auth Admin | User Admin | Privileged Auth Admin | Global Admin | +| | | | | | +| Auth Admin | :white_check_mark: | | :white_check_mark: | :white_check_mark: | +| Directory Readers | :white_check_mark: | 
:white_check_mark: | :white_check_mark: | :white_check_mark: | +| Global Admin | | | :white_check_mark: | :white_check_mark: | +| Groups Admin | | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Guest Inviter | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Helpdesk Admin | | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Message Center Reader | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Password Admin | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Privileged Auth Admin | | | :white_check_mark: | :white_check_mark: | +| Privileged Role Admin | | | :white_check_mark: | :white_check_mark: | +| Reports Reader | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| User<br/>(no admin role) | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| User<br/>(no admin role, but member or owner of a [role-assignable group](groups-concept.md)) | | | :white_check_mark: | :white_check_mark: | +| User with a role scoped to a [restricted management administrative unit](./admin-units-restricted-management.md) | | | :white_check_mark: | :white_check_mark: | +| User Admin | | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| Usage Summary Reports Reader | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: | +| All custom roles | | | :white_check_mark: | :white_check_mark: | ## Next steps |
active-directory | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/prerequisites.md | |
active-directory | Protected Actions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/protected-actions-overview.md | |
active-directory | Quickstart App Registration Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/quickstart-app-registration-limits.md | |
active-directory | Role Definitions List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/role-definitions-list.md | |
active-directory | Security Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-planning.md | |
active-directory | View Assignments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/view-assignments.md | This section describes how to list role assignments with single-application scop ## PowerShell -This section describes viewing assignments of a role with organization-wide scope. This article uses the [Azure Active Directory PowerShell Version 2](/powershell/module/azuread/#directory_roles) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](custom-assign-powershell.md). +This section describes viewing assignments of a role with organization-wide scope. This article uses the [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) module. To view single-application scope assignments using PowerShell, you can use the cmdlets in [Assign custom roles with PowerShell](custom-assign-powershell.md). -Use the [Get-AzureADMSRoleDefinition](/powershell/module/azuread/get-azureadmsroledefinition) and [Get-AzureADMSRoleAssignment](/powershell/module/azuread/get-azureadmsroleassignment) commands to list role assignments. +Use the [Get-MgRoleManagementDirectoryRoleDefinition](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroledefinition) and [Get-MgRoleManagementDirectoryRoleAssignment](/powershell/module/microsoft.graph.identity.governance/get-mgrolemanagementdirectoryroleassignment) commands to list role assignments. The following example shows how to list the role assignments for the [Groups Administrator](permissions-reference.md#groups-administrator) role. 
```powershell # Fetch list of all directory roles with template ID-Get-AzureADMSRoleDefinition +Get-MgRoleManagementDirectoryRoleDefinition # Fetch a specific directory role by ID-$role = Get-AzureADMSRoleDefinition -Id "fdd7a751-b60b-444a-984c-02652fe8fa1c" +$role = Get-MgRoleManagementDirectoryRoleDefinition -UnifiedRoleDefinitionId fdd7a751-b60b-444a-984c-02652fe8fa1c # Fetch membership for a role-Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'" +Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'" ``` ```Example-RoleDefinitionId PrincipalId DirectoryScopeId -- -- --fdd7a751-b60b-444a-984c-02652fe8fa1c 04f632c3-8065-4466-9e30-e71ec81b3c36 /administrativeUnits/3883b136-67f0-412c-9b... +Id PrincipalId RoleDefinitionId DirectoryScopeId AppScop + eId +-- -- - - - +lAPpYvVpN0KRkAEhdxReEH2Fs3EjKm1BvSKkcYVN2to-1 71b3857d-2a23-416d-bd22-a471854ddada 62e90394-69f5-4237-9190-012177145e10 / +lAPpYvVpN0KRkAEhdxReEMdXLf2tIs1ClhpzQPsutrQ-1 fd2d57c7-22ad-42cd-961a-7340fb2eb6b4 62e90394-69f5-4237-9190-012177145e10 / ``` The following example shows how to list all active role assignments across all roles, including built-in and custom roles (currently in Preview). ```powershell-$roles = Get-AzureADMSRoleDefinition +$roles = Get-MgRoleManagementDirectoryRoleDefinition foreach ($role in $roles) {- Get-AzureADMSRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'" + Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '$($role.Id)'" } ``` ```Example-RoleDefinitionId PrincipalId DirectoryScopeId Id -- -- - ---e8611ab8-c189-46e8-94e1-60213ab1f814 9f9fb383-3148-46a7-9cec-5bf93f8a879c / uB2o6InB6EaU4WAhOrH4FHwni... -e8611ab8-c189-46e8-94e1-60213ab1f814 027c8aba-2e94-49a8-974b-401e5838b2a0 / uB2o6InB6EaU4WAhOrH4FEqdn... -fdd7a751-b60b-444a-984c-02652fe8fa1c 04f632c3-8065-4466-9e30-e71ec81b3c36 /administrati... UafX_Qu2SkSYTAJlL-j6HL5Dr... -... 
+Id PrincipalId RoleDefinitionId DirectoryScopeId AppScop + eId +-- -- - - - +lAPpYvVpN0KRkAEhdxReEH2Fs3EjKm1BvSKkcYVN2to-1 71b3857d-2a23-416d-bd22-a471854ddada 62e90394-69f5-4237-9190-012177145e10 / +lAPpYvVpN0KRkAEhdxReEMdXLf2tIs1ClhpzQPsutrQ-1 fd2d57c7-22ad-42cd-961a-7340fb2eb6b4 62e90394-69f5-4237-9190-012177145e10 / +4-PYiFWPHkqVOpuYmLiHa3ibEcXLJYtFq5x3Kkj2TkA-1 c5119b78-25cb-458b-ab9c-772a48f64e40 88d8e3e3-8f55-4a1e-953a-9b9898b8876b / +4-PYiFWPHkqVOpuYmLiHa2hXf3b8iY5KsVFjHNXFN4c-1 767f5768-89fc-4a8e-b151-631cd5c53787 88d8e3e3-8f55-4a1e-953a-9b9898b8876b / +BSub0kaAukSHWB4mGC_PModww03rMgNOkpK77ePhDnI-1 4dc37087-32eb-4e03-9292-bbede3e10e72 d29b2b05-8046-44ba-8758-1e26182fcf32 / +BSub0kaAukSHWB4mGC_PMgzOWSgXj8FHusA4iaaTyaI-1 2859ce0c-8f17-47c1-bac0-3889a693c9a2 d29b2b05-8046-44ba-8758-1e26182fcf32 / ``` ## Microsoft Graph API |
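The Graph PowerShell cmdlets shown above are thin wrappers over the Microsoft Graph role management endpoints. As an illustrative sketch (Python is used here only for illustration; the helper name is made up), the filtered assignment query corresponds to a REST URL built like this:

```python
# Sketch: the REST request behind
# Get-MgRoleManagementDirectoryRoleAssignment -Filter "roleDefinitionId eq '...'".
# The role definition ID is the Groups Administrator template ID used above.
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def role_assignments_url(role_definition_id: str) -> str:
    """Build the roleAssignments request URL with an OData $filter."""
    odata_filter = f"roleDefinitionId eq '{role_definition_id}'"
    return (f"{GRAPH_BASE}/roleManagement/directory/roleAssignments"
            f"?$filter={quote(odata_filter)}")

url = role_assignments_url("fdd7a751-b60b-444a-984c-02652fe8fa1c")
print(url)
```

A GET on this URL with a bearer token that has the `RoleManagement.Read.Directory` permission returns the same assignment objects the cmdlet prints.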
active-directory | Adobe Identity Management Provisioning Oidc Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-oidc-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning |name.givenName|String|| |name.familyName|String|| |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String||+ |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:eduRole|String|| ++ > [!NOTE] + > The **eduRole** field accepts only the values `Teacher` or `Student`; any other value is ignored. 1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management (OIDC)**. Once you've configured provisioning, use the following resources to monitor your * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion-* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). +* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## Change log +08/15/2023 - Added support for Schema Discovery. ## More resources |
active-directory | Adobe Identity Management Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/adobe-identity-management-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi > [!NOTE] > If your organization uses the User Sync Tool or a UMAPI integration, you must first pause the integration. Then, add Azure AD automatic provisioning to automate user management from the Azure portal. Once Azure AD automatic provisioning is configured and running, you can completely remove the User Sync Tool or UMAPI integration. +> [!NOTE] +> This integration is also available to use from Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from public cloud. + ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). This section guides you through the steps to configure the Azure AD provisioning 9. Review the user attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Adobe Identity Management for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Adobe Identity Management API supports filtering users based on that attribute. Select the **Save** button to commit any changes. 
- |Attribute|Type| - ||| - |userName|String| - |emails[type eq "work"].value|String| - |active|Boolean| - |addresses[type eq "work"].country|String| - |name.givenName|String| - |name.familyName|String| - |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String| + |Attribute|Type|Supported for filtering|Required by Adobe Identity Management + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |emails[type eq "work"].value|String|| + |addresses[type eq "work"].country|String|| + |name.givenName|String|| + |name.familyName|String|| + |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:emailAliases|String|| + |urn:ietf:params:scim:schemas:extension:Adobe:2.0:User:eduRole|String|| ++ > [!NOTE] + > The **eduRole** field accepts only the values `Teacher` or `Student`; any other value is ignored. 10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Adobe Identity Management**. 11. Review the group attributes that are synchronized from Azure AD to Adobe Identity Management in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Adobe Identity Management for update operations. Select the **Save** button to commit any changes. - |Attribute|Type| - ||| - |displayName|String| - |members|Reference| + |Attribute|Type|Supported for filtering|Required by Adobe Identity Management + ||||| + |displayName|String|✓|✓ + |members|Reference|| 12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). This operation starts the initial synchronization cycle of all users and groups ## Step 6. Monitor your deployment Once you've configured provisioning, use the following resources to monitor your deployment: -1.
Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully -2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion -3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). +* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## Change log +* 07/18/2023 - The app was added to Gov Cloud. +* 08/15/2023 - Added support for Schema Discovery. -## Additional resources +## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) |
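Given the eduRole note above (Adobe honors only `Teacher` or `Student` and silently ignores anything else), a pre-flight check before a sync run can surface values that would be dropped. This is an illustrative sketch, not an Adobe or Azure AD API:

```python
# Sketch of a pre-flight check for the Adobe eduRole extension attribute.
# Per the note above, only "Teacher" or "Student" are honored; any other
# value is silently ignored, so flagging bad values before a sync run can
# save a debugging round-trip. The helper name is made up.
ACCEPTED_EDU_ROLES = {"Teacher", "Student"}

def check_edu_role(value):
    """Return the value if Adobe will honor it, else None (it would be ignored)."""
    return value if value in ACCEPTED_EDU_ROLES else None

assert check_edu_role("Teacher") == "Teacher"
assert check_edu_role("Faculty") is None  # would be silently dropped by Adobe
```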
active-directory | Azure Databricks With Private Link Workspace Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/azure-databricks-with-private-link-workspace-provisioning-tutorial.md | + + Title: Azure AD on-premises app provisioning to Azure Databricks with Private Link Workspace +description: This article describes how to use the Azure AD provisioning service to provision users into Azure Databricks with Private Link Workspace. ++++++ Last updated : 08/10/2023+++++# Microsoft Entra ID Application Provisioning to Azure Databricks with Private Link Workspace ++The Azure Active Directory (Azure AD) provisioning service supports a [SCIM 2.0](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) client that can be used to automatically provision users into cloud or on-premises applications. This article outlines how you can use the Azure AD provisioning service to provision users into Azure Databricks workspaces with no public access. ++[ ![Diagram that shows SCIM architecture.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/scim-architecture.png)](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/scim-architecture.png#lightbox) ++## Prerequisites +* An Azure AD tenant with Microsoft Entra ID Governance and Azure AD Premium P1 or Premium P2 (or EMS E3 or E5). To find the right license for your requirements, see [Compare generally available features of Azure AD](https://www.microsoft.com/security/business/microsoft-entra-pricing). +* Administrator role for installing the agent. This task is a one-time effort and should be performed by an Azure account that's either a Hybrid Identity Administrator or a Global Administrator. +* Administrator role for configuring the application in the cloud (application administrator, cloud application administrator, global administrator, or a custom role with permissions).
+* A computer with at least 3 GB of RAM, to host a provisioning agent. The computer should have Windows Server 2016 or a later version of Windows Server, with connectivity to the target application, and with outbound connectivity to login.microsoftonline.com, other Microsoft Online Services and Azure domains. An example is a Windows Server 2016 virtual machine hosted in Azure IaaS or behind a proxy. ++## Download, install, and configure the Azure AD Connect Provisioning Agent Package ++If you have already downloaded the provisioning agent and configured it for another on-premises application, then continue reading in the next section. ++ 1. In the Azure portal, select **Azure Active Directory**. + 1. On the left, select **Azure AD Connect**. + 1. On the left, select **Cloud sync**. + [![Screenshot of new UX screen.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/azure-active-directory-connect-new-ux.png)](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/azure-active-directory-connect-new-ux.png#lightbox) ++ 1. On the left, select **Agent**. + 1. Select **Download on-premises agent**, and select **Accept terms & download**. + >[!NOTE] + >Please use different provisioning agents for on-premises application provisioning and Azure AD Connect Cloud Sync / HR-driven provisioning. All three scenarios should not be managed on the same agent. + 1. Open the provisioning agent installer, agree to the terms of service, and select **next**. + 1. When the provisioning agent wizard opens, continue to the **Select Extension** tab and select **On-premises application provisioning** when prompted for the extension you want to enable. + 1. The provisioning agent uses the operating system's web browser to display a popup window for you to authenticate to Azure AD, and potentially also your organization's identity provider. 
If you're using Internet Explorer as the browser on Windows Server, then you may need to add Microsoft web sites to your browser's trusted site list to allow JavaScript to run correctly. + 1. Provide credentials for an Azure AD administrator when you're prompted to authorize. The user is required to have the Hybrid Identity Administrator or Global Administrator role. + 1. Select **Confirm** to confirm the setting. Once installation is successful, you can select **Exit**, and also close the Provisioning Agent Package installer. + +## Provisioning to SCIM-enabled Workspace +Once the agent is installed, no further configuration is necessary on-premises, and all provisioning configurations are then managed from the Azure portal. + + 1. In the Azure portal, navigate to the Enterprise applications and add the **On-premises SCIM app** from the [gallery](../manage-apps/add-application-portal.md). + 1. From the left-hand menu, navigate to the **Provisioning** option and select **Get started**. + 1. Select **Automatic** from the dropdown list and expand the **On-Premises Connectivity** option. + 1. Select the agent that you installed from the dropdown list and select **Assign Agent(s)**. + 1. Now either wait 10 minutes or restart the **Microsoft Azure AD Connect Provisioning Agent** before proceeding to the next step and testing the connection. + 1. In the **Tenant URL** field, provide the SCIM endpoint URL for your application. The URL is typically unique to each target application and must be resolvable by DNS. An example for a scenario where the agent is installed on the same host as the application is `https://localhost:8585/scim`. + ![Screenshot that shows assigning an agent.](media/azure-databricks-with-private-link-workspace-provisioning-tutorial/on-premises-assign-agents.png) ++ 1. Create an Admin Token in the Azure Databricks User Settings Console and enter it in the **Secret Token** field. + 1. Select **Test Connection**, and save the credentials.
The application SCIM endpoint must be actively listening for inbound provisioning requests, otherwise the test fails. Use the steps [here](../app-provisioning/on-premises-ecma-troubleshoot.md#troubleshoot-test-connection-issues) if you run into connectivity issues. + >[!NOTE] + > If the test connection fails, you will see the request made. Please note that while the URL in the test connection error message is truncated, the actual request sent to the application contains the entire URL provided above. ++ 1. Configure any [attribute mappings](../app-provisioning/customize-application-attributes.md) or [scoping](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md) rules required for your application. + 1. Add users to scope by [assigning users and groups](../manage-apps/add-application-portal-assign-users.md) to the application. + 1. Test provisioning a few users [on demand](../app-provisioning/provision-on-demand.md). + 1. Add more users into scope by assigning them to your application. + 1. Go to the **Provisioning** pane, and select **Start provisioning**. + 1. Monitor using the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md). ++The following video provides an overview of on-premises provisioning. +> [!VIDEO https://www.youtube.com/embed/QdfdpaFolys] ++## More requirements +* Ensure your [SCIM](https://techcommunity.microsoft.com/t5/identity-standards-blog/provisioning-with-scim-getting-started/ba-p/880010) implementation meets the [Azure AD SCIM requirements](../app-provisioning/use-scim-to-provision-users-and-groups.md). + Azure AD offers open-source [reference code](https://github.com/AzureAD/SCIMReferenceCode/wiki) that developers can use to bootstrap their SCIM implementation. +* Support the /schemas endpoint to reduce configuration required in the Azure portal. 
++## Next steps ++* [App provisioning](../app-provisioning/user-provisioning.md) +* [Generic SQL connector](../app-provisioning/on-premises-sql-connector-configure.md) +* [Tutorial: ECMA Connector Host generic SQL connector](../app-provisioning/tutorial-ecma-sql-connector.md) +* [Known issues](../app-provisioning/known-issues.md) |
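Once provisioning starts, the agent forwards SCIM 2.0 requests to the Tenant URL configured above (for example, `https://localhost:8585/scim`). As a rough sketch, a user-creation request body has the following shape; the attribute values are made up, and the exact attributes sent depend on your mappings:

```python
# Illustrative shape of the SCIM 2.0 body the provisioning service POSTs
# to <Tenant URL>/Users when it creates a user. The schema URN is the
# SCIM 2.0 core User schema; the attribute values below are placeholders.
import json

create_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "b.simon@contoso.com",  # typically the matching attribute
    "active": True,
    "name": {"givenName": "B", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "b.simon@contoso.com", "primary": True}],
}

body = json.dumps(create_user)
```

The SCIM endpoint behind the Tenant URL must accept this content as `application/scim+json`; the troubleshooting link above covers the case where Test Connection cannot reach it.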
active-directory | Canva Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/canva-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Canva for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Canva. +++writer: twimmers ++ms.assetid: 9bf62920-d9e0-4ed4-a4f6-860cb9563b00 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Canva for automatic user provisioning ++This tutorial describes the steps you need to perform in both Canva and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Canva](https://www.canva.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Canva. +> * Remove users in Canva when they no longer require access. +> * Keep user attributes synchronized between Azure AD and Canva. +> * Provision groups and group memberships in Canva. +> * [Single sign-on](canva-tutorial.md) to Canva (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator). +* A Canva tenant. +* A user account in Canva with Admin permissions. ++## Step 1. Plan your provisioning deployment +1.
Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Canva](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Canva to support provisioning with Azure AD +Contact Canva support to configure Canva to support provisioning with Azure AD. ++## Step 3. Add Canva from the Azure AD application gallery ++Add Canva from the Azure AD application gallery to start managing provisioning to Canva. If you have previously set up Canva for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Canva ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Canva based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Canva in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Canva**. ++ ![Screenshot of the Canva link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Canva Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Canva. If the connection fails, ensure your Canva account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Canva**. ++1. Review the user attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section.
The attributes selected as **Matching** properties are used to match the user accounts in Canva for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Canva API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Canva| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |externalId|String|| + |emails[type eq "work"].value|String||✓ + |name.givenName|String|| + |name.familyName|String|| + |displayName|String|| + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Canva**. ++1. Review the group attributes that are synchronized from Azure AD to Canva in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Canva for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Canva| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Canva, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to Canva by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. 
++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
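The Canva user attribute tables above can be read as a flattening rule from directory fields to SCIM attributes. A minimal sketch, assuming default source fields on the Azure AD side (the function and field choices are illustrative, not the service's actual implementation):

```python
# Sketch of the Canva attribute mapping above: how a directory user's fields
# line up with the SCIM attributes the provisioning service sends. The input
# field names are common Azure AD defaults and are assumptions here.
def to_canva_scim(aad_user: dict) -> dict:
    """Flatten an Azure AD user into the Canva SCIM attribute set."""
    return {
        "userName": aad_user["userPrincipalName"],  # matching attribute
        "active": aad_user.get("accountEnabled", True),
        "externalId": aad_user.get("objectId"),
        "emails": [{"type": "work", "value": aad_user["mail"]}],
        "name": {
            "givenName": aad_user.get("givenName"),
            "familyName": aad_user.get("surname"),
        },
        "displayName": aad_user.get("displayName"),
    }

scim = to_canva_scim({
    "userPrincipalName": "b.simon@contoso.com",
    "mail": "b.simon@contoso.com",
    "givenName": "B",
    "surname": "Simon",
    "displayName": "B. Simon",
    "accountEnabled": True,
    "objectId": "9bf62920-d9e0-4ed4-a4f6-860cb9563b00",
})
```

Only `userName` supports filtering per the table, which is why it serves as the matching attribute for update operations.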
active-directory | Cisco Webex Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-webex-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in Ci > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Cloudbees Ci Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cloudbees-ci-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port | `https://cjoc.<CustomerDomain>/securityRealm/finishLogin` | | `https://<Environment>.<CustomerDomain>/securityRealm/finishLogin` | -1. Perform the following step, if you wish to configure the application in **SP** initiated mode: -- In the **Sign on URL** textbox, type the URL using one of the following patterns: + c. In the **Sign on URL** textbox, type the URL using one of the following patterns: | **Sign on URL** | || |
active-directory | Dialpad Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/dialpad-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in Di > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). -> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Document360 Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/document360-tutorial.md | Title: Azure Active Directory SSO integration with Document360 -description: Learn how to configure single sign-on between Azure Active Directory and Document360. +description: Learn how to configure single sign-on (SSO) between Azure Active Directory (AD) and Document360. -In this article, you learn how to integrate Document360 with Azure Active Directory (Azure AD). Document360 is an online self-service knowledge base software. When you integrate Document360 with Azure AD, you can: +This article teaches you how to integrate Document360 with Azure AD. Document360 is an online self-service knowledge base software. When you integrate Document360 with Azure AD, you can: * Control in Azure AD who has access to Document360.-* Enable your users to be automatically signed-in to Document360 with their Azure AD accounts. +* Enable your users to be automatically signed in to Document360 with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal. -You configure and test Azure AD single sign-on for Document360 in a test environment. Document360 supports **SP** and **IDP** initiated single sign-on. +You configure and test Azure AD single sign-on for Document360 in a test environment. Document360 supports **Service Provider (SP)** and **Identity Provider (IdP)** initiated SSO. > [!NOTE]-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant. +> Identifier of this application is a fixed string value, so only one instance can be configured in one tenant. ## Prerequisites -To integrate Azure Active Directory with Document360, you need: +To integrate Azure AD with Document360, you need the following: * An Azure AD user account. 
If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -* Document360 single sign-on (SSO) enabled subscription. +* An Azure AD subscription. If you don't have a subscription, you can [get a free account](https://azure.microsoft.com/free/). +* Document360 subscription with SSO enabled. If you don't have a subscription, you can [Sign up for a new account](https://document360.com/signup/). ## Add application and assign a test user -Before you begin the process of configuring single sign-on, you need to add the Document360 application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. +Before configuring SSO, add the Document360 application from the Azure AD gallery. You need a test user account to assign to the application and test the SSO configuration. ### Add Document360 from the Azure AD gallery -Add Document360 from the Azure AD application gallery to configure single sign-on with Document360. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). +Add Document360 from the Azure AD application gallery to configure SSO with Document360. For more information on adding an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ### Create and assign Azure AD test user Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. 
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). +Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ## Configure Azure AD SSO Complete the following steps to enable Azure AD single sign-on in the Azure portal. 1. In the Azure portal, on the **Document360** application integration page, find the **Manage** section and select **single sign-on**.-1. On the **Select a single sign-on method** page, select **SAML**. -1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. +2. On the **Select a single sign-on method** page, select **SAML**. +3. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") -1. On the **Basic SAML Configuration** section, perform the following steps: +4. On the **Basic SAML Configuration** section, perform the following steps. Choose any one of the Identifiers, Reply URL, and Sign on URL based on your Data center region. - a. In the **Identifier** textbox, type one of the following URLs: + a. 
In the **Identifier** textbox, type or paste one of the following URLs: | **Identifier** | |--| | `https://identity.document360.io/saml` |+ | **(or)** | | `https://identity.us.document360.io/saml` | - b. In the **Reply URL** textbox, type a URL using one of the following patterns: + b. In the **Reply URL** textbox, type or paste a URL using one of the following patterns: | **Reply URL** | | -| | `https://identity.document360.io/signin-saml-<ID>` |+ | **(or)** | | `https://identity.us.document360.io/signin-saml-<ID>` | -1. If you wish to configure the application in **SP** initiated mode, then perform the following step: +5. If you wish to configure the application in **SP** initiated mode, then perform the following step: - In the **Sign on URL** textbox, type one of the following URLs: + In the **Sign on URL** textbox, type or paste one of the following URLs: | **Sign on URL** | |--| | `https://identity.document360.io` |+ | **(or)** | | `https://identity.us.document360.io` | > [!NOTE]- > The Reply URL is not real. Update this value with the actual Reply URL. Contact [Document360 Client support team](mailto:support@document360.com) to get the value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > The Reply URL is not real. Update this value with the actual Reply URL. You can also refer to the patterns shown in the Azure portal's **Basic SAML Configuration** section. -1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer. +6. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. 
- ![Screenshot shows the Certificate download link.](common/certificateraw.png "Certificate") + ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") -1. On the **Set up Document360** section, copy the appropriate URL(s) based on your requirement. +7. On the **Set up Document360** section, copy the appropriate URL(s) based on your requirement. ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") ## Configure Document360 SSO -To configure single sign-on on **Document360** side, you need to send the downloaded **Certificate (Raw)** and appropriate copied URLs from Azure portal to [Document360 support team](mailto:support@document360.com). They set this setting to have the SAML SSO connection set properly on both sides. +1. In a different web browser window, log in to your Document360 portal as an administrator. ++1. To configure SSO on the **Document360** portal, navigate to **Settings** → **Users & Security** → **SAML/OpenID** → **SAML** and perform the following steps: ++ [![Screenshot shows the Document360 configuration.](./media/document360-tutorial/configuration.png "Document360")](./media/document360-tutorial/configuration.png#lightbox) ++1. Click on the Edit icon in **SAML basic configuration** on the Document360 portal side and paste the values from the Azure AD portal according to the field associations below. ++ | Document360 portal fields | Azure AD portal values | + | | | + | Email domains | Domains of emails you have under Active Directory | + | Sign On URL | Login URL | + | Entity ID | Azure AD identifier | + | Sign Out URL | Logout URL | + | SAML certificate | Download Certificate (Base64) from Azure AD side and upload in Document360 | ++1. Click on the **Save** button when you’re done with the values. + ### Create Document360 test user -In this section, you create a user called Britta Simon at Document360. 
Work with [Document360 support team](mailto:support@document360.com) to add the users in the Document360 platform. Users must be created and activated before you use single sign-on. +1. In a different web browser window, log in to your Document360 portal as an administrator. ++1. From the Document360 portal, go to **Settings → Users & Security → Team accounts & groups → Team account**. Click the **New team account** button and type in the required details, specify the roles, and follow the module steps to add a user to Document360. ++ [![Screenshot shows the Document360 test user.](./media/document360-tutorial/add-user.png "Document360")](./media/document360-tutorial/add-user.png#lightbox) ## Test SSO -In this section, you test your Azure AD single sign-on configuration with following options. +In this section, you test your Azure AD single sign-on configuration with the following options. #### SP initiated: -* Click on **Test this application** in Azure portal. This will redirect to Document360 Sign-on URL where you can initiate the login flow. +* Click on **Test this application** in Azure portal. This will redirect to the Document360 Sign-on URL, where you can initiate the login flow. -* Go to Document360 Sign-on URL directly and initiate the login flow from there. +* Go to Document360 Sign-on URL directly and initiate the login flow. #### IDP initiated: -* Click on **Test this application** in Azure portal and you should be automatically signed in to the Document360 for which you set up the SSO. +* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Document360 for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Document360 tile in the My Apps if configured in SP mode, you will be redirected to the application sign-on page for initiating the login flow. 
If configured in IDP mode, you should be automatically signed in to the Document360 for which you set up the SSO. -You can also use Microsoft My Apps to test the application in any mode. When you click the Document360 tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Document360 for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). +For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ## Additional resources You can also use Microsoft My Apps to test the application in any mode. When you ## Next steps -Once you configure Document360 you can enforce session control, which protects exfiltration and infiltration of your organization’s sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). +Once you configure Document360, you can enforce session control, which protects the exfiltration and infiltration of your organization's sensitive data in real-time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
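The Basic SAML Configuration step in the Document360 row above chooses between two regional endpoints. The choice can be sketched as follows; the region labels (`"default"`/`"us"`) and the function name are hypothetical, and `<ID>` stays a placeholder because the real Reply URL value comes from Document360.

```python
# Hypothetical helper: pick the Document360 SAML endpoints for a data-center
# region, following the URL patterns listed in Basic SAML Configuration.
def document360_saml_urls(region: str = "default") -> dict:
    us = region == "us"
    host = "identity.us.document360.io" if us else "identity.document360.io"
    return {
        "identifier": f"https://{host}/saml",
        # <ID> stays a placeholder; the actual Reply URL comes from Document360.
        "reply_url": f"https://{host}/signin-saml-<ID>",
        "sign_on_url": f"https://{host}",
    }

urls = document360_saml_urls("us")
```

The same pattern generalizes to other gallery apps in this changelog that publish per-region URL tables.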
active-directory | Elium Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/elium-provisioning-tutorial.md | This tutorial shows how to configure Elium and Azure Active Directory (Azure AD) > [!NOTE] > This tutorial describes a connector that's built on top of the Azure AD User Provisioning service. For important details about what this service does and how it works, and for frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in preview. For the general terms of use for Azure features in preview, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Foodee Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/foodee-provisioning-tutorial.md | This article shows you how to configure Azure Active Directory (Azure AD) in Foo > [!NOTE] > The article describes a connector that's built on top of the Azure AD User Provisioning service. To learn what this service does and how it works, and to get answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in preview. For more information about the Azure terms-of-use feature for preview features, go to [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Forcepoint Cloud Security Gateway Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/forcepoint-cloud-security-gateway-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Forcepoint Cloud Security Gateway - User Authentication. +++writer: twimmers ++ms.assetid: 415b2ba3-a9a5-439a-963a-7c2c0254ced1 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Forcepoint Cloud Security Gateway - User Authentication for automatic user provisioning ++This tutorial describes the steps you need to perform in both Forcepoint Cloud Security Gateway - User Authentication and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Forcepoint Cloud Security Gateway - User Authentication](https://admin.forcepoint.net) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Forcepoint Cloud Security Gateway - User Authentication. +> * Remove users in Forcepoint Cloud Security Gateway - User Authentication when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Forcepoint Cloud Security Gateway - User Authentication. +> * Provision groups and group memberships in Forcepoint Cloud Security Gateway - User Authentication. 
+> * [Single sign-on](forcepoint-cloud-security-gateway-tutorial.md) to Forcepoint Cloud Security Gateway - User Authentication (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application Administrator, Application Owner, or Global Administrator). +* A Forcepoint Cloud Security Gateway - User Authentication tenant. +* A user account in Forcepoint Cloud Security Gateway - User Authentication with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Forcepoint Cloud Security Gateway - User Authentication](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD +Contact Forcepoint Cloud Security Gateway - User Authentication support to configure Forcepoint Cloud Security Gateway - User Authentication to support provisioning with Azure AD. ++## Step 3. Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery ++Add Forcepoint Cloud Security Gateway - User Authentication from the Azure AD application gallery to start managing provisioning to Forcepoint Cloud Security Gateway - User Authentication. If you have previously set up Forcepoint Cloud Security Gateway - User Authentication for SSO, you can use the same application. 
However, it's recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute-based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Forcepoint Cloud Security Gateway - User Authentication ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Forcepoint Cloud Security Gateway - User Authentication based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Forcepoint Cloud Security Gateway - User Authentication in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. 
++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Forcepoint Cloud Security Gateway - User Authentication**. ++ ![Screenshot of the Forcepoint Cloud Security Gateway - User Authentication link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Forcepoint Cloud Security Gateway - User Authentication Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Forcepoint Cloud Security Gateway - User Authentication. If the connection fails, ensure your Forcepoint Cloud Security Gateway - User Authentication account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Forcepoint Cloud Security Gateway - User Authentication**. ++1. Review the user attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Forcepoint Cloud Security Gateway - User Authentication for update operations. 
If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Forcepoint Cloud Security Gateway - User Authentication API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication| + ||||| + |userName|String|✓|✓ + |externalId|String||✓ + |displayName|String||✓ + |urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User:ntlmId|String|| + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Forcepoint Cloud Security Gateway - User Authentication**. ++1. Review the group attributes that are synchronized from Azure AD to Forcepoint Cloud Security Gateway - User Authentication in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Forcepoint Cloud Security Gateway - User Authentication for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Forcepoint Cloud Security Gateway - User Authentication| + ||||| + |displayName|String|✓|✓ + |externalId|String|| + |members|Reference|| ++ +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Forcepoint Cloud Security Gateway - User Authentication, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to Forcepoint Cloud Security Gateway - User Authentication by choosing the desired values in **Scope** in the **Settings** section. 
++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully. +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion. +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
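The Forcepoint attribute-mapping tables above translate naturally into a SCIM 2.0 user body. Here's a hedged sketch of what such a payload could look like, covering only the attributes listed in the table; the helper name and exact body shape are assumptions based on standard SCIM conventions, not Forcepoint's documented API.

```python
import json

# The SCIM extension schema URN taken from the attribute table above.
FORCEPOINT_EXT = "urn:ietf:params:scim:schemas:extension:forcepoint:2.0:User"

def build_scim_user(user_name, external_id, display_name, ntlm_id=None):
    """Assemble a SCIM 2.0 user body covering only the mapped attributes."""
    payload = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,      # matching attribute, supported for filtering
        "externalId": external_id,
        "displayName": display_name,
    }
    if ntlm_id is not None:
        # Extension attributes live under their schema URN.
        payload["schemas"].append(FORCEPOINT_EXT)
        payload[FORCEPOINT_EXT] = {"ntlmId": ntlm_id}
    return json.dumps(payload)
```

The provisioning service builds and sends these bodies itself; the sketch only illustrates how the mapped attributes and the extension URN fit together.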
active-directory | G Suite Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/g-suite-provisioning-tutorial.md | This section guides you through the steps to configure the Azure AD provisioning ### To configure automatic user provisioning for G Suite in Azure AD: -1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. Users will need to log in to `portal.azure.com` and won't be able to use `aad.portal.azure.com`. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. Browse to **Azure Active Directory** > **Enterprise Applications** > **All applications**. ![Enterprise applications blade](./media/g-suite-provisioning-tutorial/enterprise-applications.png) ![All applications blade](./media/g-suite-provisioning-tutorial/all-applications.png) -2. In the applications list, select **G Suite**. +1. In the applications list, select **G Suite**. ![The G Suite link in the Applications list](common/all-applications.png) -3. Select the **Provisioning** tab. Click on **Get started**. +1. Select the **Provisioning** tab. Click on **Get started**. ![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png) ![Get started blade](./media/g-suite-provisioning-tutorial/get-started.png) -4. Set the **Provisioning Mode** to **Automatic**. +1. Set the **Provisioning Mode** to **Automatic**. ![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png) -5. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to a Google authorization dialog box in a new browser window. +1. Under the **Admin Credentials** section, click on **Authorize**. You'll be redirected to a Google authorization dialog box in a new browser window. ![G Suite authorize](./media/g-suite-provisioning-tutorial/authorize-1.png) -6. 
Confirm that you want to give Azure AD permissions to make changes to your G Suite tenant. Select **Accept**. +1. Confirm that you want to give Azure AD permissions to make changes to your G Suite tenant. Select **Accept**. ![G Suite Tenant Auth](./media/g-suite-provisioning-tutorial/gapps-auth.png) -7. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to G Suite. If the connection fails, ensure your G Suite account has Admin permissions and try again. Then try the **Authorize** step again. +1. In the Azure portal, click **Test Connection** to ensure Azure AD can connect to G Suite. If the connection fails, ensure your G Suite account has Admin permissions and try again. Then try the **Authorize** step again. -6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. +1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ![Notification Email](common/provisioning-notification-email.png) -7. Select **Save**. +1. Select **Save**. -8. Under the **Mappings** section, select **Provision Azure Active Directory Users**. +1. Under the **Mappings** section, select **Provision Azure Active Directory Users**. -9. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. Select the **Save** button to commit any changes. +1. Review the user attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. Select the **Save** button to commit any changes. > [!NOTE] > GSuite Provisioning currently only supports the use of primaryEmail as the matching attribute. 
This section guides you through the steps to configure the Azure AD provisioning |websites.[type eq "work"].value|String| -10. Under the **Mappings** section, select **Provision Azure Active Directory Groups**. +1. Under the **Mappings** section, select **Provision Azure Active Directory Groups**. -11. Review the group attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in G Suite for update operations. Select the **Save** button to commit any changes. +1. Review the group attributes that are synchronized from Azure AD to G Suite in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in G Suite for update operations. Select the **Save** button to commit any changes. |Attribute|Type| ||| This section guides you through the steps to configure the Azure AD provisioning |name|String| |description|String| -12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). -13. To enable the Azure AD provisioning service for G Suite, change the **Provisioning Status** to **On** in the **Settings** section. +1. To enable the Azure AD provisioning service for G Suite, change the **Provisioning Status** to **On** in the **Settings** section. ![Provisioning Status Toggled On](common/provisioning-toggle-on.png) -14. Define the users and/or groups that you would like to provision to G Suite by choosing the desired values in **Scope** in the **Settings** section. +1. 
Define the users and/or groups that you would like to provision to G Suite by choosing the desired values in **Scope** in the **Settings** section. ![Provisioning Scope](common/provisioning-scope.png) -15. When you're ready to provision, click **Save**. +1. When you're ready to provision, click **Save**. ![Saving Provisioning Configuration](common/provisioning-configuration-save.png) |
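The G Suite note above says provisioning currently matches only on primaryEmail, and the matching attribute is what decides whether a cycle updates an existing account or creates a new one. An illustrative sketch of that decision follows; the record shapes and function name are assumptions for illustration, not the provisioning service's API.

```python
# Illustrative records only -- not the provisioning service's actual API.
def plan_operation(azure_user: dict, gsuite_users: list) -> str:
    """Return "update" when an existing G Suite user matches on primaryEmail
    (case-insensitively), otherwise "create"."""
    target = azure_user["userPrincipalName"].lower()
    for existing in gsuite_users:
        if existing["primaryEmail"].lower() == target:
            return "update"
    return "create"

existing = [{"primaryEmail": "b.simon@contoso.com"}]
print(plan_operation({"userPrincipalName": "B.Simon@contoso.com"}, existing))
```

This is also why choosing a different matching target attribute requires the target API to support filtering on that attribute, as the Forcepoint tutorial above notes.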
active-directory | Gainsight Saml Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gainsight-saml-tutorial.md | - Title: Azure Active Directory SSO integration with Gainsight SAML -description: Learn how to configure single sign-on between Azure Active Directory and Gainsight SAML. -------- Previously updated : 07/14/2023-----# Azure Active Directory SSO integration with Gainsight SAML --In this article, you'll learn how to integrate Gainsight SAML with Azure Active Directory (Azure AD). Use Azure AD to manage user access and enable single sign-on with Gainsight SAML. Requires an existing Gainsight SAML subscription. When you integrate Gainsight SAML with Azure AD, you can: --* Control in Azure AD who has access to Gainsight SAML. -* Enable your users to be automatically signed-in to Gainsight SAML with their Azure AD accounts. -* Manage your accounts in one central location - the Azure portal. --You'll configure and test Azure AD single sign-on for Gainsight SAML in a test environment. Gainsight SAML supports both **SP** and **IDP** initiated single sign-on. --## Prerequisites --To integrate Azure Active Directory with Gainsight SAML, you need: --* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. -* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). -* Gainsight SAML single sign-on (SSO) enabled subscription. --## Add application and assign a test user --Before you begin the process of configuring single sign-on, you need to add the Gainsight SAML application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. 
--### Add Gainsight SAML from the Azure AD gallery --Add Gainsight SAML from the Azure AD application gallery to configure single sign-on with Gainsight SAML. For more information on how to add application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). --### Create and assign Azure AD test user --Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. --Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). --## Configure Azure AD SSO --Complete the following steps to enable Azure AD single sign-on in the Azure portal. --1. In the Azure portal, on the **Gainsight SAML** application integration page, find the **Manage** section and select **single sign-on**. -1. On the **Select a single sign-on method** page, select **SAML**. -1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. -- ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") --1. On the **Basic SAML Configuration** section, perform the following steps: -- a. In the **Identifier** textbox, type a value using one of the following patterns: -- | **Identifier** | - |--| - | `urn:auth0:gainsight:<ID>` | - | `urn:auth0:gainsight-eu:<ID>` | - - b. 
In the **Reply URL** textbox, type a URL using one of the following patterns: - - | **Reply URL** | - || - | `https://secured.gainsightcloud.com/login/callback?connection=<ID>` | - | `https://secured.eu.gainsightcloud.com/login/callback?connection=<ID>` | --1. Perform the following step, if you wish to configure the application in **SP** initiated mode: -- In the **Sign on URL** textbox, type a URL using one of the following patterns: -- | **Sign on URL** | - || - | `https://secured.gainsightcloud.com/samlp/<ID>` | - | `https://secured.eu.gainsightcloud.com/samlp/<ID>` | -- > [!NOTE] - > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Gainsight SAML support team](mailto:support@gainsight.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. --1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. -- ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") --1. On the **Set up Gainsight SAML** section, copy the appropriate URL(s) based on your requirement. -- ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") --## Configure Gainsight SAML SSO --To configure single sign-on on **Gainsight SAML** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Gainsight SAML support team](mailto:support@gainsight.com). They set this setting to have the SAML SSO connection set properly on both sides. --### Create Gainsight SAML test user --In this section, you create a user called Britta Simon at Gainsight SAML SSO. Work with [Gainsight SAML support team](mailto:support@gainsight.com) to add the users in the Gainsight SAML SSO platform. 
Users must be created and activated before you use single sign-on. --## Test SSO --In this section, you test your Azure AD single sign-on configuration with the following options. --#### SP initiated: --* Click on **Test this application** in the Azure portal. This will redirect to the Gainsight SAML Sign-on URL where you can initiate the login flow. --* Go to the Gainsight SAML Sign-on URL directly and initiate the login flow from there. --#### IDP initiated: --* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Gainsight SAML for which you set up the SSO. --You can also use Microsoft My Apps to test the application in any mode. When you click the Gainsight SAML tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Gainsight SAML for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). --## Additional resources --* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) -* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). --## Next steps --Once you configure Gainsight SAML, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
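The Identifier, Reply URL, and Sign on URL patterns in the **Basic SAML Configuration** section above follow a simple scheme. The sketch below is illustrative only: the helper name, the `contoso-123` ID, and the region switch are assumptions for the example; the real `<ID>` values come from the Gainsight SAML support team.

```python
# Hedged sketch: assemble the documented Gainsight SAML URL patterns from a
# connection ID and region. Not an official API; the <ID> value is supplied
# by Gainsight support.
from urllib.parse import urlparse, parse_qs

def gainsight_saml_urls(connection_id: str, region: str = "us") -> dict:
    """Build the Identifier, Reply URL, and Sign on URL from the documented patterns."""
    if region == "eu":
        issuer, host = "gainsight-eu", "secured.eu.gainsightcloud.com"
    else:
        issuer, host = "gainsight", "secured.gainsightcloud.com"
    return {
        "identifier": f"urn:auth0:{issuer}:{connection_id}",
        "reply_url": f"https://{host}/login/callback?connection={connection_id}",
        "sign_on_url": f"https://{host}/samlp/{connection_id}",
    }

urls = gainsight_saml_urls("contoso-123", region="eu")
# The Reply URL must carry the connection ID in the `connection` query parameter.
assert parse_qs(urlparse(urls["reply_url"]).query)["connection"] == ["contoso-123"]
```

Checking the `connection` query parameter this way catches the common mistake of pasting the callback URL without its query string.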
active-directory | Gainsight Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gainsight-tutorial.md | + + Title: Azure Active Directory SSO integration with Gainsight +description: Learn how to configure single sign-on between Azure Active Directory and Gainsight. ++++++++ Last updated : 08/22/2023+++++# Azure Active Directory SSO integration with Gainsight ++In this article, you'll learn how to integrate Gainsight with Azure Active Directory (Azure AD). Use Azure AD to manage user access and enable single sign-on with Gainsight. Requires an existing Gainsight subscription. When you integrate Gainsight with Azure AD, you can: ++* Control in Azure AD who has access to Gainsight. +* Enable your users to be automatically signed-in to Gainsight with their Azure AD accounts. +* Manage your accounts in one central location - the Azure portal. ++You'll configure and test Azure AD single sign-on for Gainsight in a test environment. Gainsight supports both **SP** and **IDP** initiated single sign-on. ++## Prerequisites ++To integrate Azure Active Directory with Gainsight, you need: ++* An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal. +* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/). +* Gainsight single sign-on (SSO) enabled subscription. ++## Add application and assign a test user ++Before you begin the process of configuring single sign-on, you need to add the Gainsight application from the Azure AD gallery. You need a test user account to assign to the application and test the single sign-on configuration. 
++### Add Gainsight from the Azure AD gallery ++Add Gainsight from the Azure AD application gallery to configure single sign-on with Gainsight. For more information on how to add an application from the gallery, see the [Quickstart: Add application from the gallery](../manage-apps/add-application-portal.md). ++### Create and assign Azure AD test user ++Follow the guidelines in the [create and assign a user account](../manage-apps/add-application-portal-assign-users.md) article to create a test user account in the Azure portal called B.Simon. ++Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, and assign roles. The wizard also provides a link to the single sign-on configuration pane in the Azure portal. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides). ++## Configure Azure AD SSO ++Complete the following steps to enable Azure AD single sign-on in the Azure portal. ++1. In the Azure portal, on the **Gainsight** application integration page, find the **Manage** section and select **single sign-on**. +1. On the **Select a single sign-on method** page, select **SAML**. +1. On the **Set up single sign-on with SAML** page, select the pencil icon for **Basic SAML Configuration** to edit the settings. ++ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration") ++1. On the **Basic SAML Configuration** section, perform the following steps: ++ a. In the **Identifier** textbox, type a value using one of the following patterns: ++ | **Identifier** | + | - | + | `urn:auth0:gainsight:<ID>` | + | `urn:auth0:gainsight-eu:<ID>` | + + b.
In the **Reply URL** textbox, type a URL using one of the following patterns: + + | **Reply URL** | + | - | + | `https://secured.gainsightcloud.com/login/callback?connection=<ID>` | + | `https://secured.eu.gainsightcloud.com/login/callback?connection=<ID>` | ++1. Perform the following step, if you wish to configure the application in **SP** initiated mode: ++ In the **Sign on URL** textbox, type a URL using one of the following patterns: ++ | **Sign on URL** | + || + | `https://secured.gainsightcloud.com/samlp/<ID>` | + | `https://secured.eu.gainsightcloud.com/samlp/<ID>` | ++ > [!NOTE] + > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Gainsight support team](mailto:support@gainsight.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. ++1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer. ++ ![Screenshot shows the Certificate download link.](common/certificatebase64.png "Certificate") ++1. On the **Set up Gainsight SAML** section, copy the appropriate URL(s) based on your requirement. ++ ![Screenshot shows to copy configuration appropriate URL.](common/copy-configuration-urls.png "Metadata") ++## Set up SAML 2.0 Authentication in Gainsight ++> [!NOTE] +> SAML 2.0 Authentication allows users to log in to Gainsight via Azure AD. Once Gainsight is configured to authenticate via SAML 2.0, users who want to access Gainsight will no longer be prompted to enter a username or password. Instead, an exchange between Gainsight and Azure AD occurs that grants Gainsight access to the users. ++**To configure SAML 2.0 Authentication:** ++1. Log in to your **Gainsight** company site as an administrator. ++1. Click the **search bar** on the left side menu and select **User Management**.
++ ![Screenshot shows the Gainsight Left Nav Search Bar.](media/gainsight-tutorial/search-bar.png "Search bar") ++1. In the **User Management** page, navigate to the **Authentication** tab and click **Add Authentication** > **SAML**. ++ [ ![Screenshot shows the Gainsight User Management Authentication Page.](media/gainsight-tutorial/authentication.png "Authentication Page") ](media/gainsight-tutorial/authentication.png#lightbox) ++1. In the **SAML Mechanism** page, perform the following steps: ++ ![Screenshot shows how to edit SAML configuration in Gainsight.](media/gainsight-tutorial/connection.png "Connection Edit") ++ 1. Enter a unique connection **Name** in the textbox. + 1. Enter a valid **Email Domain** in the textbox. + 1. In the **Sign In URL** textbox, paste the **Login URL** value, which you have copied from the Azure portal. + 1. In the **Sign Out URL** textbox, paste the **Logout URL** value, which you have copied from the Azure portal. + 1. Open the downloaded **Certificate (Base64)** from the Azure portal and upload it into the **Certificate** field by clicking the **Browse** option. + 1. Click **Save**. ++ > [!Note] + > For more information on SAML creation, see [GAINSIGHT SAML](https://support.gainsight.com/Gainsight_NXT/01Onboarding_and_Implementation/Onboarding_for_Gainsight_NXT/Login_and_Permissions/03Gainsight_Authentication). ++## Create Gainsight test user ++1. In a different web browser window, sign in to your Gainsight website as an administrator. ++1. In the **User Management** page, navigate to **Users** > **Add User**. + + [ ![Screenshot shows how to add users in Gainsight.](media/gainsight-tutorial/user.png "Add Users") ](media/gainsight-tutorial/user.png#lightbox) ++1. Fill in the required fields and click **Save**. Users must be created and activated before you use single sign-on. ++## Test SSO ++In this section, you test your Azure AD single sign-on configuration with the following options.
++#### SP initiated: ++* Click on **Test this application** in the Azure portal. This will redirect to the Gainsight Sign-on URL where you can initiate the login flow. ++* Go to the Gainsight Sign-on URL directly and initiate the login flow from there. ++#### IDP initiated: ++* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Gainsight for which you set up the SSO. ++You can also use Microsoft My Apps to test the application in any mode. When you click the Gainsight tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Gainsight for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md). ++## Additional resources ++* [What is single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) +* [Plan a single sign-on deployment](../manage-apps/plan-sso-deployment.md). ++## Next steps ++Once you configure Gainsight, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad). |
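Several steps above hand the downloaded **Certificate (Base64)** file from Azure AD to the application side. A quick way to confirm both sides hold the same certificate is to compare fingerprints of the file. This is a hedged stdlib sketch; the PEM body below is a dummy stand-in, not a real certificate.

```python
# Hedged sketch: SHA-256 fingerprint of a PEM-armored Certificate (Base64)
# file, so the copy uploaded to the application can be compared with the
# copy downloaded from the Azure portal.
import base64
import hashlib
import re

def pem_fingerprint(pem_text: str) -> str:
    """Strip the PEM armor and whitespace, decode the body, hash the DER bytes."""
    body = re.sub(r"-----(BEGIN|END) CERTIFICATE-----|\s", "", pem_text)
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()

# Dummy placeholder content for illustration only.
dummy_pem = (
    "-----BEGIN CERTIFICATE-----\n"
    + base64.b64encode(b"not-a-real-certificate").decode()
    + "\n-----END CERTIFICATE-----\n"
)
print(pem_fingerprint(dummy_pem))  # 64 hex characters
```

Run the same function over both copies of the certificate file; matching fingerprints mean the upload was not truncated or re-encoded.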
active-directory | Google Apps Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md | To configure the integration of Google Cloud / G Suite Connector by Microsoft in Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides) -Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide&preserve-view=true). - ## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft Configure and test Azure AD SSO with Google Cloud / G Suite Connector by Microsoft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Google Cloud / G Suite Connector by Microsoft. |
active-directory | Harness Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/harness-provisioning-tutorial.md | In this article, you learn how to configure Azure Active Directory (Azure AD) to > [!NOTE] > This article describes a connector that's built on top of the Azure AD user provisioning service. For important information about this service and answers to frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in preview. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Hive Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hive-tutorial.md | In this section, you'll enable B.Simon to use Azure single sign-on by granting a 1. In a different web browser window, sign in to Hive website as an administrator. -1. Click on the **User Profile** and click **Your workspace**. +1. Click on the **User Profile** and click your workspace **Settings**. ![Screenshot shows the Hive website with Your workspace selected from the menu.](./media/hive-tutorial/profile.png) -1. Click **Auth** and perform the following steps: +1. Click **Enterprise Security** and perform the following steps: - ![Screenshot shows the Auth page where do the tasks described.](./media/hive-tutorial/authentication.png) + [![Screenshot shows the Auth page where do the tasks described.](./media/hive-tutorial/authentication.png)](./media/hive-tutorial/authentication.png#lightbox) a. Copy **Your Workspace ID** and append it to the **SignOn URL** and **Reply URL** in the **Basic SAML Configuration Section** in the Azure portal. In this section, you test your Azure AD single sign-on configuration with follow * Click on **Test this application** in Azure portal and you should be automatically signed in to the Hive for which you set up the SSO. -You can also use Microsoft My Apps to test the application in any mode. When you click the Hive tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hive for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). +You can also use Microsoft My Apps to test the application in any mode. 
When you click the Hive tile in the My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hive for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510). ## Next steps |
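The Hive step above copies **Your Workspace ID** and appends it to the **SignOn URL** and **Reply URL** in the Basic SAML Configuration section. A minimal sketch of that string manipulation follows; the base URLs here are hypothetical placeholders, not documented Hive endpoints.

```python
# Hedged sketch: append a copied workspace ID to SAML URL values.
# The example base URLs are assumptions for illustration only.
def append_workspace_id(base_url: str, workspace_id: str) -> str:
    # Avoid a double slash when the base URL already ends with one.
    return base_url.rstrip("/") + "/" + workspace_id

sign_on_url = append_workspace_id("https://app.hive.example/sso/", "WORKSPACE123")
reply_url = append_workspace_id("https://app.hive.example/sso/callback", "WORKSPACE123")
assert sign_on_url == "https://app.hive.example/sso/WORKSPACE123"
```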
active-directory | Hornbill Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hornbill-tutorial.md | Follow these steps to enable Azure AD SSO in the Azure portal. 4. On the **Basic SAML Configuration** section, perform the following steps: a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:- `https://sso.hornbill.com/<INSTANCE_NAME>/<SUBDOMAIN>` +`https://sso.hornbill.com/<INSTANCE_NAME>/live` - b. In the **Sign on URL** text box, type a URL using the following pattern: - `https://<SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/` + > [!NOTE] + > If you are deploying the Hornbill Mobile Catalog to your organization, you will need to add an additional identifier URL, as follows: +`https://sso.hornbill.com/hornbill/mcatalog` + + b. In the **Reply URL (Assertion Consumer Service URL)** section, add the following: +`https://<API_SUBDOMAIN>.hornbill.com/<INSTANCE_NAME>/xmlmc/sso/saml2/authorize/user/live` ++ > [!NOTE] + > If you are deploying the Hornbill Mobile Catalog to your organization, you will need to add an additional Reply URL, as follows: +`https://<API_SUBDOMAIN>.hornbill.com/hornbill/xmlmc/sso/saml2/authorize/user/mcatalog` + + c. In the **Sign on URL** text box, type a URL using the following pattern: +`https://live.hornbill.com/<INSTANCE_NAME>/` > [!NOTE]- > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Hornbill Client support team](https://www.hornbill.com/support/?request/) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. + > These values are not real. Update the `<INSTANCE_NAME>` and `<API_SUBDOMAIN>` placeholders with the actual values in the Identifier(s), Reply URL(s) and Sign on URL. These values can be retrieved from the Hornbill Solution Center in your Hornbill instance, under **_Your usage > Support_**.
Contact [Hornbill Support](https://www.hornbill.com/support) for assistance in getting these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. -5. On the **Set up Single Sign-On with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. +6. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer. ![The Certificate download link](common/copy-metadataurl.png) |
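The Hornbill patterns above pair a base Identifier and Reply URL with an extra pair when the Mobile Catalog is deployed. The following sketch assembles the full set from the instance name and API subdomain; the helper and the `contoso`/`api` values are assumptions for illustration, and the real values come from the Hornbill Solution Center.

```python
# Hedged sketch: build the documented Hornbill SAML configuration values.
# <INSTANCE_NAME> and <API_SUBDOMAIN> placeholders become function arguments.
def hornbill_saml_config(instance: str, api_subdomain: str, mobile_catalog: bool = False) -> dict:
    identifiers = [f"https://sso.hornbill.com/{instance}/live"]
    reply_urls = [
        f"https://{api_subdomain}.hornbill.com/{instance}/xmlmc/sso/saml2/authorize/user/live"
    ]
    if mobile_catalog:
        # The Mobile Catalog needs one additional identifier and Reply URL.
        identifiers.append("https://sso.hornbill.com/hornbill/mcatalog")
        reply_urls.append(
            f"https://{api_subdomain}.hornbill.com/hornbill/xmlmc/sso/saml2/authorize/user/mcatalog"
        )
    return {
        "identifiers": identifiers,
        "reply_urls": reply_urls,
        "sign_on_url": f"https://live.hornbill.com/{instance}/",
    }

cfg = hornbill_saml_config("contoso", "api", mobile_catalog=True)
assert len(cfg["identifiers"]) == 2 and len(cfg["reply_urls"]) == 2
```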
active-directory | Hypervault Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hypervault-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Hypervault for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Hypervault. +++writer: twimmers ++ms.assetid: eca2ff9e-a09d-4bb4-88f6-6021a93d2c9d ++++ Last updated : 08/16/2023++++# Tutorial: Configure Hypervault for automatic user provisioning ++This tutorial describes the steps you need to perform in both Hypervault and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users to [Hypervault](https://hypervault.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Hypervault. +> * Remove users in Hypervault when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Hypervault. +> * [Single sign-on](../manage-apps/add-application-portal-setup-oidc-sso.md) to Hypervault (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A user account in Hypervault with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. 
Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Hypervault](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Hypervault to support provisioning with Azure AD +Contact Hypervault support to configure Hypervault to support provisioning with Azure AD. ++## Step 3. Add Hypervault from the Azure AD application gallery ++Add Hypervault from the Azure AD application gallery to start managing provisioning to Hypervault. If you have previously set up Hypervault for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who is in scope for provisioning ++The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who is provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Hypervault ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Hypervault based on user assignments in Azure AD. ++### To configure automatic user provisioning for Hypervault in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Hypervault**. ++ ![Screenshot of the Hypervault link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Hypervault Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Hypervault. If the connection fails, ensure your Hypervault account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Hypervault**. ++1. Review the user attributes that are synchronized from Azure AD to Hypervault in the **Attribute-Mapping** section.
The attributes selected as **Matching** properties are used to match the user accounts in Hypervault for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Hypervault API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Hypervault| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |displayName|String||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + |emails[type eq "work"].value|String||✓ ++1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Hypervault, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users that you would like to provision to Hypervault by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. 
Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
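The Hypervault attribute-mapping table above translates directly into a SCIM 2.0 user resource: `userName` is the matching attribute, and `active`, `displayName`, `name.givenName`, `name.familyName`, and `emails[type eq "work"].value` are required. The following is a hedged sketch of such a payload; the helper and the example values are assumptions, not the provisioning service's actual wire format.

```python
# Hedged sketch: a SCIM 2.0 User payload shaped like the documented
# Hypervault attribute mapping. Values are illustrative only.
import json

def scim_user(user_name, given, family, email, active=True):
    return {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,       # matching attribute for update operations
        "active": active,
        "displayName": f"{given} {family}",
        "name": {"givenName": given, "familyName": family},
        "emails": [{"type": "work", "value": email, "primary": True}],
    }

payload = scim_user("b.simon@contoso.com", "B", "Simon", "b.simon@contoso.com")
print(json.dumps(payload, indent=2))
```

Because `userName` is the matching attribute, a SCIM endpoint for this mapping must support filtering users by `userName` (for example, `?filter=userName eq "b.simon@contoso.com"`).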
active-directory | Leapsome Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/leapsome-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in Le > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Oneflow Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oneflow-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Oneflow for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Oneflow. +++writer: twimmers ++ms.assetid: 6af89cdd-956c-4cc2-9a61-98afe7814470 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Oneflow for automatic user provisioning ++This tutorial describes the steps you need to perform in both Oneflow and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Oneflow](https://oneflow.com) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Oneflow. +> * Remove users in Oneflow when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Oneflow. +> * Provision groups and group memberships in Oneflow. +> * [Single sign-on](oneflow-tutorial.md) to Oneflow (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* An Oneflow tenant. +* A user account in Oneflow with Admin permissions. ++## Step 1. 
Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Oneflow](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Oneflow to support provisioning with Azure AD +Contact Oneflow support to configure Oneflow to support provisioning with Azure AD. ++## Step 3. Add Oneflow from the Azure AD application gallery ++Add Oneflow from the Azure AD application gallery to start managing provisioning to Oneflow. If you have previously set up Oneflow for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app.
When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Oneflow ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Oneflow based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Oneflow in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Oneflow**. ++ ![Screenshot of the Oneflow link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Oneflow Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Oneflow. If the connection fails, ensure your Oneflow account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1.
Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Oneflow**. ++1. Review the user attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Oneflow for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Oneflow API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Oneflow| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |externalId|String|| + |emails[type eq "work"].value|String|| + |name.givenName|String|| + |name.familyName|String|| + |phoneNumbers[type eq \"work\"].value|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |nickName|String|| + |title|String|| + |profileUrl|String|| + |displayName|String|| + |addresses[type eq \"work\"].streetAddress|String|| + |addresses[type eq \"work\"].locality|String|| + |addresses[type eq \"work\"].region|String|| + |addresses[type eq \"work\"].postalCode|String|| + |addresses[type eq \"work\"].country|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:adSourceAnchor|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute1|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute2|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute3|String|| + 
|urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute4|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:customAttribute5|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:distinguishedName|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:domain|String|| + |urn:ietf:params:scim:schemas:extension:ws1b:2.0:User:userPrincipalName|String|| + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Oneflow**. ++1. Review the group attributes that are synchronized from Azure AD to Oneflow in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Oneflow for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Oneflow| + ||||| + |displayName|String|✓|✓ + |externalId|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Oneflow, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to Oneflow by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. 
The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
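The Oneflow user mapping above travels as a SCIM 2.0 `User` resource. As a hedged sketch (not the literal request Azure AD emits; the attribute names come from the mapping table, while the values and exact payload shape are illustrative assumptions), a create payload might look like this:

```python
import json

# Illustrative only: a SCIM 2.0 user resource using attribute names from the
# mapping table above. All values are placeholder examples, not real data.
CORE = "urn:ietf:params:scim:schemas:core:2.0:User"
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"

user = {
    "schemas": [CORE, ENTERPRISE],
    "userName": "alice@contoso.com",  # matching attribute; required by Oneflow
    "active": True,                   # required by Oneflow
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "emails": [{"type": "work", "value": "alice@contoso.com"}],
    # Enterprise-extension attributes nest under their schema URN.
    ENTERPRISE: {"department": "Legal", "employeeNumber": "1001"},
}

payload = json.dumps(user, indent=2)
print(payload)
```

A group resource follows the same pattern, with `displayName` and `members` as in the group mapping table.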
active-directory | Oracle Cloud Infrastructure Console Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * An Oracle Cloud Infrastructure Console [tenant](https://www.oracle.com/cloud/sign-in.html?intcmp=OcomFreeTier&source=:ow:o:p:nav:0916BCButton). * A user account in Oracle Cloud Infrastructure Console with Admin permissions. +> [!NOTE] +> This integration is also available to use from the Azure AD US Government Cloud environment. You can find this application in the Azure AD US Government Cloud Application Gallery and configure it in the same way as you do from the public cloud. + ## Step 1. Plan your provisioning deployment 1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). 2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). Once you've configured provisioning, use the following resources to monitor your * Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion * If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). -## Additional resources +## Change log +08/15/2023 - The app was added to Gov Cloud. ++## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) |
active-directory | Oreilly Learning Platform Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oreilly-learning-platform-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi * [An Azure AD tenant](../develop/quickstart-create-new-tenant.md) * A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). * A user account in O'Reilly learning platform with Admin permissions.+* An O'Reilly learning platform single sign-on (SSO) enabled subscription. ## Step 1. Plan your provisioning deployment * Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). The scenario outlined in this tutorial assumes that you already have the followi * Determine what data to [map between Azure AD and O'Reilly learning platform](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure O'Reilly learning platform to support provisioning with Azure AD-Contact O'Reilly learning platform support to configure O'Reilly learning platform to support provisioning with Azure AD. ++Before you begin to configure the O'Reilly learning platform to support provisioning with Azure AD, you'll need to generate a SCIM API token within the O'Reilly Admin Console. ++1. Navigate to [O'Reilly Admin Console](https://learning.oreilly.com/) by logging in to your O'Reilly account. +1. Once you've logged in, click **Admin** in the top navigation and select **Integrations**. +1. Scroll down to the **API tokens** section. Under API tokens, click **Create token** and select the **SCIM API**. Then give your token a name and expiration date, and click Continue. You'll receive your API key in a pop-up message prompting you to store a copy of it in a secure place.
Once you've saved a copy of your key, click the checkbox and Continue. +1. You will use the O'Reilly SCIM API token in Step 5. ## Step 3. Add O'Reilly learning platform from the Azure AD application gallery -Add O'Reilly learning platform from the Azure AD application gallery to start managing provisioning to O'Reilly learning platform. If you have previously setup O'Reilly learning platform for SSO you can use the same application. However it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). +Add O'Reilly learning platform from the Azure AD application gallery to start managing provisioning to O'Reilly learning platform. If you have previously [set up O'Reilly learning platform for SSO](oreilly-learning-platform-tutorial.md), you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). -## Step 4. Define who will be in scope for provisioning +## Step 4. Define who will be in scope for provisioning -The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user. +The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user. 
If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users to the application. If you choose to scope who will be provisioned based solely on attributes of the user, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * Start small. Test with a small set of users before rolling out to everyone. When scope for provisioning is set to assigned users, you can control this by assigning one or two users to the app. When scope is set to all users, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). * If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +## Step 5. Configure automatic user provisioning to O'Reilly learning platform -## Step 5. Configure automatic user provisioning to O'Reilly learning platform --This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD. +This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in O'Reilly learning platform based on user assignments in Azure AD. ### To configure automatic user provisioning for O'Reilly learning platform in Azure AD: This section guides you through the steps to configure the Azure AD provisioning ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) -1. Under the **Admin Credentials** section, input your O'Reilly learning platform Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to O'Reilly learning platform. 
If the connection fails, ensure your O'Reilly learning platform account has Admin permissions and try again. +1. Under the **Admin Credentials** section, input your O'Reilly learning platform Tenant URL, which is `https://api.oreilly.com/api/scim/v2`, and Secret Token, which you generated in Step 2. Click **Test Connection** to ensure Azure AD can connect to O'Reilly learning platform. If the connection fails, double-check that your token is correct or [contact the O'Reilly platform integration team](mailto:platform-integration@oreilly.com) for help. ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) This section guides you through the steps to configure the Azure AD provisioning This operation starts the initial synchronization cycle of all users defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ## Step 6. Monitor your deployment+ Once you've configured provisioning, use the following resources to monitor your deployment: * Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully |
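Behind **Test Connection**, Azure AD calls the SCIM endpoint using the secret token as a bearer token. As a hedged sketch of an equivalent manual check: the tenant URL below is the one given in the step above, the token is a placeholder for the SCIM API token from the O'Reilly Admin Console, and `GET /Users` is the standard SCIM 2.0 resource path rather than anything O'Reilly-specific:

```python
import urllib.request

TENANT_URL = "https://api.oreilly.com/api/scim/v2"
TOKEN = "<your-scim-api-token>"  # placeholder -- never commit real tokens

# Build the same kind of authenticated request Test Connection issues.
# Constructed but not sent here; call urllib.request.urlopen(req) to send it.
req = urllib.request.Request(
    f"{TENANT_URL}/Users?count=1",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/scim+json",
    },
)
print(req.full_url)
```

A `200` response with a SCIM `ListResponse` body would indicate the token and URL are valid; a `401` would point to a bad or expired token.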
active-directory | Peakon Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/peakon-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in Pe > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). + ## Prerequisites The scenario outlined in this tutorial assumes that you already have the following prerequisites |
active-directory | Postman Provisioning Tutorialy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/postman-provisioning-tutorialy.md | + + Title: 'Tutorial: Configure Postman for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and de-provision user accounts from Azure AD to Postman. +++writer: twimmers ++ms.assetid: f3687101-9bec-4f18-9884-61833f4f58c3 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Postman for automatic user provisioning ++This tutorial describes the steps you need to perform in both Postman and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Postman](https://www.postman.com/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Postman. +> * Remove users in Postman when they do not require access anymore. +> * Keep user attributes synchronized between Azure AD and Postman. +> * Provision groups and group memberships in Postman. +> * [Single sign-on](postman-tutorial.md) to Postman (recommended). ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A Postman tenant. +* A user account in Postman with Admin permissions. ++## Step 1. 
Plan your provisioning deployment +1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Postman](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Postman to support provisioning with Azure AD +Contact Postman support to configure Postman to support provisioning with Azure AD. ++## Step 3. Add Postman from the Azure AD application gallery ++Add Postman from the Azure AD application gallery to start managing provisioning to Postman. If you have previously set up Postman for SSO, you can use the same application. However, it's recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who will be in scope for provisioning ++The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. 
When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. Configure automatic user provisioning to Postman ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Postman based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Postman in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Postman**. ++ ![Screenshot of the Postman link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Postman Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Postman. If the connection fails, ensure your Postman account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. 
Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Postman**. ++1. Review the user attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Postman for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you'll need to ensure that the Postman API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Postman| + ||||| + |userName|String|✓|✓ + |active|Boolean||✓ + |name.givenName|String||✓ + |name.familyName|String||✓ + +1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Postman**. ++1. Review the group attributes that are synchronized from Azure AD to Postman in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Postman for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Postman| + ||||| + |displayName|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Postman, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to Postman by choosing the desired values in **Scope** in the **Settings** section. ++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. 
++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
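The **Matching** attributes in the tables above (`userName` for users, `displayName` for groups) drive lookups during update operations via SCIM filter queries. As a small sketch of how such a filter string is built and URL-encoded (this is standard SCIM 2.0 filter syntax, not Postman-specific documentation; the attribute names come from the tables above):

```python
from urllib.parse import quote

def scim_eq_filter(attribute: str, value: str) -> str:
    """Build a SCIM 2.0 equality filter, e.g. userName eq "alice@contoso.com"."""
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'{attribute} eq "{escaped}"'

# Matching attributes from the mapping tables above.
user_filter = scim_eq_filter("userName", "alice@contoso.com")
group_filter = scim_eq_filter("displayName", "Engineering")

# The filter travels URL-encoded in the request's query string.
query = "/Users?filter=" + quote(user_filter)
print(user_filter)
print(query)
```

This is why changing the matching target attribute requires the target API to support filtering on that attribute: the service must be able to issue exactly this kind of query against it.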
active-directory | Reward Gateway Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reward-gateway-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in Re > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in public preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in public preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Sap Cloud Platform Identity Authentication Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-cloud-platform-identity-authentication-provisioning-tutorial.md | Title: 'Tutorial: Configure SAP Cloud Identity Services for automatic user provisioning with Microsoft Entra ID' -description: Learn how to configure Microsoft Entra ID to automatically provision and de-provision user accounts to SAP Cloud Identity Services. +description: Learn how to configure Microsoft Entra ID to automatically provision and deprovision user accounts to SAP Cloud Identity Services. writer: twimmers-The objective of this tutorial is to demonstrate the steps to be performed in SAP Cloud Identity Services and Microsoft Entra ID (Azure AD) to configure Microsoft Entra ID to automatically provision and de-provision users to SAP Cloud Identity Services. +This tutorial aims to demonstrate the steps for configuring Microsoft Entra ID (Azure AD) and SAP Cloud Identity Services. The goal is to set up Microsoft Entra ID to automatically provision and deprovision users to SAP Cloud Identity Services. > [!NOTE] > This tutorial describes a connector built on top of the Microsoft Entra ID User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Microsoft Entra ID](../app-provisioning/user-provisioning.md). Before configuring and enabling automatic user provisioning, you should decide w ## Important tips for assigning users to SAP Cloud Identity Services -* It is recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. Additional users may be assigned later. 
+* It's recommended that a single Microsoft Entra ID user is assigned to SAP Cloud Identity Services to test the automatic user provisioning configuration. More users may be assigned later. * When assigning a user to SAP Cloud Identity Services, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning. Before configuring and enabling automatic user provisioning, you should decide w ![Screenshot of the SAP Cloud Identity Services Add SCIM.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/configurationauth.png) -1. You will receive an email to activate your account and set a password for **SAP Cloud Identity Services Service**. +1. You'll get an email to activate your account and set up a password for the **SAP Cloud Identity Services Service**. -1. Copy the **User ID** and **Password**. These values will be entered in the Admin Username and Admin Password fields respectively in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal. +1. Copy the **User ID** and **Password**. These values are entered in the Admin Username and Admin Password fields respectively. +This is done in the Provisioning tab of your SAP Cloud Identity Services application in the Azure portal. ## Add SAP Cloud Identity Services from the gallery This section guides you through the steps to configure the Microsoft Entra ID pr 1. Review the user attributes that are synchronized from Microsoft Entra ID to SAP Cloud Identity Services in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in SAP Cloud Identity Services for update operations. Select the **Save** button to commit any changes. 
- ![Screenshot of the SAP Business Technology Platform Identity Authentication User Attributes.](media/sap-cloud-platform-identity-authentication-provisioning-tutorial/userattributes.png) + |Attribute|Type|Supported for filtering|Required by SAP Cloud Identity Services| + ||||| + |userName|String|✓|✓ + |emails[type eq "work"].value|String||✓ + |active|Boolean|| + |displayName|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|Reference|| + |addresses[type eq "work"].country|String|| + |addresses[type eq "work"].locality|String|| + |addresses[type eq "work"].postalCode|String|| + |addresses[type eq "work"].region|String|| + |addresses[type eq "work"].streetAddress|String|| + |name.givenName|String|| + |name.familyName|String|| + |name.honorificPrefix|String|| + |phoneNumbers[type eq "fax"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| + |phoneNumbers[type eq "work"].value|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:costCenter|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:division|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:organization|String|| + |locale|String|| + |timezone|String|| + |userType|String|| + |company|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute1|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute2|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute3|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute4|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute5|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute6|String|| + 
|urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute7|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute8|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute9|String|| + |urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute10|String|| + |sendMail|String|| + |mailVerified|String|| 1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). This section guides you through the steps to configure the Microsoft Entra ID pr ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) -1. When you are ready to provision, click **Save**. +1. When you're ready to provision, click **Save**. ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) For more information on how to read the Microsoft Entra ID provisioning logs, se * SAP Cloud Identity Services's SCIM endpoint requires certain attributes to be of specific format. You can know more about these attributes and their specific format [here](https://help.sap.com/viewer/6d6d63354d1242d185ab4830fc04feb1/Cloud/en-US/b10fc6a9a37c488a82ce7489b1fab64c.html#). -## Additional resources +## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Microsoft Entra ID?](../manage-apps/what-is-single-sign-on.md) |
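The SAP-specific rows in the table above use flattened attribute paths such as `urn:sap:cloud:scim:schemas:extension:custom:2.0:User:attributes:customAttribute1`. As a hedged sketch of how such a flattened path could map onto a nested SCIM resource (the split point between the schema URN and the attribute path is an assumption inferred from the `...:2.0:User` naming convention, not taken from SAP documentation):

```python
import json

SAP_EXT = "urn:sap:cloud:scim:schemas:extension:custom:2.0:User"

def set_extension(resource: dict, urn_path: str, value: str) -> None:
    # Assumed convention: everything before ":attributes:" is the extension
    # schema URN; everything after is the attribute name within it.
    schema, _, attr_name = urn_path.partition(":attributes:")
    node = resource.setdefault(schema, {}).setdefault("attributes", {})
    node[attr_name] = value
    if schema not in resource["schemas"]:
        resource["schemas"].append(schema)

user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@contoso.com",  # matching attribute
    "emails": [{"type": "work", "value": "alice@contoso.com"}],  # required
}
set_extension(user, SAP_EXT + ":attributes:customAttribute1", "example-value")
print(json.dumps(user, indent=2))
```

Values here are placeholders; SAP Cloud Identity Services additionally requires certain attributes to follow specific formats, as noted in the troubleshooting tip above.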
active-directory | Sap Fiori Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-fiori-tutorial.md | |
active-directory | Sap Netweaver Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sap-netweaver-tutorial.md | |
active-directory | Servicely Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicely-provisioning-tutorial.md | + + Title: 'Tutorial: Configure Servicely for automatic user provisioning with Azure Active Directory' +description: Learn how to automatically provision and deprovision user accounts from Azure AD to Servicely. +++writer: twimmers ++ms.assetid: be3af02b-da77-4a88-bec3-e634e2af38b3 ++++ Last updated : 08/16/2023++++# Tutorial: Configure Servicely for automatic user provisioning ++This tutorial describes the steps you need to perform in both Servicely and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and deprovisions users and groups to [Servicely](https://servicely.ai/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). +++## Supported capabilities +> [!div class="checklist"] +> * Create users in Servicely. +> * Remove users in Servicely when they no longer require access. +> * Keep user attributes synchronized between Azure AD and Servicely. +> * Provision groups and group memberships in Servicely. ++## Prerequisites ++The scenario outlined in this tutorial assumes that you already have the following prerequisites: ++* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md). +* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator). +* A Servicely tenant. +* A user account in Servicely with Admin permissions. ++## Step 1. Plan your provisioning deployment +1. 
Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md). +1. Determine who is in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). +1. Determine what data to [map between Azure AD and Servicely](../app-provisioning/customize-application-attributes.md). ++## Step 2. Configure Servicely to support provisioning with Azure AD +Contact Servicely support to configure Servicely to support provisioning with Azure AD. ++## Step 3. Add Servicely from the Azure AD application gallery ++Add Servicely from the Azure AD application gallery to start managing provisioning to Servicely. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md). ++## Step 4. Define who is in scope for provisioning ++The Azure AD provisioning service allows you to scope who is provisioned based on assignment to the application and/or based on attributes of the user. If you choose to scope who is provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who is provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++* If you need more roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles. +++## Step 5. 
Configure automatic user provisioning to Servicely ++This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Servicely based on user and/or group assignments in Azure AD. ++### To configure automatic user provisioning for Servicely in Azure AD: ++1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**. ++ ![Screenshot of Enterprise applications blade.](common/enterprise-applications.png) ++1. In the applications list, select **Servicely**. ++ ![Screenshot of the Servicely link in the Applications list.](common/all-applications.png) ++1. Select the **Provisioning** tab. ++ ![Screenshot of Provisioning tab.](common/provisioning.png) ++1. Set the **Provisioning Mode** to **Automatic**. ++ ![Screenshot of Provisioning tab automatic.](common/provisioning-automatic.png) ++1. Under the **Admin Credentials** section, input your Servicely Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to Servicely. If the connection fails, ensure your Servicely account has Admin permissions and try again. ++ ![Screenshot of Token.](common/provisioning-testconnection-tenanturltoken.png) ++1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box. ++ ![Screenshot of Notification Email.](common/provisioning-notification-email.png) ++1. Select **Save**. ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Servicely**. ++1. Review the user attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Servicely for update operations. 
If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you need to ensure that the Servicely API supports filtering users based on that attribute. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Servicely| + ||||| + |userName|String|✓|✓ + |active|Boolean|| + |externalId|String|| + |emails[type eq "work"].value|String|| + |name.givenName|String|| + |name.familyName|String|| + |title|String|| + |preferredLanguage|String|| + |phoneNumbers[type eq "work"].value|String|| + |phoneNumbers[type eq "mobile"].value|String|| + |timezone|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|String|| + |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|| ++1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Servicely**. ++1. Review the group attributes that are synchronized from Azure AD to Servicely in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Servicely for update operations. Select the **Save** button to commit any changes. ++ |Attribute|Type|Supported for filtering|Required by Servicely| + ||||| + |displayName|String|✓|✓ + |externalId|String|✓|✓ + |members|Reference|| + +1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md). ++1. To enable the Azure AD provisioning service for Servicely, change the **Provisioning Status** to **On** in the **Settings** section. ++ ![Screenshot of Provisioning Status Toggled On.](common/provisioning-toggle-on.png) ++1. Define the users and/or groups that you would like to provision to Servicely by choosing the desired values in **Scope** in the **Settings** section. 
++ ![Screenshot of Provisioning Scope.](common/provisioning-scope.png) ++1. When you're ready to provision, click **Save**. ++ ![Screenshot of Saving Provisioning Configuration.](common/provisioning-configuration-save.png) ++This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. ++## Step 6. Monitor your deployment +Once you've configured provisioning, use the following resources to monitor your deployment: ++* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully +* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion +* If the provisioning configuration seems to be in an unhealthy state, the application goes into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md). ++## More resources ++* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) +* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md) ++## Next steps ++* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md) |
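As an illustration of what the attribute mappings in this tutorial produce, the following Python sketch (not Microsoft's implementation; all names and values are hypothetical) builds the kind of SCIM 2.0 user payload the provisioning service would send to a target app's `/Users` endpoint, using attributes from the mapping table above:

```python
import json

# Hypothetical values illustrating the Azure AD -> Servicely attribute mapping.
# The enterprise extension schema carries employeeNumber and manager.
ENTERPRISE = "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User", ENTERPRISE],
    "userName": "britta.simon@contoso.com",  # matching attribute, filterable
    "active": True,
    "externalId": "b3f1c2d4",
    "emails": [{"type": "work", "value": "britta.simon@contoso.com"}],
    "name": {"givenName": "Britta", "familyName": "Simon"},
    "title": "Engineer",
    "phoneNumbers": [{"type": "work", "value": "+1 555 0100"}],
    ENTERPRISE: {"employeeNumber": "1001", "manager": "manager@contoso.com"},
}
body = json.dumps(user)  # serialized request body for POST /Users
print(len(body) > 0)
```

Because `userName` is the matching attribute, the service filters on it (`userName eq "britta.simon@contoso.com"`) to decide between a create and an update.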
active-directory | Sharepoint On Premises Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md | |
active-directory | Starleaf Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/starleaf-provisioning-tutorial.md | The objective of this tutorial is to demonstrate the steps to be performed in St > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Symantec Web Security Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/symantec-web-security-service.md | The objective of this tutorial is to demonstrate the steps to be performed in Sy > [!NOTE] > This tutorial describes a connector built on top of the Azure AD User Provisioning Service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md). >-> This connector is currently in Public Preview. For more information on the general Microsoft Azure terms of use for Preview features, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +> This connector is currently in Public Preview. For more information about previews, see [Universal License Terms For Online Services](https://www.microsoft.com/licensing/terms/product/ForOnlineServices/all). ## Prerequisites |
active-directory | Tailscale Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tailscale-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Tailscale](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Tailscale to support provisioning with Azure AD-Contact Tailscale support to configure Tailscale to support provisioning with Azure AD. ++You need to be an [Owner, Admin, or IT admin](https://tailscale.com/kb/1138/user-roles/) in Tailscale to complete these steps. See [Tailscale plans](https://tailscale.com/pricing/) +to find out which plans make user & group provisioning for Azure AD available. ++### Generate a SCIM API key in Tailscale ++In the **[User management](https://login.tailscale.com/admin/settings/user-management/)** page of the admin console, ++1. Click **Enable Provisioning**. +1. Copy the generated key to the clipboard. ++Save the key information in a secure spot. This is the Secret Token you'll need when you configure provisioning in Azure AD. ## Step 3. Add Tailscale from the Azure AD application gallery The Azure AD provisioning service allows you to scope who is provisioned based o ## Step 5. Configure automatic user provisioning to Tailscale -This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in TestApp based on user assignments in Azure AD. +This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users in Tailscale based on user assignments in Azure AD. ### To configure automatic user provisioning for Tailscale in Azure AD: |
active-directory | Tanium Sso Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tanium-sso-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port > [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [Tanium SSO support team](mailto:integrations@tanium.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal. -1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer. + > [!NOTE] + > If deploying Tanium in an on-premises configuration, your values may look different than those shown above. The values to use can be retrieved from the **Administration > SAML Configuration** menu in the Tanium console. Details can be found in the [Tanium Console User Guide: Integrating with a SAML IdP](https://docs.tanium.com/platform_user/platform_user/console_using_saml.html?cloud=false "Integrating with a SAML IdP Guide"). ++1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy **App Federation Metadata Url** and save it on your computer. If deploying Tanium in an on-premises configuration, click the edit button and set the **Response Signing Option** to "Sign response and assertion". [ ![Screenshot shows the Certificate download link.](common/copy-metadataurl.png "Certificate") ](common/copy-metadataurl.png#lightbox) |
active-directory | Tutorial List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tutorial-list.md | Title: SaaS App Integration Tutorials for use with Azure AD + Title: App Integration Tutorials for use with Azure AD description: Configure Azure Active Directory single sign-on integration with a variety of third-party software as a service applications. -# Tutorials for integrating SaaS applications with Azure Active Directory +# Tutorials for integrating applications with Azure Active Directory -To help integrate your cloud-enabled [software as a service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) applications with Azure Active Directory, we have developed a collection of tutorials that walk you through configuration. +To help integrate your cloud-enabled [software as a service (SaaS)](https://azure.microsoft.com/overview/what-is-saas/) and on-premises applications with Azure Active Directory, we have developed a collection of tutorials that walk you through configuration. For a list of all SaaS apps that have been pre-integrated into Azure AD, see the [Active Directory Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/azure-active-directory-apps). |
active-directory | Uber Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/uber-tutorial.md | Complete the following steps to enable Azure AD single sign-on in the Azure port ## Configure Uber SSO -To configure single sign-on on **Uber** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Uber support team](mailto:business-api-support@uber.com). They set this setting to have the SAML SSO connection set properly on both sides. +To configure single sign-on on **Uber** side, you need to send the downloaded **Certificate (PEM)** and appropriate copied URLs from Azure portal to [Uber support team](mailto:business-support@uber.com). They set this setting to have the SAML SSO connection set properly on both sides. ### Create Uber test user -In this section, you create a user called Britta Simon in Uber. Work with [Uber support team](mailto:business-api-support@uber.com) to add the users in the Uber platform. Users must be created and activated before you use single sign-on. Uber also supports automatic user provisioning, you can find more details [here](uber-provisioning-tutorial.md) on how to configure automatic user provisioning. +In this section, you create a user called Britta Simon in Uber. Work with [Uber support team or your Uber POC](mailto:business-support@uber.com) to add the users in the Uber platform. Users must be created and activated before you use single sign-on. Uber also supports automatic user provisioning, you can find more details [here](uber-provisioning-tutorial.md) on how to configure automatic user provisioning. ## Test SSO |
active-directory | Vbrick Rev Cloud Provisioning Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/vbrick-rev-cloud-provisioning-tutorial.md | The scenario outlined in this tutorial assumes that you already have the followi 1. Determine what data to [map between Azure AD and Vbrick Rev Cloud](../app-provisioning/customize-application-attributes.md). ## Step 2. Configure Vbrick Rev Cloud to support provisioning with Azure AD-Contact Vbrick Rev Cloud support to configure Vbrick Rev Cloud to support provisioning with Azure AD. ++1. Sign in to your **Rev Tenant**. Navigate to **Admin > Security Settings > User Security** in the navigation pane. ++ ![Screenshot of Vbrick Rev User Security Settings.](./media/vbrick-rev-cloud-provisioning-tutorial/app-navigations.png) ++1. Navigate to the **Microsoft Azure AD SCIM** section of the page. ++ ![Screenshot of the Vbrick Rev User Security Settings with the Microsoft AD SCIM section called out.](./media/vbrick-rev-cloud-provisioning-tutorial/enable-azure-ad-scim.png) ++1. Enable **Microsoft Azure AD SCIM** and click the **Generate Token** button. + ![Screenshot of the Vbrick Rev User Security Settings with the Microsoft AD SCIM enable.](./media/vbrick-rev-cloud-provisioning-tutorial/rev-scim-manage.png) ++1. A popup opens with the **URL** and the **JWT token**. Copy and save the **JWT token** and **URL** for the next steps. ++ ![Screenshot of the Vbrick Rev User Security Settings with the Scim Token section called out.](./media/vbrick-rev-cloud-provisioning-tutorial/copy-token.png) ++1. Once you have a copy of the **JWT token** and **URL**, click **OK** to close the popup and then click the **Save** button at the bottom of the settings page to enable SCIM for your tenant. ## Step 3. Add Vbrick Rev Cloud from the Azure AD application gallery |
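The URL and JWT token copied from the popup are the SCIM base URL and bearer credential. As a hedged sketch (not official Vbrick or Microsoft tooling; the URL and token below are placeholders), this is how a SCIM client such as the Azure AD provisioning service authenticates its calls:

```python
import urllib.request

# Placeholders: substitute the URL and JWT token copied from the Rev popup.
SCIM_URL = "https://<your-rev-tenant>/scim/v2"  # hypothetical base URL
TOKEN = "<jwt-token-from-rev>"

# Build a SCIM request that would list a single user; the JWT token is sent
# as an OAuth bearer credential in the Authorization header.
req = urllib.request.Request(
    f"{SCIM_URL}/Users?count=1",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/scim+json"},
)
# urllib.request.urlopen(req) would send it; the request is only built here.
print(req.get_header("Authorization").startswith("Bearer"))
```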
active-directory | Configure Cmmc Level 2 Identification And Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/configure-cmmc-level-2-identification-and-authentication.md | |
active-directory | Admin Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/admin-api.md | If the `customStatusEndpoint` property isn't specified, then the `anony | -- | -- | -- | | `url` | string (url)| the url of the custom status endpoint | | `type` | string | the type of the endpoint |+ example: ``` |
active-directory | How To Issuer Revoke | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-to-issuer-revoke.md | Verifiable credential data isn't stored by Microsoft. Therefore, the issuer need ## How does revocation work? -Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c-ccg/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API will do the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and does not contain any data who is checking if the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status. +Microsoft Entra Verified ID implements the [W3C StatusList2021](https://github.com/w3c/vc-status-list-2021/tree/343b8b59cddba4525e1ef355356ae760fc75904e). When presentation to the Request Service API happens, the API will do the revocation check for you. The revocation check happens over an anonymous API call to Identity Hub and does not contain any data about who is checking if the verifiable credential is still valid or revoked. With the **statusList2021**, Microsoft Entra Verified ID just keeps a flag by the hashed value of the indexed claim to keep track of the revocation status. ### Verifiable credential data |
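The bitstring mechanics behind the StatusList2021 flag can be sketched in a few lines of Python. This follows the W3C draft (a GZIP-compressed, base64-encoded bitstring where the bit at a credential's `statusListIndex` marks revocation); it is an illustration of the spec, not Microsoft's internal implementation:

```python
import base64
import gzip

def encode_status_list(bits: bytearray) -> str:
    # StatusList2021 publishes the bitstring as GZIP-compressed, base64 data.
    return base64.b64encode(gzip.compress(bytes(bits))).decode("ascii")

def is_revoked(encoded_list: str, index: int) -> bool:
    # Decompress the list and test the bit at the credential's statusListIndex.
    raw = gzip.decompress(base64.b64decode(encoded_list))
    byte_i, bit_i = index // 8, 7 - (index % 8)  # most-significant bit first
    return bool(raw[byte_i] >> bit_i & 1)

# Revoke the credential at index 3 in a 16 KB (131,072-entry) status list.
status = bytearray(16 * 1024)
status[3 // 8] |= 1 << (7 - 3 % 8)
encoded = encode_status_list(status)
print(is_revoked(encoded, 3), is_revoked(encoded, 5))  # True False
```

Because a verifier only fetches the whole compressed list and tests one bit locally, the check reveals nothing about which credential is being verified, which matches the anonymous revocation check described above.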
active-directory | How Use Vcnetwork | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/how-use-vcnetwork.md | To use the Entra Verified ID Network, you need to have completed the following. ## What is the Entra Verified ID Network? -In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, but this approach would be both a manual and a complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. Using this information, Woodgrove can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API. +In our scenario, Proseware is a verifier. Woodgrove is the issuer. The verifier needs to know Woodgrove's issuer DID and the verifiable credential (VC) type that represents Woodgrove employees before it can create a presentation request for a verified credential for Woodgrove employees. The necessary information may come from some kind of manual exchange between the companies, but this approach would be both manual and complex. The Entra Verified ID Network makes this process much easier. Woodgrove, as an issuer, can publish credential types to the Entra Verified ID Network and Proseware, as the verifier, can search for published credential types and schemas in the Entra Verified ID Network. 
Using this information, Woodgrove can create a [presentation request](presentation-request-api.md#presentation-request-payload) and easily invoke the Request Service API. :::image type="content" source="media/decentralized-identifier-overview/did-overview.png" alt-text="Diagram of Microsoft DID implementation overview."::: |
active-directory | Howto Verifiable Credentials Partner Au10tix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-au10tix.md | For incorporating identity verification into your Apps, using AU10TIX ΓÇ£Govern As a developer you can share these steps with your tenant administrator to obtain the verification request URL, and body for your application or website to request Verified IDs from your users. -1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade). +1. Go to [Microsoft Entra admin center -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade). >[!NOTE] > Make sure this is the tenant you set up for Verified ID per the pre-requisites. |
active-directory | Howto Verifiable Credentials Partner Lexisnexis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/howto-verifiable-credentials-partner-lexisnexis.md | To incorporate identity verification into your Apps using LexisNexis Verified ID As a developer you'll provide the steps below to your tenant administrator. The instructions help them obtain the verification request URL and body for your application or website to request verifiable credentials from your users. -1. Go to [Microsoft Entra portal -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade). +1. Go to [Microsoft Entra admin center -> Verified ID](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade). >[!Note] > Make sure this is the tenant you set up for Verified ID per the pre-requisites. 1. Go to [Quickstart-> Verification Request -> Start](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/QuickStartVerifierBlade). |
active-directory | Partner Vu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/partner-vu.md | Follow these steps to incorporate VU Identity Card solution into your Apps. As a developer you can share these steps with your tenant administrator to obtain the verification request URL, and body for your application or website to request Verified IDs from your users. -1. Go to Microsoft Entra portal - [**Verified ID**](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade) +1. Go to Microsoft Entra admin center - [**Verified ID**](https://entra.microsoft.com/#view/Microsoft_AAD_DecentralizedIdentity/ResourceOverviewBlade) >[!NOTE] >Verify that the tenant configured for Verified ID meets the prerequisites. |
active-directory | Using Wallet Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/using-wallet-library.md | Then, you have to handle the following major tasks in your app. - User Interface. Any visual representation of stored credentials and the UI for driving the issuance and presentation process must be implemented by you. ## Wallet Library Demo app-The Wallet Library comes with a demo app in the github repo that is ready to use without any modifications. You just have to build and deploy it. The demo app is a lightweight and simple implementation that illustrates issuance and presentation at its minimum. To quickly get going, you can use the QR Code Reader app to scan the QR code, and then copy and paste it into the demo app. +The Wallet Library comes with a demo app in the GitHub repo that is ready to use without any modifications. You just have to build and deploy it. The demo app is a lightweight and simple implementation that illustrates issuance and presentation at its minimum. To quickly get going, you can use the QR Code Reader app to scan the QR code, and then copy and paste it into the demo app. In order to test the demo app, you need a webapp that issues credentials and makes presentation requests for credentials. The [Woodgrove public demo webapp](https://aka.ms/vcdemo) is used for this purpose in this tutorial. ## Building the Android sample On your developer machine with Android Studio, do the following: -1. Download or clone the Android Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip). +1. Download or clone the Android Wallet Library [GitHub repo](https://github.com/microsoft/entra-verifiedid-wallet-library-android/archive/refs/heads/dev.zip). You don't need the walletlibrary folder and you can delete it if you like. 1. 
Start Android Studio and open the parent folder of walletlibrarydemo The sample app holds the issued credential in memory, so after issuance, you can ## Building the iOS sample On your Mac developer machine with Xcode, do the following:-1. Download or clone the iOS Wallet Library [github repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip). +1. Download or clone the iOS Wallet Library [GitHub repo](https://github.com/microsoft/entra-verifiedid-wallet-library-ios/archive/refs/heads/dev.zip). 1. Start Xcode and open the top level folder for the WalletLibrary 1. Set focus on WalletLibraryDemo project |
active-directory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md | Microsoft Entra Verified ID is now generally available (GA) as the new member of ### Known issues -- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential gets a `Specified resource does not exist` error from the Admin API and/or the Entra portal. A fix for this issue should be available by August 20, 2022.+- Tenants that [opt-out](verifiable-credentials-faq.md?#how-do-i-reset-the-entra-verified-id-service) without issuing any Verifiable Credential gets a `Specified resource does not exist` error from the Admin API and/or the Microsoft Entra admin center. A fix for this issue should be available by August 20, 2022. ## July 2022 |
active-directory | Workload Identities Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identities-faqs.md | pricing](https://www.microsoft.com/security/business/identity-access/microsoft-e | Conditional Access policies for workload identities |Define the condition in which a workload can access a resource, such as an IP range | | Yes | |**Lifecycle Management**| | | | |Access reviews for service provider-assigned privileged roles | Closely monitor workload identities with impactful permissions | | Yes |+| Application authentication methods API | Allows IT admins to enforce best practices for how apps in their organizations use application authentication methods. | | Yes | |**Identity Protection** | | |-|Identity Protection for workload identities | Detect and remediate compromised workload identities | | Yes | +|Identity Protection for workload identities | Detect and remediate compromised workload identities | | Yes | ## What is the cost of Workload Identities Premium plan? |
active-directory | Workload Identity Federation Create Trust | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/workload-identities/workload-identity-federation-create-trust.md | Use the following values from your Azure AD application registration for your Gi The following screenshot demonstrates how to copy the application ID and tenant ID. - ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra portal.](./media/workload-identity-federation-create-trust/copy-client-id.png) + ![Screenshot that demonstrates how to copy the application ID and tenant ID from Microsoft Entra admin center.](./media/workload-identity-federation-create-trust/copy-client-id.png) - `AZURE_SUBSCRIPTION_ID` your subscription ID. To get the subscription ID, open **Subscriptions** in Azure portal and find your subscription. Then, copy the **Subscription ID**. |
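The application ID and tenant ID copied above are what GitHub Actions presents when it exchanges its OIDC token for an Azure AD token. As a sketch of the trust itself, the federated credential can also be created with the Azure CLI; the app ID, organization, repository, branch, and credential name below are hypothetical placeholders, while the issuer and audience values are the documented ones for GitHub Actions.

```shell
# Sketch only: APP_ID, ORG, REPO, and BRANCH are hypothetical placeholders.
APP_ID="00000000-0000-0000-0000-000000000000"
ORG="contoso"
REPO="ci-demo"
BRANCH="main"

# Subject format for a branch-scoped GitHub Actions federated credential.
SUBJECT="repo:${ORG}/${REPO}:ref:refs/heads/${BRANCH}"

# Create the federated credential so the workflow can sign in without a secret.
az ad app federated-credential create --id "$APP_ID" --parameters "{
  \"name\": \"github-main\",
  \"issuer\": \"https://token.actions.githubusercontent.com\",
  \"subject\": \"$SUBJECT\",
  \"audiences\": [\"api://AzureADTokenExchange\"]
}"
```

The workflow then uses `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID` (the values copied in the steps above) with no client secret.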
ai-services | Anomaly Detector Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/Anomaly-Detector/anomaly-detector-container-howto.md | In this article, you learned concepts and workflow for downloading, installing, * You must specify billing information when instantiating a container. > [!IMPORTANT]-> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft. +> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g., the time series data that is being analyzed) to Microsoft. ## Next steps |
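The billing requirement above surfaces as three mandatory arguments on `docker run`: the container exits unless `Eula`, `Billing`, and `ApiKey` are supplied, because containers meter usage against Azure. A minimal sketch for the Anomaly Detector container, where the endpoint URI and key are placeholders you must replace with values from your own resource in the Azure portal:

```shell
# Sketch only: substitute the endpoint URI and key from your own
# Azure AI services resource (Keys and Endpoint page in the Azure portal).
ENDPOINT_URI="https://<your-resource-name>.cognitiveservices.azure.com/"
API_KEY="<your-resource-key>"

# Run the Anomaly Detector container; Eula, Billing, and ApiKey are required
# so the container can report billing metrics (no customer data is sent).
docker run --rm -it -p 5000:5000 --memory 4g --cpus 1 \
  mcr.microsoft.com/azure-cognitive-services/decision/anomaly-detector:latest \
  Eula=accept \
  Billing="$ENDPOINT_URI" \
  ApiKey="$API_KEY"
```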
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md | These custom roles only apply to authoring (Language Understanding Authoring) an > * If you are assigned as a *Contributor* on Azure, your role will be shown as *Owner* in LUIS portal. -### Cognitive Services LUIS reader +### Cognitive Services LUIS Reader A user that should only be validating and reviewing LUIS applications, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application's assets (utterances, intents, entities) to notify the app developers of any changes that need to be made, but do not have direct access to make them. A user that should only be validating and reviewing LUIS applications, typically :::column-end::: :::row-end::: -### Cognitive Services LUIS writer +### Cognitive Services LUIS Writer A user that is responsible for building and modifying LUIS application, as a collaborator in a larger team. The collaborator can modify the LUIS application in any way, train those changes, and validate/test those changes in the portal. However, this user wouldn't have access to deploying this application to the runtime, as they may accidentally reflect their changes in a production environment. They also wouldn't be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in a production environment. They may also create new applications under this resource, but with the restrictions mentioned. 
A user that is responsible for building and modifying LUIS application, as a col :::column-end::: :::row-end::: -### Cognitive Services LUIS owner +### Cognitive Services LUIS Owner > [!NOTE] > * If you are assigned as an *Owner* and *LUIS Owner* you will be shown as *LUIS Owner* in LUIS portal. |
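The built-in LUIS roles described above are granted like any other Azure role assignment. A hedged sketch with the Azure CLI, where the assignee, subscription ID, resource group, and account name are hypothetical placeholders; the role name *Cognitive Services LUIS Reader* is one of the built-in roles covered in the article:

```shell
# Sketch only: the assignee, subscription, and resource names are hypothetical.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
SCOPE="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/myresourcegroup/providers/Microsoft.CognitiveServices/accounts/myluisauthoring"

# Grant a tester read-only review access to the LUIS authoring resource.
az role assignment create \
  --assignee "tester@contoso.com" \
  --role "Cognitive Services LUIS Reader" \
  --scope "$SCOPE"
```

Swap the role name for *Cognitive Services LUIS Writer* or *Cognitive Services LUIS Owner* to grant the other roles the article describes.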
ai-services | Cognitive Services Container Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md | Title: Use Azure AI services containers on-premises + Title: Use Azure AI containers on-premises description: Learn how to use Docker containers to use Azure AI services on-premises. keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service. -# What are Azure AI services containers? +# What are Azure AI containers? Azure AI services provides several [Docker containers](https://www.docker.com/what-container) that let you use the same APIs that are available in Azure, on-premises. Using these containers gives you the flexibility to bring Azure AI services closer to your data for compliance, security or other operational reasons. Container support is currently available for a subset of Azure AI services. Containerization is an approach to software distribution in which an application ## Containers in Azure AI services -Azure AI services containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure AI services. You can find instructions and image locations in the tables below. +Azure AI containers provide the following set of Docker containers, each of which contains a subset of functionality from services in Azure AI services. You can find instructions and image locations in the tables below. > [!NOTE] > See [Install and run Document Intelligence containers](document-intelligence/containers/install-run.md) for **Azure AI Document Intelligence** container instructions and image locations. 
Additionally, some containers are supported in the Azure AI services [multi-serv ## Prerequisites -You must satisfy the following prerequisites before using Azure AI services containers: +You must satisfy the following prerequisites before using Azure AI containers: **Docker Engine**: You must have Docker Engine installed locally. Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Linux](https://docs.docker.com/engine/installation/#supported-platforms), and [Windows](https://docs.docker.com/docker-for-windows/). On Windows, Docker must be configured to support Linux containers. Docker containers can also be deployed directly to [Azure Kubernetes Service](../aks/index.yml) or [Azure Container Instances](../container-instances/index.yml). |
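A quick way to confirm the Docker Engine prerequisite above, and that a Windows host is in Linux-container mode, is to query the engine's OS type. This is a sketch; it assumes Docker is on the `PATH`.

```shell
# Print the operating system type the Docker Engine is serving containers for.
docker info --format 'OSType={{.OSType}}'
# The output should read OSType=linux; on Windows, switch Docker Desktop
# to Linux containers if it reports windows instead.
```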
ai-services | Cognitive Services Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md | -Azure AI services provides a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications requesting data over the specified set of networks can access the account. You can limit access to your resources with request filtering. Allowing only requests originating from specified IP addresses, IP ranges or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md). +Azure AI services provide a layered security model. This model enables you to secure your Azure AI services accounts to a specific subset of networks. When network rules are configured, only applications that request data over the specified set of networks can access the account. You can limit access to your resources with *request filtering*, which allows requests that originate only from specified IP addresses, IP ranges, or from a list of subnets in [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md). An application that accesses an Azure AI services resource when network rules are in effect requires authorization. Authorization is supported with [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) credentials or with a valid API key. > [!IMPORTANT]-> Turning on firewall rules for your Azure AI services account blocks incoming requests for data by default. 
To allow requests through, one of the following conditions needs to be met: >-> * The request should originate from a service operating within an Azure Virtual Network (VNet) on the allowed subnet list of the target Azure AI services account. The endpoint in requests originated from VNet needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account. -> * Or the request should originate from an allowed list of IP addresses. +> - The request originates from a service that operates within an Azure Virtual Network on the allowed subnet list of the target Azure AI services account. The endpoint request that originated from the virtual network needs to be set as the [custom subdomain](cognitive-services-custom-subdomains.md) of your Azure AI services account. +> - The request originates from an allowed list of IP addresses. >-> Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on. +> Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services. [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Scenarios -To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks (including internet traffic) by default. Then, you should configure rules that grant access to traffic from specific VNets. This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges, enabling connections from specific internet or on-premises clients. +To secure your Azure AI services resource, you should first configure a rule to deny access to traffic from all networks, including internet traffic, by default. Then, configure rules that grant access to traffic from specific virtual networks. 
This configuration enables you to build a secure network boundary for your applications. You can also configure rules to grant access to traffic from select public internet IP address ranges and enable connections from specific internet or on-premises clients. -Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. Once network rules are applied, they're enforced for all requests. +Network rules are enforced on all network protocols to Azure AI services, including REST and WebSocket. To access data by using tools such as the Azure test consoles, explicit network rules must be configured. You can apply network rules to existing Azure AI services resources, or when you create new Azure AI services resources. After network rules are applied, they're enforced for all requests. ## Supported regions and service offerings -Virtual networks (VNETs) are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services supports service tags for network rules configuration. The services listed below are included in the **CognitiveServicesManagement** service tag. +Virtual networks are supported in [regions where Azure AI services are available](https://azure.microsoft.com/global-infrastructure/services/). Azure AI services support service tags for network rules configuration. The services listed here are included in the `CognitiveServicesManagement` service tag. 
> [!div class="checklist"]-> * Anomaly Detector -> * Azure OpenAI -> * Azure AI Vision -> * Content Moderator -> * Custom Vision -> * Face -> * Language Understanding (LUIS) -> * Personalizer -> * Speech service -> * Language service -> * QnA Maker -> * Translator Text -+> - Anomaly Detector +> - Azure OpenAI +> - Content Moderator +> - Custom Vision +> - Face +> - Language Understanding (LUIS) +> - Personalizer +> - Speech service +> - Language +> - QnA Maker +> - Translator > [!NOTE]-> If you're using, Azure OpenAI, LUIS, Speech Services, or Language services, the **CognitiveServicesManagement** tag only enables you use the service using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal , Speech Studio or Language Studio from a virtual network, you will need to use the following tags: +> If you use Azure OpenAI, LUIS, Speech Services, or Language services, the `CognitiveServicesManagement` tag only enables you to use the service by using the SDK or REST API. To access and use Azure OpenAI Studio, LUIS portal, Speech Studio, or Language Studio from a virtual network, you need to use the following tags: >-> * **AzureActiveDirectory** -> * **AzureFrontDoor.Frontend** -> * **AzureResourceManager** -> * **CognitiveServicesManagement** -> * **CognitiveServicesFrontEnd** -+> - `AzureActiveDirectory` +> - `AzureFrontDoor.Frontend` +> - `AzureResourceManager` +> - `CognitiveServicesManagement` +> - `CognitiveServicesFrontEnd` ## Change the default network access rule By default, Azure AI services resources accept connections from clients on any network. To limit access to selected networks, you must first change the default action. > [!WARNING]-> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to **deny** blocks all access to the data unless specific network rules that **grant** access are also applied. 
Be sure to grant access to any allowed networks using network rules before you change the default rule to deny access. If you are allow listing IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network. +> Making changes to network rules can impact your applications' ability to connect to Azure AI services. Setting the default network rule to *deny* blocks all access to the data unless specific network rules that *grant* access are also applied. +> +> Before you change the default rule to deny access, be sure to grant access to any allowed networks by using network rules. If you allow listing for the IP addresses for your on-premises network, be sure to add all possible outgoing public IP addresses from your on-premises network. -### Managing default network access rules +### Manage default network access rules You can manage default network access rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. You can manage default network access rules for Azure AI services resources thro 1. Go to the Azure AI services resource you want to secure. -1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**. +1. Select **Resource Management** to expand it, then select **Networking**. - ![Virtual network option](media/vnet/virtual-network-blade.png) + :::image type="content" source="media/vnet/virtual-network-blade.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected." lightbox="media/vnet/virtual-network-blade.png"::: -1. To deny access by default, choose to allow access from **Selected networks**. With the **Selected networks** setting alone, unaccompanied by configured **Virtual networks** or **Address ranges** - all access is effectively denied. When all access is denied, requests attempting to consume the Azure AI services resource aren't permitted. 
The Azure portal, Azure PowerShell or, Azure CLI can still be used to configure the Azure AI services resource. -1. To allow traffic from all networks, choose to allow access from **All networks**. +1. To deny access by default, under **Firewalls and virtual networks**, select **Selected Networks and Private Endpoints**. - ![Virtual networks deny](media/vnet/virtual-network-deny.png) + With this setting alone, unaccompanied by configured virtual networks or address ranges, all access is effectively denied. When all access is denied, requests that attempt to consume the Azure AI services resource aren't permitted. The Azure portal, Azure PowerShell, or the Azure CLI can still be used to configure the Azure AI services resource. ++1. To allow traffic from all networks, select **All networks**. ++ :::image type="content" source="media/vnet/virtual-network-deny.png" alt-text="Screenshot shows the Networking page with All networks selected." lightbox="media/vnet/virtual-network-deny.png"::: 1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell) -1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**. +1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**. 1. Display the status of the default rule for the Azure AI services resource. - ```azurepowershell-interactive - $parameters = @{ - "ResourceGroupName"= "myresourcegroup" - "Name"= "myaccount" -} - (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction - ``` + ```azurepowershell-interactive + $parameters = @{ + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + } + (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).DefaultAction + ``` -1. Set the default rule to deny network access by default. 
+ You can get values for your resource group `myresourcegroup` and the name of your Azure services resource `myaccount` from the Azure portal. ++1. Set the default rule to deny network access. ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -DefaultAction Deny + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "DefaultAction" = "Deny" } Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` -1. Set the default rule to allow network access by default. +1. Set the default rule to allow network access. ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -DefaultAction Allow + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "DefaultAction" = "Allow" } Update-AzCognitiveServicesAccountNetworkRuleSet @parameters ``` # [Azure CLI](#tab/azure-cli) -1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**. 1. Display the status of the default rule for the Azure AI services resource. ```azurecli-interactive az cognitiveservices account show \- -g "myresourcegroup" -n "myaccount" \ - --query networkRuleSet.defaultAction + --resource-group "myresourcegroup" --name "myaccount" \ + --query properties.networkAcls.defaultAction ``` +1. Get the resource ID for use in the later steps. ++ ```azurecli-interactive + resourceId=$(az cognitiveservices account show + --resource-group "myresourcegroup" \ + --name "myaccount" --query id --output tsv) + ``` + 1. Set the default rule to deny network access by default. 
```azurecli-interactive az resource update \- --ids {resourceId} \ + --ids $resourceId \ --set properties.networkAcls="{'defaultAction':'Deny'}" ``` You can manage default network access rules for Azure AI services resources thro ```azurecli-interactive az resource update \- --ids {resourceId} \ + --ids $resourceId \ --set properties.networkAcls="{'defaultAction':'Allow'}" ``` You can manage default network access rules for Azure AI services resources thro ## Grant access from a virtual network -You can configure Azure AI services resources to allow access only from specific subnets. The allowed subnets may belong to a VNet in the same subscription, or in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant. +You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Azure AD tenant. ++Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI services service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). -Enable a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) for Azure AI services within the VNet. The service endpoint routes traffic from the VNet through an optimal path to the Azure AI services service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource that allow requests to be received from specific subnets in a VNet. 
Clients granted access via these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data. +The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the Azure AI services resource to allow requests from specific subnets in a virtual network. Clients granted access by these network rules must continue to meet the authorization requirements of the Azure AI services resource to access the data. -Each Azure AI services resource supports up to 100 virtual network rules, which may be combined with [IP network rules](#grant-access-from-an-internet-ip-range). +Each Azure AI services resource supports up to 100 virtual network rules, which can be combined with IP network rules. For more information, see [Grant access from an internet IP range](#grant-access-from-an-internet-ip-range) later in this article. -### Required permissions +### Set required permissions -To apply a virtual network rule to an Azure AI services resource, the user must have the appropriate permissions for the subnets being added. The required permission is the default *Contributor* role, or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions. +To apply a virtual network rule to an Azure AI services resource, you need the appropriate permissions for the subnets to add. The required permission is the default *Contributor* role or the *Cognitive Services Contributor* role. Required permissions can also be added to custom role definitions. -Azure AI services resource and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. +The Azure AI services resource and the virtual networks that are granted access might be in different subscriptions, including subscriptions that are part of a different Azure AD tenant. 
> [!NOTE]-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. Such rules cannot be configured through the Azure portal, though they may be viewed in the portal. +> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure AD tenant are currently supported only through PowerShell, the Azure CLI, and the REST APIs. You can view these rules in the Azure portal, but you can't configure them. -### Managing virtual network rules +### Configure virtual network rules You can manage virtual network rules for Azure AI services resources through the Azure portal, PowerShell, or the Azure CLI. # [Azure portal](#tab/portal) +To grant access to a virtual network with an existing network rule: + 1. Go to the Azure AI services resource you want to secure. -1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**. +1. Select **Resource Management** to expand it, then select **Networking**. -1. Check that you've selected to allow access from **Selected networks**. +1. Confirm that you selected **Selected Networks and Private Endpoints**. -1. To grant access to a virtual network with an existing network rule, under **Virtual networks**, select **Add existing virtual network**. +1. Under **Allow access from**, select **Add existing virtual network**. - ![Add existing vNet](media/vnet/virtual-network-add-existing.png) + :::image type="content" source="media/vnet/virtual-network-add-existing.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add existing virtual network highlighted." lightbox="media/vnet/virtual-network-add-existing.png"::: 1. Select the **Virtual networks** and **Subnets** options, and then select **Enable**. 
- ![Add existing vNet details](media/vnet/virtual-network-add-existing-details.png) + :::image type="content" source="media/vnet/virtual-network-add-existing-details.png" alt-text="Screenshot shows the Add networks dialog box where you can enter a virtual network and subnet."::: -1. To create a new virtual network and grant it access, select **Add new virtual network**. + > [!NOTE] + > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. + > + > Currently, only virtual networks that belong to the same Azure AD tenant are available for selection during rule creation. To grant access to a subnet in a virtual network that belongs to another tenant, use PowerShell, the Azure CLI, or the REST APIs. - ![Add new vNet](media/vnet/virtual-network-add-new.png) +1. Select **Save** to apply your changes. ++To create a new virtual network and grant it access: ++1. On the same page as the previous procedure, select **Add new virtual network**. ++ :::image type="content" source="media/vnet/virtual-network-add-new.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and Add new virtual network highlighted." lightbox="media/vnet/virtual-network-add-new.png"::: 1. Provide the information necessary to create the new virtual network, and then select **Create**. - ![Create vNet](media/vnet/virtual-network-create.png) + :::image type="content" source="media/vnet/virtual-network-create.png" alt-text="Screenshot shows the Create virtual network dialog box."::: - > [!NOTE] - > If a service endpoint for Azure AI services wasn't previously configured for the selected virtual network and subnets, you can configure it as part of this operation. - > - > Presently, only virtual networks belonging to the same Azure Active Directory tenant are shown for selection during rule creation. 
To grant access to a subnet in a virtual network belonging to another tenant, please use PowerShell, CLI or REST APIs. +1. Select **Save** to apply your changes. -1. To remove a virtual network or subnet rule, select **...** to open the context menu for the virtual network or subnet, and select **Remove**. +To remove a virtual network or subnet rule: - ![Remove vNet](media/vnet/virtual-network-remove.png) +1. On the same page as the previous procedures, select **...(More options)** to open the context menu for the virtual network or subnet, and select **Remove**. ++ :::image type="content" source="media/vnet/virtual-network-remove.png" alt-text="Screenshot shows the option to remove a virtual network." lightbox="media/vnet/virtual-network-remove.png"::: 1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell) -1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**. +1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**. -1. List virtual network rules. +1. List the configured virtual network rules. ```azurepowershell-interactive- $parameters = @{ - "ResourceGroupName"= "myresourcegroup" - "Name"= "myaccount" -} + $parameters = @{ + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + } (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).VirtualNetworkRules ``` -1. Enable service endpoint for Azure AI services on an existing virtual network and subnet. +1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet. 
```azurepowershell-interactive Get-AzVirtualNetwork -ResourceGroupName "myresourcegroup" ` -Name "myvnet" | Set-AzVirtualNetworkSubnetConfig -Name "mysubnet" `- -AddressPrefix "10.0.0.0/24" ` + -AddressPrefix "CIDR" ` -ServiceEndpoint "Microsoft.CognitiveServices" | Set-AzVirtualNetwork ``` You can manage virtual network rules for Azure AI services resources through the ```azurepowershell-interactive $subParameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myvnet" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myvnet" } $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" You can manage virtual network rules for Azure AI services resources through the ``` > [!TIP]- > To add a network rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified **VirtualNetworkResourceId** parameter in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name". + > To add a network rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified `VirtualNetworkResourceId` parameter in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`. 1. Remove a network rule for a virtual network and subnet. ```azurepowershell-interactive $subParameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myvnet" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myvnet" } $subnet = Get-AzVirtualNetwork @subParameters | Get-AzVirtualNetworkSubnetConfig -Name "mysubnet" $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -VirtualNetworkResourceId $subnet.Id + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "VirtualNetworkResourceId" = $subnet.Id } Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli) -1. 
Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**. -1. List virtual network rules. +1. List the configured virtual network rules. ```azurecli-interactive az cognitiveservices account network-rule list \- -g "myresourcegroup" -n "myaccount" \ + --resource-group "myresourcegroup" --name "myaccount" \ --query virtualNetworkRules ``` -1. Enable service endpoint for Azure AI services on an existing virtual network and subnet. +1. Enable a service endpoint for Azure AI services on an existing virtual network and subnet. ```azurecli-interactive- az network vnet subnet update -g "myresourcegroup" -n "mysubnet" \ + az network vnet subnet update --resource-group "myresourcegroup" --name "mysubnet" \ --vnet-name "myvnet" --service-endpoints "Microsoft.CognitiveServices" ``` 1. Add a network rule for a virtual network and subnet. ```azurecli-interactive- $subnetid=(az network vnet subnet show \ - -g "myresourcegroup" -n "mysubnet" --vnet-name "myvnet" \ + subnetid=$(az network vnet subnet show \ + --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \ --query id --output tsv) # Use the captured subnet identifier as an argument to the network rule addition az cognitiveservices account network-rule add \- -g "myresourcegroup" -n "myaccount" \ + --resource-group "myresourcegroup" --name "myaccount" \ --subnet $subnetid ``` > [!TIP]- > To add a rule for a subnet in a VNet belonging to another Azure AD tenant, use a fully-qualified subnet ID in the form "/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name". 
+ > To add a rule for a subnet in a virtual network that belongs to another Azure AD tenant, use a fully-qualified subnet ID in the form `/subscriptions/subscription-ID/resourceGroups/resourceGroup-Name/providers/Microsoft.Network/virtualNetworks/vNet-name/subnets/subnet-name`. > - > You can use the **subscription** parameter to retrieve the subnet ID for a VNet belonging to another Azure AD tenant. + > You can use the `--subscription` parameter to retrieve the subnet ID for a virtual network that belongs to another Azure AD tenant. 1. Remove a network rule for a virtual network and subnet. ```azurecli-interactive $subnetid=(az network vnet subnet show \- --resource-group "myresourcegroup" --name "mysubnet" --vnet-name "myvnet" \ --query id --output tsv) # Use the captured subnet identifier as an argument to the network rule removal az cognitiveservices account network-rule remove \- -g "myresourcegroup" -n "myaccount" \ + --resource-group "myresourcegroup" --name "myaccount" \ --subnet $subnetid ``` *** > [!IMPORTANT]-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect. +> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect. ## Grant access from an internet IP range -You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, effectively blocking general internet traffic. +You can configure Azure AI services resources to allow access from specific public internet IP address ranges. This configuration grants access to specific services and on-premises networks, which effectively blocks general internet traffic.
-Provide allowed internet address ranges using [CIDR notation](https://tools.ietf.org/html/rfc4632) in the form `16.17.18.0/24` or as individual IP addresses like `16.17.18.19`. +You can specify the allowed internet address ranges by using [CIDR format (RFC 4632)](https://tools.ietf.org/html/rfc4632) in the form `192.168.0.0/16` or as individual IP addresses like `192.168.0.1`. > [!Tip]- > Small address ranges using "/31" or "/32" prefix sizes are not supported. These ranges should be configured using individual IP address rules. + > Small address ranges that use `/31` or `/32` prefix sizes aren't supported. Configure these ranges by using individual IP address rules. ++IP network rules are only allowed for *public internet* IP addresses. IP address ranges reserved for private networks aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`. For more information, see [Private Address Space (RFC 1918)](https://tools.ietf.org/html/rfc1918#section-3). ++Currently, only IPv4 addresses are supported. Each Azure AI services resource supports up to 100 IP network rules, which can be combined with [virtual network rules](#grant-access-from-a-virtual-network). -IP network rules are only allowed for **public internet** IP addresses. IP address ranges reserved for private networks (as defined in [RFC 1918](https://tools.ietf.org/html/rfc1918#section-3)) aren't allowed in IP rules. Private networks include addresses that start with `10.*`, `172.16.*` - `172.31.*`, and `192.168.*`. +### Configure access from on-premises networks -Only IPV4 addresses are supported at this time. Each Azure AI services resource supports up to 100 IP network rules, which may be combined with [Virtual network rules](#grant-access-from-a-virtual-network). +To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, identify the internet-facing IP addresses used by your network. 
Contact your network administrator for help. -### Configuring access from on-premises networks +If you use Azure ExpressRoute on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For more information, see [What is Azure ExpressRoute](../expressroute/expressroute-introduction.md). -To grant access from your on-premises networks to your Azure AI services resource with an IP network rule, you must identify the internet facing IP addresses used by your network. Contact your network administrator for help. +For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or supplied by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. -If you're using [ExpressRoute](../expressroute/expressroute-introduction.md) on-premises for public peering or Microsoft peering, you need to identify the NAT IP addresses. For public peering, each ExpressRoute circuit by default uses two NAT IP addresses. Each is applied to Azure service traffic when the traffic enters the Microsoft Azure network backbone. For Microsoft peering, the NAT IP addresses that are used are either customer provided or are provided by the service provider. To allow access to your service resources, you must allow these public IP addresses in the resource IP firewall setting. To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) via the Azure portal. 
Learn more about [NAT for ExpressRoute public and Microsoft peering.](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering) +To find your public peering ExpressRoute circuit IP addresses, [open a support ticket with ExpressRoute](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) by using the Azure portal. For more information, see [NAT requirements for Azure public peering](../expressroute/expressroute-nat.md#nat-requirements-for-azure-public-peering). ### Managing IP network rules You can manage IP network rules for Azure AI services resources through the Azur 1. Go to the Azure AI services resource you want to secure. -1. Select the **RESOURCE MANAGEMENT** menu called **Virtual network**. +1. Select **Resource Management** to expand it, then select **Networking**. -1. Check that you've selected to allow access from **Selected networks**. +1. Confirm that you selected **Selected Networks and Private Endpoints**. -1. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)) under **Firewall** > **Address Range**. Only valid public IP (non-reserved) addresses are accepted. +1. Under **Firewalls and virtual networks**, locate the **Address range** option. To grant access to an internet IP range, enter the IP address or address range (in [CIDR format](https://tools.ietf.org/html/rfc4632)). Only valid public IP (nonreserved) addresses are accepted. - ![Add IP range](media/vnet/virtual-network-add-ip-range.png) + :::image type="content" source="media/vnet/virtual-network-add-ip-range.png" alt-text="Screenshot shows the Networking page with Selected Networks and Private Endpoints selected and the Address range highlighted." lightbox="media/vnet/virtual-network-add-ip-range.png"::: -1. To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range.
-- ![Delete IP range](media/vnet/virtual-network-delete-ip-range.png) + To remove an IP network rule, select the trash can <span class="docon docon-delete x-hidden-focus"></span> icon next to the address range. 1. Select **Save** to apply your changes. # [PowerShell](#tab/powershell) -1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Try it**. +1. Install the [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps), or select **Open Cloudshell**. -1. List IP network rules. +1. List the configured IP network rules. - ```azurepowershell-interactive - $parameters = @{ - "ResourceGroupName"= "myresourcegroup" - "Name"= "myaccount" -} + ```azurepowershell-interactive + $parameters = @{ + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + } (Get-AzCognitiveServicesAccountNetworkRuleSet @parameters).IPRules ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.19" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "ipaddress" } Add-AzCognitiveServicesAccountNetworkRule @parameters ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.0/24" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "CIDR" } Add-AzCognitiveServicesAccountNetworkRule @parameters ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.19" + "ResourceGroupName" = 
"myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "ipaddress" } Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` You can manage IP network rules for Azure AI services resources through the Azur ```azurepowershell-interactive $parameters = @{- -ResourceGroupName "myresourcegroup" - -Name "myaccount" - -IPAddressOrRange "16.17.18.0/24" + "ResourceGroupName" = "myresourcegroup" + "Name" = "myaccount" + "IPAddressOrRange" = "CIDR" } Remove-AzCognitiveServicesAccountNetworkRule @parameters ``` # [Azure CLI](#tab/azure-cli) -1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Try it**. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) and [sign in](/cli/azure/authenticate-azure-cli), or select **Open Cloudshell**. -1. List IP network rules. +1. List the configured IP network rules. ```azurecli-interactive az cognitiveservices account network-rule list \- -g "myresourcegroup" -n "myaccount" --query ipRules + --resource-group "myresourcegroup" --name "myaccount" --query ipRules ``` 1. Add a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule add \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.19" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "ipaddress" ``` 1. Add a network rule for an IP address range. ```azurecli-interactive az cognitiveservices account network-rule add \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.0/24" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "CIDR" ``` 1. Remove a network rule for an individual IP address. ```azurecli-interactive az cognitiveservices account network-rule remove \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.19" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "ipaddress" ``` 1. Remove a network rule for an IP address range. 
```azurecli-interactive az cognitiveservices account network-rule remove \- -g "myresourcegroup" -n "myaccount" \ - --ip-address "16.17.18.0/24" + --resource-group "myresourcegroup" --name "myaccount" \ + --ip-address "CIDR" ``` *** > [!IMPORTANT]-> Be sure to [set the default rule](#change-the-default-network-access-rule) to **deny**, or network rules have no effect. +> Be sure to [set the default rule](#change-the-default-network-access-rule) to *deny*, or network rules have no effect. ## Use private endpoints -You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network (VNet) to securely access data over a [Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the VNet address space for your Azure AI services resource. Network traffic between the clients on the VNet and the resource traverses the VNet and a private link on the Microsoft backbone network, eliminating exposure from the public internet. +You can use [private endpoints](../private-link/private-endpoint-overview.md) for your Azure AI services resources to allow clients on a virtual network to securely access data over [Azure Private Link](../private-link/private-link-overview.md). The private endpoint uses an IP address from the virtual network address space for your Azure AI services resource. Network traffic between the clients on the virtual network and the resource traverses the virtual network and a private link on the Microsoft Azure backbone network, which eliminates exposure from the public internet. Private endpoints for Azure AI services resources let you: -* Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service. -* Increase security for the VNet, by enabling you to block exfiltration of data from the VNet. 
-* Securely connect to Azure AI services resources from on-premises networks that connect to the VNet using [VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering. +- Secure your Azure AI services resource by configuring the firewall to block all connections on the public endpoint for the Azure AI services service. +- Increase security for the virtual network, by enabling you to block exfiltration of data from the virtual network. +- Securely connect to Azure AI services resources from on-premises networks that connect to the virtual network by using [Azure VPN Gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoutes](../expressroute/expressroute-locations.md) with private-peering. -### Conceptual overview +### Understand private endpoints -A private endpoint is a special network interface for an Azure resource in your [VNet](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your VNet and your resource. The private endpoint is assigned an IP address from the IP address range of your VNet. The connection between the private endpoint and the Azure AI services service uses a secure private link. +A private endpoint is a special network interface for an Azure resource in your [virtual network](../virtual-network/virtual-networks-overview.md). Creating a private endpoint for your Azure AI services resource provides secure connectivity between clients in your virtual network and your resource. The private endpoint is assigned an IP address from the IP address range of your virtual network. The connection between the private endpoint and the Azure AI services service uses a secure private link. 
-Applications in the VNet can connect to the service over the private endpoint seamlessly, using the same connection strings and authorization mechanisms that they would use otherwise. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST. +Applications in the virtual network can connect to the service over the private endpoint seamlessly. Connections use the same connection strings and authorization mechanisms that they would use otherwise. The exception is Speech Services, which require a separate endpoint. For more information, see [Private endpoints with the Speech Services](#use-private-endpoints-with-the-speech-service) in this article. Private endpoints can be used with all protocols supported by the Azure AI services resource, including REST. -Private endpoints can be created in subnets that use [Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others. +Private endpoints can be created in subnets that use service endpoints. Clients in a subnet can connect to one Azure AI services resource using private endpoint, while using service endpoints to access others. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). -When you create a private endpoint for an Azure AI services resource in your VNet, a consent request is sent for approval to the Azure AI services resource owner. If the user requesting the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. 
+When you create a private endpoint for an Azure AI services resource in your virtual network, Azure sends a consent request for approval to the Azure AI services resource owner. If the user who requests the creation of the private endpoint is also an owner of the resource, this consent request is automatically approved. -Azure AI services resource owners can manage consent requests and the private endpoints, through the '*Private endpoints*' tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com). +Azure AI services resource owners can manage consent requests and the private endpoints through the **Private endpoint connection** tab for the Azure AI services resource in the [Azure portal](https://portal.azure.com). -### Private endpoints +### Specify private endpoints -When creating the private endpoint, you must specify the Azure AI services resource it connects to. For more information on creating a private endpoint, see: +When you create a private endpoint, specify the Azure AI services resource that it connects to. For more information on creating a private endpoint, see: -* [Create a private endpoint using the Private Link Center in the Azure portal](../private-link/create-private-endpoint-portal.md) -* [Create a private endpoint using Azure CLI](../private-link/create-private-endpoint-cli.md) -* [Create a private endpoint using Azure PowerShell](../private-link/create-private-endpoint-powershell.md) +- [Create a private endpoint by using the Azure portal](../private-link/create-private-endpoint-portal.md) +- [Create a private endpoint by using Azure PowerShell](../private-link/create-private-endpoint-powershell.md) +- [Create a private endpoint by using the Azure CLI](../private-link/create-private-endpoint-cli.md) -### Connecting to private endpoints +### Connect to private endpoints > [!NOTE]-> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. 
Refer to the [Azure services DNS zone configuration article](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration) for the correct zone and forwarder names. +> Azure OpenAI Service uses a different private DNS zone and public DNS zone forwarder than other Azure AI services. For the correct zone and forwarder names, see [Azure services DNS zone configuration](../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration). -Clients on a VNet using the private endpoint should use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech Services, which require a separate endpoint. See the section on [Private endpoints with the Speech Services](#private-endpoints-with-the-speech-services). We rely upon DNS resolution to automatically route the connections from the VNet to the Azure AI services resource over a private link. +Clients on a virtual network that use the private endpoint use the same connection string for the Azure AI services resource as clients connecting to the public endpoint. The exception is the Speech service, which requires a separate endpoint. For more information, see [Use private endpoints with the Speech service](#use-private-endpoints-with-the-speech-service) in this article. DNS resolution automatically routes the connections from the virtual network to the Azure AI services resource over a private link. -We create a [private DNS zone](../dns/private-dns-overview.md) attached to the VNet with the necessary updates for the private endpoints, by default. However, if you're using your own DNS server, you may need to make more changes to your DNS configuration. The section on [DNS changes](#dns-changes-for-private-endpoints) below describes the updates required for private endpoints. 
+By default, Azure creates a [private DNS zone](../dns/private-dns-overview.md) attached to the virtual network with the necessary updates for the private endpoints. If you use your own DNS server, you might need to make more changes to your DNS configuration. For updates that might be required for private endpoints, see [Apply DNS changes for private endpoints](#apply-dns-changes-for-private-endpoints) in this article. -### Private endpoints with the Speech Services +### Use private endpoints with the Speech service -See [Using Speech Services with private endpoints provided by Azure Private Link](Speech-Service/speech-services-private-link.md). +See [Use Speech service through a private endpoint](Speech-Service/speech-services-private-link.md). -### DNS changes for private endpoints +### Apply DNS changes for private endpoints -When you create a private endpoint, the DNS CNAME resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, we also create a [private DNS zone](../dns/private-dns-overview.md), corresponding to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. +When you create a private endpoint, the DNS `CNAME` resource record for the Azure AI services resource is updated to an alias in a subdomain with the prefix `privatelink`. By default, Azure also creates a private DNS zone that corresponds to the `privatelink` subdomain, with the DNS A resource records for the private endpoints. For more information, see [What is Azure Private DNS](../dns/private-dns-overview.md). -When you resolve the endpoint URL from outside the VNet with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When resolved from the VNet hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address. 
+When you resolve the endpoint URL from outside the virtual network with the private endpoint, it resolves to the public endpoint of the Azure AI services resource. When it's resolved from the virtual network hosting the private endpoint, the endpoint URL resolves to the private endpoint's IP address. -This approach enables access to the Azure AI services resource using the same connection string for clients in the VNet hosting the private endpoints and clients outside the VNet. +This approach enables access to the Azure AI services resource using the same connection string for clients in the virtual network that hosts the private endpoints and clients outside the virtual network. -If you're using a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the VNet. +If you use a custom DNS server on your network, clients must be able to resolve the fully qualified domain name (FQDN) for the Azure AI services resource endpoint to the private endpoint IP address. Configure your DNS server to delegate your private link subdomain to the private DNS zone for the virtual network. > [!TIP]-> When using a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the 'privatelink' subdomain to the private endpoint IP address. You can do this by delegating the 'privatelink' subdomain to the private DNS zone of the VNet, or configuring the DNS zone on your DNS server and adding the DNS A records. +> When you use a custom or on-premises DNS server, you should configure your DNS server to resolve the Azure AI services resource name in the `privatelink` subdomain to the private endpoint IP address. Delegate the `privatelink` subdomain to the private DNS zone of the virtual network. 
Alternatively, configure the DNS zone on your DNS server and add the DNS A records. -For more information on configuring your own DNS server to support private endpoints, see the following articles: +For more information on configuring your own DNS server to support private endpoints, see the following resources: -* [Name resolution for resources in Azure virtual networks](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) -* [DNS configuration for private endpoints](../private-link/private-endpoint-overview.md#dns-configuration) +- [Name resolution that uses your own DNS server](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) +- [DNS configuration](../private-link/private-endpoint-overview.md#dns-configuration) ### Pricing For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co ## Next steps -* Explore the various [Azure AI services](./what-are-ai-services.md) -* Learn more about [Azure Virtual Network Service Endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) +- Explore the various [Azure AI services](./what-are-ai-services.md) +- Learn more about [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md) |
ai-services | Storage Lab Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/Tutorials/storage-lab-tutorial.md | Navigate to the *Web.config* file at the root of the project. Add the following <add key="VisionEndpoint" value="VISION_ENDPOINT" /> ``` -Then in the Solution Explorer, right-click the project and use the **Manage NuGet Packages** command to install the package **Microsoft.Azure.CognitiveServices.Vision.ComputerVision**. This package contains the types needed to call the Azure AI Vision API. +In the Solution Explorer, right-click the project and select **Manage NuGet Packages**. In the package manager that opens, select **Browse**, check **Include prerelease**, and search for **Azure.AI.Vision.ImageAnalysis**. Select **Install**. ### Add metadata generation code Next, you'll add the code that actually uses the Azure AI Vision service to crea 1. Open the *HomeController.cs* file in the project's **Controllers** folder and add the following `using` statements at the top of the file: ```csharp- using Microsoft.Azure.CognitiveServices.Vision.ComputerVision; - using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models; + using Azure; // for AzureKeyCredential + using Azure.AI.Vision.Common; + using Azure.AI.Vision.ImageAnalysis; ``` 1. Then, go to the **Upload** method; this method converts and uploads images to blob storage. Add the following code immediately after the block that begins with `// Generate a thumbnail` (or at the end of your image-blob-creation process). This code takes the blob containing the image (`photo`), and uses Azure AI Vision to generate a description for that image. The Azure AI Vision API also generates a list of keywords that apply to the image. The generated description and keywords are stored in the blob's metadata so that they can be retrieved later on.
```csharp // Submit the image to the Azure AI Vision API- ComputerVisionClient vision = new ComputerVisionClient( - new ApiKeyServiceClientCredentials(ConfigurationManager.AppSettings["SubscriptionKey"]), - new System.Net.Http.DelegatingHandler[] { }); - vision.Endpoint = ConfigurationManager.AppSettings["VisionEndpoint"]; + var serviceOptions = new VisionServiceOptions( + Environment.GetEnvironmentVariable(ConfigurationManager.AppSettings["VisionEndpoint"]), + new AzureKeyCredential(ConfigurationManager.AppSettings["SubscriptionKey"])); - List<VisualFeatureTypes?> features = new List<VisualFeatureTypes?>() { VisualFeatureTypes.Description }; - var result = await vision.AnalyzeImageAsync(photo.Uri.ToString(), features); + var analysisOptions = new ImageAnalysisOptions() + { + Features = ImageAnalysisFeature.Caption | ImageAnalysisFeature.Tags, + Language = "en", + GenderNeutralCaption = true + }; ++ using var imageSource = VisionSource.FromUrl( + new Uri(photo.Uri.ToString())); ++ using var analyzer = new ImageAnalyzer(serviceOptions, imageSource, analysisOptions); + var result = analyzer.Analyze(); // Record the image description and tags in blob metadata- photo.Metadata.Add("Caption", result.Description.Captions[0].Text); + photo.Metadata.Add("Caption", result.Caption.ContentCaption.Content); - for (int i = 0; i < result.Description.Tags.Count; i++) + for (int i = 0; i < result.Tags.ContentTags.Count; i++) { string key = String.Format("Tag{0}", i);- photo.Metadata.Add(key, result.Description.Tags[i]); + photo.Metadata.Add(key, result.Tags.ContentTags[i]); } await photo.SetMetadataAsync(); |
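The metadata layout used in the C# snippet above (a `Caption` key plus numbered `Tag0`, `Tag1`, ... keys, since blob metadata is a flat string dictionary) can be sketched language-agnostically. The function names in this Python sketch are invented for illustration; only the key scheme comes from the tutorial:

```python
def build_photo_metadata(caption, tags):
    """Flatten a generated caption and tag list into the flat key/value
    pairs that blob metadata requires, using the Tag0, Tag1, ... scheme."""
    metadata = {"Caption": caption}
    for i, tag in enumerate(tags):
        metadata[f"Tag{i}"] = tag
    return metadata

def read_photo_tags(metadata):
    """Recover the ordered tag list from stored metadata by walking the
    numbered keys until one is missing."""
    tags = []
    i = 0
    while f"Tag{i}" in metadata:
        tags.append(metadata[f"Tag{i}"])
        i += 1
    return tags
```

Storing tags under predictable numbered keys keeps retrieval order stable, which matters because metadata dictionaries don't otherwise guarantee the order the analysis service returned.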
ai-services | Computer Vision How To Install Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-how-to-install-containers.md | If you run the container with an output [mount](./computer-vision-resource-conta ## Billing -The Azure AI services containers send billing information to Azure, using the corresponding resource on your Azure account. +The Azure AI containers send billing information to Azure, using the corresponding resource on your Azure account. [!INCLUDE [Container's Billing Settings](../../../includes/cognitive-services-containers-how-to-billing-info.md)] In this article, you learned concepts and workflow for downloading, installing, * You must specify billing information when instantiating a container. > [!IMPORTANT]-> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft. +> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (for example, the image or text that is being analyzed) to Microsoft. ## Next steps In this article, you learned concepts and workflow for downloading, installing, * Review the [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](https://westus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-2/operations/5d986960601faab4bf452005) for details about the methods supported by the container. 
* Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.-* Use more [Azure AI services containers](../cognitive-services-container-support.md) +* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Deploy Computer Vision On Premises | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/deploy-computer-vision-on-premises.md | replicaset.apps/read-6cbbb6678 3 3 3 3s For more details on installing applications with Helm in Azure Kubernetes Service (AKS), [visit here][installing-helm-apps-in-aks]. > [!div class="nextstepaction"]-> [Azure AI services containers][cog-svcs-containers] +> [Azure AI containers][cog-svcs-containers] <!-- LINKS - external --> [free-azure-account]: https://azure.microsoft.com/free |
ai-services | Identity Detect Faces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/identity-detect-faces.md | To find faces and get their locations in an image, call the [DetectWithUrlAsync] :::code language="csharp" source="~/cognitive-services-quickstart-code/dotnet/Face/sdk/detect.cs" id="basic1"::: -You can query the returned [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) objects for the rectangles that give the pixel coordinates of each face. If you set _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks. -+The service returns a [DetectedFace](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.detectedface) object, which you can query for different kinds of information, specified below. For information on how to parse the location and dimensions of the face, see [FaceRectangle](/dotnet/api/microsoft.azure.cognitiveservices.vision.face.models.facerectangle). Usually, this rectangle contains the eyes, eyebrows, nose, and mouth. The top of head, ears, and chin aren't necessarily included. To use the face rectangle to crop a complete head or get a mid-shot portrait, you should expand the rectangle in each direction. For information on how to parse the location and dimensions of the face, see [Fa This guide focuses on the specifics of the Detect call, such as what arguments you can pass and what you can do with the returned data. We recommend that you query for only the features you need. Each operation takes more time to complete. +### Get face ID ++If you set the parameter _returnFaceId_ to `true` (approved customers only), you can get the unique ID for each face, which you can use in later face recognition tasks. +++The optional _faceIdTimeToLive_ parameter specifies how long (in seconds) the face ID should be stored on the server. 
After this time expires, the face ID is removed. The default value is 86400 (24 hours). + ### Get face landmarks [Face landmarks](../concept-face-detection.md#face-landmarks) are a set of easy-to-find points on a face, such as the pupils or the tip of the nose. To get face landmark data, set the _detectionModel_ parameter to `DetectionModel.Detection01` and the _returnFaceLandmarks_ parameter to `true`. |
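The _returnFaceId_ and _faceIdTimeToLive_ settings described above are query parameters on the detect request. As a hedged sketch (the resource endpoint is a placeholder and the `build_detect_url` helper is illustrative, not part of the SDK), the request URL might be assembled like this:

```python
from urllib.parse import urlencode

def build_detect_url(endpoint, return_face_id=True, face_id_ttl=86400):
    """Assemble a Face - Detect URL with the returnFaceId and
    faceIdTimeToLive query parameters (TTL in seconds; the server
    discards the face ID after it expires, default 86400 = 24 hours)."""
    params = {
        "returnFaceId": str(return_face_id).lower(),
        "faceIdTimeToLive": face_id_ttl,
    }
    return f"{endpoint}/face/v1.0/detect?{urlencode(params)}"

# Placeholder endpoint; substitute your own resource endpoint.
url = build_detect_url("https://<resource>.cognitiveservices.azure.com", face_id_ttl=3600)
print(url)
```

The helper only builds the URL; sending the request (with your subscription key header and image payload) is unchanged from the article's quickstart code.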
ai-services | Use Large Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/use-large-scale.md | and [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/56 To better utilize the large-scale feature, we recommend the following strategies. -### Step 3.1: Customize time interval +### Step 3a: Customize time interval As is shown in `TrainLargeFaceList()`, there's a time interval in milliseconds to delay the infinite training status checking process. For LargeFaceList with more faces, using a larger interval reduces the call counts and cost. Customize the time interval according to the expected capacity of the LargeFaceList. The same strategy also applies to LargePersonGroup. For example, when you train a LargePersonGroup with 1 million persons, `timeIntervalInMilliseconds` might be 60,000, which is a 1-minute interval. -### Step 3.2: Small-scale buffer +### Step 3b: Small-scale buffer Persons or faces in a LargePersonGroup or a LargeFaceList are searchable only after being trained. In a dynamic scenario, new persons or faces are constantly added and must be immediately searchable, yet training might take longer than desired. An example workflow: 1. When the buffer collection size increases to a threshold or at a system idle time, create a new buffer collection. Trigger the Train operation on the master collection. 1. Delete the old buffer collection after the Train operation finishes on the master collection. -### Step 3.3: Standalone training +### Step 3c: Standalone training If a relatively long latency is acceptable, it isn't necessary to trigger the Train operation right after you add new data. Instead, the Train operation can be split from the main logic and triggered regularly. This strategy is suitable for dynamic scenarios with acceptable latency. It can be applied to static scenarios to further reduce the Train frequency. |
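The interval-tuning advice in Step 3a above amounts to a polling loop: a larger sleep between status checks means fewer calls while training a big LargeFaceList or LargePersonGroup. A minimal sketch, assuming a hypothetical `get_training_status` callable that stands in for the SDK/REST training-status check:

```python
import time

def wait_for_training(get_training_status, time_interval_ms=1000, max_checks=1000):
    """Poll a long-running Train operation, sleeping time_interval_ms
    between checks. For a LargePersonGroup with ~1 million persons the
    article suggests an interval around 60,000 ms (1 minute)."""
    for _ in range(max_checks):
        status = get_training_status()  # hypothetical stand-in for the status call
        if status in ("succeeded", "failed"):
            return status
        time.sleep(time_interval_ms / 1000.0)
    raise TimeoutError("training did not finish within the allotted checks")
```

The same loop applies unchanged to LargeFaceList training; only the status call behind `get_training_status` differs.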
ai-services | Read Container Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/read-container-migration-guide.md | Set the timer with `Queue:Azure:QueueVisibilityTimeoutInMilliseconds`, which set * Review [OCR overview](overview-ocr.md) to learn more about recognizing printed and handwritten text * Refer to the [Read API](//westus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fa) for details about the methods supported by the container. * Refer to [Frequently asked questions (FAQ)](FAQ.yml) to resolve issues related to Azure AI Vision functionality.-* Use more [Azure AI services containers](../cognitive-services-container-support.md) +* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Spatial Analysis Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-container.md | If you encounter issues when starting or running the container, see [Telemetry a The Spatial Analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of Spatial Analysis in public preview is currently free. -Azure AI services containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Azure AI services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft. +Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. You must always enable the containers to communicate billing information with the billing endpoint. Azure AI containers don't send customer data, such as the video or image that's being analyzed, to Microsoft. ## Summary |
ai-services | Vehicle Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/vehicle-analysis.md | -Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you'll learn how to use the capabilities of the spatial analysis container to deploy vehicle analysis operations. +Vehicle analysis is a set of capabilities that, when used with the Spatial Analysis container, enable you to analyze real-time streaming video to understand vehicle characteristics and placement. In this article, you learn how to use the capabilities of the spatial analysis container to deploy vehicle analysis operations. ## Prerequisites Vehicle analysis is a set of capabilities that, when used with the Spatial Analy ## Vehicle analysis operations -Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis will generate an output stream of JSON messages that are being sent to your instance of Azure IoT Hub. +Similar to Spatial Analysis, vehicle analysis enables the analysis of real-time streaming video from camera devices. For each camera device you configure, the operations for vehicle analysis generate an output stream of JSON messages that are sent to your instance of Azure IoT Hub. -The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the *.cpu* distinction). +The following operations for vehicle analysis are available in the current Spatial Analysis container. Vehicle analysis offers operations optimized for both GPU and CPU (CPU operations include the ".cpu" distinction). 
| Operation identifier | Description | | -- | - | The following table shows the parameters required by each of the vehicle analysi | Operation ID | The Operation Identifier from table above.| | enabled | Boolean: true or false| | VIDEO_URL| The RTSP URL for the camera device (for example: `rtsp://username:password@url`). Spatial Analysis supports H.264 encoded streams either through RTSP, HTTP, or MP4. |-| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This will be returned with the event JSON output.| +| VIDEO_SOURCE_ID | A friendly name for the camera device or video stream. This is returned with the event JSON output.| | VIDEO_IS_LIVE| True for camera devices; false for recorded videos.|-| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it is 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.| +| VIDEO_DECODE_GPU_INDEX| Index specifying which GPU will decode the video frame. By default it's 0. This should be the same as the `gpu_index` in other node configurations like `VICA_NODE_CONFIG`, `DETECTOR_NODE_CONFIG`.| | PARKING_REGIONS | JSON configuration for zone and line as outlined below. </br> PARKING_REGIONS must contain four points in normalized coordinates ([0, 1]) that define a convex region (the points follow a clockwise or counterclockwise order).| | EVENT_OUTPUT_MODE | Can be ON_INPUT_RATE or ON_CHANGE. ON_INPUT_RATE will generate an output on every single frame received (one FPS). ON_CHANGE will generate an output when something changes (number of vehicles or parking spot occupancy). |-| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX will use an overlap between the detected bounding box and a reference bounding box. PROJECTIONS will project the centroid point into the parking spot polygon drawn on the floor. 
This is only used for Parking Spot and can be suppressed.| +| PARKING_SPOT_METHOD | Can be BOX or PROJECTION. BOX uses an overlap between the detected bounding box and a reference bounding box. PROJECTION projects the centroid point into the parking spot polygon drawn on the floor. This is only used for Parking Spot and can be suppressed.| -This is an example of a valid `PARKING_REGIONS` configuration: +Here is an example of a valid `PARKING_REGIONS` configuration: ```json "{\"parking_slot1\": {\"type\": \"SingleSpot\", \"region\": [[0.20833333, 0.46203704], [0.3015625 , 0.66203704], [0.13229167, 0.7287037 ], [0.07395833, 0.51574074]]}}" This is an example of a valid `PARKING_REGIONS` configuration: ### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehiclecount-preview and cognitiveservices.vision.vehicleanalysis-vehiclecount.cpu-preview -This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation. +Here is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation. ```json { This is an example of a JSON input for the `PARKING_REGIONS` parameter that conf ### Zone configuration for cognitiveservices.vision.vehicleanalysis-vehicleinpolygon-preview and cognitiveservices.vision.vehicleanalysis-vehicleinpolygon.cpu-preview -This is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation. +Here is an example of a JSON input for the `PARKING_REGIONS` parameter that configures a zone. You may configure multiple zones for this operation. ```json { The JSON below demonstrates an example of the vehicle count operation graph outp | Attribute | Type | Description | ||||-| `VehicleType` | float | Detected vehicle types. 
Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" | -| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" | +| `VehicleType` | float | Detected vehicle types. Possible detections include VehicleType_Bicycle, VehicleType_Bus, VehicleType_Car, VehicleType_Motorcycle, VehicleType_Pickup_Truck, VehicleType_SUV, VehicleType_Truck, VehicleType_Van/Minivan, VehicleType_type_other | +| `VehicleColor` | float | Detected vehicle colors. Possible detections include VehicleColor_Black, VehicleColor_Blue, VehicleColor_Brown/Beige, VehicleColor_Green, VehicleColor_Grey, VehicleColor_Red, VehicleColor_Silver, VehicleColor_White, VehicleColor_Yellow/Gold, VehicleColor_color_other | | `confidence` | float| Algorithm confidence| | SourceInfo Field Name | Type| Description| The JSON below demonstrates an example of the vehicle in polygon operation graph | Attribute | Type | Description | ||||-| `VehicleType` | float | Detected vehicle types. Possible detections include "VehicleType_Bicycle", "VehicleType_Bus", "VehicleType_Car", "VehicleType_Motorcycle", "VehicleType_Pickup_Truck", "VehicleType_SUV", "VehicleType_Truck", "VehicleType_Van/Minivan", "VehicleType_type_other" | -| `VehicleColor` | float | Detected vehicle colors. Possible detections include "VehicleColor_Black", "VehicleColor_Blue", "VehicleColor_Brown/Beige", "VehicleColor_Green", "VehicleColor_Grey", "VehicleColor_Red", "VehicleColor_Silver", "VehicleColor_White", "VehicleColor_Yellow/Gold", "VehicleColor_color_other" | +| `VehicleType` | float | Detected vehicle types. 
Possible detections include VehicleType_Bicycle, VehicleType_Bus, VehicleType_Car, VehicleType_Motorcycle, VehicleType_Pickup_Truck, VehicleType_SUV, VehicleType_Truck, VehicleType_Van/Minivan, VehicleType_type_other | +| `VehicleColor` | float | Detected vehicle colors. Possible detections include VehicleColor_Black, VehicleColor_Blue, VehicleColor_Brown/Beige, VehicleColor_Green, VehicleColor_Grey, VehicleColor_Red, VehicleColor_Silver, VehicleColor_White, VehicleColor_Yellow/Gold, VehicleColor_color_other | | `confidence` | float| Algorithm confidence| | SourceInfo Field Name | Type| Description| The JSON below demonstrates an example of the vehicle in polygon operation graph ## Zone and line configuration for vehicle analysis -For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone which you're analyzing. +For guidelines on where to place your zones for vehicle analysis, you can refer to the [zone and line placement](spatial-analysis-zone-line-placement.md) guide for spatial analysis. Configuring zones for vehicle analysis can be more straightforward than zones for spatial analysis if the parking spaces are already defined in the zone that you're analyzing. ## Camera placement for vehicle analysis For guidelines on where and how to place your camera for vehicle analysis, refer The vehicle analysis container sends billing information to Azure, using a Vision resource on your Azure account. The use of vehicle analysis in public preview is currently free. -Azure AI services containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. 
Azure AI services containers don't send customer data, such as the video or image that's being analyzed, to Microsoft. +Azure AI containers aren't licensed to run without being connected to the metering / billing endpoint. You must enable the containers to communicate billing information with the billing endpoint at all times. Azure AI containers don't send customer data, such as the video or image that's being analyzed, to Microsoft. ## Next steps |
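The PARKING_SPOT_METHOD table earlier describes the PROJECTION option as projecting the vehicle's centroid into the parking-spot polygon drawn on the floor. A standard point-in-polygon (ray-casting) test illustrates the idea; this is only a sketch of the technique, not the container's actual implementation. The polygon below reuses the normalized coordinates from the `PARKING_REGIONS` example:

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: count edge crossings to the right of the point;
    an odd count means the (normalized) centroid lies inside the spot."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Spot polygon taken from the PARKING_REGIONS example above.
spot = [[0.20833333, 0.46203704], [0.3015625, 0.66203704],
        [0.13229167, 0.7287037], [0.07395833, 0.51574074]]
print(point_in_polygon((0.18, 0.6), spot))  # → True
```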
ai-services | Azure Container Instance Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-container-instance-recipe.md | Title: Azure Container Instance recipe -description: Learn how to deploy Azure AI services containers on Azure Container Instance +description: Learn how to deploy Azure AI containers on Azure Container Instance All variables in angle brackets, `<>`, need to be replaced with your own values. 1. Select **Execute** to send the request to your Container Instance. - You have successfully created and used Azure AI services containers in Azure Container Instance. + You have successfully created and used Azure AI containers in Azure Container Instance. # [CLI](#tab/cli) |
ai-services | Azure Kubernetes Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/azure-kubernetes-recipe.md | This website is equivalent to your own client-side application that makes reques The language detection container, in this specific procedure, is accessible to any external request. The container hasn't been changed in any way so the standard Azure AI services container-specific language detection API is available. -For this container, that API is a POST request for language detection. As with all Azure AI services containers, you can learn more about the container from its hosted Swagger information, `http://<external-IP>:5000/swagger/https://docsupdatetracker.net/index.html`. +For this container, that API is a POST request for language detection. As with all Azure AI containers, you can learn more about the container from its hosted Swagger information, `http://<external-IP>:5000/swagger/https://docsupdatetracker.net/index.html`. -Port 5000 is the default port used with the Azure AI services containers. +Port 5000 is the default port used with the Azure AI containers. ## Create Azure Container Registry service To deploy the container to the Azure Kubernetes Service, the container images ne ## Get website Docker image -1. The sample code used in this procedure is in the Azure AI services containers samples repository. Clone the repository to have a local copy of the sample. +1. The sample code used in this procedure is in the Azure AI containers samples repository. Clone the repository to have a local copy of the sample. ```console git clone https://github.com/Azure-Samples/cognitive-services-containers-samples This section uses the **kubectl** CLI to talk with the Azure Kubernetes Service. 1. Copy the following file and name it `language.yml`. 
The file has a `service` section and a `deployment` section each for the two container types, the `language-frontend` website container and the `language` detection container. - [!code-yml[Kubernetes orchestration file for the Azure AI services containers sample](~/samples-cogserv-containers/Kubernetes/language/language.yml "Kubernetes orchestration file for the Azure AI services containers sample")] + [!code-yml[Kubernetes orchestration file for the Azure AI containers sample](~/samples-cogserv-containers/Kubernetes/language/language.yml "Kubernetes orchestration file for the Azure AI containers sample")] 1. Change the language-frontend deployment lines of `language.yml` based on the following table to add your own container registry image names, client secret, and Language service settings. az group delete --name cogserv-container-rg ## Next steps -[Azure AI services containers](../cognitive-services-container-support.md) +[Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Container Reuse Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/container-reuse-recipe.md | -Use these container recipes to create Azure AI services containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started. +Use these container recipes to create Azure AI containers that can be reused. Containers can be built with some or all configuration settings so that they are _not_ needed when the container is started. Once you have this new layer of container (with settings), and you have tested it locally, you can store the container in a container registry. When the container starts, it will only need those settings that are not currently stored in the container. The private registry container provides configuration space for you to pass those settings in. |
ai-services | Disconnected Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/disconnected-containers.md | If you run the container with an output mount and logging enabled, the container ## Next steps -[Azure AI services containers overview](../cognitive-services-container-support.md) +[Azure AI containers overview](../cognitive-services-container-support.md) |
ai-services | Docker Compose Recipe | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/containers/docker-compose-recipe.md | Title: Use Docker Compose to deploy multiple containers -description: Learn how to deploy multiple Azure AI services containers. This article shows you how to orchestrate multiple Docker container images by using Docker Compose. +description: Learn how to deploy multiple Azure AI containers. This article shows you how to orchestrate multiple Docker container images by using Docker Compose. -This article shows you how to deploy multiple Azure AI services containers. Specifically, you'll learn how to use Docker Compose to orchestrate multiple Docker container images. +This article shows you how to deploy multiple Azure AI containers. Specifically, you'll learn how to use Docker Compose to orchestrate multiple Docker container images. > [Docker Compose](https://docs.docker.com/compose/) is a tool for defining and running multi-container Docker applications. In Compose, you use a YAML file to configure your application's services. Then, you create and start all the services from your configuration by running a single command. Open a browser on the host machine and go to **localhost** by using the specifie ## Next steps -[Azure AI services containers](../cognitive-services-container-support.md) +[Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Client Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-moderator/client-libraries.md | Title: 'Quickstart: Use the Content Moderator client library' -description: The Content Moderator API offers client libraries that makes it easy to integrate Content Moderator into your applications. +description: The Content Moderator API offers client libraries that make it easy to integrate Content Moderator into your applications. keywords: content moderator, Azure AI Content Moderator, online moderator, conte # Quickstart: Use the Content Moderator client library + ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)] |
ai-services | Changelog Release History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/changelog-release-history.md | This reference article provides a version-based description of Document Intellig [**Changelog/Release History**](https://github.com/Azure/azure-sdk-for-jav) -[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) +[**Package (MVN)**](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) [**ReadMe**](https://github.com/Azure/azure-sdk-for-jav) |
ai-services | Concept Add On Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md | monikerRange: 'doc-intel-3.1.0' # Document Intelligence add-on capabilities > [!NOTE] >-> Add-on capabilities for Document Intelligence Studio are available with the Read and Layout models for the `2023-07-31` (GA)release. +> Add-on capabilities for Document Intelligence Studio are available with the Read and Layout models starting with the `2023-02-28-preview` and later releases. +> +> Add-on capabilities are available within all models except for the [Business card model](concept-business-card.md). -Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are three add-on capabilities available for the `2023-07-31` (GA) release: +Document Intelligence now supports more sophisticated analysis capabilities. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. The following add-on capabilities are available for `2023-02-28-preview` and later releases: * [`ocr.highResolution`](#high-resolution-extraction) Document Intelligence now supports more sophisticated analysis capabilities. The * [`ocr.font`](#font-property-extraction) +* [`ocr.barcode`](#barcode-property-extraction) + ## High resolution extraction The task of recognizing small text from large-size documents, like engineering drawings, is a challenge. Often the text is mixed with other graphical elements and has varying fonts, sizes and orientations. Moreover, the text may be broken into separate parts or connected with other symbols. Document Intelligence now supports extracting content from these types of documents with the `ocr.highResolution` capability. 
You get improved quality of content extraction from A1/A2/A3 documents by enabling this add-on capability. The `ocr.font` capability extracts all font properties of text extracted in the ] ``` +## Barcode property extraction ++The `ocr.barcode` capability extracts all identified barcodes in the `barcodes` collection as a top-level object under `content`. Inside the `content`, detected barcodes are represented as `:barcode:`. Each entry in this collection represents a barcode and includes the barcode type as `kind` and the embedded barcode content as `value` along with its `polygon` coordinates. Initially, barcodes appear at the end of each page. The `confidence` is hard-coded as 1. ++#### Supported barcode types ++| **Barcode Type** | **Example** | +| | | +| `QR Code` |:::image type="content" source="media/barcodes/qr-code.png" alt-text="Screenshot of the QR Code.":::| +| `Code 39` |:::image type="content" source="media/barcodes/code-39.png" alt-text="Screenshot of the Code 39.":::| +| `Code 93` |:::image type="content" source="media/barcodes/code-93.gif" alt-text="Screenshot of the Code 93.":::| +| `Code 128` |:::image type="content" source="media/barcodes/code-128.png" alt-text="Screenshot of the Code 128.":::| +| `UPC (UPC-A & UPC-E)` |:::image type="content" source="media/barcodes/upc.png" alt-text="Screenshot of the UPC.":::| +| `PDF417` |:::image type="content" source="media/barcodes/pdf-417.png" alt-text="Screenshot of the PDF417.":::| +| `EAN-8` |:::image type="content" source="media/barcodes/european-article-number-8.gif" alt-text="Screenshot of the European-article-number barcode ean-8.":::| +| `EAN-13` |:::image type="content" source="media/barcodes/european-article-number-13.gif" alt-text="Screenshot of the European-article-number barcode ean-13.":::| +| `Codabar` |:::image type="content" source="media/barcodes/codabar.png" alt-text="Screenshot of the Codabar.":::| +| `Databar` |:::image type="content" source="media/barcodes/databar.png" 
alt-text="Screenshot of the Data bar.":::| +| `Databar` Expanded |:::image type="content" source="media/barcodes/databar-expanded.gif" alt-text="Screenshot of the Data bar Expanded.":::| +| `ITF` |:::image type="content" source="media/barcodes/interleaved-two-five.png" alt-text="Screenshot of the interleaved-two-of-five barcode (ITF).":::| +| `Data Matrix` |:::image type="content" source="media/barcodes/datamatrix.gif" alt-text="Screenshot of the Data Matrix.":::| + ## Next steps > [!div class="nextstepaction"] |
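The barcode add-on described above is enabled per analyze call. As a hedged sketch (the `features=barcodes` query parameter and the response shape shown are assumptions based on the v3.1 REST API, and the endpoint/model values are placeholders):

```python
from urllib.parse import urlencode

def build_analyze_url(endpoint, model_id="prebuilt-read", api_version="2023-07-31"):
    """Assemble an analyze URL that requests the barcode add-on via the
    `features` query parameter (parameter name assumed from the REST API)."""
    query = urlencode({"api-version": api_version, "features": "barcodes"})
    return f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze?{query}"

# Illustrative (not captured from a live call) entry shaped like the
# `barcodes` collection described above: `kind`, `value`, `polygon`,
# and a `confidence` hard-coded to 1.
sample_page = {"barcodes": [{"kind": "QRCode", "value": "https://example.com",
                             "polygon": [1, 1, 2, 1, 2, 2, 1, 2], "confidence": 1}]}
for barcode in sample_page["barcodes"]:
    print(barcode["kind"], barcode["value"], barcode["confidence"])
```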
ai-services | Concept Custom Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md | monikerRange: 'doc-intel-3.1.0' # Document Intelligence custom classification model -**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-07-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)**. +**This article applies to:** ![Document Intelligence checkmark](medi) supported by Document Intelligence REST API version [2023-07-31](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)**. > [!IMPORTANT] > |
ai-services | Install Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/containers/install-run.md | monikerRange: '<=doc-intel-3.1.0' [!INCLUDE [applies to v2.1](../includes/applies-to-v2-1.md)] ::: moniker-end -Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file. +Azure AI Document Intelligence is an Azure AI service that lets you build automated data processing software using machine-learning technology. Document Intelligence enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your documents. The results are delivered as structured data that includes the relationships in the original file. ::: moniker range=">=doc-intel-3.0.0" In this article you learn how to download, install, and run Document Intelligence containers. Containers enable you to run the Document Intelligence service in your own environment. Containers are great for specific security and data governance requirements. 
http { ```yml version: '3.3' - nginx: - image: nginx:alpine - container_name: reverseproxy - volumes: - - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf - ports: - - "5000:5000" - layout: - container_name: azure-cognitive-service-layout - image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest - environment: - eula: accept - apikey: ${FORM_RECOGNIZER_KEY} - billing: ${FORM_RECOGNIZER_ENDPOINT_URI} - Logging:Console:LogLevel:Default: Information - SharedRootFolder: /shared - Mounts:Shared: /shared - Mounts:Output: /logs - volumes: - - type: bind - source: ${SHARED_MOUNT_PATH} - target: /shared - - type: bind - source: ${OUTPUT_MOUNT_PATH} - target: /logs - expose: - - "5000" + nginx: + image: nginx:alpine + container_name: reverseproxy + volumes: + - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf + ports: + - "5000:5000" + layout: + container_name: azure-cognitive-service-layout + image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0:latest + environment: + eula: accept + apikey: ${FORM_RECOGNIZER_KEY} + billing: ${FORM_RECOGNIZER_ENDPOINT_URI} + Logging:Console:LogLevel:Default: Information + SharedRootFolder: /shared + Mounts:Shared: /shared + Mounts:Output: /logs + volumes: + - type: bind + source: ${SHARED_MOUNT_PATH} + target: /shared + - type: bind + source: ${OUTPUT_MOUNT_PATH} + target: /logs + expose: + - "5000" - custom-template: - container_name: azure-cognitive-service-custom-template - image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest - restart: always - depends_on: - - layout - environment: - AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000 - eula: accept - apikey: ${FORM_RECOGNIZER_KEY} - billing: ${FORM_RECOGNIZER_ENDPOINT_URI} - Logging:Console:LogLevel:Default: Information - SharedRootFolder: /shared - Mounts:Shared: /shared - Mounts:Output: /logs - volumes: - - type: bind - source: ${SHARED_MOUNT_PATH} - target: /shared - - type: bind - 
source: ${OUTPUT_MOUNT_PATH} - target: /logs - expose: - - "5000" + custom-template: + container_name: azure-cognitive-service-custom-template + image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-template-3.0:latest + restart: always + depends_on: + - layout + environment: + AzureCognitiveServiceLayoutHost: http://azure-cognitive-service-layout:5000 + eula: accept + apikey: ${FORM_RECOGNIZER_KEY} + billing: ${FORM_RECOGNIZER_ENDPOINT_URI} + Logging:Console:LogLevel:Default: Information + SharedRootFolder: /shared + Mounts:Shared: /shared + Mounts:Output: /logs + volumes: + - type: bind + source: ${SHARED_MOUNT_PATH} + target: /shared + - type: bind + source: ${OUTPUT_MOUNT_PATH} + target: /logs + expose: + - "5000" - studio: - container_name: form-recognizer-studio - image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0 - environment: - ONPREM_LOCALFILE_BASEPATH: /onprem_folder - STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db - volumes: - - type: bind - source: ${FILE_MOUNT_PATH} # path to your local folder - target: /onprem_folder - - type: bind - source: ${DB_MOUNT_PATH} # path to your local folder - target: /onprem_db - ports: - - "5001:5001" - user: "1000:1000" # echo $(id -u):$(id -g) + studio: + container_name: form-recognizer-studio + image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/studio:3.0 + environment: + ONPREM_LOCALFILE_BASEPATH: /onprem_folder + STORAGE_DATABASE_CONNECTION_STRING: /onprem_db/Application.db + volumes: + - type: bind + source: ${FILE_MOUNT_PATH} # path to your local folder + target: /onprem_folder + - type: bind + source: ${DB_MOUNT_PATH} # path to your local folder + target: /onprem_db + ports: + - "5001:5001" + user: "1000:1000" # echo $(id -u):$(id -g) ``` http { 2. The following code sample is a self-contained `docker compose` example to run Document Intelligence Layout, Label Tool, Custom API, and Custom Supervised containers together. 
With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. ```yml- version: '3.3' +version: '3.3' nginx: image: nginx:alpine |
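The compose files in the rows above interpolate several environment variables. As an illustrative sketch only (every path and value below is a placeholder assumption, not an official default), a `.env` file placed next to `docker-compose.yml` could supply them:

```shell
# Sample .env file — all values are placeholders; replace them with your own.
NGINX_CONF_FILE=./nginx.conf                  # reverse-proxy config mounted into the nginx container
FORM_RECOGNIZER_KEY=<your-resource-key>       # key for your Document Intelligence resource
FORM_RECOGNIZER_ENDPOINT_URI=<your-endpoint>  # billing endpoint of the resource
SHARED_MOUNT_PATH=./shared                    # shared scratch space for the containers
OUTPUT_MOUNT_PATH=./logs                      # container log output
FILE_MOUNT_PATH=./studio-files                # local folder surfaced in the Studio container
DB_MOUNT_PATH=./studio-db                     # local folder for the Studio database
```

With the variables defined, `docker-compose up` (or `docker compose up`) reads the file automatically from the working directory.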
ai-services | Create Sas Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md | To get started, you need: :::image type="content" source="media/sas-tokens/upload-blob-window.png" alt-text="Screenshot that shows the Upload blob window in the Azure portal."::: > [!NOTE]- > By default, the REST API uses form documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional). + > By default, the REST API uses documents located at the root of your container. You can also use data organized in subfolders if specified in the API call. For more information, see [Organize your data in subfolders](how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional). ## Use the Azure portal |
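Once you have a SAS token (for example, one generated in the Azure portal as the article describes), it's appended to the container URL as a query string. A minimal sketch, with placeholder account, container, and token values:

```python
# Illustrative sketch: attach an already-generated SAS token to a blob
# container URL. The account, container, and token values are placeholders.
def container_url_with_sas(account: str, container: str, sas_token: str) -> str:
    """Build the container URL that the REST API can read documents from."""
    # The portal sometimes copies the token with a leading '?'; strip it so the
    # result always has exactly one query separator.
    return f"https://{account}.blob.core.windows.net/{container}?{sas_token.lstrip('?')}"

url = container_url_with_sas("mystorageacct", "training-data", "?sv=2022-11-02&ss=b&sig=<signature>")
print(url)
```

The same URL (container plus SAS query string) is what you pass as the training data source when building a custom model.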
ai-services | Build A Custom Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md | Follow these tips to further optimize your data set for training. ## Upload your training data -When you've put together the set of form documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier. +When you've put together the set of documents for training, you need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). Use the standard performance tier. If you want to use manually labeled data, upload the *.labels.json* and *.ocr.json* files that correspond to your training documents. You can use the [Sample Labeling tool](../label-tool.md) (or your own UI) to generate these files. |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md | monikerRange: '<=doc-intel-3.1.0' > [!NOTE] > Form Recognizer is now **Azure AI Document Intelligence**! >-> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs. +> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. +> * There are no changes to pricing. +> * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. +> * There are no breaking changes to application programming interfaces (APIs) or SDKs. +> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service. ::: moniker range=">=doc-intel-3.0.0" [!INCLUDE [applies to v3.1, v3.0, and v2.1](includes/applies-to-v3-1-v3-0-v2-1.md)] |
ai-services | Get Started Sdks Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md | monikerRange: '<=doc-intel-3.1.0' # Get started with Document Intelligence [!INCLUDE [applies to v3.1 and v3.0](../includes/applies-to-v3-1-v3-0.md)]++> [!IMPORTANT] +> +> * Azure Cognitive Services Form Recognizer is now Azure AI Document Intelligence. +> * Some platforms are still awaiting the renaming update. +> * All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service. +++* Get started with Azure AI Document Intelligence latest GA version (v3.1). ++* Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. ++* You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. ++* For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month. ++To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page. + ::: moniker-end -Get started with the latest version of Azure AI Document Intelligence. Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month. 
+Get started with Azure AI Document Intelligence GA version (3.0). Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key-value pairs, text, tables and key data from your documents. You can easily integrate Document Intelligence models into your workflows and applications by using an SDK in the programming language of your choice or calling the REST API. For this quickstart, we recommend that you use the free service while you're learning the technology. Remember that the number of free pages is limited to 500 per month. To learn more about Document Intelligence features and development options, visit our [Overview](../overview.md) page. +> [!TIP] +> +> * For an enhanced experience and advanced model quality, try the [Document Intelligence v3.1 (GA) quickstart](?view=doc-intel-3.1.0&preserve-view=true#get-started-with-document-intelligence) and [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) API version: 2023-07-31 (3.1 General Availability). + ::: moniker-end ::: zone pivot="programming-language-csharp" To learn more about Document Intelligence features and development options, visi ::: zone-end ::: moniker range=">=doc-intel-3.0.0"+ That's it, congratulations! -In this quickstart, you used a form Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about Document Intelligence API in depth. +In this quickstart, you used a Document Intelligence model to analyze various forms and documents. Next, explore the Document Intelligence Studio and reference documentation to learn about the Document Intelligence API in depth. 
## Next steps To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-csharp" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-java" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-javascript" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-python" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end To learn more about Document Intelligence features and development options, visi ::: zone pivot="programming-language-rest-api" ::: moniker range="doc-intel-2.1.0" ::: moniker-end ::: zone-end That's it, congratulations! In this quickstart, you used Document Intelligence m ## Next steps -* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio ](https://formrecognizer.appliedai.azure.com/studio). +* For an enhanced experience and advanced model quality, try the [Document Intelligence v3.0 Studio](https://formrecognizer.appliedai.azure.com/studio). * The v3.0 Studio supports any model trained with v2.1 labeled data. |
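The quickstart rows above target the v3.1 (2023-07-31) REST API. As a hedged sketch (the route shape is assumed from the GA REST reference these articles link to; the endpoint, key, and model values are placeholders), the analyze request can be assembled like this:

```python
# Illustrative sketch, not an official sample: build the URL and headers for a
# Document Intelligence v3.1 analyze call. Endpoint and key are placeholders.
def build_analyze_request(endpoint: str, model_id: str, api_version: str = "2023-07-31"):
    """Return the URL and headers for an analyze request against a model."""
    url = (
        f"{endpoint.rstrip('/')}/formrecognizer/documentModels/"
        f"{model_id}:analyze?api-version={api_version}"
    )
    headers = {
        "Content-Type": "application/json",
        "Ocp-Apim-Subscription-Key": "<your-resource-key>",  # placeholder key
    }
    return url, headers

url, headers = build_analyze_request(
    "https://<your-resource>.cognitiveservices.azure.com", "prebuilt-layout"
)
print(url)
```

POSTing to that URL starts an analysis; the SDKs shown in the quickstart wrap this same request and the subsequent result polling for you.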
ai-services | Try Document Intelligence Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md | CORS should now be configured to use the storage account from Document Intellige :::image border="true" type="content" source="../media/sas-tokens/upload-blob-window.png" alt-text="Screenshot of upload blob window in the Azure portal."::: > [!NOTE]-> By default, the Studio will use form documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional) +> By default, the Studio will use documents that are located at the root of your container. However, you can use data organized in folders by specifying the folder path in the Custom form project creation steps. *See* [**Organize your data in subfolders**](../how-to-guides/build-a-custom-model.md?view=doc-intel-2.1.0&preserve-view=true#organize-your-data-in-subfolders-optional) ## Custom models |
ai-services | Try Sample Label Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-sample-label-tool.md | Title: "Quickstart: Label forms, train a model, and analyze forms using the Sample Labeling tool - Document Intelligence (formerly Form Recognizer)" -description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label form documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs. +description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs. |
ai-services | Sdk Overview V3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md | |
ai-services | Sdk Overview V3 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md | monikerRange: '>=doc-intel-3.0.0' # Document Intelligence SDK v3.1 (GA) -**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31 — v3.1 (GA)**. +**The SDKs referenced in this article are supported by:** ![Document Intelligence checkmark](media/yes-icon.png) **Document Intelligence REST API version 2023-07-31—v3.1 (GA)**. Azure AI Document Intelligence is a cloud service that uses machine learning to analyze text and structured data from documents. The Document Intelligence software development kit (SDK) is a set of libraries and tools that enable you to easily integrate Document Intelligence models and capabilities into your applications. Document Intelligence SDK is available across platforms in C#/.NET, Java, JavaScript, and Python programming languages. 
Document Intelligence SDK supports the following languages and platforms: | Language → Document Intelligence SDK version | Package| Supported API version | Platform support | |:-:|:-|:-| :-:| | [**.NET/C# → 4.1.0 → latest GA release </br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.1.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)|
-|[**Java → 4.1.0 → latest GA release</br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer) |[• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
+|[**Java → 4.1.0 → latest GA release</br>(2023-08-10)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)|
|[**JavaScript → 5.0.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> • [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) |
|[**Python → 3.3.0 → latest GA release</br> (2023-08-08)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> • [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli)
ai-services | Tutorial Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/tutorial-azure-function.md | Next, you'll add your own code to the Python script to call the Document Intelli The following code parses the returned Document Intelligence response, constructs a .csv file, and uploads it to the **output** container. > [!IMPORTANT]- > You will likely need to edit this code to match the structure of your own form documents. + > You will likely need to edit this code to match the structure of your own documents. ```python # The code below extracts the json format into tabular data. |
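The Azure Function row above describes parsing the analyze response into a .csv file. As an illustrative sketch in the spirit of that step (the response shape below is a simplified, assumed mock, not the service's exact schema; as the article notes, you'd adapt the parsing to your own documents):

```python
# Illustrative sketch only: convert a simplified, assumed table result into CSV
# rows, mirroring the tutorial's parse-and-upload step.
import csv
import io

mock_result = {
    "tables": [{
        "cells": [
            {"rowIndex": 0, "columnIndex": 0, "content": "Item"},
            {"rowIndex": 0, "columnIndex": 1, "content": "Price"},
            {"rowIndex": 1, "columnIndex": 0, "content": "Widget"},
            {"rowIndex": 1, "columnIndex": 1, "content": "9.99"},
        ]
    }]
}

def table_to_csv(table: dict) -> str:
    """Group cells by row index, then emit them in column order as CSV."""
    rows: dict[int, dict[int, str]] = {}
    for cell in table["cells"]:
        rows.setdefault(cell["rowIndex"], {})[cell["columnIndex"]] = cell["content"]
    buf = io.StringIO()
    writer = csv.writer(buf)
    for r in sorted(rows):
        row = rows[r]
        writer.writerow(row.get(c, "") for c in range(max(row) + 1))
    return buf.getvalue()

csv_text = table_to_csv(mock_result["tables"][0])
print(csv_text)
```

In the tutorial, the resulting CSV text would then be uploaded to the **output** blob container.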
ai-services | V3 1 Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-1-migration-guide.md | monikerRange: '<=doc-intel-3.1.0' ## Migrating from v3.1 preview API version -Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2023-02-28-preview API version to the `2023-07-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview.md). +Preview APIs are periodically deprecated. If you're using a preview API version, update your application to target the GA API version. To migrate from the 2023-02-28-preview API version to the `2023-07-31` (GA) API version using the SDK, update to the [current version of the language specific SDK](sdk-overview-v3-1.md). The `2023-07-31` (GA) API has a few updates and changes from the preview API version: |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md | Document Intelligence service is updated on an ongoing basis. Bookmark this page ## July 2023 > [!NOTE]-> Form Recognizer is now Azure AI Document Intelligence! +> Form Recognizer is now **Azure AI Document Intelligence**! >-> As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. There are no changes to pricing. The names _Cognitive Services_ and _Azure Applied AI_ continue to be used in Azure billing, cost analysis, price list, and price APIs. There are no breaking changes to application programming interfaces (APIs) or SDKs. +> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. +> * There are no changes to pricing. +> * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs. +> * There are no breaking changes to application programming interfaces (APIs) or SDKs. +> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service. 
**Document Intelligence v3.1 (GA)** The v3.1 API introduces new and updated capabilities: ## March 2023 > [!IMPORTANT]-> [**`2023-07-31`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) capabilities are currently only available in the following regions: +> [**`2023-02-28-preview`**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-02-28-preview/operations/AnalyzeDocument) capabilities are currently only available in the following regions: > > * West Europe > * West US2 The v3.1 API introduces new and updated capabilities: * Document Intelligence SDK version `4.0.0 GA` release * **Document Intelligence SDKs version 4.0.0 (.NET/C#, Java, JavaScript) and version 3.2.0 (Python) are generally available and ready for use in production applications!**- * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview.md). + * For more information on Document Intelligence SDKs, see the [**SDK overview**](sdk-overview-v3-1.md). * Update your applications using your programming language's **migration guide**. This release introduces Document Intelligence v2.0. In the next sections, you * Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice. |
ai-services | Model Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/model-lifecycle.md | Language service features utilize AI models. We update the language service with ## Prebuilt features - Our standard (not customized) language service features are built on AI models that we call pre-trained models. We regularly update the language service with new model versions to improve model accuracy, support, and quality. Preview models used for preview features do not maintain a minimum retirement pe By default, API and SDK requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended). > [!NOTE]-> * If you are using an model version that is not listed in the table, then it was subjected to the expiration policy. +> * If you are using a model version that is not listed in the table, then it was subjected to the expiration policy. > * Abstractive document and conversation summarization do not provide model versions other than the latest available. Use the table below to find which model versions are supported by each feature: ### Expiration timeline -As new training configs and new functionality become available; older and less accurate configs are retired, see the following timelines for configs expiration: +For custom features, there are two key parts of the AI implementation: training and deployment. New configurations are released regularly with ongoing AI improvements, so older and less accurate configurations are retired. ++Use the table below to find the training configuration versions supported by each feature: -New configs are being released every few months. So, training configs expiration of any publicly available config is **six months** after its release. 
If you've assigned a trained model to a deployment, this deployment expires after **twelve months** from the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version. +| Feature | Supported Training Config Versions | Training Config Expiration | Deployment Expiration | +||--||| +| Conversational language understanding | `2022-05-01` | October 28, 2022 | October 28, 2023 | +| Conversational language understanding | `2022-09-01` (latest)** | February 28, 2024 | February 27, 2025 | +| Orchestration workflow | `2022-09-01` (latest)** | April 30, 2024 | April 30, 2025 | +| Custom named entity recognition | `2022-05-01` (latest)** | April 30, 2024 | April 30, 2025 | +| Custom text classification | `2022-05-01` (latest)** | April 30, 2024 | April 30, 2025 | -After training config version expires, API calls will return an error when called or used if called with an expired config version. By default, training requests use the latest available training config version. To change the config version, use `trainingConfigVersion` when submitting a training job and assign the version you want. +** *For latest training configuration versions, the posted expiration dates are subject to availability of a newer model version. If no newer model versions are available, the expiration date may be extended.* -> [!Tip] -> It's recommended to use the latest supported config version +Training configurations are typically available for **six months** after their release. If you've assigned a trained configuration to a deployment, this deployment expires after **twelve months** from the training config expiration. If your models are about to expire, you can retrain and redeploy your models with the latest training configuration version. -You can train and deploy a custom AI model from the date of training config version release, up until the **Training config expiration** date. 
After this date, you'll have to use another supported training config version for submitting any training or deployment jobs. +> [!TIP] +> It's recommended to use the latest supported configuration version. -Deployment expiration is when your deployed model will be unavailable to be used for prediction. +After the **training config expiration** date, you'll have to use another supported training configuration version to submit any training or deployment jobs. After the **deployment expiration** date, your deployed model will be unavailable for prediction. -Use the table below to find which model versions are supported by each feature: +After a training configuration version expires, API calls that use the expired version will return an error. By default, training requests use the latest available training configuration version. To change the configuration version, use the `trainingConfigVersion` parameter when submitting a training job and assign the version you want. -| Feature | Supported Training config versions | Training config expiration | Deployment expiration | -||--||| -| Custom text classification | `2022-05-01` | `2023-05-01` | `2024-04-30` | -| Conversational language understanding | `2022-05-01` | `2022-10-28` | `2023-10-28` | -| Conversational language understanding | `2022-09-01` | `2023-02-28` | `2024-02-28` | -| Custom named entity recognition | `2022-05-01` | `2023-05-01` | `2024-04-30` | -| Orchestration workflow | `2022-05-01` | `2023-05-01` | `2024-04-30` | ## API versions |
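The article names `trainingConfigVersion` as the parameter for pinning a configuration when submitting a training job. A hedged sketch of a request body that sets it (the `modelLabel` field and the surrounding body shape are illustrative assumptions; only `trainingConfigVersion` is named by the article):

```python
# Sketch only: build a training-request JSON body that pins the training
# configuration version instead of defaulting to the latest available one.
import json

def build_training_body(model_label: str, config_version: str = "2022-09-01") -> str:
    """Return a JSON body with an explicit trainingConfigVersion."""
    body = {
        "modelLabel": model_label,              # illustrative field name
        "trainingConfigVersion": config_version,  # pin a supported version
    }
    return json.dumps(body)

print(build_training_body("my-model"))
```

Submitting a body like this keeps training reproducible across releases; dropping the parameter falls back to the latest available configuration.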
ai-services | Regional Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/regional-support.md | + + Title: Regional support for Azure AI Language ++description: Learn which Azure regions are supported by the Language service. ++++++ Last updated : 08/23/2023+++++# Language service supported regions ++The Language service is available for use in several Azure regions. Use this article to learn about the regional support and limitations. ++## Region support overview ++Typically, you can refer to the [region support](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cognitive-services) page for details; most Language service capabilities are available in all supported regions. Some Language service capabilities, however, are only available in select regions, which are listed below. ++> [!NOTE] +> Language service doesn't store or process customer data outside the region you deploy the service instance in. ++## Conversational language understanding and orchestration workflow ++Conversational language understanding, orchestration workflow, and custom sentiment analysis are only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md). 
++| Region | Authoring | Prediction | +|--|--|-| +| Australia East | ✓ | ✓ | +| Brazil South | | ✓ | +| Canada Central | | ✓ | +| Central India | ✓ | ✓ | +| Central US | | ✓ | +| China East 2 | ✓ | ✓ | +| China North 2 | | ✓ | +| East Asia | | ✓ | +| East US | ✓ | ✓ | +| East US 2 | ✓ | ✓ | +| France Central | | ✓ | +| Japan East | | ✓ | +| Japan West | | ✓ | +| Jio India West | | ✓ | +| Korea Central | | ✓ | +| North Central US | | ✓ | +| North Europe | ✓ | ✓ | +| Norway East | | ✓ | +| Qatar Central | | ✓ | +| South Africa North | | ✓ | +| South Central US | ✓ | ✓ | +| Southeast Asia | | ✓ | +| Sweden Central | | ✓ | +| Switzerland North | ✓ | ✓ | +| UAE North | | ✓ | +| UK South | ✓ | ✓ | +| West Central US | | ✓ | +| West Europe | ✓ | ✓ | +| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | +++## Custom sentiment analysis ++Custom sentiment analysis is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md). 
++| Region | Authoring | Prediction | +|--|--|-| +| Australia East | ✓ | ✓ | +| Brazil South | | ✓ | +| Canada Central | | ✓ | +| Central India | ✓ | ✓ | +| Central US | | ✓ | +| China East 2 | ✓ | ✓ | +| China North 2 | | ✓ | +| East Asia | | ✓ | +| East US | ✓ | ✓ | +| East US 2 | ✓ | ✓ | +| France Central | | ✓ | +| Japan East | | ✓ | +| Japan West | | ✓ | +| Jio India West | | ✓ | +| Korea Central | | ✓ | +| North Central US | | ✓ | +| North Europe | ✓ | ✓ | +| Norway East | | ✓ | +| Qatar Central | | ✓ | +| South Africa North | | ✓ | +| South Central US | ✓ | ✓ | +| Southeast Asia | | ✓ | +| Sweden Central | | ✓ | +| Switzerland North | ✓ | ✓ | +| UAE North | | ✓ | +| UK South | ✓ | ✓ | +| West Central US | | ✓ | +| West Europe | ✓ | ✓ | +| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | ++## Custom named entity recognition ++Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md). 
++| Region | Authoring | Prediction | +|--|--|-| +| Australia East | ✓ | ✓ | +| Brazil South | | ✓ | +| Canada Central | | ✓ | +| Central India | ✓ | ✓ | +| Central US | | ✓ | +| East Asia | | ✓ | +| East US | ✓ | ✓ | +| East US 2 | ✓ | ✓ | +| France Central | | ✓ | +| Japan East | | ✓ | +| Japan West | | ✓ | +| Jio India West | | ✓ | +| Korea Central | | ✓ | +| North Central US | | ✓ | +| North Europe | ✓ | ✓ | +| Norway East | | ✓ | +| Qatar Central | | ✓ | +| South Africa North | | ✓ | +| South Central US | ✓ | ✓ | +| Southeast Asia | | ✓ | +| Sweden Central | | ✓ | +| Switzerland North | ✓ | ✓ | +| UAE North | | ✓ | +| UK South | ✓ | ✓ | +| West Central US | | ✓ | +| West Europe | ✓ | ✓ | +| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | +++## Custom text classification ++Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](./custom-features/multi-region-deployment.md). 
++| Region | Authoring | Prediction | +|--|--|-| +| Australia East | ✓ | ✓ | +| Brazil South | | ✓ | +| Canada Central | | ✓ | +| Central India | ✓ | ✓ | +| Central US | | ✓ | +| East Asia | | ✓ | +| East US | ✓ | ✓ | +| East US 2 | ✓ | ✓ | +| France Central | | ✓ | +| Japan East | | ✓ | +| Japan West | | ✓ | +| Jio India West | | ✓ | +| Korea Central | | ✓ | +| North Central US | | ✓ | +| North Europe | ✓ | ✓ | +| Norway East | | ✓ | +| Qatar Central | | ✓ | +| South Africa North | | ✓ | +| South Central US | ✓ | ✓ | +| Southeast Asia | | ✓ | +| Sweden Central | | ✓ | +| Switzerland North | ✓ | ✓ | +| UAE North | | ✓ | +| UK South | ✓ | ✓ | +| West Central US | | ✓ | +| West Europe | ✓ | ✓ | +| West US | | ✓ | +| West US 2 | ✓ | ✓ | +| West US 3 | ✓ | ✓ | ++## Summarization ++|Region|Document abstractive summarization|Conversation issue and resolution summarization|Conversation narrative summarization with chapters|Custom summarization| +||||||-| +|North Europe| ✓ | ✓ | ✓ | | +|East US| ✓ | ✓ | ✓ | ✓ | +|UK South| ✓ | ✓ | ✓ | | +|Southeast Asia| ✓ | ✓ | ✓ | | +++## Custom Text Analytics for health ++Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment. ++| Region | Authoring | Prediction | +|--|--|-| +| East US | ✓ | ✓ | +| UK South | ✓ | ✓ | +| North Europe | ✓ | ✓ | ++### Next steps ++* [Language support](./language-support.md) +* [Quotas and limits](./data-limits.md) |
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/service-limits.md | See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan ### Regional availability -Conversational language understanding is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md). --| Region | Authoring | Prediction | -|--|--|-| -| Australia East | ✓ | ✓ | -| Brazil South | | ✓ | -| Canada Central | | ✓ | -| Central India | ✓ | ✓ | -| Central US | | ✓ | -| China East 2 | ✓ | ✓ | -| China North 2 | | ✓ | -| East Asia | | ✓ | -| East US | ✓ | ✓ | -| East US 2 | ✓ | ✓ | -| France Central | | ✓ | -| Japan East | | ✓ | -| Japan West | | ✓ | -| Jio India West | | ✓ | -| Korea Central | | ✓ | -| North Central US | | ✓ | -| North Europe | ✓ | ✓ | -| Norway East | | ✓ | -| Qatar Central | | ✓ | -| South Africa North | | ✓ | -| South Central US | ✓ | ✓ | -| Southeast Asia | | ✓ | -| Sweden Central | | ✓ | -| Switzerland North | ✓ | ✓ | -| UAE North | | ✓ | -| UK South | ✓ | ✓ | -| West Central US | | ✓ | -| West Europe | ✓ | ✓ | -| West US | | ✓ | -| West US 2 | ✓ | ✓ | -| West US 3 | ✓ | ✓ | +See [Language service regional availability](../concepts/regional-support.md#conversational-language-understanding-and-orchestration-workflow). ## API limits |
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-named-entity-recognition/service-limits.md | Use this article to learn about the data and service limits when using custom NE ## Regional availability -Custom named entity recognition is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md). --| Region | Authoring | Prediction | -|--|--|-| -| Australia East | ✓ | ✓ | -| Brazil South | | ✓ | -| Canada Central | | ✓ | -| Central India | ✓ | ✓ | -| Central US | | ✓ | -| East Asia | | ✓ | -| East US | ✓ | ✓ | -| East US 2 | ✓ | ✓ | -| France Central | | ✓ | -| Japan East | | ✓ | -| Japan West | | ✓ | -| Jio India West | | ✓ | -| Korea Central | | ✓ | -| North Central US | | ✓ | -| North Europe | ✓ | ✓ | -| Norway East | | ✓ | -| Qatar Central | | ✓ | -| South Africa North | | ✓ | -| South Central US | ✓ | ✓ | -| Southeast Asia | | ✓ | -| Sweden Central | | ✓ | -| Switzerland North | ✓ | ✓ | -| UAE North | | ✓ | -| UK South | ✓ | ✓ | -| West Central US | | ✓ | -| West Europe | ✓ | ✓ | -| West US | | ✓ | -| West US 2 | ✓ | ✓ | -| West US 3 | ✓ | ✓ | -+See [Language service regional availability](../concepts/regional-support.md#custom-named-entity-recognition). ## API limits |
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-analytics-for-health/reference/service-limits.md | Use this article to learn about the data and service limits when using custom Te ## Regional availability -Custom Text Analytics for health is only available in some Azure regions since it is a preview service. Some regions may be available for **both authoring and prediction**, while other regions may be for **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get predictions from a deployment. --| Region | Authoring | Prediction | -|--|--|-| -| East US | ✓ | ✓ | -| UK South | ✓ | ✓ | -| North Europe | ✓ | ✓ | +See [Language service regional availability](../../concepts/regional-support.md#custom-text-analytics-for-health). ## API limits |
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/service-limits.md | description: Learn about the data and rate limits when using custom text classif Previously updated : 08/09/2023 Last updated : 08/23/2023 See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan ## Regional availability -Custom text classification is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md). --| Region | Authoring | Prediction | -|--|--|-| -| Australia East | ✓ | ✓ | -| Brazil South | | ✓ | -| Canada Central | | ✓ | -| Central India | ✓ | ✓ | -| Central US | | ✓ | -| East Asia | | ✓ | -| East US | ✓ | ✓ | -| East US 2 | ✓ | ✓ | -| France Central | | ✓ | -| Japan East | | ✓ | -| Japan West | | ✓ | -| Jio India West | | ✓ | -| Korea Central | | ✓ | -| North Central US | | ✓ | -| North Europe | ✓ | ✓ | -| Norway East | | ✓ | -| Qatar Central | | ✓ | -| South Africa North | | ✓ | -| South Central US | ✓ | ✓ | -| Southeast Asia | | ✓ | -| Sweden Central | | ✓ | -| Switzerland North | ✓ | ✓ | -| UAE North | | ✓ | -| UK South | ✓ | ✓ | -| West Central US | | ✓ | -| West Europe | ✓ | ✓ | -| West US | | ✓ | -| West US 2 | ✓ | ✓ | -| West US 3 | ✓ | ✓ | +See [Language service regional availability](../concepts/regional-support.md#custom-text-classification). ## API limits |
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/orchestration-workflow/service-limits.md | See [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/lan ## Regional availability -Orchestration workflow is only available in some Azure regions. Some regions are available for **both authoring and prediction**, while other regions are **prediction only**. Language resources in authoring regions allow you to create, edit, train, and deploy your projects. Language resources in prediction regions allow you to get [predictions from a deployment](../concepts/custom-features/multi-region-deployment.md). --| Region | Authoring | Prediction | -|--|--|-| -| Australia East | ✓ | ✓ | -| Brazil South | | ✓ | -| Canada Central | | ✓ | -| Central India | ✓ | ✓ | -| Central US | | ✓ | -| China East 2 | ✓ | ✓ | -| China North 2 | | ✓ | -| East Asia | | ✓ | -| East US | ✓ | ✓ | -| East US 2 | ✓ | ✓ | -| France Central | | ✓ | -| Japan East | | ✓ | -| Japan West | | ✓ | -| Jio India West | | ✓ | -| Korea Central | | ✓ | -| North Central US | | ✓ | -| North Europe | ✓ | ✓ | -| Norway East | | ✓ | -| Qatar Central | | ✓ | -| South Africa North | | ✓ | -| South Central US | ✓ | ✓ | -| Southeast Asia | | ✓ | -| Sweden Central | | ✓ | -| Switzerland North | ✓ | ✓ | -| UAE North | | ✓ | -| UK South | ✓ | ✓ | -| West Central US | | ✓ | -| West Europe | ✓ | ✓ | -| West US | | ✓ | -| West US 2 | ✓ | ✓ | -| West US 3 | ✓ | ✓ | +See [Language service regional availability](../concepts/regional-support.md#conversational-language-understanding-and-orchestration-workflow). ## API limits |
ai-services | Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/network-isolation.md | This will establish a private endpoint connection between language resource and Follow the steps below to restrict public access to question answering language resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../../cognitive-services-virtual-networks.md?tabs=portal). After restricting access to an Azure AI services resource based on VNet, To browse projects on Language Studio from your on-premises network or your local browser.-- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).+- Grant access to [on-premises network](../../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks). - Grant access to your [local browser/machine](../../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**. |
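The firewall step above asks for the public IP address of the browsing machine. As a quick sanity check before adding an address to the **Networking** tab, you can verify that it parses as a public (global) IPv4 address rather than a private-range one; a minimal sketch using only the Python standard library, with illustrative sample addresses:

```python
# Sketch: check that an address is a public IPv4 before adding it to the
# resource's firewall IP rules. Sample addresses are illustrative.
import ipaddress

def is_valid_firewall_ip(addr: str) -> bool:
    """Return True if `addr` parses as a global (public) IPv4 address."""
    try:
        ip = ipaddress.ip_address(addr)
    except ValueError:
        return False  # not an IP address at all
    return ip.version == 4 and ip.is_global

print(is_valid_firewall_ip("8.8.8.8"))   # True: a public IPv4 address
print(is_valid_firewall_ip("10.0.0.5"))  # False: private-range address
```

Private-range addresses (such as `10.x.x.x`) would never match the public IP the portal shows, so rejecting them early avoids a confusing firewall rule.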
ai-services | Use Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/sentiment-opinion-mining/how-to/use-containers.md | In this article, you learned concepts and workflow for downloading, installing, * You must specify billing information when instantiating a container. > [!IMPORTANT]-> Azure AI services containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI services containers do not send customer data (e.g. text that is being analyzed) to Microsoft. +> Azure AI containers are not licensed to run without being connected to Azure for metering. Customers need to enable the containers to communicate billing information with the metering service at all times. Azure AI containers do not send customer data (e.g. text that is being analyzed) to Microsoft. ## Next steps |
ai-services | Credential Entity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/credential-entity.md | To create service principal for your data source, you can follow detailed instru There are several steps to create a service principal from key vault. -**Step 1. Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source. +**Step 1: Create a Service Principal and grant it access to your database.** You can follow detailed instructions in [Connect different data sources](../data-feeds-from-different-sources.md), in creating service principal section for each data source. After creating a service principal in Azure portal, you can find `Tenant ID` and `Client ID` in **Overview**. The **Directory (tenant) ID** should be `Tenant ID` in credential entity configurations. ![sp client ID and tenant ID](../media/credential-entity/sp-client-tenant-id.png) -**Step 2. Create a new client secret.** You should go to **Certificates & Secrets** to create a new client secret, and the **value** will be used in next steps. (Note: The value only appears once, so it's better to store it somewhere.) +**Step 2: Create a new client secret.** You should go to **Certificates & Secrets** to create a new client secret, and the **value** will be used in next steps. (Note: The value only appears once, so it's better to store it somewhere.) ![sp Client secret value](../media/credential-entity/sp-secret-value.png) -**Step 3. Create a key vault.** In [Azure portal](https://portal.azure.com/#home), select **Key vaults** to create one. +**Step 3: Create a key vault.** In [Azure portal](https://portal.azure.com/#home), select **Key vaults** to create one. 
![create a key vault in azure portal](../media/credential-entity/create-key-vault.png) After creating a key vault, the **Vault URI** is the `Key Vault Endpoint` in MA ![key vault endpoint](../media/credential-entity/key-vault-endpoint.png) -**Step 4. Create secrets for Key Vault.** In Azure portal for key vault, generate two secrets in **Settings->Secrets**. +**Step 4: Create secrets for Key Vault.** In Azure portal for key vault, generate two secrets in **Settings->Secrets**. The first is for `Service Principal Client Id`, the other is for `Service Principal Client Secret`; both of their names will be used in credential entity configurations. ![generate secrets](../media/credential-entity/generate-secrets.png) The first is for `Service Principal Client Id`, the other is for `Service Princi Until now, the *client ID* and *client secret* of service principal are finally stored in Key Vault. Next, you need to create another service principal to store the key vault. Therefore, you should **create two service principals**, one to save client ID and client secret, which will be stored in a key vault, the other is to store the key vault. -**Step 5. Create a service principal to store the key vault.** +**Step 5: Create a service principal to store the key vault.** 1. Go to [Azure portal AAD (Azure Active Directory)](https://portal.azure.com/?trace=diagnostics&feature.customportal=false#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview) and create a new registration. 
+**Step 6: Grant Service Principal access to Key Vault.** Go to the key vault resource you created, in **Settings->Access polices**, by selecting 'Add Access Policy' to make connection between key vault and the second service principal in **Step 5**, and 'Save'. ![grant sp to key vault](../media/credential-entity/grant-sp-to-kv.png) |
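The six steps above produce a fixed set of values that the credential entity configuration needs: the tenant ID and client ID/secret of the second service principal, the key vault endpoint, and the names of the two secrets created in step 4. A minimal sketch that collects and validates them; the field names mirror the labels used in the steps, but the exact configuration schema is illustrative, not the service's real API shape:

```python
# Sketch: assemble the values from steps 1-6 into one credential-entity
# config and verify nothing is missing. Field names follow the step labels
# ("Tenant ID", "Key Vault Endpoint", ...) but the schema is illustrative.

REQUIRED_FIELDS = (
    "tenant_id",                     # step 1: Directory (tenant) ID
    "client_id",                     # step 5: second service principal
    "client_secret",                 # step 5: its client secret
    "key_vault_endpoint",            # step 3: Vault URI
    "sp_client_id_secret_name",      # step 4: secret holding the first SP's client ID
    "sp_client_secret_secret_name",  # step 4: secret holding the first SP's client secret
)

def build_credential_entity(**values: str) -> dict:
    """Validate and return a credential-entity config dict."""
    missing = [f for f in REQUIRED_FIELDS if not values.get(f)]
    if missing:
        raise ValueError(f"missing credential entity fields: {missing}")
    return {f: values[f] for f in REQUIRED_FIELDS}

config = build_credential_entity(
    tenant_id="00000000-0000-0000-0000-000000000000",
    client_id="11111111-1111-1111-1111-111111111111",
    client_secret="<client-secret>",
    key_vault_endpoint="https://contoso-vault.vault.azure.net/",
    sp_client_id_secret_name="sp-client-id",
    sp_client_secret_secret_name="sp-client-secret",
)
print(sorted(config) == sorted(REQUIRED_FIELDS))  # True
```

Collecting the values in one place makes it obvious which step each configuration field comes from, and the validation catches a forgotten secret name before the credential entity is created.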
ai-services | Diagnose An Incident | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/how-tos/diagnose-an-incident.md | An alert generated by Metrics Advisor may contain multiple incidents and each in After being directed to the incident detail page, you're able to take advantage of the insights that are automatically analyzed by Metrics Advisor to quickly locate root cause of an issue or use the analysis tool to further evaluate the issue impact. There are three sections in the incident detail page which correspond to three major steps to diagnosing an incident. -### Step 1. Check summary of current incident +### Step 1: Check summary of current incident The first section lists a summary of the current incident, including basic information, actions & tracings, and an analyzed root cause. The first section lists a summary of the current incident, including basic infor For metrics with multiple dimensions, it's a common case that multiple anomalies will be detected at the same time. However, those anomalies may share the same root cause. Instead of analyzing all anomalies one by one, leveraging **Analyzed root cause** should be the most efficient way to diagnose current incident. -### Step 2. View cross-dimension diagnostic insights +### Step 2: View cross-dimension diagnostic insights After getting basic info and automatic analysis insights, you can get more detailed info on abnormal status on other dimensions within the same metric in a holistic way using the **"Diagnostic tree"**. There are two display modes for a diagnostic tree: only show anomaly series or s By using "Diagnostic tree", customers can locate root cause of current incident into specific dimension. This significantly removes customer's effort to view each individual anomalies or pivot through different dimensions to find the major anomaly contribution. -### Step 3. 
View cross-metrics diagnostic insights using "Metrics graph" +### Step 3: View cross-metrics diagnostic insights using "Metrics graph" Sometimes, it's hard to analyze an issue by checking abnormal status of one single metric, but need to correlate multiple metrics together. Customers are able to configure a **Metrics graph**, which indicates the relationship between metrics. Refer to [How to build a metrics graph](metrics-graph.md) to get started. |
ai-services | Enable Anomaly Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md | Follow the H2 headings with a sentence about how the section contributes to the <!-- Introduction paragraph --> There are two common options to send email notifications that are supported in Metrics Advisor. One is to use webhooks and Azure Logic Apps to send email alerts, the other is to set up an SMTP server and use it to send email alerts directly. This section will focus on the first option, which is easier for customers who don't have an available SMTP server. -**Step 1.** Create a webhook in Metrics Advisor +**Step 1:** Create a webhook in Metrics Advisor A webhook is the entry point for all the information available from the Metrics Advisor service, and calls a user-provided API when an alert is triggered. All alerts can be sent through a webhook. Select the **Hooks** tab in your Metrics Advisor workspace, and select the **Cre There's one extra parameter of **Endpoint** that needs to be filled out, this could be done after completing Step 3 below. -**Step 2.** Create a Consumption logic app resource +**Step 2:** Create a Consumption logic app resource In the [Azure portal](https://portal.azure.com), create a Consumption logic app resource with a blank workflow by following the instructions in [Create an example Consumption logic app workflow](../../../logic-apps/quickstart-create-example-consumption-workflow.md). When you see the workflow designer opens, return to this tutorial. -**Step 3.** Add a trigger of **When an HTTP request is received** +**Step 3:** Add a trigger of **When an HTTP request is received** - Azure Logic Apps uses various actions to trigger workflows that are defined. For this use case, it uses the trigger named **When an HTTP request is received**. 
In the [Azure portal](https://portal.azure.com), create a Consumption logic app ![Screenshot that highlights the copy icon to copy the URL of your HTTP request trigger.](../media/tutorial/logic-apps-copy-url.png) -**Step 4.** Add a next step using 'HTTP' action +**Step 4:** Add a next step using 'HTTP' action Signals that are pushed through the webhook only contain limited information like timestamp, alertID, configurationID, etc. Detailed information needs to be queried using the callback URL provided in the signal. This step is to query detailed alert info. Signals that are pushed through the webhook only contain limited information lik ![Screenshot that highlights the api-keys](../media/tutorial/logic-apps-api-key.png) -**Step 5.** Add a next step to 'parse JSON' +**Step 5:** Add a next step to 'parse JSON' You need to parse the response of the API for easier formatting of email content. You need to parse the response of the API for easier formatting of email content } ``` -**Step 6.** Add a next step to 'create HTML table' +**Step 6:** Add a next step to 'create HTML table' A bunch of information has been returned from the API call, however, depending on your scenarios not all of the information may be useful. Choose the items that you care about and would like included in the alert email. Below is an example of an HTML table that chooses 'timestamp', 'metricGUID' and ![Screenshot of html table example](../media/tutorial/logic-apps-html-table.png) -**Step 7.** Add the final step to 'send an email' +**Step 7:** Add the final step to 'send an email' There are several options to send email, both Microsoft hosted and 3rd-party offerings. Customers may need to have a tenant/account for their chosen option. For example, when choosing 'Office 365 Outlook' as the server, a sign-in process is prompted to build the connection and authorization. An API connection will be established to use the email server to send alerts. 
Fill in the content that you'd like to include to 'Body', 'Subject' in the email ### Send anomaly notification through a Microsoft Teams channel This section will walk through the practice of sending anomaly notifications through a Microsoft Teams channel. This can help enable scenarios where team members are collaborating on analyzing anomalies that are detected by Metrics Advisor. The workflow is easy to configure and doesn't have a large number of prerequisites. -**Step 1.** Add a 'Incoming Webhook' connector to your Teams channel +**Step 1:** Add an 'Incoming Webhook' connector to your Teams channel - Navigate to the Teams channel that you'd like to send notification to, select '•••' (More options). - In the dropdown list, select 'Connectors'. Within the new dialog, search for 'Incoming Webhook' and click 'Add'. This section will walk through the practice of sending anomaly notifications thr ![Screenshot to copy URL](../media/tutorial/webhook-url.png) -**Step 2.** Create a new 'Teams hook' in Metrics Advisor +**Step 2:** Create a new 'Teams hook' in Metrics Advisor - Select 'Hooks' tab in left navigation bar, and select the 'Create hook' button at top right of the page. - Choose hook type of 'Teams', then input a name and paste the URL that you copied from the above step. This section will walk through the practice of sending anomaly notifications thr ![Screenshot to create a Teams hook](../media/tutorial/teams-hook.png) -**Step 3.** Apply the Teams hook to an alert configuration +**Step 3:** Apply the Teams hook to an alert configuration Go and select one of the data feeds that you have onboarded. Select a metric within the feed and open the metrics detail page. You can create an 'alerting configuration' to subscribe to anomalies that are detected and notify through a Teams channel. 
Select the '+' button and choose the hook that you created, fill in other fields This section will share the practice of using an SMTP server to send email notifications on anomalies that are detected. Make sure you have a usable SMTP server and have sufficient permission to get parameters like account name and password. -**Step 1.** Assign your account as the 'Cognitive Services Metrics Advisor Administrator' role +**Step 1:** Assign your account as the 'Cognitive Services Metrics Advisor Administrator' role - A user with the subscription administrator or resource group administrator privileges needs to navigate to the Metrics Advisor resource that was created in the Azure portal, and select the Access control (IAM) tab. - Select 'Add role assignments'. This section will share the practice of using an SMTP server to send email notif ![Screenshot that shows how to assign admin role to a specific role](../media/tutorial/access-control.png) -**Step 2.** Configure SMTP server in Metrics Advisor workspace +**Step 2:** Configure SMTP server in Metrics Advisor workspace After you've completed the above steps and have been successfully added as an administrator of the Metrics Advisor resource. Wait several minutes for the permissions to propagate. Then sign in to your Metrics Advisor workspace, you should be able to view a new tab named 'Email setting' on the left navigation panel. Select it and to continue configuration. Below is an example of a configured SMTP server: ![Screenshot that shows an example of a configured SMTP server](../media/tutorial/email-setting.png) -**Step 3.** Create an email hook in Metrics Advisor +**Step 3:** Create an email hook in Metrics Advisor After successfully configuring an SMTP server, you're set to create an 'email hook' in the 'Hooks' tab in Metrics Advisor. For more about creating an 'email hook', refer to [article on alerts](../how-tos/alerts.md#email-hook) and follow the steps to completion. 
-**Step 4.** Apply the email hook to an alert configuration +**Step 4:** Apply the email hook to an alert configuration Go and select one of the data feeds that you on-boarded, select a metric within the feed and open the metrics detail page. You can create an 'alerting configuration' to subscribe to the anomalies that have been detected and sent through emails. |
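The Logic Apps steps above hinge on one pattern: a webhook signal carries only identifying fields (timestamp, alertID, configurationID) plus a callback URL that must be queried for the full alert details. A minimal sketch of that flow outside Logic Apps; the payload field names and the subscription-key header are assumptions based on the step descriptions, not a documented schema:

```python
# Sketch of handling a Metrics Advisor webhook signal, assuming the payload
# carries the fields mentioned in the tutorial (timestamp, alertID,
# configurationID) plus a callback URL for detailed alert info.
# Field and header names are illustrative; the real schema may differ.
import json
from urllib import request as urlrequest

def parse_signal(body: bytes) -> dict:
    """Extract the identifying fields from a webhook signal payload."""
    signal = json.loads(body)
    return {
        "alert_id": signal["alertID"],
        "configuration_id": signal["configurationID"],
        "timestamp": signal["timestamp"],
        "callback": signal.get("callback"),  # URL to query detailed alert info
    }

def fetch_alert_details(signal: dict, api_key: str) -> dict:
    """Query the callback URL for full alert details (requires a live service)."""
    req = urlrequest.Request(
        signal["callback"],
        headers={"Ocp-Apim-Subscription-Key": api_key},  # assumed header name
    )
    with urlrequest.urlopen(req) as resp:
        return json.load(resp)

sample = json.dumps({
    "alertID": "a1",
    "configurationID": "c1",
    "timestamp": "2023-08-01T00:00:00Z",
    "callback": "https://example.contoso.com/alert/a1",  # hypothetical URL
}).encode()
print(parse_signal(sample)["alert_id"])  # a1
```

This mirrors steps 3 through 5 of the Logic Apps workflow: receive the HTTP request, call the callback URL with an API key, and parse the JSON response before formatting it for the notification.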
ai-services | Multi Service Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/multi-service-resource.md | You can access Azure AI services through two different resources: A multi-servic Azure AI services are represented by Azure [resources](../azure-resource-manager/management/manage-resources-portal.md) that you create under your Azure subscription. After you create a resource, you can use the keys and endpoint generated to authenticate your applications. +## Supported services with a multi-service resource ++The multi-service resource enables access to the following Azure AI services with a single key and endpoint. Use these links to find quickstart articles, samples, and more to start using your resource. ++| Service | Description | +| | | +| ![Content Moderator icon](./media/service-icons/content-moderator.svg) [Content Moderator](./content-moderator/index.yml) (retired) | Detect potentially offensive or unwanted content | +| ![Custom Vision icon](./media/service-icons/custom-vision.svg) [Custom Vision](./custom-vision-service/index.yml) | Customize image recognition to fit your business | +| ![Document Intelligence icon](./media/service-icons/document-intelligence.svg) [Document Intelligence](./document-intelligence/index.yml) | Turn documents into usable data at a fraction of the time and cost | +| ![Face icon](./medi) | Detect and identify people and emotions in images | +| ![Language icon](./media/service-icons/language.svg) [Language](./language-service/index.yml) | Build apps with industry-leading natural language understanding capabilities | +| ![Speech icon](./media/service-icons/speech.svg) [Speech](./speech-service/index.yml) | Speech to text, text to speech, translation and speaker recognition | +| ![Translator icon](./media/service-icons/translator.svg) [Translator](./translator/index.yml) | Translate more than 100 languages and dialects | +| ![Vision icon](./media/service-icons/vision.svg) 
[Vision](./computer-vision/index.yml) | Analyze content in images and videos | + ::: zone pivot="azportal" [!INCLUDE [Azure Portal quickstart](includes/quickstarts/management-azportal.md)] Azure AI services are represented by Azure [resources](../azure-resource-manager ## Next steps -* Now that you have a resource, you can authenticate your API requests to the following Azure AI services. Use these links to find quickstart articles, samples and more to start using your resource. - * [Content Moderator](./content-moderator/index.yml) (retired) - * [Custom Vision](./custom-vision-service/index.yml) - * [Document Intelligence](./document-intelligence/index.yml) - * [Face](./computer-vision/overview-identity.md) - * [Language](./language-service/index.yml) - * [Speech](./speech-service/index.yml) - * [Translator](./translator/index.yml) - * [Vision](./computer-vision/index.yml) +* Now that you have a resource, you can authenticate your API requests to one of the [supported Azure AI services](#supported-services-with-a-multi-service-resource). |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | description: Learn about the different model capabilities that are available wit Previously updated : 08/02/2023 Last updated : 08/22/2023 These models can only be used with Embedding API requests. | Model ID | Base model Regions | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) | | | | | | |-| text-embedding-ada-002 (version 2) | Canada East, East US, France Central, Japan East, North Central US, South Central US, West Europe | N/A |8,191 | Sep 2021 | +| text-embedding-ada-002 (version 2) | Canada East, East US, France Central, Japan East, North Central US, South Central US, UK South, West Europe | N/A |8,191 | Sep 2021 | | text-embedding-ada-002 (version 1) | East US, South Central US, West Europe | N/A |2,046 | Sep 2021 | ### DALL-E models (Preview) |
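The embeddings table above gives different max request sizes per model version: 8,191 tokens for `text-embedding-ada-002` (version 2) and 2,046 for version 1. A small sketch of a client-side guard based on those limits; the helper function is illustrative, and real token counts should come from a tokenizer rather than being guessed:

```python
# Sketch: per-version request limits for text-embedding-ada-002, taken from
# the table above (8,191 tokens for v2, 2,046 for v1). The helper is
# illustrative; compute token_count with a real tokenizer in practice.

MAX_REQUEST_TOKENS = {
    ("text-embedding-ada-002", 2): 8191,
    ("text-embedding-ada-002", 1): 2046,
}

def fits_in_request(model: str, version: int, token_count: int) -> bool:
    """Return True if an input of `token_count` tokens fits the model's limit."""
    limit = MAX_REQUEST_TOKENS.get((model, version))
    if limit is None:
        raise ValueError(f"unknown model/version: {model} v{version}")
    return token_count <= limit

print(fits_in_request("text-embedding-ada-002", 2, 5000))  # True: within 8,191
print(fits_in_request("text-embedding-ada-002", 1, 5000))  # False: over 2,046
```

Checking the limit before sending avoids a round trip that would fail server-side, which matters when batching many documents for embedding.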
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | Azure OpenAI on your data supports the following filetypes: * Microsoft PowerPoint files * PDF -There are some caveats about document structure and how it might affect the quality of responses from the model: +There is an [upload limit](../quotas-limits.md), and there are some caveats about document structure and how it might affect the quality of responses from the model: * The model provides the best citation titles from markdown (`.md`) files. There are some caveats about document structure and how it might affect the qual This will impact the quality of Azure Cognitive Search and the model response. -## Virtual network support & private link support +## Virtual network support & private network support -Azure OpenAI on your data does not currently support private endpoints. +You can protect Azure OpenAI resources in [virtual networks and private endpoints](/azure/ai-services/cognitive-services-virtual-networks) the same way as any Azure AI service. ++If you have Azure OpenAI resource protected by a private network, and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaionyourdata). The application will be reviewed in five business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request. +++Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow). ++After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service. 
++> [!NOTE] +> Virtual networks and private endpoints are only supported for the API, and are not currently supported for Azure OpenAI Studio. ++### Storage accounts in private virtual networks ++Storage accounts in virtual networks and private endpoints are currently not supported by Azure OpenAI on your data. ## Azure Role-based access controls (Azure RBAC) To add a new data source to your Azure OpenAI resource, you need the following Azure RBAC roles: |Azure RBAC role |Needed when | |||-|[Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) | You want to use Azure OpenAI on your data. | +|[Cognitive Services Contributor](../how-to/role-based-access-control.md#cognitive-services-contributor) | You want to use Azure OpenAI on your data. | |[Search Index Data Contributor](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) | You have an existing Azure Cognitive Search index that you want to use, instead of creating a new one. | |[Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) | You have an existing Blob storage container that you want to use, instead of creating a new one. | +## Document-level access control ++Azure OpenAI on your data lets you restrict the documents that can be used in responses for different users with Azure Cognitive Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document-level access, the search results returned from Azure Cognitive Search and used to generate a response will be trimmed based on the user's Azure Active Directory (Azure AD) group membership. You can only enable document-level access on existing Azure Cognitive Search indexes. To enable document-level access: ++1.
Follow the steps in the [Azure Cognitive Search documentation](/azure/search/search-security-trimming-for-azure-search-with-aad) to register your application and create users and groups. +1. [Index your documents with their permitted groups](/azure/search/search-security-trimming-for-azure-search-with-aad#index-document-with-their-permitted-groups). Be sure that your new [security fields](/azure/search/search-security-trimming-for-azure-search#create-security-field) have the schema below: + + ```json + {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true } + ``` ++ `group_ids` is the default field name. If you use a different field name like `my_group_ids`, you can map the field in [index field mapping](#index-field-mapping). ++1. Make sure each sensitive document in the index has the value set correctly on this security field to indicate the permitted groups of the document. +1. In [Azure OpenAI Studio](https://oai.azure.com/portal), add your data source. In the [index field mapping](#index-field-mapping) section, you can map zero or one value to the **Permitted groups** field, as long as the schema is compatible. If the **Permitted groups** field isn't mapped, document-level access won't be enabled. ++**Azure OpenAI Studio** ++Once the Azure Cognitive Search index is connected, your responses in the studio will have document access based on the Azure AD permissions of the logged in user. ++**Web app** ++If you are using a published [web app](#using-the-web-app), you need to redeploy it to upgrade to the latest version. The latest version of the web app includes the ability to retrieve the groups of the logged in user's Azure AD account, cache them, and include the group IDs in each API request. ++**API** ++When using the API, pass the `filter` parameter in each API request. For example: ++```json +{ + "messages": [ + { + "role": "user", + "content": "who is my manager?"
+ } + ], + "dataSources": [ + { + "type": "AzureCognitiveSearch", + "parameters": { + "endpoint": "'$SearchEndpoint'", + "key": "'$SearchKey'", + "indexName": "'$SearchIndex'", + "filter": "my_group_ids/any(g:search.in(g, 'group_id1, group_id2'))" + } + } + ] +} +``` +* `my_group_ids` is the field name that you selected for **Permitted groups** during [fields mapping](#index-field-mapping). +* `group_id1, group_id2` are groups attributed to the logged in user. The client application can retrieve and cache users' groups. +++## Schedule automatic index refreshes ++To keep your Azure Cognitive Search index up-to-date with your latest data, you can schedule a refresh for it that runs automatically rather than manually updating it every time your data is updated. To enable an automatic index refresh: ++1. [Add a data source](../quickstart.md) using Azure OpenAI Studio. +1. Under **Select or add data source**, select **Indexer schedule** and choose the refresh cadence you would like to apply. ++ :::image type="content" source="../media/use-your-data/indexer-schedule.png" alt-text="A screenshot of the indexer schedule in Azure OpenAI Studio." lightbox="../media/use-your-data/indexer-schedule.png"::: ++After the data ingestion is set to a cadence other than once, Azure Cognitive Search indexers will be created with a schedule equivalent to `0.5 * the cadence specified`. This means that at the specified cadence, the indexers will pull the documents that were added, modified, or deleted from the storage container, and reprocess and index them. This ensures that the updated data gets preprocessed and indexed in the final index at the desired cadence automatically. The intermediate assets created in the Azure Cognitive Search resource will not be cleaned up after ingestion to allow for future runs.
These assets are: + - `{Index Name}-index` + - `{Index Name}-indexer` + - `{Index Name}-indexer-chunk` + - `{Index Name}-datasource` + - `{Index Name}-skillset` ++To modify the schedule, you can use the [Azure portal](https://portal.azure.com/). ++1. Open your search resource page in the Azure portal. +1. Select **Indexers** from the left pane. + + :::image type="content" source="../media/use-your-data/indexers-azure-portal.png" alt-text="A screenshot of the indexers tab in the Azure portal." lightbox="../media/use-your-data/indexers-azure-portal.png"::: ++1. Perform the following steps on the two indexers that have your index name as a prefix. + 1. Select the indexer to open it. Then select the **settings** tab. + 1. Update the schedule to the desired cadence from **Schedule**, or specify a custom cadence from **Interval (minutes)**. + + :::image type="content" source="../media/use-your-data/indexer-schedule-azure-portal.png" alt-text="A screenshot of the settings page for an individual indexer." lightbox="../media/use-your-data/indexer-schedule-azure-portal.png"::: ++ 1. Select **Save**. + ## Recommended settings Use the following sections to help you configure Azure OpenAI on your data for optimal results. Set a limit on the number of tokens per model response. The upper limit for Azur This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model may more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario. -### Semantic search +### Search options ++Azure OpenAI on your data provides several search options you can use when you add your data source, leveraging the following types of search: ++* [Simple search](/azure/search/search-lucene-query-architecture) +* [Semantic search](/azure/search/semantic-search-overview) +* [Vector search](/azure/search/vector-search-overview) using Ada [embedding](./understand-embeddings.md) models.
++ To enable vector search, you will need a `text-embedding-ada-002` deployment in your Azure OpenAI resource. Select your embedding deployment when connecting your data, then select one of the vector search types under **Data management**. > [!IMPORTANT]-> * Semantic search is subject to [additional pricing](/azure/search/semantic-search-overview#availability-and-pricing) +> * [Semantic search](/azure/search/semantic-search-overview#availability-and-pricing) and [vector search](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) are subject to additional pricing. > * Currently Azure OpenAI on your data supports semantic search for English data only. Only enable semantic search if both your documents and use case are in English. -If [semantic search](/azure/search/semantic-search-overview) is enabled for your Azure Cognitive Search service, you are more likely to produce better retrieval of your data, which can improve response and citation quality. +| Search option | Retrieval type | Additional pricing? | +|||| +| *simple* | Simple search | No additional pricing. | +| *semantic* | Semantic search | Additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. | +| *vector* | Vector search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. | +| *vector + simple* | A hybrid of vector search and simple search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. | +| *vector + semantic* | A hybrid of vector search and semantic search for retrieval. 
| [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. | ++The optimal search option can vary depending on your dataset and use case. You may need to experiment with multiple options to determine which works best for your use case. ### Index field mapping When customizing the app, we recommend: Now users will be asked to sign in with their Azure Active Directory account to be able to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's login information in any way other than verifying they are a member of your tenant. +### Chat history ++You can enable chat history for your users of the web app. By enabling the feature, your users will have access to their individual previous queries and responses. ++To enable chat history, deploy or redeploy your model as a web app using [Azure OpenAI Studio](https://oai.azure.com/portal). +++> [!IMPORTANT] +> Enabling chat history will create a [Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and incur [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage used. ++Once you've enabled chat history, your users will be able to show and hide it in the top right corner of the app. When the history is shown, they can rename or delete conversations. While they're logged into the app, conversations will be automatically ordered from newest to oldest, and named based on the first query in the conversation. +++#### Deleting your Cosmos DB instance ++Deleting your web app does not delete your Cosmos DB instance automatically.
To delete your Cosmos DB instance, along with all stored chats, you need to navigate to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option enabled on the studio, your users will be notified of a connection error, but can continue to use the web app without access to the chat history. ### Using the API You can send a streaming request using the `stream` parameter, allowing data to #### Conversation history for better results -When chatting with a model, providing a history of the chat will help the model return higher quality results. +When you chat with a model, providing a history of the chat will help the model return higher quality results. ```json { |
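The conversation-history pattern described above can be sketched in a few lines. This is an illustrative sketch only: the sample turns are invented, and the helper function is not part of any SDK; the body simply mirrors the `messages` array and `stream` parameter discussed in this article.

```python
# Illustrative sketch: carry prior turns so the model sees the conversation
# history. The sample turns below are invented for demonstration.
history = [
    {"role": "user", "content": "What file types are supported?"},
    {"role": "assistant", "content": "Text, Markdown, HTML, Word, PowerPoint, and PDF files."},
]

def build_request(history, new_question, stream=True):
    """Append the new user turn to the prior turns and build the request body."""
    messages = history + [{"role": "user", "content": new_question}]
    return {"messages": messages, "stream": stream}

body = build_request(history, "Is there an upload limit?")
```

Each request resends the full history, so the client is responsible for trimming old turns as the conversation approaches the model's token limit.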
ai-services | Business Continuity Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/business-continuity-disaster-recovery.md | keywords: # Business Continuity and Disaster Recovery (BCDR) considerations with Azure OpenAI Service -Azure OpenAI is available in multiple regions. Since subscription keys are region bound, when a customer acquires a key, they select the region in which their deployments will reside and from then on, all operations stay associated with that Azure server region. +Azure OpenAI is available in multiple regions. When you create an Azure OpenAI resource, you specify a region. From then on, your resource and all its operations stay associated with that Azure server region. -It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either fail-over into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications. +It's rare, but not impossible, to encounter a network issue that hits an entire region. If your service needs to always be available, then you should design it to either failover into another region or split the workload between two or more regions. Both approaches require at least two Azure OpenAI resources in different regions. This article provides general recommendations for how to implement Business Continuity and Disaster Recovery (BCDR) for your Azure OpenAI applications. -## Best practices --Today customers will call the endpoint provided during deployment for both deployments and inference. These operations are stateless, so no data is lost in the case that a region becomes unavailable. 
--If a region is non-operational customers must take steps to ensure service continuity. +## BCDR requires custom code -## Business continuity +Today, customers call the endpoint provided during deployment for inferencing. Inferencing operations are stateless, so no data is lost if a region becomes unavailable. -The following set of instructions applies both customers using default endpoints and those using custom endpoints. +If a region is nonoperational, customers must take steps to ensure service continuity. -### Default endpoint recovery +## BCDR for base model & customized model -If you're using a default endpoint, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription. +If you're using the base models, you should configure your client code to monitor errors, and if the errors persist, be prepared to redirect to another region of your choice where you have an Azure OpenAI subscription. Follow these steps to configure your client to monitor errors: -1. Use the [models page](../concepts/models.md) to identify the list of available regions for Azure OpenAI. +1. Use the [models](/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability) page to choose the datacenters and regions that are right for you. -2. Select a primary and one secondary/backup regions from the list. +2. Select a primary and one (or more) secondary/backup regions from the list. -3. Create Azure OpenAI resources for each region selected. +3. Create Azure OpenAI resources for each selected region. 4. For the primary region and any backup regions, your code will need to know: - a. Base URI for the resource -- b. Regional access key or Azure Active Directory access + - Base URI for the resource + - Regional access key or Azure Active Directory access -5.
Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors). +5. Configure your code so that you monitor connectivity errors (typically connection timeouts and service unavailability errors). - a. Given that networks yield transient errors, for single connectivity issue occurrences, the suggestion is to retry. -- b. For persistence redirect traffic to the backup resource in the region you've created. --## BCDR requires custom code + - Because networks yield transient errors, retry when a single connectivity failure occurs. + - For persistent connectivity issues, redirect traffic to the backup resource in the region(s) you've created. -The recovery from regional failures for this usage type can be performed instantaneously and at a very low cost. This does however, require custom development of this functionality on the client side of your application. +If you have fine-tuned a model in your primary region, you will need to retrain the base model in the secondary region(s) using the same training data, and then follow the preceding steps. |
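The monitoring and failover steps above can be sketched in client code. This is a minimal illustration, not an official SDK pattern: the region list, placeholder endpoints and keys, and the `call_region` transport callable are all invented for the example.

```python
import time

# Hypothetical region list; base URIs and keys are placeholders.
REGIONS = [
    {"name": "primary", "base_uri": "https://primary.openai.azure.com", "key": "KEY1"},
    {"name": "backup", "base_uri": "https://backup.openai.azure.com", "key": "KEY2"},
]

def complete_with_failover(call_region, regions=REGIONS, retries=2, backoff=0.05):
    """Try each region in order; retry transient errors, then fail over."""
    last_error = None
    for region in regions:
        for attempt in range(retries):
            try:
                return call_region(region)  # your transport: HTTP call to region["base_uri"]
            except ConnectionError as err:  # stand-in for timeouts / 5xx responses
                last_error = err
                time.sleep(backoff * (attempt + 1))  # brief pause before retrying
        # Persistent errors in this region: fall through to the next region.
    raise last_error

# Demonstration with a fake transport whose primary region is down:
def fake_call(region):
    if region["name"] == "primary":
        raise ConnectionError("region unavailable")
    return "completion from " + region["name"]

result = complete_with_failover(fake_call)  # fails over to the backup region
```

A production client would also distinguish retryable errors (timeouts, 503s) from non-retryable ones (authentication failures) before failing over.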
ai-services | Completions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/completions.md | Title: 'How to generate text with Azure OpenAI Service' -description: Learn how to generate or manipulate text, including code with Azure OpenAI +description: Learn how to generate or manipulate text, including code by using a completion endpoint in Azure OpenAI Service. Previously updated : 06/24/2022 Last updated : 08/15/2023 recommendations: false keywords: # Learn how to generate or manipulate text -The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful text-in, text-out interface to any of our [models](../concepts/models.md). You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, "As Descartes said, I think, therefore", it will return the completion " I am" with high probability. +Azure OpenAI Service provides a **completion endpoint** that can be used for a wide variety of tasks. The endpoint supplies a simple yet powerful text-in, text-out interface to any [Azure OpenAI model](../concepts/models.md). To trigger the completion, you input some text as a prompt. The model generates the completion and attempts to match your context or pattern. Suppose you provide the prompt "As Descartes said, I think, therefore" to the API. For this prompt, Azure OpenAI returns the completion " I am" with high probability. -The best way to start exploring completions is through our playground in [Azure OpenAI Studio](https://oai.azure.com). +The best way to start exploring completions is through the playground in [Azure OpenAI Studio](https://oai.azure.com).
It's a simple text box where you enter a prompt to generate a completion. You can start with a simple prompt like this one: -`write a tagline for an ice cream shop` +```console +write a tagline for an ice cream shop +``` -once you submit, you'll see something like the following generated: +After you enter your prompt, Azure OpenAI displays the completion: -``` console -write a tagline for an ice cream shop +```console we serve up smiles with every scoop! ``` -The actual completion results you see may differ because the API is stochastic by default. In other words, you might get a slightly different completion every time you call it, even if your prompt stays the same. You can control this behavior with the temperature setting. +The completion results that you see can differ because the Azure OpenAI API produces fresh output for each interaction. You might get a slightly different completion each time you call the API, even if your prompt stays the same. You can control this behavior with the `Temperature` setting. -This simple, "text in, text out" interface means you can "program" the model by providing instructions or just a few examples of what you'd like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a middle school student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond. +The simple text-in, text-out interface means you can "program" the Azure OpenAI model by providing instructions or just a few examples of what you'd like it to do. The output success generally depends on the complexity of the task and quality of your prompt. A general rule is to think about how you would write a word problem for a pre-teenage student to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond. 
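Outside the playground, a completion call is an HTTP POST whose JSON body carries the prompt and settings such as `temperature` and `max_tokens`. The sketch below only builds the request; the resource name, deployment name, and `api-version` are placeholder assumptions you would replace with your own values before sending the body with your preferred HTTP client.

```python
import json

RESOURCE = "my-resource"      # assumption: your Azure OpenAI resource name
DEPLOYMENT = "my-deployment"  # assumption: your model deployment name
API_VERSION = "2023-05-15"    # example API version; check the REST reference

def build_completion_request(prompt, temperature=1.0, max_tokens=60):
    """Return the endpoint URL and JSON body for a completions call."""
    url = (f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
           f"{DEPLOYMENT}/completions?api-version={API_VERSION}")
    body = json.dumps({"prompt": prompt, "temperature": temperature,
                       "max_tokens": max_tokens})
    return url, body

url, body = build_completion_request("write a tagline for an ice cream shop")
```

Authentication is typically an `api-key` header or an Azure AD bearer token on the POST request.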
> [!NOTE]-> Keep in mind that the models' training data cuts off in October 2019, so they may not have knowledge of current events. We plan to add more continuous training in the future. +> The model training data can be different for each model type. The [latest model's training data currently extends through September 2021 only](/azure/ai-services/openai/concepts/models). Depending on your prompt, the model might not have knowledge of related current events. -## Prompt design +## Design prompts -### Basics +Azure OpenAI Service models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you must be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt. -OpenAI's models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in showing what you want. Showing, not just telling, is often the secret to a good prompt. +The models try to predict what you want from the prompt. If you enter the prompt "Give me a list of cat breeds," the model doesn't automatically assume you're asking for a list only. You might be starting a conversation where your first words are "Give me a list of cat breeds" followed by "and I'll tell you which ones I like." If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks. -The models try to predict what you want from the prompt. If you send the words "Give me a list of cat breeds," the model wouldn't automatically assume that you're asking for a list of cat breeds. You could as easily be asking the model to continue a conversation where the first words are "Give me a list of cat breeds" and the next ones are "and I'll tell you which ones I like." 
If the model only assumed that you wanted a list of cats, it wouldn't be as good at content creation, classification, or other tasks. +### Guidelines for creating robust prompts -There are three basic guidelines to creating prompts: +There are three basic guidelines for creating useful prompts: -**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that's what you want. +- **Show and tell**. Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, include these details in your prompt to show the model. -**Provide quality data.** If you're trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples — the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume that the mistakes are intentional and it can affect the response. +- **Provide quality data**. If you're trying to build a classifier or get the model to follow a pattern, make sure there are enough examples. Be sure to proofread your examples. The model is smart enough to resolve basic spelling mistakes and give you a meaningful response. Conversely, the model might assume the mistakes are intentional, which can affect the response. -**Check your settings.** The temperature and top_p settings control how deterministic the model is in generating a response. If you're asking it for a response where there's only one right answer, then you'd want to set these settings to lower values. If you're looking for a response that's not obvious, then you might want to set them to higher values. 
The number one mistake people use with these settings is assuming that they're "cleverness" or "creativity" controls. +- **Check your settings**. Probability settings, such as `Temperature` and `Top P`, control how deterministic the model is in generating a response. If you're asking for a response where there's only one right answer, you should specify lower values for these settings. If you're looking for a response that's not obvious, you might want to use higher values. The most common mistake users make with these settings is assuming they control "cleverness" or "creativity" in the model response. -### Troubleshooting +### Troubleshooting for prompt issues -If you're having trouble getting the API to perform as expected, follow this checklist: +If you're having trouble getting the API to perform as expected, review the following points for your implementation: -1. Is it clear what the intended generation should be? -2. Are there enough examples? -3. Did you check your examples for mistakes? (The API won't tell you directly) -4. Are you using temp and top_p correctly? +- Is it clear what the intended generation should be? +- Are there enough examples? +- Did you check your examples for mistakes? (The API doesn't tell you directly.) +- Are you using the `Temperature` and `Top P` probability settings correctly? -## Classification +## Classify text -To create a text classifier with the API we provide a description of the task and provide a few examples. In this demonstration we show the API how to classify the sentiment of Tweets. +To create a text classifier with the API, you provide a description of the task and provide a few examples. In this demonstration, you show the API how to classify the _sentiment_ of text messages. The sentiment expresses the overall feeling or expression in the text. ```console-This is a tweet sentiment classifier +This is a text message sentiment classifier -Tweet: "I loved the new Batman movie!" 
+Message: "I loved the new adventure movie!" Sentiment: Positive -Tweet: "I hate it when my phone battery dies." +Message: "I hate it when my phone battery dies." Sentiment: Negative -Tweet: "My day has been 👍" +Message: "My day has been 👍" Sentiment: Positive -Tweet: "This is the link to the article" +Message: "This is the link to the article" Sentiment: Neutral -Tweet: "This new music video blew my mind" +Message: "This new music video is unreal" Sentiment: ``` -It's worth paying attention to several features in this example: +### Guidelines for designing text classifiers -**1. Use plain language to describe your inputs and outputs** -We use plain language for the input "Tweet" and the expected output "Sentiment." For best practices, start with plain language descriptions. While you can often use shorthand or keys to indicate the input and output, when building your prompt it's best to start by being as descriptive as possible and then working backwards removing extra words as long as the performance to the prompt is consistent. +This demonstration reveals several guidelines for designing classifiers: -**2. Show the API how to respond to any case** -In this example we provide multiple outcomes "Positive", "Negative" and "Neutral." A neutral outcome is important because there will be many cases where even a human would have a hard time determining if something is positive or negative and situations where it's neither. +- **Use plain language to describe your inputs and outputs**. Use plain language for the input "Message" and the expected value that expresses the "Sentiment." For best practices, start with plain language descriptions. You can often use shorthand or keys to indicate the input and output when building your prompt, but it's best to start by being as descriptive as possible. Then you can work backwards and remove extra words as long as the performance to the prompt is consistent. -**3. 
You can use text and emoji** -The classifier is a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them. +- **Show the API how to respond to any case**. The demonstration provides multiple outcomes: "Positive," "Negative," and "Neutral." Supporting a neutral outcome is important because there are many cases where even a human can have difficulty determining if something is positive or negative. -**4. You need fewer examples for familiar tasks** -For this classifier we only provided a handful of examples. This is because the API already has an understanding of sentiment and the concept of a tweet. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples. +- **Use emoji and text, per the common expression**. The demonstration shows that the classifier can be a mix of text and emoji 👍. The API reads emoji and can even convert expressions to and from them. For the best response, use common forms of expression for your examples. -### Improving the classifier's efficiency +- **Use fewer examples for familiar tasks**. This classifier provides only a handful of examples because the API already has an understanding of sentiment and the concept of a text message. If you're building a classifier for something the API might not be familiar with, it might be necessary to provide more examples. -Now that we have a grasp of how to build a classifier, let's take that example and make it even more efficient so that we can use it to get multiple results back from one API call. +### Multiple results from a single API call -``` -This is a tweet sentiment classifier +Now that you understand how to build a classifier, let's expand on the first demonstration to make it more efficient. You want to be able to use the classifier to get multiple results back from a single API call. -Tweet: "I loved the new Batman movie!" 
+```console +This is a text message sentiment classifier ++Message: "I loved the new adventure movie!" Sentiment: Positive -Tweet: "I hate it when my phone battery dies" +Message: "I hate it when my phone battery dies" Sentiment: Negative -Tweet: "My day has been 👍" +Message: "My day has been 👍" Sentiment: Positive -Tweet: "This is the link to the article" +Message: "This is the link to the article" Sentiment: Neutral -Tweet text -1. "I loved the new Batman movie!" +Message text +1. "I loved the new adventure movie!" 2. "I hate it when my phone battery dies" 3. "My day has been 👍" 4. "This is the link to the article"-5. "This new music video blew my mind" +5. "This new music video is unreal" -Tweet sentiment ratings: +Message sentiment ratings: 1: Positive 2: Negative 3: Positive 4: Neutral 5: Positive -Tweet text -1. "I can't stand homework" -2. "This sucks. I'm bored 😠" -3. "I can't wait for Halloween!!!" +Message text +1. "He doesn't like homework" +2. "The taxi is late. She's angry 😠" +3. "I can't wait for the weekend!!!" 4. "My cat is adorable ❤️❤️"-5. "I hate chocolate" +5. "Let's try chocolate bananas" -Tweet sentiment ratings: +Message sentiment ratings: 1. ``` -After showing the API how tweets are classified by sentiment we then provide it a list of tweets and then a list of sentiment ratings with the same number index. The API is able to pick up from the first example how a tweet is supposed to be classified. In the second example it sees how to apply this to a list of tweets. This allows the API to rate five (and even more) tweets in just one API call. --It's important to note that when you ask the API to create lists or evaluate text you need to pay extra attention to your probability settings (Top P or Temperature) to avoid drift. --1. Make sure your probability setting is calibrated correctly by running multiple tests. +This demonstration shows the API how to classify text messages by sentiment. 
You provide a numbered list of messages and a list of sentiment ratings with the same number index. The API uses the information in the first demonstration to learn how to classify sentiment for a single text message. In the second demonstration, the model learns how to apply the sentiment classification to a list of text messages. This approach allows the API to rate five (and even more) text messages in a single API call. -2. Don't make your list too long or the API is likely to drift. +> [!IMPORTANT] +> When you ask the API to create lists or evaluate text, it's important to help the API avoid drift. Here are some points to follow: +> +> - Pay careful attention to your values for the `Top P` or `Temperature` probability settings. +> - Run multiple tests to make sure your probability settings are calibrated correctly. +> - Don't use long lists. Long lists can lead to drift. -+## Trigger ideas -## Generation +One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. Suppose you're writing a mystery novel and you need some story ideas. You can give the API a list of a few ideas and it tries to add more ideas to your list. The API can create business plans, character descriptions, marketing slogans, and much more from just a small handful of examples. -One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. You can give the API a list of a few story ideas and it will try to add to that list. We've seen it create business plans, character descriptions and marketing slogans just by providing it a handful of examples. In this demonstration we'll use the API to create more examples for how to use virtual reality in the classroom: +In the next demonstration, you use the API to create more examples for how to use virtual reality in the classroom: -``` +```console Ideas involving education and virtual reality 1. 
Virtual Mars Students get to explore Mars via virtual reality and go on missions to collect a 2. ``` -All we had to do in this example is provide the API with just a description of what the list is about and one example. We then prompted the API with the number `2.` indicating that it's a continuation of the list. +This demonstration provides the API with a basic description for your list along with one list item. Then you use an incomplete prompt of "2." to trigger a response from the API. The API interprets the incomplete entry as a request to generate similar items and add them to your list. -Although this is a very simple prompt, there are several details worth noting: +### Guidelines for triggering ideas -**1. We explained the intent of the list**<br> -Just like with the classifier, we tell the API up front what the list is about. This helps it focus on completing the list and not trying to guess what the pattern is behind it. +Although this demonstration uses a simple prompt, it highlights several guidelines for triggering new ideas: -**2. Our example sets the pattern for the rest of the list**<br> -Because we provided a one-sentence description, the API is going to try to follow that pattern for the rest of the items it adds to the list. If we want a more verbose response, we need to set that up from the start. +- **Explain the intent of the list**. Similar to the demonstration for the text classifier, you start by telling the API what the list is about. This approach helps the API to focus on completing the list rather than trying to determine patterns by analyzing the text. -**3. We prompt the API by adding an incomplete entry**<br> -When the API sees `2.` and the prompt abruptly ends, the first thing it tries to do is figure out what should come after it. Since we already had an example with number one and gave the list a title, the most obvious response is to continue adding items to the list. +- **Set the pattern for the items in the list**. 
When you provide a one-sentence description, the API tries to follow that pattern when generating new items for the list. If you want a more verbose response, you need to establish that intent with more detailed text input to the API. -**Advanced generation techniques**<br> -You can improve the quality of the responses by making a longer more diverse list in your prompt. One way to do that is to start off with one example, let the API generate more and select the ones that you like best and add them to the list. A few more high-quality variations can dramatically improve the quality of the responses. +- **Prompt the API with an incomplete entry to trigger new ideas**. When the API encounters text that seems incomplete, such as the prompt text "2.," it first tries to determine any text that might complete the entry. Because the demonstration had a list title and an example with the number "1." and accompanying text, the API interpreted the incomplete prompt text "2." as a request to continue adding items to the list. -+- **Explore advanced generation techniques**. You can improve the quality of the responses by making a longer more diverse list in your prompt. One approach is to start with one example, let the API generate more examples, and then select the examples you like best and add them to the list. A few more high-quality variations in your examples can dramatically improve the quality of the responses. -## Conversation +## Conduct conversations -The API is extremely adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, we've seen the API perform as a customer service chatbot that intelligently answers questions without ever getting flustered or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples. 
+Starting with the release of [GPT-35-Turbo and GPT-4](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions), we recommend that you create conversational generation and chatbots by using models that support the _chat completion endpoint_. The chat completion models and endpoint require a different input structure than the completion endpoint. -Here's an example of the API playing the role of an AI answering questions: +The API is adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, the API can perform as a customer service chatbot that intelligently answers questions without getting flustered, or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples. -``` +In this demonstration, the API supplies the role of an AI answering questions: ++```console The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly. Human: Hello, who are you? AI: I am an AI created by OpenAI. How can I help you today? Human: ``` -This is all it takes to create a chatbot capable of carrying on a conversation. But underneath its simplicity there are several things going on that are worth paying attention to: --**1. We tell the API the intent but we also tell it how to behave** -Just like the other prompts, we cue the API into what the example represents, but we also add another key detail: we give it explicit instructions on how to interact with the phrase "The assistant is helpful, creative, clever, and very friendly." --Without that instruction the API might stray and mimic the human it's interacting with and become sarcastic or some other behavior we want to avoid. --**2. We give the API an identity** -At the start we have the API respond as an AI that was created by OpenAI. 
While the API has no intrinsic identity, this helps it respond in a way that's as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots. If you tell the API to respond as a woman who works as a research scientist in biology, you'll get intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background. +Let's look at a variation for a chatbot named "Cramer," an amusing and somewhat helpful virtual assistant. To help the API understand the character of the role, you provide a few examples of questions and answers. All it takes is just a few sarcastic responses and the API can pick up the pattern and provide an endless number of similar responses. -In this example we create a chatbot that is a bit sarcastic and reluctantly answers questions: --``` -Marv is a chatbot that reluctantly answers questions. +```console +Cramer is a chatbot that reluctantly answers questions. ### User: How many pounds are in a kilogram?-Marv: This again? There are 2.2 pounds in a kilogram. Please make a note of this. +Cramer: This again? There are 2.2 pounds in a kilogram. Please make a note of this. ### User: What does HTML stand for?-Marv: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future. +Cramer: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future. ### User: When did the first airplane fly?-Marv: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away. +Cramer: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they'd come and take me away. ### User: Who was the first man in space?-Marv: +Cramer: ``` -To create an amusing and somewhat helpful chatbot we provide a few examples of questions and answers showing the API how to reply. 
All it takes is just a few sarcastic responses and the API is able to pick up the pattern and provide an endless number of snarky responses. +### Guidelines for designing conversations -+Our demonstrations show how easily you can create a chatbot that's capable of carrying on a conversation. Although it looks simple, this approach follows several important guidelines: -## Transformation +- **Define the intent of the conversation**. Just like the other prompts, you describe the intent of the interaction to the API. In this case, "a conversation." This input prepares the API to process subsequent input according to the initial intent. -The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways. +- **Tell the API how to behave**. A key detail in this demonstration is the explicit instructions for how the API should interact: "The assistant is helpful, creative, clever, and very friendly." Without your explicit instructions, the API might stray and mimic the human it's interacting with. The API might become unfriendly or exhibit other undesirable behavior. -### Translation +- **Give the API an identity**. At the start, you have the API respond as an AI created by OpenAI. While the API has no intrinsic identity, the character description helps the API respond in a way that's as close to the truth as possible. You can use character identity descriptions in other ways to create different kinds of chatbots. If you tell the API to respond as a research scientist in biology, you receive intelligent and thoughtful comments from the API similar to what you'd expect from someone with that background. 
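The conversation guidelines above carry over to the chat completion models that this article recommends for conversational scenarios. The following sketch is illustrative only: it shows how the same assistant's intent, behavior, and identity could be expressed in the chat completions input structure. The actual API call is omitted because it requires a deployed chat model and credentials.

```python
# The system message states intent, behavior, and identity up front,
# mirroring the instruction line used in the completion-style prompt.
messages = [
    {
        "role": "system",
        "content": (
            "You are an AI assistant created by OpenAI. "
            "You are helpful, creative, clever, and very friendly."
        ),
    },
    # Earlier turns give the model examples of the expected tone.
    {"role": "user", "content": "Hello, who are you?"},
    {"role": "assistant", "content": "I am an AI created by OpenAI. How can I help you today?"},
    # The latest user message is what the model responds to next.
    {"role": "user", "content": "Can you help me plan a trip?"},
]

roles = [m["role"] for m in messages]
print(roles)  # → ['system', 'user', 'assistant', 'user']
```

A request would pass `messages` to a chat completions call; the exact client call depends on your SDK version and deployment, so it isn't shown here.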
-In this example we show the API how to convert from English to French:
+## Transform text
-The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways.
+The API is a language model that's familiar with various ways that words and characters can be used to express information. The knowledge data supports transforming text from natural language into code, and translating between other languages and English. The API is also able to understand content on a level that allows it to summarize, convert, and express it in different ways. Let's look at a few examples.
+
+### Translate from one language to another
+
+This demonstration instructs the API on how to convert English language phrases into French:
+
+```console
English: I do not speak French.
French: Je ne parle pas français.
English: See you later!
French: À tout à l'heure !
English:
```
-This example works because the API already has a grasp of French, so there's no need to try to teach it this language. Instead, we just need to provide enough examples that API understands that it's converting from one language to another.
+This example works because the API already has a grasp of the French language. You don't need to try to teach the language to the API. You just need to provide enough examples to help the API understand your request to convert from one language to another.
-If you want to translate from English to a language the API is unfamiliar with you'd need to provide it with more examples and a fine-tuned model to do it fluently.
+If you want to translate from English to a language the API doesn't recognize, you need to provide the API with more examples and a fine-tuned model that can produce fluent translations.
-### Conversion
+### Convert between text and emoji
-In this example we convert the name of a movie into emoji. This shows the adaptability of the API to picking up patterns and working with other characters.
+This demonstration converts the name of a movie from text into emoji characters.
This example shows the adaptability of the API to pick up patterns and work with other characters. -``` -Back to Future: 👨👴🚗🕒 -Batman: 🤵🦇 -Transformers: 🚗🤖 -Wonder Woman: 👸🏻👸🏼👸🏽👸🏾👸🏿 -Spider-Man: 🕸🕷🕸🕸🕷🕸 -Winnie the Pooh: 🐻🐼🐻 -The Godfather: 👨👩👧🕵🏻‍♂️👲💥 -Game of Thrones: 🏹🗡🗡🏹 -Spider-Man: +```console +Carpool Time: 👨👴👩🚗🕒 +Robots in Cars: 🚗🤖 +Super Femme: 👸🏻👸🏼👸🏽👸🏾👸🏿 +Webs of the Spider: 🕸🕷🕸🕸🕷🕸 +The Three Bears: 🐻🐼🐻 +Mobster Family: 👨👩👧🕵🏻‍♂️👲💥 +Arrows and Swords: 🏹🗡🗡🏹 +Snowmobiles: ``` -## Summarization +### Summarize text -The API is able to grasp the context of text and rephrase it in different ways. In this example, the API takes a block of text and creates an explanation a child would understand. This illustrates that the API has a deep grasp of language. +The API can grasp the context of text and rephrase it in different ways. In this demonstration, the API takes a block of text and creates an explanation that's understandable by a primary-age child. This example illustrates that the API has a deep grasp of language. -``` +```console My ten-year-old asked me what this passage means: """ A neutron star is the collapsed core of a massive supergiant star, which had a total mass of between 10 and 25 solar masses, possibly more if the star was especially metal-rich.[1] Neutron stars are the smallest and densest stellar objects, excluding black holes and hypothetical white holes, quark stars, and strange stars.[2] Neutron stars have a radius on the order of 10 kilometres (6.2 mi) and a mass of about 1.4 solar masses.[3] They result from the supernova explosion of a massive star, combined with gravitational collapse, that compresses the core past white dwarf star density to that of atomic nuclei. I rephrased it for him, in plain language a ten-year-old can understand: """ ``` -In this example we place whatever we want summarized between the triple quotes. 
It's worth noting that we explain both before and after the text to be summarized what our intent is and who the target audience is for the summary. This is to keep the API from drifting after it processes a large block of text.
+### Guidelines for producing text summaries
-## Completion
+Text summarization often involves supplying large amounts of text to the API. To help prevent the API from drifting after it processes a large block of text, follow these guidelines:
-While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. For example, if given this prompt, the API will continue the train of thought about vertical farming. You can lower the temperature setting to keep the API more focused on the intent of the prompt or increase it to let it go off on a tangent.
+- **Enclose the text to summarize within triple double quotes**. In this example, you enter three double quotes (""") on a separate line before and after the block of text to summarize. This formatting style clearly defines the start and end of the large block of text to process.
-```
+- **Explain the summary intent and target audience before and after the summary**. Notice that this example differs from the others because you provide instructions to the API two times: before and after the text to process. The redundant instructions help the API to focus on your intended task and avoid drift.
+
+## Complete partial text and code inputs
+
+While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off.
+
+In this demonstration, you supply a text prompt to the API that appears to be incomplete. You stop the text entry on the word "and." The API interprets the incomplete text as a trigger to continue your train of thought.
++```console Vertical farming provides a novel solution for producing food locally, reducing transportation costs and ``` -This next prompt shows how you can use completion to help write React components. We send some code to the API, and it's able to continue the rest because it has an understanding of the React library. We recommend using models from our Codex series for tasks that involve understanding or generating code. Currently, we support two Codex models: `code-davinci-002` and `code-cushman-001`. For more information about Codex models, see the [Codex models](../concepts/legacy-models.md#codex-models) section in [Models](../concepts/models.md). +This next demonstration shows how you can use the completion feature to help write `React` code components. You begin by sending some code to the API. You stop the code entry with an open parenthesis `(`. The API interprets the incomplete code as a trigger to complete the `HeaderComponent` constant definition. The API can complete this code definition because it has an understanding of the corresponding `React` library. -``` +```python import React from 'react'; const HeaderComponent = () => ( ``` -+### Guidelines for generating completions -## Factual responses +Here are some helpful guidelines for using the API to generate text and code completions: -The API has a lot of knowledge that it's learned from the data it was trained on. It also has the ability to provide responses that sound very real but are in fact made up. There are two ways to limit the likelihood of the API making up an answer. +- **Lower the Temperature to keep the API focused**. Set lower values for the `Temperature` setting to instruct the API to provide responses that are focused on the intent described in your prompt. -**1. Provide a ground truth for the API** -If you provide the API with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to confabulate a response. 
+- **Raise the Temperature to allow the API to go off on a tangent**. Set higher values for the `Temperature` setting to allow the API to respond in a manner that's tangential to the intent described in your prompt.
-**2. Use a low probability and show the API how to say "I don't know"**
-If the API understands that in cases where it's less certain about a response that saying "I don't know" or some variation is appropriate, it will be less inclined to make up answers.
+- **Use the GPT-35-Turbo and GPT-4 Azure OpenAI models**. For tasks that involve understanding or generating code, Microsoft recommends using the `GPT-35-Turbo` and `GPT-4` Azure OpenAI models. These models use the new [chat completions format](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions).
+
+## Generate factual responses
-In this example we give the API examples of questions and answers it knows and then examples of things it wouldn't know and provide question marks. We also set the probability to zero so the API is more likely to respond with a "?" if there's any doubt.
+The API has learned knowledge that's built on actual data reviewed during its training. It uses this learned data to form its responses. However, the API also has the ability to respond in a way that sounds true, but is, in fact, fabricated.
+
+There are a few ways you can limit the likelihood of the API making up an answer in response to your input. You can define the foundation for a true and factual response, so the API drafts its response from your data. You can also set a low `Temperature` probability value and show the API how to respond when the data isn't available for a factual answer.
+
+The following demonstration shows how to teach the API to reply in a more factual manner. You provide the API with examples of questions and answers it understands. You also supply examples of questions ("Q") it might not recognize and use a question mark for the answer ("A") output.
This approach teaches the API how to respond to questions it can't answer factually. ++As a safeguard, you set the `Temperature` probability to zero so the API is more likely to respond with a question mark (?) if there's any doubt about the true and factual response. ++```console Q: Who is Batman? A: Batman is a fictional comic book character. Q: What is Devz9? A: ? Q: Who is George Lucas?-A: George Lucas is American film director and producer famous for creating Star Wars. +A: George Lucas is an American film director and producer famous for creating Star Wars. Q: What is the capital of California? A: Sacramento. A: Sacramento. Q: What orbits the Earth? A: The Moon. -Q: Who is Fred Rickerson? +Q: Who is Egad Debunk? A: ? Q: What is an atom? A: Two, Phobos and Deimos. Q: ```-## Working with code ++### Guidelines for generating factual responses ++Let's review the guidelines to help limit the likelihood of the API making up an answer: ++- **Provide a ground truth for the API**. Instruct the API about what to use as the foundation for creating a true and factual response based on your intent. If you provide the API with a body of text to use to answer questions (like a Wikipedia entry), the API is less likely to fabricate a response. ++- **Use a low probability**. Set a low `Temperature` probability value so the API stays focused on your intent and doesn't drift into creating a fabricated or confabulated response. ++- **Show the API how to respond with "I don't know"**. You can enter example questions and answers that teach the API to use a specific response for questions for which it can't find a factual answer. In the example, you teach the API to respond with a question mark (?) when it can't find the corresponding data. This approach also helps the API to learn when responding with "I don't know" is more "correct" than making up an answer. 
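The factual Q&A pattern above can be assembled programmatically. The sketch below is a minimal illustration, not code from this article: the helper name is hypothetical, and the block only builds the prompt string. The call that would send it, with the `Temperature` probability set to zero, is omitted.

```python
# Few-shot examples: known answers plus "?" answers that teach the model
# how to respond when it can't ground an answer in facts.
FEW_SHOT = """Q: Who is Batman?
A: Batman is a fictional comic book character.

Q: What is Devz9?
A: ?

Q: What is the capital of California?
A: Sacramento.
"""


def build_factual_prompt(question: str) -> str:
    """Append a new question, leaving "A:" incomplete to trigger an answer."""
    return f"{FEW_SHOT}\nQ: {question}\nA:"


prompt = build_factual_prompt("Who is George Lucas?")
print(prompt)  # ends with an open "A:" for the model to complete
```

Sending this prompt with `Temperature` at zero keeps the model on the taught pattern, so unknown questions are more likely to come back as "?" instead of a guess.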
++## Work with code The Codex model series is a descendant of OpenAI's base GPT-3 series that's been trained on both natural language and billions of lines of code. It's most capable in Python and proficient in over a dozen languages including C#, JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell. -Learn more about generating code completions, with the [working with code guide](./work-with-code.md) +For more information about generating code completions, see [Codex models and Azure OpenAI Service](./work-with-code.md). ## Next steps -Learn [how to work with code (Codex)](./work-with-code.md). -Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md). +- Learn how to work with the [GPT-35-Turbo and GPT-4 models](/azure/ai-services/openai/how-to/chatgpt?pivots=programming-language-chat-completions). +- Learn more about the [Azure OpenAI Service models](../concepts/models.md). |
ai-services | Create Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/create-resource.md | |
ai-services | Function Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md | if response_message.get("function_call"): messages.append( # adding assistant response to messages { "role": response_message["role"],- "name": response_message["function_call"]["name"], - "content": response_message["function_call"]["arguments"], + "function_call": { + "name": function_name, + "arguments": response_message["function_call"]["arguments"], + }, + "content": None } ) messages.append( # adding function response to messages |
ai-services | Integrate Synapseml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/integrate-synapseml.md | We recommend [creating a Synapse workspace](../../../synapse-analytics/get-start The next step is to add this code into your Spark cluster. You can either create a notebook in your Spark platform and copy the code into this notebook to run the demo, or download the notebook and import it into Synapse Analytics. +1. [Download this demo as a notebook](https://github.com/microsoft/SynapseML/blob/master/docs/Explore%20Algorithms/OpenAI/OpenAI.ipynb) (select Raw, then save the file) 1. Import the notebook [into the Synapse Workspace](../../../synapse-analytics/spark/apache-spark-development-using-notebooks.md#create-a-notebook) or, if using Databricks, [into the Databricks Workspace](/azure/databricks/notebooks/notebooks-manage#create-a-notebook) 1. Install SynapseML on your cluster. See the installation instructions for Synapse at the bottom of [the SynapseML website](https://microsoft.github.io/SynapseML/). This requires pasting another cell at the top of the notebook you imported 1. Connect your notebook to a cluster and follow along, editing and running the cells below. |
ai-services | Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/manage-costs.md | -This article describes how you plan for and manage costs for Azure OpenAI Service. Before you deploy the service, you can use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you've started using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services. +This article describes how you can plan for and manage costs for Azure OpenAI Service. Before you deploy the service, use the Azure pricing calculator to estimate costs for Azure OpenAI. Later, as you deploy Azure resources, review the estimated costs. After you start using Azure OpenAI resources, use Cost Management features to set budgets and monitor costs. ++You can also review forecasted costs and identify spending trends to identify areas where you might want to act. Costs for Azure OpenAI Service are only a portion of the monthly costs in your Azure bill. Although this article is about planning for and managing costs for Azure OpenAI, you're billed for all Azure services and resources used in your Azure subscription, including the third-party services. ## Prerequisites Cost analysis in Cost Management supports most Azure account types, but not all Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the costs of using Azure OpenAI. 
-## Understand the full billing model for Azure OpenAI Service --Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that there could be other additional infrastructure costs that might accrue. +## Understand the Azure OpenAI full billing model -### How you're charged for Azure OpenAI Service +Azure OpenAI Service runs on Azure infrastructure that accrues costs when you deploy new resources. There could be other infrastructure costs that might accrue. The following sections describe how you're charged for Azure OpenAI Service. ### Base series and Codex series models Azure OpenAI base series and Codex series models are charged per 1,000 tokens. Costs vary depending on which model series you choose: Ada, Babbage, Curie, Davinci, or Code-Cushman. -Our models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text. +Azure OpenAI models understand and process text by breaking it down into tokens. For reference, each token is roughly four characters for typical English text. -Token costs are for both input and output. For example, if you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response for a total of 2,000 tokens. +Token costs are for both input and output. For example, suppose you have a 1,000 token JavaScript code sample that you ask an Azure OpenAI model to convert to Python. You would be charged approximately 1,000 tokens for the initial input request sent, and 1,000 more tokens for the output that is received in response for a total of 2,000 tokens. -In practice, for this type of completion call the token input/output wouldn't be perfectly 1:1. 
A conversion from one programming language to another could result in a longer or shorter output depending on many different factors including the value assigned to the max_tokens parameter. +In practice, for this type of completion call, the token input/output wouldn't be perfectly 1:1. A conversion from one programming language to another could result in a longer or shorter output depending on many factors. One such factor is the value assigned to the `max_tokens` parameter. ### Base Series and Codex series fine-tuned models Azure OpenAI fine-tuned models are charged based on three factors: - Hosting hours - Inference per 1,000 tokens -The hosting hours cost is important to be aware of since once a fine-tuned model is deployed it continues to incur an hourly cost regardless of whether you're actively using it. Fine-tuned model costs should be monitored closely. +The hosting hours cost is important to be aware of since after a fine-tuned model is deployed, it continues to incur an hourly cost regardless of whether you're actively using it. Monitor fine-tuned model costs closely. [!INCLUDE [Fine-tuning deletion](../includes/fine-tune.md)] ### Other costs that might accrue with Azure OpenAI Service -Keep in mind that enabling capabilities like sending data to Azure Monitor Logs, alerting, etc. incurs additional costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource. +Enabling capabilities such as sending data to Azure Monitor Logs and alerting incurs extra costs for those services. These costs are visible under those other services and at the subscription level, but aren't visible when scoped just to your Azure OpenAI resource. ### Using Azure Prepayment with Azure OpenAI Service -You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. 
However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace. +You can pay for Azure OpenAI Service charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those products and services found in the Azure Marketplace. ## Monitor costs -As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on.) As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +As you use Azure resources with Azure OpenAI, you incur costs. Azure resource usage unit costs vary by time intervals, such as seconds, minutes, hours, and days, or by unit usage, such as bytes and megabytes. As soon as Azure OpenAI use starts, costs can be incurred and you can see the costs in the [cost analysis](../../../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). -When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends. And you see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. +When you use cost analysis, you view Azure OpenAI costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You also view costs against budgets and forecasted costs. 
Switching to longer views over time can help you identify spending trends. You can see where overspending might have occurred. If you've created budgets, you can also easily see where they're exceeded. To view Azure OpenAI costs in cost analysis: 1. Sign in to the Azure portal. 2. Select one of your Azure OpenAI resources. 3. Under **Resource Management** select **Cost analysis**-4. By default cost analysis is scoped to the individual Azure OpenAI resource. +4. By default, cost analysis is scoped to the individual Azure OpenAI resource. :::image type="content" source="../media/manage-costs/resource-view.png" alt-text="Screenshot of cost analysis dashboard scoped to an Azure OpenAI resource." lightbox="../media/manage-costs/resource-view.png"::: -To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and in this case switching the chart type to **Line**. You can now see that for this particular resource the source of the costs is from three different model series with **Text-Davinci Tokens** representing the bulk of the costs. +To understand the breakdown of what makes up that cost, it can help to modify **Group by** to **Meter** and switching the chart type to **Line**. You can now see that for this particular resource, the source of the costs comes from three different model series with **Text-Davinci Tokens** that represent the bulk of the costs. :::image type="content" source="../media/manage-costs/grouping.png" alt-text="Screenshot of cost analysis dashboard with group by set to meter." lightbox="../media/manage-costs/grouping.png"::: -It's important to understand scope when evaluating costs associated with Azure OpenAI. If your resources are part of the same resource group you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups you can scope to the subscription level. 
+It's important to understand scope when you evaluate costs associated with Azure OpenAI. If your resources are part of the same resource group, you can scope Cost Analysis at that level to understand the effect on costs. If your resources are spread across multiple resource groups, you can scope to the subscription level. ++When scoped at a higher level, you often need to add more filters to focus on Azure OpenAI usage. When scoped at the subscription level, you see many other resources that you might not care about in the context of Azure OpenAI cost management. When you scope at the subscription level, we recommend that you navigate to the full **Cost analysis tool** under the **Cost Management** service. ++Here's an example of how to use the **Cost analysis tool** to see your accumulated costs for a subscription or resource group: ++1. Search for *Cost Management* in the top Azure search bar to navigate to the full service experience, which includes more options such as creating budgets. +1. If necessary, select **change** if the **Scope:** isn't pointing to the resource group or subscription you want to analyze. +1. On the left, select **Reporting + analytics** > **Cost analysis**. +1. On the **All views** tab, select **Accumulated costs**. + -However, when scoped at a higher level you often need to add additional filters to be able to zero in on Azure OpenAI usage. When scoped at the subscription level we see a number of other resources that we may not care about in the context of Azure OpenAI cost management. When scoping at the subscription level, we recommend navigating to the full **Cost analysis tool** under the **Cost Management** service. Search for **"Cost Management"** in the top Azure search bar to navigate to the full service experience, which includes more options like creating budgets. +The cost analysis dashboard shows the accumulated costs that are analyzed depending on what you've specified for **Scope**. 
:::image type="content" source="../media/manage-costs/subscription.png" alt-text="Screenshot of cost analysis dashboard with scope set to subscription." lightbox="../media/manage-costs/subscription.png"::: -If you try to add a filter by service, you'll find that you can't find Azure OpenAI in the list. This is because Azure OpenAI has commonality with a subset of Azure AI services where the service level filter is **Cognitive Services**, but if you want to see all Azure OpenAI resources across a subscription without any other type of Azure AI services resources you need to instead scope to **Service tier: Azure OpenAI**: +If you try to add a filter by service, you find that you can't find Azure OpenAI in the list. This situation occurs because Azure OpenAI has commonality with a subset of Azure AI services where the service level filter is **Cognitive Services**. If you want to see all Azure OpenAI resources across a subscription without any other type of Azure AI services resources, instead scope to **Service tier: Azure OpenAI**: :::image type="content" source="../media/manage-costs/service-tier.png" alt-text="Screenshot of cost analysis dashboard with service tier highlighted." lightbox="../media/manage-costs/service-tier.png"::: ## Create budgets -You can create [budgets](../../../cost-management/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that automatically notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. Budgets and alerts are created for Azure subscriptions and resource groups, so they're useful as part of an overall cost monitoring strategy. 
+You can create [budgets](../../../cost-management-billing/costs/tutorial-acm-create-budgets.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to manage costs and create [alerts](../../../cost-management-billing/costs/cost-mgt-alerts-monitor-usage-spending.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) that notify stakeholders of spending anomalies and overspending risks. Alerts are based on spending compared to budget and cost thresholds. You create budgets and alerts for Azure subscriptions and resource groups. They're useful as part of an overall cost monitoring strategy. -Budgets can be created with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you additional money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). +You can create budgets with filters for specific resources or services in Azure if you want more granularity present in your monitoring. Filters help ensure that you don't accidentally create new resources that cost you more money. For more information about the filter options available when you create a budget, see [Group and filter options](../../../cost-management-billing/costs/group-filter.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). > [!IMPORTANT]-> While OpenAI has an option for hard limits that will prevent you from going over your budget, Azure OpenAI does not currently provide this functionality. You are able to kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part. 
+> While OpenAI has an option for hard limits that prevent you from going over your budget, Azure OpenAI doesn't currently provide this functionality. You can kick off automation from action groups as part of your budget notifications to take more advanced actions, but this requires additional custom development on your part. ## Export cost data -You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account. This is helpful when you need or others to do additional data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. Exporting cost data is the recommended way to retrieve cost datasets. +You can also [export your cost data](../../../cost-management-billing/costs/tutorial-export-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) to a storage account, which is helpful when you need others to do extra data analysis for costs. For example, a finance team can analyze the data using Excel or Power BI. You can export your costs on a daily, weekly, or monthly schedule and set a custom date range. We recommend exporting cost data as the way to retrieve cost datasets. ## Next steps |
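The `max_tokens` note at the top of this entry can be made concrete with a short, hypothetical Python helper (the prompt and values below are placeholders, not from the source). The point is that `max_tokens` caps only the *generated* completion, so a language-conversion prompt can legitimately come back longer or shorter than its input:

```python
import json

def build_completion_body(prompt: str, max_tokens: int = 16) -> str:
    """Build the JSON body for a completions-style request.

    max_tokens bounds only the generated tokens; prompt tokens are
    counted separately, so input and output sizes are not 1:1.
    """
    return json.dumps({"prompt": prompt, "max_tokens": max_tokens})

# Ask for a code conversion but cap the completion at 256 tokens.
body = build_completion_body("Convert this Python function to C#: ...", max_tokens=256)
```

Because both prompt and completion tokens are billed, monitoring the `max_tokens` values your application sends is one lever for managing inference costs.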
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/role-based-access-control.md | This role provides little value by itself and is instead typically assigned in c #### Cognitive Services Usages Reader + Cognitive Services OpenAI User -All the capabilities of Cognitive Services OpenAI plus the ability to: +All the capabilities of Cognitive Services OpenAI User plus the ability to: ✅ View quota allocations in Azure OpenAI Studio |
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and | Total size of all files per resource | 1 GB | | Max training job time (job will fail if exceeded) | 720 hours | | Max training job size (tokens in training file) x (# of epochs) | 2 Billion |+| Max size of all files per upload (Azure OpenAI on your data) | 16 MB | + <sup>1</sup> Default quota limits are subject to change. |
ai-services | Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md | POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview \ -H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \--H "chatgpt_url: YOUR_RESOURCE_URL" \--H "chatgpt_key: YOUR_API_KEY" \ -d \ ' { The following parameters can be used inside of the `parameters` field inside of | `topNDocuments` | number | Optional | 5 | Number of documents that need to be fetched for document augmentation. | | `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. | | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only available when `queryType` is set to `semantic`. |-| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the “System Message” in Azure OpenAI Studio. <!--See [Using your data](./concepts/use-your-data.md#system-message) for more information.--> There’s a 100 token limit, which counts towards the overall token limit.| +| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. 
There’s a 100 token limit, which counts towards the overall token limit.| +| `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control) +| `embeddingEndpoint` | string | Optional | null | the endpoint URL for an Ada embedding model deployment. Used for [vector search](./concepts/use-your-data.md#search-options). | +| `embeddingKey` | string | Optional | null | the API key for an Ada embedding model deployment. Used for [vector search](./concepts/use-your-data.md#search-options). | ## Image generation |
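The `parameters` table in this entry can be sketched as a small payload builder. This is a sketch under assumptions: the surrounding body shape (a `dataSources` array whose entries carry a `type` and a `parameters` object) follows the extensions request shown in this entry, and every value below is a placeholder rather than a documented default beyond those the table states:

```python
import json

def build_data_source(role_information: str, top_n: int = 5, query_type: str = "simple") -> dict:
    """Assemble one entry for the dataSources array of an
    extensions/chat/completions request body. Optional fields such as
    semanticConfiguration are simply omitted when not needed."""
    return {
        "type": "AzureCognitiveSearch",
        "parameters": {
            "topNDocuments": top_n,
            "queryType": query_type,
            # roleInformation has a 100-token cap and counts toward
            # the overall token limit.
            "roleInformation": role_information,
        },
    }

body = json.dumps({"dataSources": [build_data_source("You are a helpful assistant.")]})
```

Keeping the optional fields out of the payload unless they're set makes it easier to stay aligned with the defaults listed in the table.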
ai-services | Use Your Data Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md | In this quickstart you can use your own data with Azure OpenAI models. Using Azu - Your chat model must use version `0301`. You can view or change your model version in [Azure OpenAI Studio](./concepts/models.md#model-updates). -- Be sure that you are assigned at least the [Cognitive Services OpenAI Contributor](/azure/role-based-access-control/built-in-roles#cognitive-services-openai-contributor) role for the Azure OpenAI resource. +- Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource. > [!div class="nextstepaction"] |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | keywords: ## August 2023 +### Azure OpenAI on your own data (preview) updates + - You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).+- [Azure OpenAI on your data](./concepts/use-your-data.md#virtual-network-support--private-network-support) now supports private endpoints. +- Ability to [filter access to sensitive documents](./concepts/use-your-data.md#document-level-access-control). +- [Automatically refresh your index on a schedule](./concepts/use-your-data.md#schedule-automatic-index-refreshes). +- [Vector search and semantic search options](./concepts/use-your-data.md#search-options). +- [View your chat history in the deployed web app](./concepts/use-your-data.md#chat-history) ## July 2023 |
ai-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md | Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
ai-services | Add Sharepoint Datasources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/add-sharepoint-datasources.md | The Active Directory manager will get a pop-up window requesting permissions to --> ### Grant access from the Azure Active Directory admin center -1. The Active Directory manager signs in to the Azure portal and opens **[Enterprise applications](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps)**. +1. Sign in to the [Azure portal](https://portal.azure.com). +1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Search for `QnAMakerPortalSharePoint` then select the QnA Maker app. |
ai-services | Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/network-isolation.md | The Cognitive Search instance can be isolated via a private endpoint after the Q Follow the steps below to restrict public access to QnA Maker resources. Protect an Azure AI services resource from public access by [configuring the virtual network](../../cognitive-services-virtual-networks.md?tabs=portal). After restricting access to the Azure AI service resource based on VNet, To browse knowledgebases on the https://qnamaker.ai portal from your on-premises network or your local browser.-- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configuring-access-from-on-premises-networks).+- Grant access to [on-premises network](../../cognitive-services-virtual-networks.md?tabs=portal#configure-access-from-on-premises-networks). - Grant access to your [local browser/machine](../../cognitive-services-virtual-networks.md?tabs=portal#managing-ip-network-rules). - Add the **public IP address of the machine under the Firewall** section of the **Networking** tab. By default `portal.azure.com` shows the current browsing machine's public IP (select this entry) and then select **Save**. |
ai-services | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure AI services description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
ai-services | Batch Transcription Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-create.md | Batch transcription requests for expired models will fail with a 4xx error. You' The transcription result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. In that case, when the transcription job is deleted, the transcription result data is also deleted. -You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). The Storage account resource of the destination container must allow all external traffic. +You can store the results of a batch transcription to a writable Azure Blob storage container using option `destinationContainerUrl` in the [batch transcription creation request](#create-a-transcription-job). Note however that this option is only using [ad hoc SAS](batch-transcription-audio-data.md#sas-url-for-batch-transcription) URI and doesn't support [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism). This option also doesn't support Access policy based SAS. The Storage account resource of the destination container must allow all external traffic. 
If you would like to store the transcription results in an Azure Blob storage container via the [Trusted Azure services security mechanism](batch-transcription-audio-data.md#trusted-azure-services-security-mechanism), then you should consider using [Bring-your-own-storage (BYOS)](bring-your-own-storage-speech-resource.md). See details on how to use BYOS-enabled Speech resource for Batch transcription in [this article](bring-your-own-storage-speech-resource-speech-to-text.md). |
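As a rough sketch of how `destinationContainerUrl` fits into the creation request described in this entry, the helper below assembles a transcription-job body. The property layout (a `properties` object holding `destinationContainerUrl`) and all values are assumptions and placeholders; verify them against the batch transcription reference before use:

```python
import json

def build_transcription_request(audio_urls: list, sas_container_url: str) -> str:
    """Sketch of a batch transcription creation body that writes results
    to a caller-owned container via an ad hoc SAS URL."""
    return json.dumps({
        "displayName": "example transcription",
        "locale": "en-US",
        "contentUrls": audio_urls,
        "properties": {
            # Ad hoc SAS only: the Trusted Azure services security
            # mechanism and access-policy-based SAS aren't supported
            # for this option, per the entry above.
            "destinationContainerUrl": sas_container_url,
        },
    })

request_body = build_transcription_request(
    ["https://example.blob.core.windows.net/audio/a.wav?sv=..."],
    "https://example.blob.core.windows.net/results?sv=...",
)
```

If the SAS constraint is a problem, the entry's suggestion of a BYOS-enabled Speech resource avoids `destinationContainerUrl` entirely.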
ai-services | Bring Your Own Storage Speech Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/bring-your-own-storage-speech-resource.md | General rule is that you need to pass this JSON string as a value of `--storage` To create a BYOS-enabled Speech resource with a REST Request to Cognitive Services API, we use [Accounts - Create](/rest/api/cognitiveservices/accountmanagement/accounts/create) request. -You need to have a meaning of authentication. The example in this section uses [Microsoft Azure Active Directory token](/azure/active-directory/develop/access-tokens). +You need to have a means of authentication. The example in this section uses [Microsoft Azure Active Directory token](/azure/active-directory/develop/access-tokens). This code snippet generates Azure AD token using interactive browser sign-in. It requires [Azure Identity client library](/dotnet/api/overview/azure/identity-readme): ```csharp |
ai-services | Get Started Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-speech-to-text.md | Title: "Speech to text quickstart - Speech service" -description: In this quickstart, you convert speech to text with recognition from a microphone. +description: In this quickstart, learn how to convert speech to text with recognition from a microphone or .wav file. Previously updated : 09/16/2022 Last updated : 08/24/2023 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python |
ai-services | Get Started Stt Diarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-stt-diarization.md | -zone_pivot_groups: programming-languages-set-twenty-two +zone_pivot_groups: programming-languages-speech-services keywords: speech to text, speech to text software keywords: speech to text, speech to text software [!INCLUDE [C++ include](includes/quickstarts/stt-diarization/cpp.md)] ::: zone-end + ::: zone pivot="programming-language-java" [!INCLUDE [Java include](includes/quickstarts/stt-diarization/java.md)] ::: zone-end +++ ::: zone pivot="programming-language-python" [!INCLUDE [Python include](includes/quickstarts/stt-diarization/python.md)] ::: zone-end +++ ## Next steps > [!div class="nextstepaction"] |
ai-services | Get Started Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/get-started-text-to-speech.md | Title: "Text to speech quickstart - Speech service" -description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio output formats, and custom configuration options for speech synthesis. +description: In this quickstart, you convert text to speech. Learn about object construction and design patterns, supported audio formats, and custom configuration options. Previously updated : 09/16/2022 Last updated : 08/25/2023 ms.devlang: cpp, csharp, golang, java, javascript, objective-c, python |
ai-services | How To Configure Openssl Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-configure-openssl-linux.md | |
ai-services | How To Track Speech Sdk Memory Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-track-speech-sdk-memory-usage.md | |
ai-services | Language Identification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-identification.md | SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \ ::: zone-end ::: zone pivot="programming-language-javascript"-Language detection with a custom endpoint isn't supported by the Speech SDK for JavaScript. For example, if you include "fr-FR" as shown here, the custom endpoint will be ignored. ```Javascript var enLanguageConfig = SpeechSDK.SourceLanguageConfig.fromLanguage("en-US"); |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-support.md | With the cross-lingual feature, you can transfer your custom neural voice model # [Pronunciation assessment](#tab/pronunciation-assessment) -The table in this section summarizes the 20 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 19 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. +The table in this section summarizes the 21 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). Latest update extends support from English to 20 additional languages and quality enhancements to existing features, including accuracy, fluency and miscue assessment. You should specify the language that you're learning or practicing improving pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. 
If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)] |
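The "run various accent models" advice above amounts to a tiny selection step once you have a score per locale. Here's a toy sketch (the score values are invented for illustration) that keeps the locale whose accent model scored highest:

```python
def pick_best_locale(scores_by_locale: dict) -> str:
    """Return the locale whose accent model produced the highest
    pronunciation score for the same learner audio."""
    return max(scores_by_locale, key=scores_by_locale.get)

# Invented example: the same audio assessed with es-ES and es-MX models.
best = pick_best_locale({"es-ES": 87.5, "es-MX": 91.2})  # "es-MX"
```

In practice each score would come from a separate pronunciation assessment run with the locale set accordingly.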
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/overview.md | The base model may not be sufficient if the audio contains ambient noise or incl With [real-time speech to text](get-started-speech-to-text.md), the audio is transcribed as speech is recognized from a microphone or file. Use real-time speech to text for applications that need to transcribe audio in real-time such as: - Transcriptions, captions, or subtitles for live meetings+- [Diarization](get-started-stt-diarization.md) +- [Pronunciation assessment](how-to-pronunciation-assessment.md) - Contact center agent assist - Dictation - Voice agents-- Pronunciation assessment ### Batch transcription |
ai-services | Rest Speech To Text Short | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/rest-speech-to-text-short.md | Audio is sent in the body of the HTTP `POST` request. It must be in one of the f | Format | Codec | Bit rate | Sample rate | |--|-|-|--| | WAV | PCM | 256 kbps | 16 kHz, mono |-| OGG | OPUS | 256 kpbs | 16 kHz, mono | +| OGG | OPUS | 256 kbps | 16 kHz, mono | > [!NOTE] > The preceding formats are supported through the REST API for short audio and WebSocket in the Speech service. The [Speech SDK](speech-sdk.md) supports the WAV format with PCM codec as well as [other formats](how-to-use-codec-compressed-audio-input-streams.md). |
ai-services | Speech Container Cstt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-cstt.md | sudo chown -R nonroot:nonroot <YOUR_LOCAL_MACHINE_PATH_1> <YOUR_LOCAL_MACHINE_PA * See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings-* Use more [Azure AI services containers](../cognitive-services-container-support.md) +* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Speech Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-howto.md | To run disconnected containers (not connected to the internet), you must submit * Review [configure containers](speech-container-configuration.md) for configuration settings. * Learn how to [use Speech service containers with Kubernetes and Helm](speech-container-howto-on-premises.md). * Deploy and run containers on [Azure Container Instance](../containers/azure-container-instance-recipe.md)-* Use more [Azure AI services containers](../cognitive-services-container-support.md). +* Use more [Azure AI containers](../cognitive-services-container-support.md). |
ai-services | Speech Container Lid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-lid.md | Increasing the number of concurrent calls can affect reliability and latency. Fo * See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings-* Use more [Azure AI services containers](../cognitive-services-container-support.md) +* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Speech Container Ntts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-ntts.md | For example, a model that was downloaded via the `latest` tag (defaults to "en-U * See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings-* Use more [Azure AI services containers](../cognitive-services-container-support.md) +* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Speech Container Stt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-container-stt.md | For more information about `docker run` with Speech containers, see [Install and * See the [Speech containers overview](speech-container-overview.md) * Review [configure containers](speech-container-configuration.md) for configuration settings-* Use more [Azure AI services containers](../cognitive-services-container-support.md) +* Use more [Azure AI containers](../cognitive-services-container-support.md) |
ai-services | Speech Services Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-private-link.md | Use these parameters instead of the parameters in the article that you chose: | Resource | **\<your-speech-resource-name>** | | Target sub-resource | **account** | -**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections. +**DNS for private endpoints:** Review the general principles of [DNS for private endpoints in Azure AI services resources](../cognitive-services-virtual-networks.md#apply-dns-changes-for-private-endpoints). Then confirm that your DNS configuration is working correctly by performing the checks described in the following sections. ### Resolve DNS from the virtual network |
ai-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md | These limits aren't adjustable. | Max number of simultaneous dataset uploads | N/A | 5 | | Max data file size for data import per dataset | N/A | 2 GB | | Upload of long audios or audios without script | N/A | Yes |-| Max number of simultaneous model trainings | N/A | 3 | +| Max number of simultaneous model trainings | N/A | 4 | | Max number of custom endpoints | N/A | 50 | #### Audio Content Creation tool |
ai-services | Speech Synthesis Markup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup.md | Title: Speech Synthesis Markup Language (SSML) overview - Speech service -description: Use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech. +description: Learn how to use the Speech Synthesis Markup Language to control pronunciation and prosody in text to speech. Previously updated : 11/30/2022 Last updated : 8/16/2023 # Speech Synthesis Markup Language (SSML) overview -Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input. +Speech Synthesis Markup Language (SSML) is an XML-based markup language that you can use to fine-tune your text to speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. It gives you more control and flexibility than plain text input. > [!TIP]-> You can hear voices in different styles and pitches reading example text via the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery). +> You can hear voices in different styles and pitches reading example text by using the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery). -## Scenarios +## Use case scenarios -You can use SSML to: +SSML is designed to give you flexibility in how you want your speech output to sound, and it provides different properties for how you can customize that output. You can use SSML to: -- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of the text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. 
You can wrap text with event tags such as bookmark or viseme that can be processed later by your application.-- [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. Adjust the emphasis, speaking rate, pitch, and volume. You can also use SSML to insert pre-recorded audio, such as a sound effect or a musical note.+- [Define the input text structure](speech-synthesis-markup-structure.md) that determines the structure, content, and other characteristics of your text to speech output. For example, you can use SSML to define a paragraph, a sentence, a break or a pause, or silence. You can wrap text with event tags, like a bookmark or viseme, that your application can process later. A viseme is the visual description of a phoneme, the individual speech sounds, in spoken language. +- [Choose the voice](speech-synthesis-markup-voice.md), language, name, style, and role. You can use multiple voices in a single SSML document. You can also adjust the emphasis, speaking rate, pitch, and volume. SSML can also insert prerecorded audio, such as a sound effect or a musical note. - [Control pronunciation](speech-synthesis-markup-pronunciation.md) of the output audio. For example, you can use SSML with phonemes and a custom lexicon to improve pronunciation. You can also use SSML to define how a word or mathematical expression is pronounced. -## Use SSML +## Ways to work with SSML ++SSML functionality is available in various tools that might fit your use case. > [!IMPORTANT]-> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself is not billable, optional elements that are used to adjust how the text is converted to speech, like phonemes and pitch, are counted as billable characters. For more information, see [text to speech pricing notes](text-to-speech.md#pricing-note). 
+> You're billed for each character that's converted to speech, including punctuation. Although the SSML document itself isn't billable, the service counts optional elements that you use to adjust how the text is converted to speech, like phonemes and pitch, as billable characters. For more information, see [Pricing note](text-to-speech.md#pricing-note). You can use SSML in the following ways: -- [Audio Content Creation](https://aka.ms/audiocontentcreation) tool: Author plain text and SSML in Speech Studio: You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md).-- [Batch synthesis API](batch-synthesis.md): Provide SSML via the `inputs` property. -- [Speech CLI](get-started-text-to-speech.md?pivots=programming-language-cli): Provide SSML via the `spx synthesize --ssml SSML` command line argument.-- [Speech SDK](how-to-speech-synthesis.md#use-ssml-to-customize-speech-characteristics): Provide SSML via the "speak" SSML method.+- [The Audio Content Creation](https://aka.ms/audiocontentcreation) tool lets you author plain text and SSML in Speech Studio. You can listen to the output audio and adjust the SSML to improve speech synthesis. For more information, see [Speech synthesis with the Audio Content Creation tool](how-to-audio-content-creation.md). +- [The Batch synthesis API](batch-synthesis.md) accepts SSML via the `inputs` property. +- [The Speech CLI](get-started-text-to-speech.md?pivots=programming-language-cli) accepts SSML via the `spx synthesize --ssml SSML` command line argument. +- [The Speech SDK](how-to-speech-synthesis.md#use-ssml-to-customize-speech-characteristics) accepts SSML via the "speak" SSML method across the different supported languages. 
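Whichever of these tools you use, the SSML document itself has the same shape. The following sketch assembles a minimal SSML string and checks that it's well-formed XML before it's ever sent to the service; the voice name `en-US-JennyNeural` is only an example (check the Voice Gallery for current voices), and the helper function is illustrative, not part of any SDK:

```python
import xml.etree.ElementTree as ET

# Namespace used by SSML 1.0 documents accepted by the Speech service.
SSML_NS = "http://www.w3.org/2001/10/synthesis"

def build_ssml(text: str, voice: str, lang: str = "en-US") -> str:
    """Wrap plain text in a minimal SSML document for text to speech."""
    return (
        f'<speak version="1.0" xmlns="{SSML_NS}" xml:lang="{lang}">'
        f'<voice name="{voice}">{text}</voice>'
        "</speak>"
    )

# Example voice name; any voice supported by the service can be substituted.
ssml = build_ssml("Hello, world.", "en-US-JennyNeural")

# Parse the document locally to confirm it's well-formed before synthesis.
root = ET.fromstring(ssml)
print(root.tag)  # -> {http://www.w3.org/2001/10/synthesis}speak
```

A local well-formedness check like this catches unbalanced tags early; remember that every character inside the elements, including punctuation, is billable once the document is submitted for synthesis.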
## Next steps - [SSML document structure and events](speech-synthesis-markup-structure.md) - [Voice and sound with SSML](speech-synthesis-markup-voice.md) - [Pronunciation with SSML](speech-synthesis-markup-pronunciation.md)-- [Language support: Voices, locales, languages](language-support.md?tabs=tts)+- [Language and voice support for the Speech service](language-support.md?tabs=tts) |
ai-services | Deploy User Managed Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/deploy-user-managed-glossary.md | + + Title: Deploy a user-managed glossary in Translator container ++description: How to deploy a user-managed glossary in the Translator container environment. ++++++ Last updated : 08/15/2023++recommendations: false +++<!-- markdownlint-disable MD036 --> +<!-- markdownlint-disable MD046 --> ++# Deploy a user-managed glossary ++Microsoft Translator containers enable you to run several features of the Translator service in your own environment and are great for specific security and data governance requirements. ++There may be times, when you're running a container with a multi-layered ingestion process, that you discover you need to update sentence and/or phrase files. Since the standard phrase and sentence files are encrypted and read directly into memory at runtime, you need a quick-fix engineering solution to implement a dynamic update. This update can be implemented using the user-managed glossary feature: ++* To deploy the **phrase​fix** solution, you need to create a **phrase​fix** glossary file to specify that a listed phrase is translated in a specified way. ++* To deploy the **sent​fix** solution, you need to create a **sent​fix** glossary file to specify an exact target translation for a source sentence. ++* The **phrase​fix** and **sent​fix** files are then included with your translation request and read directly into memory at runtime. ++## Managed glossary workflow ++ > [!IMPORTANT] + > **UTF-16 LE** is the only accepted file format for the managed-glossary folders. For more information about encoding your files, *see* [Encoding](/powershell/module/microsoft.powershell.management/set-content?view=powershell-7.2#-encoding&preserve-view=true) ++1. To get started manually creating the folder structure, you need to create and name your folder.
The managed-glossary folder is encoded in **UTF-16 LE BOM** format and nests **phrase​fix** or **sent​fix** source and target language files. Let's name our folder `customhotfix`. Each folder can have **phrase​fix** and **sent​fix** files. You provide the source (`src`) and target (`tgt`) language codes with the following naming convention: ++ |Glossary file name format|Example file name | + |--|--| + |{`src`}.{`tgt`}.{container-glossary}.{phrase​fix}.src.snt|en.es.container-glossary.phrasefix.src.snt| + |{`src`}.{`tgt`}.{container-glossary}.{phrase​fix}.tgt.snt|en.es.container-glossary.phrasefix.tgt.snt| + |{`src`}.{`tgt`}.{container-glossary}.{sent​fix}.src.snt|en.es.container-glossary.sentfix.src.snt| + |{`src`}.{`tgt`}.{container-glossary}.{sent​fix}.tgt.snt|en.es.container-glossary.sentfix.tgt.snt| ++ > [!NOTE] + > + > * The **phrase​fix** solution is an exact find-and-replace operation. Any word or phrase listed is translated in the way specified. + > * The **sent​fix** solution is more precise and allows you to specify an exact target translation for a source sentence. For a sentence match to occur, the entire submitted sentence must match the **sent​fix** entry. If only a portion of the sentence matches, the entry won't match. + > * If you're hesitant about making sweeping find-and-replace changes, we recommend, at the outset, solely using the **sent​fix** solution. ++1. Next, to dynamically reload glossary entry updates, create a `version.json` file within the `customhotfix` folder. The `version.json` file should contain the following parameters: **VersionId**. An integer value. ++ ***Sample version.json file*** ++ ```json + { ++ "VersionId": 5 ++ } ++ ``` ++ > [!TIP] + > + > Reload can be controlled by setting the following environmental variables when starting the container: + > + > * **HotfixReloadInterval=**. Default value is 5 minutes. + > * **HotfixReloadEnabled=**. Default value is true. ++1. 
Use the **docker run** command ++ **Docker run command required options** ++ ```dockerfile + docker run --rm -it -p 5000:5000 \ ++ -e eula=accept \ ++ -e billing={ENDPOINT_URI} \ ++ -e apikey={API_KEY} \ ++ -e Languages={LANGUAGES_LIST} \ ++ -e HotfixDataFolder={path to glossary folder} \ ++ {image} + ``` ++ **Example docker run command** ++ ```dockerfile ++ docker run --rm -it -p 5000:5000 \ + -v /mnt/d/models:/usr/local/models -v /mnt/d/customerhotfix:/usr/local/customhotfix \ + -e EULA=accept \ + -e billing={ENDPOINT_URI} \ + -e apikey={API_Key} \ + -e Languages=en,es \ + -e HotfixDataFolder=/usr/local/customhotfix \ + mcr.microsoft.com/azure-cognitive-services/translator/text-translation:latest ++ ``` ++## Learn more ++> [!div class="nextstepaction"] +> [Create a dynamic dictionary](../dynamic-dictionary.md) [Use a custom dictionary](../custom-translator/concepts/dictionaries.md) |
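As a local illustration of the workflow above, the following sketch writes a **phrasefix** source/target pair in UTF-16 LE with a BOM plus a `version.json`, then pairs the entries up. This only approximates the layout the container reads; the line-by-line pairing of source and target entries is an assumption for illustration, and the folder path is a temporary stand-in for your mounted `customhotfix` folder:

```python
import json
import tempfile
from pathlib import Path

# Stand-in for the mounted managed-glossary folder from the workflow above.
folder = Path(tempfile.mkdtemp()) / "customhotfix"
folder.mkdir()

# Assumption for illustration: source and target entries pair up line by line.
src_entries = ["machine learning"]
tgt_entries = ["aprendizaje automático"]

def write_utf16le(path: Path, lines: list[str]) -> None:
    # UTF-16 LE with a BOM is the only accepted encoding for these files.
    path.write_bytes(b"\xff\xfe" + "\n".join(lines).encode("utf-16-le"))

write_utf16le(folder / "en.es.container-glossary.phrasefix.src.snt", src_entries)
write_utf16le(folder / "en.es.container-glossary.phrasefix.tgt.snt", tgt_entries)

# Bump VersionId so the container picks up the new entries on its reload interval.
(folder / "version.json").write_text(json.dumps({"VersionId": 5}))

# Re-read the pair and confirm every source entry has a target entry.
src = (folder / "en.es.container-glossary.phrasefix.src.snt").read_text(encoding="utf-16").splitlines()
tgt = (folder / "en.es.container-glossary.phrasefix.tgt.snt").read_text(encoding="utf-16").splitlines()
assert len(src) == len(tgt), "src and tgt files must have matching entry counts"
mapping = dict(zip(src, tgt))
print(mapping["machine learning"])  # -> aprendizaje automático
```

The equal-line-count check is a cheap guard worth running before mounting the folder, since a mismatched pair can't produce the find-and-replace behavior you intended.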
ai-services | Translator How To Install Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/containers/translator-how-to-install-container.md | keywords: on-premises, Docker, container, identify # Install and run Translator containers -Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you'll learn how to download, install, and run a Translator container. +Containers enable you to run several features of the Translator service in your own environment. Containers are great for specific security and data governance requirements. In this article you learn how to download, install, and run a Translator container. Translator container enables you to build a translator application architecture that is optimized for both robust cloud capabilities and edge locality. See the list of [languages supported](../language-support.md) when using Transla > [!IMPORTANT] >-> * To use the Translator container, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below. -> * Translator container supports limited features compared to the cloud offerings. Form more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md). +> * To use the Translator container, you must submit an online request and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container). +> * Translator container supports limited features compared to the cloud offerings. For more information, _see_ [**Container translate methods**](translator-container-supported-parameters.md). 
<!-- markdownlint-disable MD033 --> ## Prerequisites -To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). +To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). -You'll also need to have: +You also need: | Required | Purpose | |--|--|-| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> | +| Familiarity with Docker | <ul><li>You should have a basic understanding of Docker concepts like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology).</li></ul> | | Docker Engine | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. 
</li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> |-| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) resource with region other than 'global', associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>| +| Translator resource | <ul><li>An Azure [Translator](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation) regional resource (not `global`) with an associated API key and endpoint URI. Both values are required to start the container and can be found on the resource overview page.</li></ul>| |Optional|Purpose| ||-| curl -X POST "http://localhost:5000/translate?api-version=3.0&from=en&to=zh-HANS There are several ways to validate that the container is running: -* The container provides a homepage at `\` as a visual validation that the container is running. +* The container provides a homepage at `/` as a visual validation that the container is running. -* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port. +* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate the container is running. The example request URLs listed point to `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port. | Request URL | Purpose | |--|--| |
ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/language-support.md | -| Uzbek (Latin | `uz` |✔|✔||✔|| +| Uzbek (Latin) | `uz` |✔|✔||✔|| | Vietnamese | `vi` |✔|✔|✔|✔|✔| | Welsh | `cy` |✔|✔|✔|✔|✔| | Yucatec Maya | `yua` |✔|✔||✔|| |
aks | Auto Upgrade Node Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-image.md | Last updated 02/03/2023 # Automatically upgrade Azure Kubernetes Service cluster node operating system images (preview) -AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, works in tandem with the existing [auto-upgrade][Autoupgrade] channel, which is used for Kubernetes version upgrades. +AKS now supports an exclusive channel dedicated to controlling node-level OS security updates. This channel, referred to as the node OS auto-upgrade channel, can't be used for cluster-level Kubernetes version upgrades. To automatically upgrade Kubernetes versions, continue to use the cluster [auto-upgrade][Autoupgrade] channel. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] -## Why use node OS auto-upgrade +## How does node OS auto-upgrade work with cluster auto-upgrade? -This channel is exclusively meant to control node OS security updates. You can use this channel to disable [unattended upgrades][unattended-upgrades]. You can schedule maintenance without worrying about [Kured][kured] for security patches, provided you choose either the `SecurityPatch` or `NodeImage` options for `nodeOSUpgradeChannel`. By using this channel, you can run node image upgrades in tandem with Kubernetes version auto-upgrade channels like `Stable` and `Rapid`. +Node-level OS security updates come in at a faster cadence than Kubernetes patch or minor version updates. This is the main reason for introducing a separate, dedicated node OS auto-upgrade channel. With this feature, you can have a flexible and customized strategy for node-level OS security updates and a separate plan for cluster-level Kubernetes version [auto-upgrades][Autoupgrade].
+It's highly recommended to use both cluster-level [auto-upgrades][Autoupgrade] and the node OS auto-upgrade channel together. Scheduling can be fine-tuned by applying two separate sets of [maintenance windows][planned-maintenance] - `aksManagedAutoUpgradeSchedule` for the cluster [auto-upgrade][Autoupgrade] channel and `aksManagedNodeOSUpgradeSchedule` for the node OS auto-upgrade channel. -## Prerequisites +## Using node OS auto-upgrade ++The selected channel determines the timing of upgrades. When making changes to node OS auto-upgrade channels, allow up to 24 hours for the changes to take effect. ++> [!NOTE] +> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it will only work for a cluster in a [supported version][supported]. +++The following upgrade channels are available. You're allowed to choose one of these options: ++|Channel|Description|OS-specific behavior| +||| +| `None`| Your nodes won't have security updates applied automatically. This means you're solely responsible for your security updates.|N/A| +| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially and will be patched at some point by the OS's infrastructure.|Ubuntu applies security patches through unattended upgrade roughly once a day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`. Azure Linux CPU node pools don't automatically apply security patches, so this option behaves equivalently to `None`.| +| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section below for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There may be disruptions when the security patches are applied to the nodes. 
When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs.| +| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.| ++To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example. ++```azurecli-interactive +az aks create --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch +``` ++To set the node OS auto-upgrade channel on an existing cluster, update the *node-os-upgrade-channel* parameter, similar to the following example. ++```azurecli-interactive +az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch +``` +## Cadence and Ownership ++The default cadence means there's no planned maintenance window applied. ++|Channel|Updates Ownership|Default cadence| +||| +| `Unmanaged`|OS-driven security updates. AKS has no control over these updates.|Nightly around 06:00 UTC for Ubuntu and Mariner; monthly for Windows.| +| `SecurityPatch`|AKS|Weekly| +| `NodeImage`|AKS|Weekly| +## Prerequisites +The following prerequisites are only applicable when using the `SecurityPatch` channel. If you aren't using this channel, you can ignore these requirements.
- Must be using API version `11-02-preview` or later - If using Azure CLI, the `aks-preview` CLI extension version `0.5.127` or later must be installed -- If using the `SecurityPatch` channel, the `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription+- The `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription -### Register the 'NodeOsUpgradeChannelPreview' feature flag +### Register the 'NodeOsUpgradeChannelPreview' feature flag Register the `NodeOsUpgradeChannelPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: az provider register --namespace Microsoft.ContainerService ## Limitations -If using the `node-image` cluster auto-upgrade channel or the `NodeImage` node OS auto-upgrade channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`. --The `nodeosupgradechannel` isn't supported on Windows OS node pools. Azure Linux support is now rolled out and is expected to be available in all regions soon. +- Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel value, make sure the [cluster auto-upgrade channel][Autoupgrade] value isn't `node-image`. -## Using node OS auto-upgrade +- The `SecurityPatch` channel isn't supported on Windows OS node pools. + + > [!NOTE] + > By default, any new cluster created with an API version of `06-01-2022`or later will set the node OS auto-upgrade channel value to `NodeImage`. 
Any existing clusters created with an API version earlier than `06-01-2022` will have the node OS auto-upgrade channel value set to `None` by default. -Automatically completed upgrades are functionally the same as manual upgrades. The selected channel determines the timing of upgrades. When making changes to auto-upgrade, allow 24 hours for the changes to take effect. By default, a cluster's node OS auto-upgrade channel is set to `Unmanaged`. -> [!NOTE] -> Node OS image auto-upgrade won't affect the cluster's Kubernetes version, but it still requires the cluster to be in a supported version to function properly. -> When changing channels to `NodeImage` or `SecurityPatch`, the unattended upgrades will only be disabled when the image gets applied in the next cycle and not immediately. +## Using node OS auto-upgrade with Planned Maintenance -The following upgrade channels are available: +If you're using Planned Maintenance and node OS auto-upgrade, your upgrade starts during your specified maintenance window.
When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs.| -| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.| +> [!NOTE] +> To ensure proper functionality, use a maintenance window of four hours or more. -To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example. +For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance]. -```azurecli-interactive -az aks create --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch -``` +## FAQ -To set the auto-upgrade channel on existing cluster, update the *node-os-upgrade-channel* parameter, similar to the following example. +* How can I check the current nodeOsUpgradeChannel value on a cluster? -```azurecli-interactive -az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch -``` +Run the `az aks show` command and check the "autoUpgradeProfile" to determine what value the `nodeOsUpgradeChannel` is set to. -## Using node OS auto-upgrade with Planned Maintenance +* How can I monitor the status of node OS auto-upgrades? 
-If you're using Planned Maintenance and node OS auto-upgrade, your upgrade starts during your specified maintenance window. +To view the status of your node OS auto-upgrades, look up [activity logs][monitor-aks] on your cluster. You may also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid]. -> [!NOTE] -> To ensure proper functionality, use a maintenance window of four hours or more. +* Can I change the node OS auto-upgrade channel value if my cluster auto-upgrade channel is set to `node-image`? -For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance]. + No. Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change the node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to be able to change the node OS auto-upgrade channel values, make sure the cluster auto-upgrade channel isn't `node-image`. <!-- LINKS --> [planned-maintenance]: planned-maintenance.md For more information on Planned Maintenance, see [Use Planned Maintenance to sch [unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates [Autoupgrade]: auto-upgrade-cluster.md [kured]: node-updates-kured.md+[supported]: ./support-policies.md +[monitor-aks]: ./monitor-aks-reference.md +[aks-eventgrid]: ./quickstart-event-grid.md +[aks-upgrade]: ./upgrade-cluster.md |
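The FAQ's first answer can be scripted: capture `az aks show -o json` and read the channel from `autoUpgradeProfile`. A minimal sketch follows; the sample JSON is an assumed, heavily trimmed shape of the command's output, kept only to show where the field lives:

```python
import json

# Trimmed, assumed sample of `az aks show -o json` output.
sample = """
{
  "name": "myAKSCluster",
  "autoUpgradeProfile": {
    "upgradeChannel": "stable",
    "nodeOsUpgradeChannel": "SecurityPatch"
  }
}
"""

profile = json.loads(sample).get("autoUpgradeProfile") or {}
# The field may be absent on clusters created before the feature existed.
channel = profile.get("nodeOsUpgradeChannel")
print(channel)  # -> SecurityPatch
```

Equivalently, the Azure CLI's own `--query "autoUpgradeProfile.nodeOsUpgradeChannel"` flag on `az aks show` avoids the local parsing step entirely.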
aks | Azure Ad Integration Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-integration-cli.md | description: Learn how to use the Azure CLI to create and Azure Active Directory Previously updated : 07/07/2023 Last updated : 08/15/2023 # Integrate Azure Active Directory with Azure Kubernetes Service (AKS) using the Azure CLI (legacy) > [!WARNING]-> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from August 1st, 2023. +> The feature described in this document, Azure AD Integration (legacy) was **deprecated on June 1st, 2023**. At this time, no new clusters can be created with Azure AD Integration (legacy). All Azure AD Integration (legacy) AKS clusters will be migrated to AKS-managed Azure AD automatically starting from December 1st, 2023. > > AKS has a new improved [AKS-managed Azure AD][managed-aad] experience that doesn't require you to manage server or client applications. If you want to migrate follow the instructions [here][managed-aad-migrate]. |
aks | Azure Csi Blob Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md | This section provides guidance for cluster administrators who want to provision |location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.| |resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.| |storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|+|networkEndpointType| Specify network endpoint type for the storage account created by driver. If privateEndpoint is specified, a [private endpoint][storage-account-private-endpoint] is created for the storage account. For other cases, a service endpoint will be created for NFS protocol.<sup>1</sup> | `privateEndpoint` | No | For an AKS cluster, add the AKS cluster name to the Contributor role in the resource group hosting the VNET.| |protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`| |containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. | |containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. 
| No | This section provides guidance for cluster administrators who want to provision | | **Following parameters are only for NFS protocol** | | | | |mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No | +<sup>1</sup> If the storage account is created by the driver, then you only need to specify `networkEndpointType: privateEndpoint` parameter in storage class. The CSI driver creates the private endpoint together with the account. If you bring your own storage account, then you need to [create the private endpoint][storage-account-private-endpoint] for the storage account. + ### Create a persistent volume claim using built-in storage class A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation. This section provides guidance for cluster administrators who want to create one ### Create a Blob storage container -When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role to the blob storage resource group. +When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. For this article, create the container in the node resource group. 
First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**: The following YAML creates a pod that uses the persistent volume or persistent v [az-tags]: ../azure-resource-manager/management/tag-resources.md [sas-tokens]: ../storage/common/storage-sas-overview.md [azure-datalake-storage-account]: ../storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md+[storage-account-private-endpoint]: ../storage/common/storage-private-endpoints.md |
aks | Azure Csi Files Storage Provision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md | The following YAML creates a pod that uses the persistent volume claim *my-azure metadata: name: mypod spec:- containers: - - name: mypod - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine - resources: - requests: - cpu: 100m - memory: 128Mi - limits: - cpu: 250m - memory: 256Mi - volumeMounts: - - mountPath: "/mnt/azure" - name: volume - volumes: - - name: volume - persistentVolumeClaim: - claimName: my-azurefile + containers: + - name: mypod + image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + volumeMounts: + - mountPath: /mnt/azure + name: volume + volumes: + - name: volume + persistentVolumeClaim: + claimName: my-azurefile ``` 2. Create the pod using the [`kubectl apply`][kubectl-apply] command. |
aks | Azure Files Csi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md | The output of the commands resembles the following example: [kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec [csi-specification]: https://github.com/container-storage-interface/spec/blob/master/spec.md [data-plane-api]: https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/azcore/internal/shared/shared.go+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply + <!-- LINKS - internal --> [csi-drivers-overview]: csi-storage-drivers.md |
aks | Cluster Autoscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md | This article requires Azure CLI version 2.0.76 or later. Run `az --version` to f To adjust to changing application demands, such as between workdays and evenings or weekends, clusters often need a way to automatically scale. AKS clusters can scale in one of two ways: -* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. +* The **cluster autoscaler** watches for pods that can't be scheduled on nodes because of resource constraints. The cluster then automatically increases the number of nodes. For more information, see [How does scale-up work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-up-work) * The **horizontal pod autoscaler** uses the Metrics Server in a Kubernetes cluster to monitor the resource demand of pods. If an application needs more resources, the number of pods is automatically increased to meet the demand. ![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png) To further help improve cluster resource utilization and free up CPU and memory [aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group [aks-multiple-node-pools]: create-node-pools.md [aks-scale-apps]: tutorial-kubernetes-scale.md-[aks-view-master-logs]: ../azure-monitor/containers/monitor-kubernetes.md#configure-monitoring +[aks-view-master-logs]: monitor-aks.md#resource-logs [azure-cli-install]: /cli/azure/install-azure-cli [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update |
aks | Configure Kubenet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet.md | For more information to help you decide which network model to use, see [Compare --service-cidr 10.0.0.0/16 \ --dns-service-ip 10.0.0.10 \ --pod-cidr 10.244.0.0/16 \- --docker-bridge-address 172.17.0.1/16 \ --vnet-subnet-id $SUBNET_ID ``` For more information to help you decide which network model to use, see [Compare * This address range must be large enough to accommodate the number of nodes that you expect to scale up to. You can't change this address range once the cluster is deployed. * The pod IP address range is used to assign a */24* address space to each node in the cluster. In the following example, the *--pod-cidr* of *10.244.0.0/16* assigns the first node *10.244.0.0/24*, the second node *10.244.1.0/24*, and the third node *10.244.2.0/24*. * As the cluster scales or upgrades, the Azure platform continues to assign a pod IP address range to each new node.- * *--docker-bridge-address* is optional. The address lets the AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster and shouldn't overlap with other address ranges in use on your network. The default value is 172.17.0.1/16. > [!NOTE] > If you want to enable an AKS cluster to include a [Calico network policy][calico-network-policies], you can use the following command: For more information to help you decide which network model to use, see [Compare > --resource-group myResourceGroup \ > --name myAKSCluster \ > --node-count 3 \-> --network-plugin kubenet --network-policy calico \ +> --network-plugin kubenet \ +> --network-policy calico \ > --vnet-subnet-id $SUBNET_ID > ``` |
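The per-node /24 carve-out described above (each node taking the next /24 block out of the cluster-wide *--pod-cidr*) can be illustrated with Python's standard `ipaddress` module. This is only a sketch of the address arithmetic, not anything AKS itself runs.

```python
import ipaddress

# The cluster-wide pod CIDR from the example above
pod_cidr = ipaddress.ip_network("10.244.0.0/16")

# Each node receives the next free /24 from this range
node_subnets = list(pod_cidr.subnets(new_prefix=24))

print(node_subnets[0])    # first node:  10.244.0.0/24
print(node_subnets[1])    # second node: 10.244.1.0/24
print(node_subnets[2])    # third node:  10.244.2.0/24
print(len(node_subnets))  # 256 /24 blocks available in the /16
```

Because each node consumes a whole /24, a /16 pod CIDR supports at most 256 nodes, which is why the range must be sized for the node count you expect to scale to.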
aks | Create Node Pools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md | The Azure Linux container host for AKS is an open-source Linux distribution avai az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \- --name azurelinuxpool \ + --name azlinuxpool \ --os-sku AzureLinux ``` |
aks | Deploy Marketplace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md | description: Learn how to deploy Kubernetes applications from Azure Marketplace Previously updated : 05/01/2023 Last updated : 08/18/2023 Included among these solutions are Kubernetes application-based container offers This feature is currently supported only in the following regions: -- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India +- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India Kubernetes application-based container offers can't be deployed on AKS for Azure Stack HCI or AKS Edge Essentials. -## Register resource providers --Before you deploy a container offer, you must register the `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` providers on your subscription by using the `az provider register` command: --```azurecli-interactive -az provider register --namespace Microsoft.ContainerService --wait -az provider register --namespace Microsoft.KubernetesConfiguration --wait -``` - ## Select and deploy a Kubernetes application -### From the AKS portal screen +### From an AKS cluster 1. 
In the [Azure portal](https://portal.azure.com/), you can deploy a Kubernetes application from an existing cluster by navigating to **Marketplace** or selecting **Extensions + applications**, then selecting **+ Add**. az provider register --namespace Microsoft.KubernetesConfiguration --wait 1. After you decide on an application, select the offer. -1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**. +1. On the **Plans + Pricing** tab, select an option. Ensure that the terms are acceptable, and then select **Create**. :::image type="content" source="./media/deploy-marketplace/plan-pricing.png" alt-text="Screenshot of the offer purchasing page in the Azure portal, showing plan and pricing information."::: -1. Follow each page in the wizard, all the way through Review + Create. Fill in information for your resource group, your cluster, and any configuration options that the application requires. +1. Follow each page in the wizard, all the way through **Review + Create**. Fill in information for your resource group, your cluster, and any configuration options that the application requires. :::image type="content" source="./media/deploy-marketplace/review-create.png" alt-text="Screenshot of the Azure portal wizard for deploying a new offer, with the selector for creating a cluster or using an existing one."::: az provider register --namespace Microsoft.KubernetesConfiguration --wait :::image type="content" source="./media/deploy-marketplace/deploying.png" alt-text="Screenshot of the Azure portal deployments screen, showing that the Kubernetes offer is currently being deployed."::: -### From the Marketplace portal screen +### Search in the Azure portal 1. In the [Azure portal](https://portal.azure.com/), search for **Marketplace** on the top search bar. In the results, under **Services**, select **Marketplace**. 
You can view the extension instance from the cluster by using the following comm az k8s-extension show --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` ------- ## Monitor billing and usage information To monitor billing and usage information for the offer that you deployed: You can delete a purchased plan for an Azure container offer by deleting the extension instance on the cluster. -- ### [Portal](#tab/azure-portal) Select an application, then select the uninstall button to remove the extension from your cluster: Select an application, then select the uninstall button to remove the extension az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type managedClusters ``` - ## Troubleshooting If you experience issues, see the [troubleshooting checklist for failed deployme ## Next steps - Learn more about [exploring and analyzing costs][billing].+- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli) <!-- LINKS --> [azure-marketplace]: /marketplace/azure-marketplace-overview- [cluster-extensions]: ./cluster-extensions.md- [billing]: ../cost-management-billing/costs/quick-acm-cost-analysis.md--[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer ------- Learn more about [deploying a Kubernetes application programmatically using Azure CLI](/azure/aks/deploy-application-az-cli)----+[marketplace-troubleshoot]: /troubleshoot/azure/azure-kubernetes/troubleshoot-failed-kubernetes-deployment-offer |
aks | Image Cleaner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md | -# Use Image Cleaner to clean up stale images on your Azure Kubernetes Service cluster (preview) +# Use Image Cleaner to clean up stale images on your Azure Kubernetes Service (AKS) cluster -It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images can present security issues as they may contain vulnerabilities. By cleaning these unreferenced images, you can remove an area of risk in your clusters. When done manually, this process can be time intensive, which Image Cleaner can mitigate via automatic image identification and removal. +It's common to use pipelines to build and deploy images on Azure Kubernetes Service (AKS) clusters. While great for image creation, this process often doesn't account for the stale images left behind and can lead to image bloat on cluster nodes. These images may contain vulnerabilities, which may create security issues. To remove security risks in your clusters, you can clean these unreferenced images. Manually cleaning images can be time intensive. Image Cleaner performs automatic image identification and removal, which mitigates the risk of stale images and reduces the time required to clean them up. > [!NOTE]-> Image Cleaner is a feature based on [Eraser](https://azure.github.io/eraser). -> On an AKS cluster, the feature name and property name is `Image Cleaner` while the relevant Image Cleaner pods' names contain `Eraser`. -+> Image Cleaner is a feature based on [Eraser](https://eraser-dev.github.io/eraser). +> On an AKS cluster, the feature name and property name is `Image Cleaner`, while the relevant Image Cleaner pods' names contain `Eraser`. ## Prerequisites * An Azure subscription. 
If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).-* [Azure CLI][azure-cli-install] or [Azure PowerShell][azure-powershell-install] and the `aks-preview` 0.5.96 or later CLI extension installed. -* The `EnableImageCleanerPreview` feature flag registered on your subscription: --### [Azure CLI](#tab/azure-cli) --First, install the aks-preview extension by running the following command: --```azurecli -az extension add --name aks-preview -``` --Run the following command to update to the latest version of the extension released: --```azurecli -az extension update --name aks-preview -``` --Then register the `EnableImageCleanerPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: --```azurecli-interactive -az feature register --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview" -``` --It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command: +* Azure CLI version 2.49.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. -```azurecli-interactive -az feature show --namespace "Microsoft.ContainerService" --name "EnableImageCleanerPreview" -``` --When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: +## Limitations -```azurecli-interactive -az provider register --namespace Microsoft.ContainerService -``` +Image Cleaner doesn't yet support Windows node pools or AKS virtual nodes. 
-### [Azure PowerShell](#tab/azure-powershell) +## How Image Cleaner works -Register the `EnableImageCleanerPreview` feature flag by using the [Register-AzProviderPreviewFeature][register-azproviderpreviewfeature] cmdlet, as shown in the following example: +When you enable Image Cleaner, it deploys an `eraser-controller-manager` pod, which generates an `ImageList` CRD. The eraser pods running on each node clean up any unreferenced and vulnerable images according to the `ImageList`. A [trivy][trivy] scan helps determine vulnerability and flags images with a classification of `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL`. Image Cleaner automatically generates an updated `ImageList` based on a set time interval and can also be supplied manually. Once Image Cleaner generates an `ImageList`, it removes all images in the list from node VMs. -```azurepowershell-interactive -Register-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EnableImageCleanerPreview -``` -It takes a few minutes for the status to show *Registered*. 
Verify the registration status by using the [Get-AzProviderPreviewFeature][get-azproviderpreviewfeature] cmdlet: +## Configuration options -```azurepowershell-interactive -Get-AzProviderPreviewFeature -ProviderNamespace Microsoft.ContainerService -Name EnableImageCleanerPreview | - Format-Table -Property Name, @{name='State'; expression={$_.Properties.State}} -``` +With Image Cleaner, you can choose between manual and automatic mode and the following configuration options: -When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [Register-AzResourceProvider][register-azresourceprovider] command: +|Name|Description|Required| +|-|--|--| +|`--enable-image-cleaner`|Enable the Image Cleaner feature for an AKS cluster|Yes, unless disable is specified| +|`--disable-image-cleaner`|Disable the Image Cleaner feature for an AKS cluster|Yes, unless enable is specified| +|`--image-cleaner-interval-hours`|This parameter determines the interval time (in hours) Image Cleaner uses to run. The default value for Azure CLI is one week, the minimum value is 24 hours and the maximum is three months.|Not required for Azure CLI, required for ARM template or other clients| -```azurepowershell-interactive -Register-AzResourceProvider -ProviderNamespace Microsoft.ContainerService -``` +> [!NOTE] +> After disabling Image Cleaner, the old configuration still exists. This means if you enable the feature again without explicitly passing configuration, the existing value is used instead of the default. -+## Enable Image Cleaner on your AKS cluster -## Limitations +### Enable Image Cleaner on a new cluster -Image Cleaner does not support the following: +* Enable Image Cleaner on a new AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-image-cleaner` parameter. -* ARM64 node pools. For more information, see [Azure Virtual Machines with ARM-based processors][arm-vms]. -* Windows node pools. 
+ ```azurecli-interactive + az aks create -g myResourceGroup -n myManagedCluster \ + --enable-image-cleaner + ``` -## How Image Cleaner works +### Enable Image Cleaner on an existing cluster -When enabled, an `eraser-controller-manager` pod is deployed, which generates an `ImageList` CRD. The eraser pods running on each nodes will clean up the unreferenced and vulnerable images according to the ImageList. Vulnerability is determined based on a [trivy][trivy] scan, after which images with a `LOW`, `MEDIUM`, `HIGH`, or `CRITICAL` classification are flagged. An updated `ImageList` will be automatically generated by Image Cleaner based on a set time interval, and can also be supplied manually. +* Enable Image Cleaner on an existing AKS cluster using the [`az aks update`][az-aks-update] command. + ```azurecli-interactive + az aks update -g myResourceGroup -n myManagedCluster \ + --enable-image-cleaner + ``` +### Update the Image Cleaner interval on a new or existing cluster -Once an `ImageList` is generated, Image Cleaner will remove all the images in the list from node VMs. +* Update the Image Cleaner interval on a new or existing AKS cluster using the `--image-cleaner-interval-hours` parameter. + ```azurecli-interactive + # Update the interval on a new cluster + az aks create -g myResourceGroup -n myManagedCluster \ + --enable-image-cleaner \ + --image-cleaner-interval-hours 48 + # Update the interval on an existing cluster + az aks update -g myResourceGroup -n myManagedCluster \ + --image-cleaner-interval-hours 48 + ``` -## Configuration options +After you enable the feature, the `eraser-controller-manager-xxx` pod and `collector-aks-xxx` pod are deployed. 
The `eraser-aks-xxx` pod contains *three* containers: -In addition to choosing between manual and automatic mode, there are several options for Image Cleaner: + - **Scanner container**: Performs vulnerability image scans + - **Collector container**: Collects nonrunning and unused images + - **Remover container**: Removes these images from cluster nodes -|Name|Description|Required| -|-|--|--| -|--enable-image-cleaner|Enable the Image Cleaner feature for an AKS cluster|Yes, unless disable is specified| -|--disable-image-cleaner|Disable the Image Cleaner feature for an AKS cluster|Yes, unless enable is specified| -|--image-cleaner-interval-hours|This parameter determines the interval time (in hours) Image Cleaner will use to run. The default value for Azure CLI is one week, the minimum value is 24 hours and the maximum is three months.|Not required for Azure CLI, required for ARM template or other clients| +Image Cleaner generates an `ImageList` containing nonrunning and vulnerable images at the desired interval based on your configuration. Image Cleaner automatically removes these images from cluster nodes. -> [!NOTE] -> After disabling Image Cleaner, the old configuration still exists. This means that if you enable the feature again without explicitly passing configuration, the existing value will be used rather than the default. +## Manually remove images using Image Cleaner -## Enable Image Cleaner on your AKS cluster +1. Create an `ImageList` using the following example YAML named `image-list.yml`. -To create a new AKS cluster using the default interval, use [az aks create][az-aks-create]: + ```yml + apiVersion: eraser.sh/v1alpha1 + kind: ImageList + metadata: + name: imagelist + spec: + images: + - docker.io/library/alpine:3.7.3 # You can also use "*" to specify all non-running images + ``` -```azurecli-interactive -az aks create -g MyResourceGroup -n MyManagedCluster \ - --enable-image-cleaner -``` +2. 
Apply the `ImageList` to your cluster using the `kubectl apply` command. -To enable on an existing AKS cluster, use [az aks update][az-aks-update]: + ```bash + kubectl apply -f image-list.yml + ``` -```azurecli-interactive -az aks update -g MyResourceGroup -n MyManagedCluster \ - --enable-image-cleaner -``` + Applying the `ImageList` triggers a job named `eraser-aks-xxx`, which causes Image Cleaner to remove the desired images from all nodes. Unlike the `eraser-aks-xxx` pod under autoclean with *three* containers, the eraser-pod here has only *one* container. -The `--image-cleaner-interval-hours` parameter can be specified at creation time or for an existing cluster. For example, the following command updates the interval for a cluster with Image Cleaner already enabled: +## Image exclusion list -```azurecli-interactive -az aks update -g MyResourceGroup -n MyManagedCluster \ - --image-cleaner-interval-hours 48 -``` +Images specified in the exclusion list aren't removed from the cluster. Image Cleaner supports system and user-defined exclusion lists. It's not supported to edit the system exclusion list. -After the feature is enabled, the `eraser-controller-manager-xxx` pod and `collector-aks-xxx` pod will be deployed. -Based on your configuration, Image Cleaner will generate an `ImageList` containing non-running and vulnerable images at the desired interval. Image Cleaner will automatically remove these images from cluster nodes. +### Check the system exclusion list -## Manually remove images +* Check the system exclusion list using the following `kubectl get` command. -To manually remove images from your cluster using Image Cleaner, first create an `ImageList`. 
For example, save the following as `image-list.yml`: + ```bash + kubectl get -n kube-system cm eraser-system-exclusion -o yaml + ``` -```yml -apiVersion: eraser.sh/v1alpha1 -kind: ImageList -metadata: - name: imagelist -spec: - images: - - docker.io/library/alpine:3.7.3 # You can also use "*" to specify all non-running images -``` +### Create a user-defined exclusion list -And apply it to the cluster: +1. Create a sample JSON file to contain excluded images. -```bash -kubectl apply -f image-list.yml -``` + ```bash + cat > sample.json <<EOF + {"excluded": ["excluded-image-name"]} + EOF + ``` -A job named `eraser-aks-xxx`will be triggered which causes Image Cleaner to remove the desired images from all nodes. +2. Create a `configmap` using the sample JSON file using the following `kubectl create` and `kubectl label` command. -## Disable Image Cleaner + ```bash + kubectl create configmap excluded --from-file=sample.json --namespace=kube-system + kubectl label configmap excluded eraser.sh/exclude.list=true -n kube-system + ``` -To stop using Image Cleaner, you can disable it via the `--disable-image-cleaner` flag: +3. Verify the images are in the exclusion list using the following `kubectl logs` command. -```azurecli-interactive -az aks update -g MyResourceGroup -n MyManagedCluster - --disable-image-cleaner -``` + ```bash + kubectl logs -n kube-system <eraser-pod-name> + ``` -## Logging +## Image Cleaner image logs -Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images, and in `collector-aks-nodes-xxx` pods for automatically deleted images. +Deletion image logs are stored in `eraser-aks-nodepool-xxx` pods for manually deleted images and in `collector-aks-nodes-xxx` pods for automatically deleted images. -You can view these logs by running `kubectl logs <pod name> -n kubesystem`. However, this command may return only the most recent logs, since older logs are routinely deleted. 
To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table. +You can view these logs using the `kubectl logs <pod name> -n kubesystem` command. However, this command may return only the most recent logs, since older logs are routinely deleted. To view all logs, follow these steps to enable the [Azure Monitor add-on](./monitor-aks.md) and use the Container Insights pod log table. -1. Ensure that Azure monitoring is enabled on the cluster. For detailed steps, see [Enable Container Insights for AKS cluster](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster). +1. Ensure Azure Monitoring is enabled on your cluster. For detailed steps, see [Enable Container Insights on AKS clusters](../azure-monitor/containers/container-insights-enable-aks.md#existing-aks-cluster). -1. Get the Log Analytics resource ID: +2. Get the Log Analytics resource ID using the [`az aks show`][az-aks-show] command. ```azurecli- az aks show -g <resourceGroupofAKSCluster> -n <nameofAksCluster> + az aks show -g myResourceGroup -n myManagedCluster ``` - After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID: + After a few minutes, the command returns JSON-formatted information about the solution, including the workspace resource ID. - ```json + ```json "addonProfiles": { "omsagent": { "config": { You can view these logs by running `kubectl logs <pod name> -n kubesystem`. Howe "enabled": true } }- ``` + ``` -1. In the Azure portal, search for the workspace resource ID, then select **Logs**. +3. In the Azure portal, search for the workspace resource ID, then select **Logs**. -1. Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `collector-aks-nodes-xxx` (for automatic mode). +4. 
Copy this query into the table, replacing `name` with either `eraser-aks-nodepool-xxx` (for manual mode) or `collector-aks-nodes-xxx` (for automatic mode). ```kusto let startTimestamp = ago(1h); You can view these logs by running `kubectl logs <pod name> -n kubesystem`. Howe | order by TimeGenerated desc ``` -1. Select **Run**. Any deleted image logs will appear in the **Results** area. +5. Select **Run**. Any deleted image logs appear in the **Results** area. :::image type="content" source="media/image-cleaner/eraser-log-analytics.png" alt-text="Screenshot showing deleted image logs in the Azure portal." lightbox="media/image-cleaner/eraser-log-analytics.png"::: +## Disable Image Cleaner ++* Disable Image Cleaner on your cluster using the [`az aks update`][az-aks-update] command with the `--disable-image-cleaner` parameter. ++ ```azurecli-interactive + az aks update -g myResourceGroup -n myManagedCluster \ + --disable-image-cleaner + ``` + <!-- LINKS --> [azure-cli-install]: /cli/azure/install-azure-cli-[azure-powershell-install]: /powershell/azure/install-az-ps - [az-aks-create]: /cli/azure/aks#az_aks_create [az-aks-update]: /cli/azure/aks#az_aks_update-[az-feature-register]: /cli/azure/feature#az-feature-register -[register-azproviderpreviewfeature]: /powershell/module/az.resources/register-azproviderpreviewfeature -[az-feature-show]: /cli/azure/feature#az-feature-show -[get-azproviderpreviewfeature]: /powershell/module/az.resources/get-azproviderpreviewfeature -[az-provider-register]: /cli/azure/provider#az-provider-register -[register-azresourceprovider]: /powershell/module/az.resources/register-azresourceprovider --[arm-vms]: https://azure.microsoft.com/blog/azure-virtual-machines-with-ampere-altra-arm-based-processors-generally-available/ [trivy]: https://github.com/aquasecurity/trivy+[az-aks-show]: /cli/azure/aks#az_aks_show |
aks | Intro Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md | Learn more about deploying and managing AKS. [azure-monitor-overview]: ../azure-monitor/overview.md [container-insights]: ../azure-monitor/containers/container-insights-overview.md [azure-monitor-managed-prometheus]: ../azure-monitor/essentials/prometheus-metrics-overview.md-[collect-control-plane-logs]: monitor-aks.md#collect-control-plane-logs +[collect-resource-logs]: monitor-aks.md#resource-logs [azure-monitor-logs]: ../azure-monitor/logs/data-platform-logs.md [helm]: quickstart-helm.md [aks-best-practices]: best-practices.md |
aks | Istio About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-about.md | This service mesh add-on uses and builds on top of open-source Istio. The add-on ## Limitations Istio-based service mesh add-on for AKS has the following limitations:--* The add-on currently doesn't work on AKS clusters using [Azure CNI Powered by Cilium][azure-cni-cilium]. * The add-on doesn't work on AKS clusters that are using [Open Service Mesh addon for AKS][open-service-mesh-about]. * The add-on doesn't work on AKS clusters that have Istio installed on them already outside the add-on installation. * The add-on manages the lifecycle of the mesh, including which Istio versions are installed and when they're made available for upgrades. |
aks | Quick Kubernetes Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md | After a few minutes, the command completes and returns information about the clu To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. -1. Install `kubectl` locally using the `Install-AzAksKubectl` cmdlet: +1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet: ```azurepowershell- Install-AzAksKubectl + Install-AzAksCliTool ``` 2. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following cmdlet downloads credentials and configures the Kubernetes CLI to use them. |
aks | Quick Windows Container Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md | AKS supports Windows Server 2019 and 2022 node pools. Windows Server 2022 is the ## Connect to the cluster -You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. To you want to install `kubectl` locally, you can use the `Install-AzAksKubectl` cmdlet. +You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install `kubectl` locally, you can use the `Install-AzAksCliTool` cmdlet. 1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. |
aks | Load Balancer Standard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md | spec: This example updates the rule to allow inbound external traffic only from the `MY_EXTERNAL_IP_RANGE` range. If you replace `MY_EXTERNAL_IP_RANGE` with the internal subnet IP address, traffic is restricted to only cluster internal IPs. If traffic is restricted to cluster internal IPs, clients outside your Kubernetes cluster are unable to access the load balancer. > [!NOTE]-> Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer. +> * Inbound, external traffic flows from the load balancer to the virtual network for your AKS cluster. The virtual network has a network security group (NSG) which allows all inbound traffic from the load balancer. This NSG uses a [service tag][service-tags] of type *LoadBalancer* to allow traffic from the load balancer. +> * For clusters running version 1.25 or later, add the Pod CIDR to `loadBalancerSourceRanges` if Pods need to access the service's LoadBalancer IP. ## Maintain the client's IP on inbound connections |
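The `loadBalancerSourceRanges` guidance above can be sketched as a Service manifest. This is a minimal illustration only, not taken from the source article: the service name, ports, the external client range (`203.0.113.0/24`), and the pod CIDR (`10.244.0.0/16`) are placeholder assumptions to be replaced with your own values.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  # Only the ranges listed here may reach the load balancer frontend.
  loadBalancerSourceRanges:
    - 203.0.113.0/24     # placeholder external client range
    # On clusters running v1.25 or later, also include the pod CIDR
    # if pods need to reach the service's LoadBalancer IP:
    - 10.244.0.0/16      # placeholder pod CIDR
```

Replacing the external range with the internal subnet address restricts traffic to cluster-internal IPs, as the article notes.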
aks | Monitor Aks Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md | The following table lists [dimensions](../azure-monitor/essentials/data-platform ## Resource logs -AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [How to query logs from Container insights](../azure-monitor/containers/container-insights-log-query.md#resource-logs) for query examples. +AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [Sample queries](monitor-aks-reference.md#resource-logs) for query examples. The following table lists the resource log categories you can collect for AKS. All logs are written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. |
aks | Network Observability Byo Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-byo-cli.md | +> [!IMPORTANT] +> AKS Network Observability is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md). ## Prerequisites For more information about AKS Network Observability, see [What is Azure Kuberne [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] +- Minimum version of **Azure CLI** required for the steps in this article is **2.44.0**. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). + ### Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] |
aks | Network Observability Managed Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-managed-cli.md | +> [!IMPORTANT] +> AKS Network Observability is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. + For more information about AKS Network Observability, see [What is Azure Kubernetes Service (AKS) Network Observability?](network-observability-overview.md). ## Prerequisites For more information about AKS Network Observability, see [What is Azure Kuberne [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] +- The minimum version of **Azure CLI** required for the steps in this article is **2.44.0**. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ### Install the `aks-preview` Azure CLI extension [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] az aks get-credentials --name myAKSCluster --resource-group myResourceGroup --namespace kube-system ``` -1. Azure Monitor pods should restart themselves, if they do not please rollout restart with following command: - ```azurecli-interactive +1. Azure Monitor pods should restart themselves. If they don't, run a rollout restart with the following command: + +```azurecli-interactive kubectl rollout restart deploy -n kube-system ama-metrics ``` |
aks | Planned Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md | Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview) + Title: Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster + description: Learn how to use Planned Maintenance to schedule and control cluster and node image upgrades in Azure Kubernetes Service (AKS). -# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster (preview) +# Use Planned Maintenance to schedule and control upgrades for your Azure Kubernetes Service (AKS) cluster ++Your AKS cluster has regular maintenance performed on it automatically. There are two types of regular maintenance: AKS-initiated maintenance and maintenance that you initiate. The Planned Maintenance feature allows you to run both types of maintenance on a cadence of your choice, minimizing workload impact. -Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows to perform updates and minimize workload impact. Once scheduled, upgrades occur only during the window you selected. +AKS-initiated maintenance refers to AKS releases. These releases are weekly rounds of fixes and feature and component updates that affect your clusters. The types of maintenance that you initiate regularly are [cluster auto-upgrades][aks-upgrade] and [Node OS automatic security updates][node-image-auto-upgrade].
There are currently three available configuration types: `default`, `aksManagedAutoUpgradeSchedule`, `aksManagedNodeOSUpgradeSchedule`: -- `default` corresponds to a basic configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].+- `default` corresponds to a basic configuration that is used to control AKS releases. These releases can take up to two weeks to roll out to all regions from the initial time of shipping due to Azure Safe Deployment Practices (SDP). Choose `default` to schedule these updates in such a way that it's least disruptive for you. You can monitor the status of an ongoing AKS release by region from the [weekly releases tracker][release-tracker]. - `aksManagedAutoUpgradeSchedule` controls when cluster upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade]. -- `aksManagedNodeOSUpgradeSchedule` controls when node operating system upgrades scheduled by your node OS auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible than in a `default configuration. For more information on node OS auto-upgrade, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade]+- `aksManagedNodeOSUpgradeSchedule` controls when the node operating system security patching scheduled by your node OS auto-upgrade channel is performed. More finely controlled cadence and recurrence settings are possible than in a `default` configuration.
For more information on node OS auto-upgrade channel, see [Automatically patch and update AKS cluster node images][node-image-auto-upgrade] -We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node image upgrade scenarios, while `default` is meant exclusively for weekly releases. You can port `default` configurations to `aksManagedAutoUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command. --To configure Planned Maintenance using pre-created configurations, see [Use Planned Maintenance pre-created configurations to schedule AKS weekly releases][pm-weekly]. +We recommend using `aksManagedAutoUpgradeSchedule` for all cluster upgrade scenarios and `aksManagedNodeOSUpgradeSchedule` for all node OS security patching scenarios, while `default` is meant exclusively for the AKS weekly releases. You can port `default` configurations to the `aksManagedAutoUpgradeSchedule` or `aksManagedNodeOSUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command. ## Before you begin This article assumes that you have an existing AKS cluster. If you need an AKS c Be sure to upgrade Azure CLI to the latest version using [`az upgrade`](/cli/azure/update-azure-cli#manual-update). --### Limitations --When you use Planned Maintenance, the following restrictions apply: --- AKS reserves the right to break these windows for unplanned/reactive maintenance operations that are urgent or critical.-- Currently, performing maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.-- Updates can't be blocked for more than seven days.--### Install aks-preview CLI extension --You also need the *aks-preview* Azure CLI extension version 0.5.124 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. 
Or install any available updates by using the [az extension update][az-extension-update] command. --```azurecli-interactive -# Install the aks-preview extension -az extension add --name aks-preview --# Update the extension to make sure you have the latest version installed -az extension update --name aks-preview -``` - ## Creating a maintenance window -To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command using the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name will cause your maintenance window not to run. +To create a maintenance window, you can use the `az aks maintenanceconfiguration add` command with the `--name` value `default`, `aksManagedAutoUpgradeSchedule`, or `aksManagedNodeOSUpgradeSchedule`. The name value should reflect the desired configuration type. Using any other name causes your maintenance window not to run. > [!NOTE] > When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more. A `RelativeMonthly` schedule may look like *"every two months, on the last Monda Valid values for `weekIndex` are `First`, `Second`, `Third`, `Fourth`, and `Last`. +### Things to note ++When you use Planned Maintenance, the following restrictions apply: ++- AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical. These maintenance operations may even run during the `notAllowedTime` or `notAllowedDates` periods defined in your configuration. +- Maintenance operations are considered *best-effort only* and aren't guaranteed to occur within a specified window.
+ ## Add a maintenance window configuration with Azure CLI The following example shows a command to add a new `default` configuration that schedules maintenance to run from 1:00am to 2:00am every Monday: To delete a certain maintenance configuration window in your AKS Cluster, use the ```azurecli-interactive az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCluster --name autoUpgradeSchedule ```+## Frequently Asked Questions ++* How can I check the existing maintenance configurations in my cluster? ++ Use the `az aks maintenanceconfiguration show` command. + +* Can reactive, unplanned maintenance happen during the `notAllowedTime` or `notAllowedDates` periods too? ++ Yes, AKS reserves the right to break these windows for unplanned, reactive maintenance operations that are urgent or critical. ++* How can you tell if a maintenance event occurred? ++ For releases, check your cluster's region, look up release information in [weekly releases][release-tracker], and check whether it matches your maintenance schedule. To view the status of your auto upgrades, look up [activity logs][monitor-aks] on your cluster. You may also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid]. ++* Can you use more than one maintenance configuration at the same time? + + Yes, you can run all three configurations (`default`, `aksManagedAutoUpgradeSchedule`, and `aksManagedNodeOSUpgradeSchedule`) simultaneously. If the windows overlap, AKS decides the running order. ++* Are there any best practices for the maintenance configurations?
+ + We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you use the `NodeImage` channel, because a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay on top of the Kubernetes N-2 [support policy][aks-support-policy]. ## Next steps az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl [auto-upgrade]: auto-upgrade-cluster.md [node-image-auto-upgrade]: auto-upgrade-node-image.md [pm-weekly]: ./aks-planned-maintenance-weekly-releases.md+[monitor-aks]: monitor-aks-reference.md +[aks-eventgrid]:quickstart-event-grid.md +[aks-support-policy]:support-policies.md |
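The cadence recommendations above can be sketched with the `az aks maintenanceconfiguration add` command named in the article. This is a minimal sketch only: the resource group, cluster name, days, times, and durations are placeholder assumptions, not values from the source, and the exact flag set may vary by CLI version.

```azurecli-interactive
# Weekly window for node OS security patching (the NodeImage channel ships
# a new node image every week). Placeholder resource group and cluster name.
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name aksManagedNodeOSUpgradeSchedule \
  --schedule-type Weekly \
  --day-of-week Sunday \
  --interval-weeks 1 \
  --start-time 01:00 \
  --duration 4

# Monthly window for cluster auto-upgrade, to stay within the Kubernetes
# N-2 support policy. Duration of four hours matches the article's minimum.
az aks maintenanceconfiguration add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name aksManagedAutoUpgradeSchedule \
  --schedule-type RelativeMonthly \
  --day-of-week Monday \
  --week-index Last \
  --interval-months 1 \
  --start-time 01:00 \
  --duration 4
```

Run `az aks maintenanceconfiguration show` afterward to verify the windows, as noted in the FAQ.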
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
aks | Rdp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md | If you need more troubleshooting data, you can [view the Kubernetes primary node [install-azure-cli]: /cli/azure/install-azure-cli [install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md-[view-primary-logs]: ../azure-monitor/containers/container-insights-log-query.md#resource-logs +[view-primary-logs]: monitor-aks.md#resource-logs [azure-bastion]: ../bastion/bastion-overview.md |
aks | Scale Down Mode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/scale-down-mode.md | Title: Use Scale-down Mode for your Azure Kubernetes Service (AKS) cluster description: Learn how to use Scale-down Mode in Azure Kubernetes Service (AKS). Previously updated : 09/01/2021 Last updated : 08/21/2023 # Use Scale-down Mode to delete/deallocate nodes in Azure Kubernetes Service (AKS) -By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down. +By default, scale-up operations performed manually or by the cluster autoscaler require the allocation and provisioning of new nodes, and scale-down operations delete nodes. Scale-down Mode allows you to decide whether you would like to delete or deallocate the nodes in your Azure Kubernetes Service (AKS) cluster upon scaling down. When an Azure VM is in the `Stopped` (deallocated) state, you will not be charged for the VM compute resources. However, you'll still need to pay for any OS and data storage disks attached to the VM. This also means that the container images will be preserved on those nodes. For more information, see [States and billing of Azure Virtual Machines][state-billing-azure-vm]. This behavior allows for faster operation speeds, as your deployment uses cached images. Scale-down Mode removes the need to pre-provision nodes and pre-pull container images, saving you compute cost. This article assumes that you have an existing AKS cluster. If you need an AKS c ### Limitations -- [Ephemeral OS][ephemeral-os] disks aren't supported. Be sure to specify managed OS disks via `--node-osdisk-type Managed` when creating a cluster or node pool.+- [Ephemeral OS][ephemeral-os] disks aren't supported. 
Be sure to specify managed OS disks by including the argument `--node-osdisk-type Managed` when creating a cluster or node pool. > [!NOTE] > Previously, while Scale-down Mode was in preview, [spot node pools][spot-node-pool] were unsupported. Now that Scale-down Mode is Generally Available, this limitation no longer applies. ## Using Scale-down Mode to deallocate nodes on scale-down -By setting `--scale-down-mode Deallocate`, nodes will be deallocated during a scale-down of your cluster/node pool. All deallocated nodes are stopped. When your cluster/node pool needs to scale up, the deallocated nodes will be started first before any new nodes are provisioned. +By setting `--scale-down-mode Deallocate`, nodes will be deallocated during a scale-down of your cluster/node pool. All deallocated nodes are stopped. When your cluster/node pool needs to scale up, the deallocated nodes are started first before any new nodes are provisioned. -In this example, we create a new node pool with 20 nodes and specify that upon scale-down, nodes are to be deallocated via `--scale-down-mode Deallocate`. +In this example, we create a new node pool with 20 nodes and specify that upon scale-down, nodes are to be deallocated using the argument `--scale-down-mode Deallocate`. ```azurecli-interactive az aks nodepool add --node-count 20 --scale-down-mode Deallocate --node-osdisk-type Managed --max-pods 10 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup By scaling the node pool and changing the node count to 5, we'll deallocate 15 n az aks nodepool scale --node-count 5 --name nodepool2 --cluster-name myAKSCluster --resource-group myResourceGroup ``` +To deallocate Windows nodes during scale-down, run the following command. The default behavior is consistent with Linux nodes, where nodes are [deleted during scale-down](#using-scale-down-mode-to-delete-nodes-on-scale-down). 
++```azurecli-interactive +az aks nodepool add --node-count 20 --scale-down-mode Deallocate --os-type Windows --node-osdisk-type Managed --max-pods 10 --name npwin2 --cluster-name myAKSCluster --resource-group myResourceGroup +``` + ### Deleting previously deallocated nodes To delete your deallocated nodes, you can change your Scale-down Mode to `Delete` by setting `--scale-down-mode Delete`. The 15 deallocated nodes will now be deleted. az aks nodepool update --scale-down-mode Delete --name nodepool2 --cluster-name The default behavior of AKS without using Scale-down Mode is to delete your nodes when you scale-down your cluster. With Scale-down Mode, this behavior can be explicitly achieved by setting `--scale-down-mode Delete`. -In this example, we create a new node pool and specify that our nodes will be deleted upon scale-down via `--scale-down-mode Delete`. Scaling operations will be handled via the cluster autoscaler. +In this example, we create a new node pool and specify that our nodes will be deleted upon scale-down using the argument `--scale-down-mode Delete`. Scaling operations will be handled using the cluster autoscaler. 
```azurecli-interactive az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --max-pods 10 --node-osdisk-type Managed --scale-down-mode Delete --name nodepool3 --cluster-name myAKSCluster --resource-group myResourceGroup az aks nodepool add --enable-cluster-autoscaler --min-count 1 --max-count 10 --m [aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md-[aks-support-policies]: support-policies.md -[aks-faq]: faq.md -[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update -[az-feature-list]: /cli/azure/feature#az_feature_list -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli -[az-provider-register]: /cli/azure/provider#az_provider_register [aks-upgrade]: upgrade-cluster.md [cluster-autoscaler]: cluster-autoscaler.md [ephemeral-os]: concepts-storage.md#ephemeral-os-disk |
aks | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
aks | Support Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md | Title: Support policies for Azure Kubernetes Service (AKS) description: Learn about Azure Kubernetes Service (AKS) support policies, shared responsibility, and features that are in preview (or alpha or beta). Previously updated : 05/22/2023 Last updated : 08/28/2023 #Customer intent: As a cluster operator or developer, I want to understand what AKS components I need to manage, what components are managed by Microsoft (including security patches), and networking and preview features. Microsoft doesn't provide technical support for the following scenarios: * Third-party closed-source software. This software can include security scanning tools and networking devices or software. * Network customizations other than the ones listed in the [AKS documentation](./index.yml). * Custom or third-party CNI plugins used in [BYOCNI](use-byo-cni.md) mode.-* Stand-by and proactive scenarios. Microsoft Support provides reactive support to help solve active issues in a timely and professional manner. However, standby or proactive support to help you eliminate operational risks, increase availability, and optimize performance are not covered. [Eligible customers](https://www.microsoft.com/unifiedsupport) can contact their account team to get nominated for Azure Event Management service[https://devblogs.microsoft.com/premier-developer/proactively-plan-for-your-critical-event-in-azure-with-enhanced-support-and-engineering-services/]. It's a paid service delivered by Microsoft support engineers that includes a proactive solution risk assessment and coverage during the event. +* Stand-by and proactive scenarios. Microsoft Support provides reactive support to help solve active issues in a timely and professional manner. However, standby or proactive support to help you eliminate operational risks, increase availability, and optimize performance are not covered. 
[Eligible customers](https://www.microsoft.com/unifiedsupport) can contact their account team to get nominated for [Azure Event Management service](https://devblogs.microsoft.com/premier-developer/proactively-plan-for-your-critical-event-in-azure-with-enhanced-support-and-engineering-services/). It's a paid service delivered by Microsoft support engineers that includes a proactive solution risk assessment and coverage during the event. ## AKS support coverage for agent nodes |
aks | Supported Kubernetes Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md | For the past release history, see [Kubernetes history](https://github.com/kubern | 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 | 1.26 | Dec 2022 | Feb 2023 | Apr 2023 | Mar 2024 | 1.27 | Apr 2023 | Jun 2023 | Jul 2023 | Jul 2024-| 1.28 | Aug 2023 | Aug 2023 | Sep 2023 || +| 1.28 | Aug 2023 | Sep 2023 | Oct 2023 || ### AKS Kubernetes release schedule Gantt chart If you prefer to see this information visually, here's a Gantt chart with all the current releases displayed: ## AKS Components Breaking Changes by Version |
aks | Tutorial Kubernetes Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-scale.md | Kubernetes supports [horizontal pod autoscaling][kubernetes-hpa] to adjust the n * Check the version of your AKS cluster using the [`Get-AzAksCluster`][get-azakscluster] cmdlet. ```azurepowershell- Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).KubernetesVersion + (Get-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster).KubernetesVersion ``` |
aks | Use Azure Ad Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md | Title: Use Azure Active Directory pod-managed identities in Azure Kubernetes Ser description: Learn how to use Azure AD pod-managed identities in Azure Kubernetes Service (AKS) Previously updated : 04/28/2023 Last updated : 08/15/2023 # Use Azure Active Directory pod-managed identities in Azure Kubernetes Service (Preview) Azure Active Directory (Azure AD) pod-managed identities use Kubernetes primitiv > Kubernetes native capabilities to federate with any external identity providers on behalf of the > application. >-> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2023. +> The open source Azure AD pod-managed identity (preview) in Azure Kubernetes Service has been deprecated as of 10/24/2022, and the project will be archived in Sept. 2023. For more information, see the [deprecation notice](https://github.com/Azure/aad-pod-identity#-announcement). The AKS Managed add-on begins deprecation in Sept. 2024. > > To disable the AKS Managed add-on, use the following command: `az feature unregister --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"`. |
aks | Use Pod Security Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-pod-security-policies.md | description: Learn how to control pod admissions using PodSecurityPolicy in Azur Last updated 08/01/2023+ # Secure your cluster using pod security policies in Azure Kubernetes Service (AKS) (preview) |
aks | Workload Identity Migrate From Pod Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-migrate-from-pod-identity.md | This section explains the migration options available depending on what version For either scenario, you need to have the federated trust set up before you update your application to use the workload identity. The following are the minimum steps required: - [Create a managed identity](#create-a-managed-identity) credential.-- Associate the managed identity with the kubernetes service account already used for the pod-manged identity or [create a new Kubernetes service account](#create-kubernetes-service-account) and then associate it with the managed identity.+- Associate the managed identity with the kubernetes service account already used for the pod-managed identity or [create a new Kubernetes service account](#create-kubernetes-service-account) and then associate it with the managed identity. - [Establish a federated trust relationship](#establish-federated-identity-credential-trust) between the managed identity and Azure AD. ### Migrate from latest version |
aks | Workload Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-overview.md | Title: Use an Azure AD workload identities on Azure Kubernetes Service (AKS) + Title: Use an Azure AD workload identity on Azure Kubernetes Service (AKS) description: Learn about Azure Active Directory workload identity for Azure Kubernetes Service (AKS) and how to migrate your application to authenticate using this identity. Previously updated : 05/23/2023 Last updated : 08/24/2023 # Use Azure AD workload identity with Azure Kubernetes Service (AKS) This article helps you understand this new authentication feature, and reviews t In the Azure Identity client libraries, choose one of the following approaches: -- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`.+- Use `DefaultAzureCredential`, which will attempt to use the `WorkloadIdentityCredential`. † - Create a `ChainedTokenCredential` instance that includes `WorkloadIdentityCredential`. - Use `WorkloadIdentityCredential` directly. -The following table provides the **minimum** package version required for each language's client library. +The following table provides the **minimum** package version required for each language ecosystem's client library. 
-| Language | Library | Minimum Version | Example | -||-|--|| -| .NET | [Azure.Identity](/dotnet/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/dotnet) | -| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/go) | -| Java | [azure-identity](/java/api/overview/azure/identity-readme) | 1.9.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/java) | -| JavaScript | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/node) | -| Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/azure-identity/python) | +| Ecosystem | Library | Minimum version | +|--||--| +| .NET | [Azure.Identity](/dotnet/api/overview/azure/identity-readme) | 1.9.0 | +| C++ | [azure-identity-cpp](https://github.com/Azure/azure-sdk-for-cpp/blob/main/sdk/identity/azure-identity/README.md) | 1.6.0-beta.1 | +| Go | [azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity) | 1.3.0 | +| Java | [azure-identity](/java/api/overview/azure/identity-readme) | 1.9.0 | +| Node.js | [@azure/identity](/javascript/api/overview/azure/identity-readme) | 3.2.0 | +| Python | [azure-identity](/python/api/overview/azure/identity-readme) | 1.13.0 | ++† In the C++ library, `WorkloadIdentityCredential` isn't part of the `DefaultAzureCredential` authentication flow. ++In the following code samples, the credential type will use the environment variables injected by the Azure Workload Identity mutating webhook to authenticate with Azure Key Vault. 
++## [.NET](#tab/dotnet) ++```csharp +using Azure.Identity; +using Azure.Security.KeyVault.Secrets; ++string keyVaultUrl = Environment.GetEnvironmentVariable("KEYVAULT_URL"); +string secretName = Environment.GetEnvironmentVariable("SECRET_NAME"); ++var client = new SecretClient( + new Uri(keyVaultUrl), + new DefaultAzureCredential()); ++KeyVaultSecret secret = await client.GetSecretAsync(secretName); +``` ++## [C++](#tab/cpp) ++```cpp +#include <cstdlib> +#include <azure/identity.hpp> +#include <azure/keyvault/secrets/secret_client.hpp> ++using namespace Azure::Identity; +using namespace Azure::Security::KeyVault::Secrets; ++// * AZURE_TENANT_ID: Tenant ID for the Azure account. +// * AZURE_CLIENT_ID: The client ID to authenticate the request. +std::string GetTenantId() { return std::getenv("AZURE_TENANT_ID"); } +std::string GetClientId() { return std::getenv("AZURE_CLIENT_ID"); } +std::string GetTokenFilePath() { return std::getenv("AZURE_FEDERATED_TOKEN_FILE"); } ++int main() +{ + const char* keyVaultUrl = std::getenv("KEYVAULT_URL"); + const char* secretName = std::getenv("SECRET_NAME"); + auto credential = std::make_shared<WorkloadIdentityCredential>( + GetTenantId(), GetClientId(), GetTokenFilePath()); ++ SecretClient client(keyVaultUrl, credential); + Secret secret = client.GetSecret(secretName).Value; ++ return 0; +} +``` ++## [Go](#tab/go) ++```go +package main ++import ( + "context" + "os" ++ "github.com/Azure/azure-sdk-for-go/sdk/azidentity" + "github.com/Azure/azure-sdk-for-go/sdk/security/keyvault/azsecrets" + "k8s.io/klog/v2" +) ++func main() { + keyVaultUrl := os.Getenv("KEYVAULT_URL") + secretName := os.Getenv("SECRET_NAME") ++ credential, err := azidentity.NewDefaultAzureCredential(nil) + if err != nil { + klog.Fatal(err) + } ++ client, err := azsecrets.NewClient(keyVaultUrl, credential, nil) + if err != nil { + klog.Fatal(err) + } ++ secret, err := client.GetSecret(context.Background(), secretName, "", nil) + if err != nil { + klog.ErrorS(err, 
"failed to get secret", "keyvault", keyVaultUrl, "secretName", secretName) + os.Exit(1) + } +} +``` ++## [Java](#tab/java) ++```java +import java.util.Map; ++import com.azure.security.keyvault.secrets.SecretClient; +import com.azure.security.keyvault.secrets.SecretClientBuilder; +import com.azure.security.keyvault.secrets.models.KeyVaultSecret; +import com.azure.identity.DefaultAzureCredentialBuilder; +import com.azure.identity.DefaultAzureCredential; ++public class App { + public static void main(String[] args) { + Map<String, String> env = System.getenv(); + String keyVaultUrl = env.get("KEYVAULT_URL"); + String secretName = env.get("SECRET_NAME"); ++ SecretClient client = new SecretClientBuilder() + .vaultUrl(keyVaultUrl) + .credential(new DefaultAzureCredentialBuilder().build()) + .buildClient(); + KeyVaultSecret secret = client.getSecret(secretName); + } +} +``` ++## [Node.js](#tab/javascript) ++```nodejs +import { DefaultAzureCredential } from "@azure/identity"; +import { SecretClient } from "@azure/keyvault-secrets"; ++const main = async () => { + const keyVaultUrl = process.env["KEYVAULT_URL"]; + const secretName = process.env["SECRET_NAME"]; ++ const credential = new DefaultAzureCredential(); + const client = new SecretClient(keyVaultUrl, credential); ++ const secret = await client.getSecret(secretName); +} ++main().catch((error) => { + console.error("An error occurred:", error); + process.exit(1); +}); +``` ++## [Python](#tab/python) ++```python +import os ++from azure.keyvault.secrets import SecretClient +from azure.identity import DefaultAzureCredential ++def main(): + keyvault_url = os.getenv('KEYVAULT_URL', '') + secret_name = os.getenv('SECRET_NAME', '') ++ client = SecretClient(vault_url=keyvault_url, credential=DefaultAzureCredential()) + secret = client.get_secret(secret_name) ++if __name__ == '__main__': + main() +``` ++ ## Microsoft Authentication Library (MSAL) -The following client libraries are the **minimum** version required +The following 
client libraries are the **minimum** version required. -| Language | Library | Image | Example | Has Windows | +| Ecosystem | Library | Image | Example | Has Windows | |--|--|-|-|-|-| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | ghcr.io/azure/azure-workload-identity/msal-net:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes | -| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | ghcr.io/azure/azure-workload-identity/msal-go:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes | -| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | ghcr.io/azure/azure-workload-identity/msal-java:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No | -| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | ghcr.io/azure/azure-workload-identity/msal-node:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No | -| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | ghcr.io/azure/azure-workload-identity/msal-python:latest | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No | +| .NET | [microsoft-authentication-library-for-dotnet](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) | `ghcr.io/azure/azure-workload-identity/msal-net:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-net/akvdotnet) | Yes | +| Go | [microsoft-authentication-library-for-go](https://github.com/AzureAD/microsoft-authentication-library-for-go) | 
`ghcr.io/azure/azure-workload-identity/msal-go:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-go) | Yes | +| Java | [microsoft-authentication-library-for-java](https://github.com/AzureAD/microsoft-authentication-library-for-java) | `ghcr.io/azure/azure-workload-identity/msal-java:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-java) | No | +| JavaScript | [microsoft-authentication-library-for-js](https://github.com/AzureAD/microsoft-authentication-library-for-js) | `ghcr.io/azure/azure-workload-identity/msal-node:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-node) | No | +| Python | [microsoft-authentication-library-for-python](https://github.com/AzureAD/microsoft-authentication-library-for-python) | `ghcr.io/azure/azure-workload-identity/msal-python:latest` | [Link](https://github.com/Azure/azure-workload-identity/tree/main/examples/msal-python) | No | ## Limitations - You can only have 20 federated identity credentials per managed identity. - It takes a few seconds for the federated identity credential to be propagated after being initially added.-- [Virtual nodes][aks-virtual-nodes] add on, based on the open source project [Virtual Kubelet][virtual-kubelet], is not supported.+- [Virtual nodes][aks-virtual-nodes] add on, based on the open source project [Virtual Kubelet][virtual-kubelet], isn't supported. ## How it works If you've used [Azure AD pod-managed identity][use-azure-ad-pod-identity], think ### Service account annotations -All annotations are optional. If the annotation is not specified, the default value will be used. +All annotations are optional. If the annotation isn't specified, the default value will be used. |Annotation |Description |Default | |--||--| All annotations are optional. If the annotation is not specified, the default va ### Pod annotations -All annotations are optional. 
If the annotation is not specified, the default value will be used. +All annotations are optional. If the annotation isn't specified, the default value will be used. |Annotation |Description |Default | |--||--| |`azure.workload.identity/service-account-token-expiration` |Represents the `expirationSeconds` field for the projected service account token. It's an optional field that you configure to prevent any downtime caused by errors during service account token refresh. Kubernetes service account token expiry isn't correlated with Azure AD tokens. Azure AD tokens expire in 24 hours after they're issued. <sup>1</sup> |3600<br> Supported range is 3600-86400. |-|`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. | +|`azure.workload.identity/skip-containers` |Represents a semi-colon-separated list of containers to skip adding projected service account token volume. For example, `container1;container2`. |By default, the projected service account token volume is added to all containers if the service account is labeled with `azure.workload.identity/use: true`. | |`azure.workload.identity/inject-proxy-sidecar` |Injects a proxy init container and proxy sidecar into the pod. The proxy sidecar is used to intercept token requests to IMDS and acquire an Azure AD token on behalf of the user with federated identity credential. |true | |`azure.workload.identity/proxy-sidecar-port` |Represents the port of the proxy sidecar. |8000 | |
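Under the hood, the credential types shown in the samples exchange the projected service account token for an Azure AD token. The following Python sketch is illustrative only (the Azure Identity libraries do this for you, and `build_token_request` is a hypothetical helper); the environment variable names are the ones the mutating webhook injects:

```python
import os

# Illustrative sketch of the OAuth 2.0 client-credentials request that
# exchanges the projected service account token (a federated credential)
# for an Azure AD token. Not the Azure SDK's implementation.
def build_token_request(scope):
    tenant_id = os.environ["AZURE_TENANT_ID"]
    client_id = os.environ["AZURE_CLIENT_ID"]
    # The mutating webhook projects the Kubernetes service account token here.
    with open(os.environ["AZURE_FEDERATED_TOKEN_FILE"]) as f:
        assertion = f.read().strip()
    return {
        "url": f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        "body": {
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_assertion_type": "urn:ietf:params:oauth:client-assertion-type:jwt-bearer",
            "client_assertion": assertion,
            "scope": scope,
        },
    }
```

In practice, prefer `DefaultAzureCredential` or `WorkloadIdentityCredential`, which perform this exchange and cache the resulting token for you.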
analysis-services | Analysis Services Gateway Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-gateway-install.md | description: Learn how to install and configure an On-premises data gateway to c Previously updated : 01/27/2023 Last updated : 08/25/2023 To learn more about how Azure Analysis Services works with the gateway, see [Con * During setup, when registering your gateway with Azure, the default region for your subscription is selected. You can choose a different subscription and region. If you have servers in more than one region, you must install a gateway for each region. * The gateway cannot be installed on a domain controller.-* The gateway cannot be installed and configured by using automation. * Only one gateway can be installed on a single computer. * Install the gateway on a computer that remains on and does not go to sleep. * Do not install the gateway on a computer with a wireless only connection to your network. Performance can be diminished. |
api-center | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md | For more information about the information assets and capabilities in API Center ## Preview limitations * In preview, API Center is available in the following Azure regions:-- * East US - * UK South - * Central India - * Australia East - + * Australia East + * Central India + * East US + * UK South + * West Europe + ## Frequently asked questions ### Q: Is API Center part of Azure API Management? A: Yes, all data in API Center is encrypted at rest. > [!div class="nextstepaction"] > [Provide feedback](https://aka.ms/apicenter/preview/feedback)+ |
api-management | Api Management Howto Deploy Multi Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-deploy-multi-region.md | To restore routing to the regional gateway, set the value of `disableGateway` to This section provides considerations for multi-region deployments when the API Management instance is injected in a virtual network. -* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are the same as those for a network in the primary region. +* Configure each regional network independently. The [connectivity requirements](virtual-network-reference.md) such as required network security group rules for a virtual network in an added region are generally the same as those for a network in the primary region. * Virtual networks in the different regions don't need to be peered.+> [!IMPORTANT] +> When configured in internal VNet mode, each regional gateway must also have outbound connectivity on port 1443 to the Azure SQL database configured for your API Management instance, which is only in the *primary* region. Ensure that you allow connectivity to the FQDN or IP address of this Azure SQL database in any routes or firewall rules you configure for networks in your secondary regions; the Azure SQL service tag can't be used in this scenario. To find the Azure SQL database name in the primary region, go to the **Network** > **Network status** page of your API Management instance in the portal. 
### IP addresses This section provides considerations for multi-region deployments when the API M [create an api management service instance]: get-started-create-service-instance.md+ [get started with azure api management]: get-started-create-service-instance.md+ [deploy an api management service instance to a new region]: #add-region+ [delete an api management service instance from a region]: #remove-region+ [unit]: https://azure.microsoft.com/pricing/details/api-management/+ [premium]: https://azure.microsoft.com/pricing/details/api-management/++ |
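To verify the outbound requirement described above from a machine in a secondary region's network, a simple TCP probe is often enough. `is_reachable` is a hypothetical helper; substitute your primary region's Azure SQL FQDN and port 1443:

```python
import socket

# Minimal connectivity probe: returns True if a TCP connection to host:port
# succeeds within the timeout, False otherwise. Useful for smoke-testing
# that routes and firewall rules in a secondary region allow traffic to the
# primary region's Azure SQL endpoint.
def is_reachable(host, port, timeout=5.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host name):
# is_reachable("<primary-sql-server>.database.windows.net", 1443)
```

A `False` result from the secondary region's subnet usually points at a missing route or firewall rule rather than a problem with the API Management instance itself.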
api-management | Api Management Howto Log Event Hubs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-log-event-hubs.md | Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger "loggerType": "azureEventHub", "description": "adding a new logger with system assigned managed identity", "credentials": {- "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net/<EventHubName>", + "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net", "identityClientId":"SystemAssigned", "name":"<EventHubName>" } resource ehLoggerWithSystemAssignedIdentity 'Microsoft.ApiManagement/service/log loggerType: 'azureEventHub' description: 'Event hub logger with system-assigned managed identity' credentials: {- endpointAddress: '<EventHubsNamespace>.servicebus.windows.net/<EventHubName>' + endpointAddress: '<EventHubsNamespace>.servicebus.windows.net' identityClientId: 'systemAssigned'- name: 'ApimEventHub' + name: '<EventHubName>' } } } Include a JSON snippet similar to the following in your Azure Resource Manager t "description": "Event hub logger with system-assigned managed identity", "resourceId": "<EventHubsResourceID>", "credentials": {- "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net/<EventHubName>", + "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net", "identityClientId": "SystemAssigned",- "name": "ApimEventHub" + "name": "<EventHubName>" }, } } Include a JSON snippet similar to the following in your Azure Resource Manager t For prerequisites, see [Configure API Management managed identity](#option-2-configure-api-management-managed-identity). -#### [PowerShell](#tab/PowerShell) +#### [REST API](#tab/PowerShell) Use the API Management [REST API](/rest/api/apimanagement/current-preview/logger/create-or-update) or a Bicep or ARM template to configure a logger to an event hub with user-assigned managed identity credentials. 
+```JSON +{ + "properties": { + "loggerType": "azureEventHub", + "description": "adding a new logger with system assigned managed identity", + "credentials": { + "endpointAddress":"<EventHubsNamespace>.servicebus.windows.net", + "identityClientId":"<ClientID>", + "name":"<EventHubName>" + } + } +} ++``` + #### [Bicep](#tab/bicep) Include a snippet similar the following in your Bicep template. resource ehLoggerWithUserAssignedIdentity 'Microsoft.ApiManagement/service/logge loggerType: 'azureEventHub' description: 'Event hub logger with user-assigned managed identity' credentials: {- endpointAddress: '<EventHubsNamespace>.servicebus.windows.net/<EventHubName>' + endpointAddress: '<EventHubsNamespace>.servicebus.windows.net' identityClientId: '<ClientID>'- name: 'ApimEventHub' + name: '<EventHubName>' } } } Include a JSON snippet similar to the following in your Azure Resource Manager t "description": "Event hub logger with user-assigned managed identity", "resourceId": "<EventHubsResourceID>", "credentials": {- "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net/<EventHubName>", + "endpointAddress": "<EventHubsNamespace>.servicebus.windows.net", "identityClientId": "<ClientID>",- "name": "ApimEventHub" + "name": "<EventHubName>" }, } } |
api-management | Api Management Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-versions.md | The format of an API request URL when using query string-based versioning is: `h For example, `https://apis.contoso.com/products?api-version=v1` and `https://apis.contoso.com/products?api-version=v2` could refer to the same `products` API but to versions `v1` and `v2` respectively. > [!NOTE]-> Query parameters aren't allowed in the `servers` propery of an OpenAPI specification. If you export an OpenAPI specification from an API version, a query string won't appear in the server URL. +> Query parameters aren't allowed in the `servers` property of an OpenAPI specification. If you export an OpenAPI specification from an API version, a query string won't appear in the server URL. ## Original versions |
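Query string-based versioning can be illustrated with a small helper (the function name is illustrative) that addresses a specific version of the same resource URL:

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

# Appends or overwrites the api-version query parameter on a request URL,
# mirroring the versioning scheme described above.
def with_api_version(url, version):
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["api-version"] = version
    return urlunparse(parts._replace(query=urlencode(query)))

# with_api_version("https://apis.contoso.com/products", "v1")
# → "https://apis.contoso.com/products?api-version=v1"
```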
api-management | Authorizations Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/authorizations-overview.md | Configuring an authorization in your API Management instance consists of three s :::image type="content" source="media/authorizations-overview/configure-authorization.png" alt-text="Diagram of steps to create an authorization in API Management." border="false"::: -#### Step 1 - Authorization provider +#### Step 1: Authorization provider During Step 1, you configure your authorization provider. You can choose between different [identity providers](authorizations-configure-common-providers.md) and grant types (authorization code or client credential). Each identity provider requires specific configurations. Important things to keep in mind: * An authorization provider configuration can only have one grant type. To use an authorization provider, at least one *authorization* is required. Ea |Authorization code | Bound to a user context, meaning a user needs to consent to the authorization. As long as the refresh token is valid, API Management can retrieve new access and refresh tokens. If the refresh token becomes invalid, the user needs to reauthorize. All identity providers support authorization code. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.1) | |Client credentials | Isn't bound to a user and is often used in application-to-application scenarios. No consent is required for client credentials grant type, and the authorization doesn't become invalid. [Learn more](https://www.rfc-editor.org/rfc/rfc6749?msclkid=929b18b5d0e611ec82a764a7c26a9bea#section-1.3.4) | -### Step 2 - Log in +#### Step 2: Log in For authorizations based on the authorization code grant type, you must authenticate to the provider and *consent* to authorization. 
After successful login and authorization by the identity provider, the provider returns valid access and refresh tokens, which are encrypted and saved by API Management. For details, see [Process flow - runtime](#process-flowruntime). -### Step 3 - Access policy +#### Step 3: Access policy You configure one or more *access policies* for each authorization. The access policies determine which [Azure AD identities](../active-directory/develop/app-objects-and-service-principals.md) can gain access to your authorizations at runtime. Authorizations currently support managed identities and service principals. |
api-management | Cache Lookup Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-lookup-policy.md | Use the `cache-lookup` policy to perform cache lookup and return a valid cached response. ### Usage notes +- API Management only performs cache lookup for HTTP GET requests. * When using `vary-by-query-parameter`, you might want to declare the parameters in the rewrite-uri template or set the attribute `copy-unmatched-params` to `false`. By deactivating this flag, parameters that aren't declared are sent to the backend. - This policy can only be used once in a policy section. |
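The usage notes can be modeled with a short sketch. This is an illustrative approximation, not API Management's actual cache implementation: only GET requests produce a cache key, and the key varies only by the declared query parameters:

```python
from urllib.parse import parse_qsl, urlparse

# Illustrative model of cache-lookup semantics: non-GET requests bypass the
# cache, and only the query parameters named in vary_by_query contribute to
# the cache key (undeclared parameters don't create distinct cache entries).
def cache_key(method, url, vary_by_query=()):
    if method.upper() != "GET":
        return None  # non-GET requests are not cached
    parts = urlparse(url)
    varying = sorted((k, v) for k, v in parse_qsl(parts.query) if k in vary_by_query)
    return (parts.netloc, parts.path, tuple(varying))
```

Two GET requests that differ only in an undeclared parameter map to the same key, which is why undeclared parameters should either be declared or forwarded to the backend via `copy-unmatched-params`.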
api-management | Cache Store Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/cache-store-policy.md | The `cache-store` policy caches responses according to the specified cache settings. ### Usage notes +- API Management only caches responses to HTTP GET requests. - This policy can only be used once in a policy section. |
api-management | Developer Portal Extend Custom Functionality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md | The managed developer portal includes a **Custom HTML code** widget where you ca ## Create and upload custom widget -### Prerequisites - +For more advanced widget use cases, API Management provides a scaffold and tools to help developers create a widget and upload it to the developer portal. ++### Prerequisites + * Install [Node.JS runtime](https://nodejs.org/en/) locally * Basic knowledge of programming and web development ### Create widget +> [!WARNING] +> Your custom widget code is stored in public Azure blob storage that's associated with your API Management instance. When you add a custom widget to the developer portal, code is read from this storage via an endpoint that doesn't require authentication, even if the developer portal or a page with the custom widget is only accessible to authenticated users. Don't include sensitive information or secrets in the custom widget code. +> + 1. In the administrative interface for the developer portal, select **Custom widgets** > **Create new custom widget**. 1. Enter a widget name and choose a **Technology**. For more information, see [Widget templates](#widget-templates), later in this article. 1. Select **Create widget**. The managed developer portal includes a **Custom HTML code** widget where you ca If prompted, sign in to your Azure account. - The custom widget is now deployed to your developer portal. Using the portal's administrative interface, you can add it on pages in the developer portal and set values for any custom properties configured in the widget. 
### Publish the developer portal The React template contains prepared custom hooks in the `hooks.ts` file and est This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools) contains the following functions to help you develop your custom widget and provides features including communication between the developer portal and your widget: + |Function |Description | ||| |[getValues](#azureapi-management-custom-widgets-toolsgetvalues) | Returns a JSON object containing values set in the widget editor combined with default values | This [npm package](https://www.npmjs.com/package/@azure/api-management-custom-wi |[getWidgetData](#azureapi-management-custom-widgets-toolsgetwidgetdata) | Returns all data passed to your custom widget from the developer portal<br/><br/>Used internally in templates | + #### `@azure/api-management-custom-widgets-tools/getValues` Function that returns a JSON object containing the values you've set in the widget editor combined with default values, passed as an argument. This function returns a JavaScript promise, which after resolution returns a JSO > Manage and use the token carefully. Anyone who has it can access data in your API Management service. + #### `@azure/api-management-custom-widgets-tools/deployNodeJs` This function deploys your widget to your blob storage. In all templates, it's preconfigured in the `deploy.js` file. To implement your widget using another JavaScript UI framework and libraries, yo * For local development, the `config.msapim.json` file must be accessible at the URL `localhost:<port>/config.msapim.json` when the server is running. 
- ## Next steps Learn more about the developer portal: Learn more about the developer portal: - [Frequently asked questions](developer-portal-faq.md) - [Scaffolder of a custom widget for developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-scaffolder) - [Tools for working with custom widgets of developer portal of Azure API Management service](https://www.npmjs.com/package/@azure/api-management-custom-widgets-tools)+ |
api-management | Get Authorization Context Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-authorization-context-policy.md | class Authorization ### Usage notes -* Configure `identity-type=jwt` when the [access policy](authorizations-overview.md#step-3access-policy) for the authorization is assigned to a service principal. Only `/.default` app-only scopes are supported for the JWT. +* Configure `identity-type=jwt` when the [access policy](authorizations-overview.md#step-3-access-policy) for the authorization is assigned to a service principal. Only `/.default` app-only scopes are supported for the JWT. ## Examples |
api-management | Monetization Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/monetization-overview.md | The following steps explain how to implement a monetization strategy for your AP :::image type="content" source="media/monetization-overview/implementing-strategy.png" alt-text="Diagram of the steps for implementing your monetization strategy"::: -### Step 1 - Understand your customer +### Step 1: Understand your customer 1. Map out the stages in your API consumers' likely journey, from first discovery of your API to maximum scale. The following steps explain how to implement a monetization strategy for your AP 1. Consider applying a value-based pricing strategy if the direct value of the API to the customer is well understood. 1. Calculate the anticipated lifetime usage levels of the API for a customer and your expected number of customers over the lifetime of the API. -### Step 2 - Quantify the costs +### Step 2: Quantify the costs Calculate the total cost of ownership for your API. Calculate the total cost of ownership for your API. | **Engineering costs** | The human resources required to build, test, operate, and maintain the API over its lifetime. Tends to be the most significant cost component. Where possible, exploit cloud PaaS and serverless technologies to minimize. | | **Infrastructure costs** | The costs for the underlying platforms, compute, network, and storage required to support the API over its lifetime. Exploit cloud platforms to achieve an infrastructure cost model that scales up proportionally in line with API usage levels. | -### Step 3 - Conduct market research +### Step 3: Conduct market research 1. Research the market to identify competitors. 1. Analyze competitors' monetization strategies. 1. Understand the specific features (functional and non-functional) that they are offering with their API. 
-### Step 4 - Design the revenue model +### Step 4: Design the revenue model Design a revenue model based on the outcome of the steps above. You can work across two dimensions: Maximize the lifetime value (LTV) you generate from each customer by designing a Identify the range of required pricing models. A *pricing model* describes a specific set of rules for the API provider to turn consumption by the API consumer into revenue. -For example, to support the [customer stages above](#step-1understand-your-customer), we would need six types of subscription: +For example, to support the [customer stages above](#step-1-understand-your-customer), we would need six types of subscription: | Subscription type | Description | | -- | -- | Building on the examples above, the pricing models could be applied to create an * Are charged an extra $0.06/100 calls past the first 50,000. * Rate limited to 1,200 calls/minute. -### Step 5 - Calibrate +### Step 5: Calibrate Calibrate the pricing across the revenue model to: Calibrate the pricing across the revenue model to: - Verify the quality of your service offerings in each revenue model tier can be supported by your solution. - For example, if you are offering to support 3,500 calls/minute, make sure your end-to-end solution can scale to support that throughput level. -### Step 6 - Release and monitor +### Step 6: Release and monitor Choose an appropriate solution to collect payment for usage of your APIs. Providers tend to fall into two groups: |
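The metered tier in the example above works out as follows. A minimal sketch, with an illustrative function name and a hypothetical `base_fee` parameter; the figures come from the text (first 50,000 calls included, then $0.06 per 100 calls):

```python
# Worked example of the metered pricing model described above. Defaults
# mirror the text: 50,000 calls included, $0.06 per additional 100 calls.
def monthly_charge(calls, base_fee=0.0, included=50_000,
                   overage_rate=0.06, unit=100):
    overage_calls = max(0, calls - included)
    return base_fee + (overage_calls / unit) * overage_rate
```

For example, 150,000 calls on a tier with a $250 base fee would be charged $250 plus 100,000 overage calls at $0.06 per 100, or $310 in total.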
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
api-management | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
api-management | Self Hosted Gateway Enable Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-enable-azure-ad.md | Assign the API Management Configuration API Access Validator Service Role to the ### Assign API Management Gateway Configuration Reader Role -#### Step 1. Register Azure AD app +#### Step 1: Register Azure AD app Create a new Azure AD app. For steps, see [Create an Azure Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). This app will be used by the self-hosted gateway to authenticate to the API Management instance. * Generate a [client secret](../active-directory/develop/howto-create-service-principal-portal.md#option-3-create-a-new-application-secret) * Take note of the following application values for use in the next section when deploying the self-hosted gateway: application (client) ID, directory (tenant) ID, and client secret -#### Step 2. Assign API Management Gateway Configuration Reader Service Role +#### Step 2: Assign API Management Gateway Configuration Reader Service Role [Assign](../active-directory/develop/howto-create-service-principal-portal.md#assign-a-role-to-the-application) the API Management Gateway Configuration Reader Service Role to the app. |
api-management | Send One Way Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-one-way-request-policy.md | The `send-one-way-request` policy sends the provided request to the specified UR | Attribute | Description | Required | Default | | - | -- | -- | -- |-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | +| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | | timeout| The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 | The `send-one-way-request` policy sends the provided request to the specified UR | [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No | | [set-body](set-body-policy.md) | Sets the body of the request. | No | | authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |+| [proxy](proxy-policy.md) | Routes request via HTTP proxy. | No | ## Usage |
api-management | Send Request Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/send-request-policy.md | The `send-request` policy sends the provided request to the specified URL, waiti | Attribute | Description | Required | Default | | - | -- | -- | -- |-| mode | Determines whether this is a `new` request or a `copy` of the current request. In outbound mode, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | +| mode | Determines whether this is a `new` request or a `copy` of the headers and body in the current request. In the outbound policy section, `mode=copy` does not initialize the request body. Policy expressions are allowed. | No | `new` | | response-variable-name | The name of context variable that will receive a response object. If the variable doesn't exist, it will be created upon successful execution of the policy and will become accessible via [`context.Variable`](api-management-policy-expressions.md#ContextVariables) collection. Policy expressions are allowed. | Yes | N/A | | timeout | The timeout interval in seconds before the call to the URL fails. Policy expressions are allowed. | No | 60 | | ignore-error | If `true` and the request results in an error, the error will be ignored, and the response variable will contain a null value. Policy expressions aren't allowed. | No | `false` | The `send-request` policy sends the provided request to the specified URL, waiti | [set-header](set-header-policy.md) | Sets a header in the request. Use multiple `set-header` elements for multiple request headers. | No | | [set-body](set-body-policy.md) | Sets the body of the request. | No | | authentication-certificate | [Certificate to use for client authentication](authentication-certificate-policy.md), specified in a `thumbprint` attribute. | No |-| proxy | A [proxy](proxy-policy.md) policy statement. 
Used to route request via HTTP proxy | No | +| [proxy](proxy-policy.md) | Routes the request via an HTTP proxy. | No | ## Usage |
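Putting the `send-request` attributes above together, here is a hedged sketch that calls an external endpoint and stores the result in a context variable — the introspection URL is a placeholder:

```xml
<send-request mode="new" response-variable-name="tokenState" timeout="20" ignore-error="true">
    <set-url>https://auth.example.com/introspect</set-url>
    <set-method>POST</set-method>
    <set-header name="Content-Type" exists-action="override">
        <value>application/x-www-form-urlencoded</value>
    </set-header>
    <set-body>@($"token={context.Request.Headers.GetValueOrDefault("Authorization","").Replace("Bearer ","")}")</set-body>
</send-request>
```

Because `ignore-error` is `true`, `context.Variables["tokenState"]` contains a null value if the call fails, so check for null before reading the response.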
app-service | App Service Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-best-practices.md | Title: Best Practices description: Learn best practices and the common troubleshooting scenarios for your app running in Azure App Service.- ms.assetid: f3359464-fa44-4f4a-9ea6-7821060e8d0d Last updated 07/01/2016-++ # Best Practices for Azure App Service |
app-service | App Service Configure Premium Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-configure-premium-tier.md | description: Learn how to get better performance for your web, mobile, and API app i keywords: app service, azure app service, scale, scalable, app service plan, app service cost ms.assetid: ff00902b-9858-4bee-ab95-d3406018c688 Previously updated : 05/08/2023 Last updated : 08/29/2023 + # Configure Premium V3 tier for Azure App Service -The new Premium V3 pricing tier gives you faster processors, SSD storage, and quadruple the memory-to-core ratio of the existing pricing tiers (double the Premium V2 tier). With the performance advantage, you could save money by running your apps on fewer instances. In this article, you learn how to create an app in Premium V3 tier or scale up an app to Premium V3 tier. +The new Premium V3 pricing tier gives you faster processors, SSD storage, memory-optimized options, and quadruple the memory-to-core ratio of the existing pricing tiers (double the Premium V2 tier). With the performance advantage, you could save money by running your apps on fewer instances. In this article, you learn how to create an app in Premium V3 tier or scale up an app to Premium V3 tier. ## Prerequisites -To scale-up an app to Premium V3, you need to have an Azure App Service app that runs in a pricing tier lower than Premium V3, and the app must be running in an App Service deployment that supports Premium V3. +To scale-up an app to Premium V3, you need to have an Azure App Service app that runs in a pricing tier lower than Premium V3, and the app must be running in an App Service deployment that supports Premium V3. Additionally, the App Service deployment must support the desired SKU within Premium V3. 
<a name="availability"></a> To scale-up an app to Premium V3, you need to have an Azure App Service app that The Premium V3 tier is available for both native and custom containers, including both Windows containers and Linux containers. -Premium V3 is available in some Azure regions and availability in additional regions is being added continually. To see if it's available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md): +Premium V3 and specific Premium V3 SKUs are available in some Azure regions, and availability in additional regions is being added continually. To see if a specific Premium V3 offering is available in your region, run the following Azure CLI command in the [Azure Cloud Shell](../cloud-shell/overview.md) (substitute _P1v3_ with the desired SKU): ```azurecli-interactive az appservice list-locations --sku P1V3 To see all the Premium V3 options, select **Explore pricing plans**, then select :::image type="content" source="media/app-service-configure-premium-tier/explore-pricing-plans.png" alt-text="Screenshot showing the Explore pricing plans page with a Premium V3 plan selected."::: > [!IMPORTANT] -> If you don't see a Premium V3 plan as an option, or if the options are greyed out, then Premium V3 likely isn't available in the underlying App Service deployment that contains the App Service plan. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details. +> If you don't see **P0V3**, **P1V3**, **P2V3**, **P3V3**, **P1mV3**, **P2mV3**, **P3mV3**, **P4mV3**, or **P5mV3** as options, or if some options are greyed out, then either **Premium V3** or an individual SKU within **Premium V3** isn't available in the underlying App Service deployment that contains the App Service plan. See [Scale up from an unsupported resource group and region combination](#unsupported) for more details. 
+> ## Scale up an existing app to Premium V3 tier -Before scaling an existing app to Premium V3 tier, make sure that Premium V3 is available. For information, see [Premium V3 availability](#availability). If it's not available, see [Scale up from an unsupported resource group and region combination](#unsupported). +Before scaling an existing app to Premium V3 tier, make sure that both Premium V3 and the specific SKU within Premium V3 are available. For information, see [Premium V3 availability](#availability). If it's not available, see [Scale up from an unsupported resource group and region combination](#unsupported). Depending on your hosting environment, scaling up may require extra steps. Some App Service plans can't scale up to the Premium V3 tier, or to a newer SKU ## Scale up from an unsupported resource group and region combination -If your app runs in an App Service deployment where Premium V3 isn't available, or if your app runs in a region that currently does not support Premium V3, you need to re-deploy your app to take advantage of Premium V3. You have two options: +If your app runs in an App Service deployment where Premium V3 isn't available, or if your app runs in a region that currently does not support Premium V3, you need to re-deploy your app to take advantage of Premium V3. Alternatively, newer Premium V3 SKUs may not be available, in which case you also need to re-deploy your app to take advantage of newer SKUs within Premium V3. You have two options: -- Create an app in a new resource group and with a new App Service plan. When creating the App Service plan, select a Premium V3 tier. This step ensures that the App Service plan is deployed into a deployment unit that supports Premium V3. Then, redeploy your application code into the newly created app. 
Even if you scale the App Service plan down to a lower tier to save costs, you can always scale back up to Premium V3 because the deployment unit supports it.-- If your app already runs in an existing **Premium** tier, then you can clone your app with all app settings, connection strings, and deployment configuration into a new resource group on a new app service plan that uses Premium V3.+- Create an app in a new resource group and with a new App Service plan. When creating the App Service plan, select the desired Premium V3 tier. This step ensures that the App Service plan is deployed into a deployment unit that supports Premium V3 as well as the specific SKU within Premium V3. Then, redeploy your application code into the newly created app. Even if you scale the new App Service plan down to a lower tier to save costs, you can always scale back up to Premium V3 and the desired SKU within Premium V3 because the deployment unit supports it. ![Screenshot showing how to clone your app.](media/app-service-configure-premium-tier/clone-app.png) |
app-service | App Service Key Vault References | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-key-vault-references.md | |
app-service | App Service Plan Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-plan-manage.md | keywords: app service, azure app service, scale, app service plan, change, creat ms.assetid: 4859d0d5-3e3c-40cc-96eb-f318b2c51a3d + Last updated 07/31/2023 |
app-service | App Service Web App Cloning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-app-cloning.md | ms.assetid: f9a5cfa1-fbb0-41e6-95d1-75d457347a35 Last updated 01/14/2016 -++ # Azure App Service App Cloning Using PowerShell |
app-service | App Service Web Configure Tls Mutual Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-configure-tls-mutual-auth.md | Title: Configure TLS mutual authentication description: Learn how to authenticate client certificates over TLS. Azure App Service can make the client certificate available to the app code for verification. ++ ms.assetid: cd1d15d3-2d9e-4502-9f11-a306dac4453a Last updated 12/11/2020 |
app-service | App Service Web Tutorial Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md | ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c Last updated 01/31/2023 ++ # Map an existing custom DNS name to Azure App Service |
app-service | App Service Web Tutorial Dotnet Sqldatabase | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-dotnet-sqldatabase.md | ms.assetid: 03c584f1-a93c-4e3d-ac1b-c82b50c75d3e ms.devlang: csharp Last updated 01/27/2022-+ # Tutorial: Deploy an ASP.NET app to Azure with Azure SQL Database |
app-service | App Service Web Tutorial Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md | ms.devlang: csharp Last updated 01/31/2023 + # Tutorial: Host a RESTful API with CORS in Azure App Service |
app-service | Configure Authentication Api Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md | Title: Manage AuthN/AuthZ API versions description: Upgrade your App Service authentication API to V2 or pin it to a specific version, if needed. Last updated 02/17/2023-+ # Manage the API and runtime versions of App Service authentication |
app-service | Configure Authentication Customize Sign In Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md | Title: Customize sign-ins and sign-outs description: Use the built-in authentication and authorization in App Service and at the same time customize the sign-in and sign-out behavior. Last updated 03/29/2021+ # Customize sign-in and sign-out in Azure App Service authentication |
app-service | Configure Authentication File Based | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-file-based.md | Title: File-based configuration of AuthN/AuthZ description: Configure authentication and authorization in App Service using a configuration file to enable certain preview capabilities. Last updated 07/15/2021+ # File-based configuration in Azure App Service authentication |
app-service | Configure Authentication Oauth Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-oauth-tokens.md | Title: OAuth tokens in AuthN/AuthZ description: Learn how to retrieve tokens and refresh tokens and extend sessions when using the built-in authentication and authorization in App Service. Last updated 03/29/2021+ # Work with OAuth tokens in Azure App Service authentication |
app-service | Configure Authentication Provider Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md | description: Learn how to configure Azure Active Directory authentication as an ms.assetid: 6ec6a46c-bce4-47aa-b8a3-e133baef22eb Last updated 01/31/2023-+ # Configure your App Service or Azure Functions app to use Azure AD login |
app-service | Configure Authentication Provider Apple | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-apple.md | description: Learn how to configure Sign in with Apple as an identity provider f Last updated 11/19/2020 + # Configure your App Service or Azure Functions app to sign in using a Sign in with Apple provider (Preview) |
app-service | Configure Authentication Provider Facebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-facebook.md | description: Learn how to configure Facebook authentication as an identity provi ms.assetid: b6b4f062-fcb4-47b3-b75a-ec4cb51a62fd Last updated 03/29/2021-+ # Configure your App Service or Azure Functions app to use Facebook login |
app-service | Configure Authentication Provider Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-github.md | Title: Configure GitHub authentication description: Learn how to configure GitHub authentication as an identity provider for your App Service or Azure Functions app. Last updated 03/01/2022+ # Configure your App Service or Azure Functions app to use GitHub login |
app-service | Configure Authentication Provider Google | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-google.md | description: Learn how to configure Google authentication as an identity provide ms.assetid: 2b2f9abf-9120-4aac-ac5b-4a268d9b6e2b Last updated 03/29/2021-+ |
app-service | Configure Authentication Provider Microsoft | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-microsoft.md | description: Learn how to configure Microsoft Account authentication as an ident ms.assetid: ffbc6064-edf6-474d-971c-695598fd08bf Last updated 03/29/2021-+ |
app-service | Configure Authentication Provider Openid Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-openid-connect.md | description: Learn how to configure an OpenID Connect provider as an identity pr Last updated 10/20/2021 + # Configure your App Service or Azure Functions app to login using an OpenID Connect provider |
app-service | Configure Authentication Provider Twitter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-twitter.md | description: Learn how to configure Twitter authentication as an identity provid ms.assetid: c6dc91d7-30f6-448c-9f2d-8e91104cde73 Last updated 03/29/2021-+ # Configure your App Service or Azure Functions app to use Twitter login |
app-service | Configure Authentication User Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-user-identities.md | Title: User identities in AuthN/AuthZ description: Learn how to access user identities when using the built-in authentication and authorization in App Service. Last updated 03/29/2021+ # Work with user identities in Azure App Service authentication |
app-service | Configure Common | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md | keywords: azure app service, web app, app settings, environment variables ms.assetid: 9af8a367-7d39-4399-9941-b80cbc5f39a0 Last updated 04/21/2023-+ ms.devlang: azurecli # Configure an App Service app This article explains how to configure common settings for web apps, mobile back ## Configure app settings +> [!NOTE] +> - App settings names can only contain letters, numbers (0-9), periods ("."), and underscores ("_") +> - Special characters in the value of an App Setting must be escaped as needed by the target OS +> +> For example to set an environment variable in App Service Linux with the value `"pa$$w0rd\"` the string for the app setting should be: `"pa\$\$w0rd\\"` + In App Service, app settings are variables passed as environment variables to the application code. For Linux apps and custom containers, App Service passes app settings to the container using the `--env` flag to set the environment variable in the container. In either case, they're injected into your app environment at app startup. When you add, remove, or edit app settings, App Service triggers an app restart. For ASP.NET and ASP.NET Core developers, setting app settings in App Service are like setting them in `<appSettings>` in *Web.config* or *appsettings.json*, but the values in App Service override the ones in *Web.config* or *appsettings.json*. You can keep development settings (for example, local MySQL password) in *Web.config* or *appsettings.json* and production secrets (for example, Azure MySQL database password) safely in App Service. The same code uses your development settings when you debug locally, and it uses your production secrets when deployed to Azure. |
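The escaping rule in the app settings note above (special characters in a value must be escaped for the target OS) can be sketched for a Linux app as follows — a minimal illustration; the helper name is hypothetical, not part of App Service:

```python
def escape_linux_app_setting(value: str) -> str:
    """Escape backslashes and dollar signs so the shell passes the value through literally."""
    # Escape backslashes first so the dollar-sign escapes aren't doubled afterwards
    return value.replace("\\", "\\\\").replace("$", "\\$")

# The value pa$$w0rd\ becomes pa\$\$w0rd\\, matching the example in the note
print(escape_linux_app_setting("pa$$w0rd\\"))
```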
app-service | Configure Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-custom-container.md | Title: Configure a custom container description: Learn how to configure a custom container in Azure App Service. This article shows the most common configuration tasks. -++ Last updated 01/04/2023 |
app-service | Configure Domain Traffic Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-domain-traffic-manager.md | ms.assetid: 0f96c0e7-0901-489b-a95a-e3b66ca0a1c2 Last updated 03/05/2020 ++ # Configure a custom domain name in Azure App Service with Traffic Manager integration After the records for your domain name have propagated, use the browser to verif ## Next steps > [!div class="nextstepaction"]-> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md) +> [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md) |
app-service | Configure Language Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md | |
app-service | Configure Language Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md | description: Learn how to configure a PHP app in a pre-built PHP container, in A ms.devlang: php Previously updated : 05/09/2023 Last updated : 08/31/2023 zone_pivot_groups: app-service-platform-windows-linux++ For more information on how App Service runs and builds PHP apps in Linux, see [ ## Customize start-up -By default, the built-in PHP container runs the Apache server. At start-up, it runs `apache2ctl -D FOREGROUND"`. If you like, you can run a different command at start-up, by running the following command in the [Cloud Shell](https://shell.azure.com): +If you want, you can run a custom command at the container start-up time, by running the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>" By default, Azure App Service points the root virtual application path (*/*) to The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/), uses the `public/` subdirectory as the site root. -The default PHP image for App Service uses Apache, and it doesn't let you customize the site root for your app. To work around this limitation, add an *.htaccess* file to your repository root with the following content: +The default PHP image for App Service uses Nginx, and you change the site root by [configuring the Nginx server with the `root` directive](https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/). 
This [example configuration file](https://github.com/Azure-Samples/laravel-tasks/blob/main/default) contains the following snippets that change the `root` directive: ```-<IfModule mod_rewrite.c> - RewriteEngine on - RewriteCond %{REQUEST_URI} ^(.*) - RewriteRule ^(.*)$ /public/$1 [NC,L,QSA] -</IfModule> +server { + #proxy_cache cache; + #proxy_cache_valid 200 1s; + listen 8080; + listen [::]:8080; + root /home/site/wwwroot/public; # Changed for Laravel ++ location / { + index index.php index.html index.htm hostingstart.html; + try_files $uri $uri/ /index.php?$args; # Changed for Laravel + } + ... +``` ++The default container uses the configuration file found at */etc/nginx/sites-available/default*. Keep in mind that any edit you make to this file is erased when the app restarts. To make a change that is effective across app restarts, [add a custom start-up command](#customize-start-up) like this example: ++``` +cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload ``` -If you would rather not use *.htaccess* rewrite, you can deploy your Laravel application with a [custom Docker image](quickstart-custom-container.md) instead. ::: zone-end Then, go to the Azure portal and add an Application Setting to scan the "ini" di ::: zone pivot="platform-windows" -To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), you can't use the *.htaccess* approach. App Service provides a separate mechanism using the `PHP_INI_SCAN_DIR` app setting. +To customize PHP_INI_SYSTEM directives (see [php.ini directives](https://www.php.net/manual/ini.list.php)), use the `PHP_INI_SCAN_DIR` app setting. First, run the following command in the [Cloud Shell](https://shell.azure.com) to add an app setting called `PHP_INI_SCAN_DIR`: |
app-service | Configure Language Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md | Title: Configure Linux Python apps description: Learn how to configure the Python container in which web apps are run, using both the Azure portal and the Azure CLI. Last updated 11/16/2022-+++ ms.devlang: python adobe-target: true |
app-service | Configure Ssl App Service Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-app-service-certificate.md | |
app-service | Configure Ssl Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md | In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>: 1. In **TLS/SSL type**, choose between **SNI SSL** and **IP based SSL**. - **[SNI SSL](https://en.wikipedia.org/wiki/Server_Name_Indication)**: Multiple SNI SSL bindings may be added. This option allows multiple TLS/SSL certificates to secure multiple domains on the same IP address. Most modern browsers (including Internet Explorer, Chrome, Firefox, and Opera) support SNI (for more information, see [Server Name Indication](https://wikipedia.org/wiki/Server_Name_Indication)).- + 1. When adding a new certificate, validate the new certificate by selecting **Validate**. |
app-service | Configure Ssl Certificate In Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md | |
app-service | Configure Ssl Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md | |
app-service | Deploy Zip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md | For more information, see [Kudu documentation](https://github.com/projectkudu/ku You can deploy your [WAR](https://wikipedia.org/wiki/WAR_(file_format)), [JAR](https://wikipedia.org/wiki/JAR_(file_format)), or [EAR](https://wikipedia.org/wiki/EAR_(file_format)) package to App Service to run your Java web app using the Azure CLI, PowerShell, or the Kudu publish API. -The deployment process places the package on the shared file drive correctly (see [Kudu publish API reference](#kudu-publish-api-reference)). For that reason, deploying WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy is not recommended. +The deployment process used by the steps here places the package on the app's content share with the right naming convention and directory structure (see [Kudu publish API reference](#kudu-publish-api-reference)), and it's the recommended approach. If you deploy WAR/JAR/EAR packages using [FTP](deploy-ftp.md) or WebDeploy instead, you may see unknown failures due to mistakes in the naming or structure. # [Azure CLI](#tab/cli) |
app-service | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md | The size of the subnet can affect the scaling limits of the App Service plan ins > > Sample calculation: >-> For each App Service plan instance, you need: -> 5 Windows Container apps = 5 IP addresses -> 1 IP address per App Service plan instance +> For each App Service plan instance, you need: +> 5 Windows Container apps = 5 IP addresses +> 1 IP address per App Service plan instance > 5 + 1 = 6 IP addresses >-> For 25 instances: +> For 25 instances: > 6 x 25 = 150 IP addresses per App Service plan > > Since you have 2 App Service plans, 2 x 150 = 300 IP addresses. If you use a smaller subnet, be aware of the following limitations: -- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses).-+- Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment dynamically scales the supporting infrastructure, and uses between 7 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet is a `/27` address space (32 addresses). +- For any App Service plan OS/SKU combination used in your App Service Environment like I1v2 Windows, one standby instance is created for every 20 active instances. The standby instances also require IP addresses. 
+- When scaling App Service plans in the App Service Environment up/down, the amount of IP addresses used by the App Service plan is temporarily doubled while the scale operation completes. The new instances need to be fully operational before the existing instances are deprovisioned. +- Platform upgrades need free IP addresses to ensure upgrades can happen without interruptions to outbound traffic. Finally, after scale up, down, or in operations complete, there might be a short period of time before IP addresses are released. - If you run out of addresses within your subnet, you can be restricted from scaling out your App Service plans in the App Service Environment. Another possibility is that you can experience increased latency during intensive traffic load, if Microsoft isn't able to scale the supporting infrastructure. ## Addresses You can find details in the **IP Addresses** portion of the portal, as shown in ![Screenshot that shows details about IP addresses.](./media/networking/networking-ip-addresses.png) -As you scale your App Service plans in your App Service Environment, you'll use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time. +As you scale your App Service plans in your App Service Environment, you use more addresses out of your subnet. The number of addresses you use varies, based on the number of App Service plan instances you have, and how much traffic there is. Apps in the App Service Environment don't have dedicated addresses in the subnet. The specific addresses used by an app in the subnet will change over time. 
## Ports and network restrictions -For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet. +For your app to receive traffic, ensure that inbound network security group (NSG) rules allow the App Service Environment subnet to receive traffic from the required ports. In addition to any ports you'd like to receive traffic on, you should ensure that Azure Load Balancer is able to connect to the subnet on port 80. This port is used for health checks of the internal virtual machine. You can still control port 80 traffic from the virtual network to your subnet. It's a good idea to configure the following inbound NSG rule: The normal app access ports inbound are as follows: You can set route tables without restriction. You can tunnel all of the outbound application traffic from your App Service Environment to an egress firewall device, such as Azure Firewall. In this scenario, the only thing you have to worry about is your application dependencies. -Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, dependencies could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](../deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. 
Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`. +Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, dependencies could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](../deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you need to allow `oryx-cdn.microsoft.io:443`. You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so allows you to expose specific apps on that App Service Environment. -Your application will use one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet. +Your application uses one of the default outbound addresses for egress traffic to public endpoints. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet. > [!NOTE] > Outbound SMTP connectivity (port 25) is supported for App Service Environment v3. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1. August 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. 
An example could be to add a temporary subnet, associate/dissociate an NSG temporarily or configure a service endpoint temporarily. For more information and troubleshooting, see [Troubleshoot outbound SMTP connectivity problems in Azure](../../virtual-network/troubleshoot-outbound-smtp-connectivity.md). For more information about Private Endpoint and Web App, see [Azure Web App Priv ## DNS -The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you'll need to use their respective domain suffix. +The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment. The examples use the domain suffix `appserviceenvironment.net` from Azure Public Cloud. If you're using other clouds like Azure Government, you need to use their respective domain suffix. ### DNS configuration to your App Service Environment In addition to setting up DNS, you also need to enable it in the [App Service En ### DNS configuration from your App Service Environment -The apps in your App Service Environment will use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. `WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server. +The apps in your App Service Environment use the DNS that your virtual network is configured with. If you want some apps to use a different DNS server, you can manually set it on a per app basis, with the app settings `WEBSITE_DNS_SERVER` and `WEBSITE_DNS_ALT_SERVER`. 
`WEBSITE_DNS_ALT_SERVER` configures the secondary DNS server. The secondary DNS server is only used when there's no response from the primary DNS server. ## More resources |
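The inbound NSG rule and the per-app DNS settings described above can be sketched with Azure CLI. This is a minimal illustration rather than the article's own script: the resource group, NSG, app name, and DNS server addresses (`myResourceGroup`, `myAseNsg`, `myApp`, `10.0.0.10`, `10.0.0.11`) are hypothetical placeholders.

```azurecli
# Allow Azure Load Balancer to reach the App Service Environment subnet on
# port 80, which is used for health checks of the internal virtual machine.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myAseNsg \
  --name AllowLoadBalancerPort80 \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes AzureLoadBalancer \
  --destination-port-ranges 80

# Point a single app at a different DNS server on a per-app basis, with a
# secondary server used only when the primary doesn't respond.
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myApp \
  --settings WEBSITE_DNS_SERVER=10.0.0.10 WEBSITE_DNS_ALT_SERVER=10.0.0.11
```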
app-service | Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/getting-started.md | zone_pivot_groups: app-service-getting-started-stacks | Action | Resources | | | |-| **Create your first Java app** | Using one of the following tools:<br><br>- [Linux - Maven](./quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-maven)<br>- [Linux - Azure portal](./quickstart-java.md?tabs=javase&pivots=platform-linux-development-environment-azure-portal)<br>- [Windows - Maven](./quickstart-java.md?tabs=javase&pivots=platform-windows-development-environment-maven)<br>- [Windows - Azure portal](./quickstart-java.md?tabs=javase&pivots=platform-windows-development-environment-azure-portal) | -| **Deploy your app** | - [Configure Java](./configure-language-java.md?pivots=platform-linux)<br>- [Deploy War](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [GitHub actions](./deploy-github-actions.md) | +| **Create your first Java app** | Using one of the following tools:<br><br>- [Maven deploy with an embedded web server](./quickstart-java.md?pivots=java-maven-quarkus)<br>- [Maven deploy to a Tomcat server](./quickstart-java.md?pivots=java-maven-tomcat)<br>- [Maven deploy to a JBoss server](./quickstart-java.md?pivots=java-maven-jboss) | +| **Deploy your app** | - [With Maven](configure-language-java.md?pivots=platform-linux#maven)<br>- [With Gradle](configure-language-java.md?pivots=platform-linux#gradle)<br>- [With popular IDEs (VS Code, IntelliJ, and Eclipse)](configure-language-java.md?pivots=platform-linux#ides)<br>- [Deploy WAR or JAR packages directly](./deploy-zip.md?tabs=cli#deploy-warjarear-packages)<br>- [With GitHub Actions](./deploy-github-actions.md) | | **Monitor your app**| - [Log stream](./troubleshoot-diagnostic-logs.md#stream-logs)<br>- [Diagnose and solve tool](./overview-diagnostics.md)| | **Add domains & certificates** 
|- [Map a custom domain](./app-service-web-tutorial-custom-domain.md?tabs=root%2Cazurecli)<br>- [Add SSL certificate](./configure-ssl-certificate.md)| | **Connect to a database** |- [Java Spring with Cosmos DB](./tutorial-java-spring-cosmosdb.md)| |
app-service | Identity Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/identity-scenarios.md | |
app-service | Manage Automatic Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-automatic-scaling.md | description: Learn how to scale automatically in Azure App Service with zero con Last updated 08/02/2023 + # Automatic scaling in Azure App Service |
app-service | Manage Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md | description: Learn how to restore backups of your apps in Azure App Service or c ms.assetid: 6223b6bd-84ec-48df-943f-461d84605694 Last updated 04/25/2023++ # Back up and restore your app in Azure App Service |
app-service | Manage Create Arc Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md | Title: 'Set up Azure Arc for App Service, Functions, and Logic Apps' description: For your Azure Arc-enabled Kubernetes clusters, learn how to enable App Service apps, function apps, and logic apps.++ Last updated 03/24/2023 |
app-service | Manage Custom Dns Buy Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md | ms.assetid: 70fb0e6e-8727-4cca-ba82-98a4d21586ff Last updated 01/31/2023 ++ # Buy an App Service domain and configure an app with it |
app-service | Manage Custom Dns Migrate Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-migrate-domain.md | Title: Migrate an active DNS name description: Learn how to migrate a custom DNS domain name that is already assigned to a live site to Azure App Service without any downtime. tags: top-support-issue-++ ms.assetid: 10da5b8a-1823-41a3-a2ff-a0717c2b5c2d Last updated 01/31/2023 |
app-service | Manage Move Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md | Title: Move an app to another region description: Learn how to move App Service resources from one region to another.-++ Last updated 02/27/2020 Delete the source app and App Service plan. [An App Service plan in the non-free ## Next steps -[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md) +[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md) |
app-service | Manage Scale Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-scale-up.md | |
app-service | Monitor Instances Health Check | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md | function envVarMatchesHeader(headerValue) { > The `x-ms-auth-internal-token` header is only available on Windows App Service. ## Instances-Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab shows your instance's name, the status of that instance and gives you the option to manually restart the application instance. -If the status of your instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they are listed on the opening blade from the restart button. +Once Health Check is enabled, you can restart and monitor the status of your application instances through the instances tab. The instances tab shows your instance's name, the status of that application's instance and gives you the option to manually restart the instance. ++If the status of your application instance is unhealthy, you can restart the instance manually using the restart button in the table. Keep in mind that any other applications hosted on the same App Service Plan as the instance will also be affected by the restart. If there are other applications using the same App Service Plan as the instance, they are listed on the opening blade from the restart button. If you restart the instance and the restart process fails, you will then be given the option to replace the worker (only 1 instance can be replaced per hour). This will also affect any applications using the same App Service Plan. |
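Health Check is typically enabled by pointing it at a path your app already serves. A hedged Azure CLI sketch (resource names and the `/api/health` path are placeholders; `WEBSITE_HEALTHCHECK_MAXPINGFAILURES` accepts values from 2 to 10):

```azurecli
# Enable Health Check by setting the path probed on every instance.
az webapp config set \
  --resource-group myResourceGroup \
  --name myApp \
  --generic-configurations '{"healthCheckPath": "/api/health"}'

# Optionally tune how many failed pings mark an instance unhealthy.
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name myApp \
  --settings WEBSITE_HEALTHCHECK_MAXPINGFAILURES=5
```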
app-service | Operating System Functionality | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/operating-system-functionality.md | Title: Operating system functionality description: Learn about the OS functionality in Azure App Service on Windows. Find out what types of file, network, and registry access your app gets. -++ ms.assetid: 39d5514f-0139-453a-b52e-4a1c06d8d914 Last updated 01/21/2022 |
app-service | Overview Arc Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-arc-integration.md | Title: 'App Service on Azure Arc' description: An introduction to App Service integration with Azure Arc for Azure operators. Last updated 03/15/2023++ # App Service, Functions, and Logic Apps on Azure Arc (Preview) |
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | ms.assetid: b7151b57-09e5-4c77-a10c-375a262f17e5 Last updated 02/03/2023 -+ # Authentication and authorization in Azure App Service and Azure Functions |
app-service | Overview Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-diagnostics.md | description: Learn how you can troubleshoot issues with your app in Azure App Se keywords: app service, azure app service, diagnostics, support, web app, troubleshooting, self-help Previously updated : 06/29/2013 Last updated : 06/29/2023 + |
app-service | Overview Hosting Plans | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-hosting-plans.md | ms.assetid: dea3f41e-cf35-481b-a6bc-33d7fc9d01b1 Last updated 05/26/2023 + |
app-service | Overview Local Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md | |
app-service | Overview Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md | Title: Plan to manage costs for App Service description: Learn how to plan for and manage costs for Azure App Service by using cost analysis in the Azure portal.++ |
app-service | Overview Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md | description: Learn how managed identities work in Azure App Service and Azure Fu Last updated 06/27/2023 -+ # How to use managed identities for App Service and Azure Functions |
app-service | Overview Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-monitoring.md | keywords: app service, azure app service, monitoring, diagnostic settings, suppo Last updated 06/29/2023 + # Azure App Service monitoring overview |
app-service | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md | description: Learn how Azure App Service helps you develop and host web applicat ms.assetid: 94af2caf-a2ec-4415-a097-f60694b860b3 Previously updated : 07/19/2023 Last updated : 08/31/2023 ++ # App Service overview Azure App Service is a fully managed platform as a service (PaaS) offering for d * **Authentication** - [Authenticate users](overview-authentication-authorization.md) using the built-in authentication component. Authenticate users with [Azure Active Directory](configure-authentication-provider-aad.md), [Google](configure-authentication-provider-google.md), [Facebook](configure-authentication-provider-facebook.md), [Twitter](configure-authentication-provider-twitter.md), or [Microsoft account](configure-authentication-provider-microsoft.md). * **Application templates** - Choose from an extensive list of application templates in the [Azure Marketplace](https://azure.microsoft.com/marketplace/), such as WordPress, Joomla, and Drupal. * **Visual Studio and Visual Studio Code integration** - Dedicated tools in Visual Studio and Visual Studio Code streamline the work of creating, deploying, and debugging.+* **Java tools integration** - Develop and deploy to Azure without leaving your favorite development tools, such as Maven, Gradle, Visual Studio Code, IntelliJ, and Eclipse. * **API and mobile features** - App Service provides turn-key CORS support for RESTful API scenarios, and simplifies mobile app scenarios by enabling authentication, offline data sync, push notifications, and more. * **Serverless code** - Run a code snippet or script on-demand without having to explicitly provision or manage infrastructure, and pay only for the compute time your code actually uses (see [Azure Functions](../azure-functions/index.yml)). 
App Service can also host web apps natively on Linux for supported application s ### Built-in languages and frameworks -App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (8, 11, and 17), Tomcat, PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container. +App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include: Node.js, Java (Tomcat, JBoss, or with an embedded web server), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az-webapp-list-runtimes) to view the latest languages and supported versions. If the runtime your application requires is not supported in the built-in images, you can deploy it with a custom container. Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes where they will be the most successful. |
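The runtime discovery step mentioned above can be sketched as follows. The app, plan, and runtime values are illustrative only; the exact runtime string should be copied from the `list-runtimes` output, since its format varies between CLI versions:

```azurecli
# List the built-in Linux runtime stacks and their supported versions.
az webapp list-runtimes --os linux --output table

# Example: create a Linux app pinned to one of the listed runtimes
# (placeholder names; the runtime string comes from the output above).
az webapp create \
  --resource-group myResourceGroup \
  --plan myPlan \
  --name myUniqueAppName \
  --runtime "NODE:18-lts"
```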
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
app-service | Quickstart Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md | |
app-service | Quickstart Arm Template Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arm-template-uiex.md | -Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an <abbr title="A JSON file that declaratively defines one or more Azure resources and dependencies between the deployed resources. The template can be used to deploy the resources consistently and repeatedly.">ARM template</abbr> and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart. <abbr title="In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.">The template uses declarative syntax.</abbr> +Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an ARM template (A JSON file that declaratively defines one or more Azure resources and dependencies between the deployed resources. The template can be used to deploy the resources consistently and repeatedly.) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart. The template uses declarative syntax. (In declarative syntax, you describe your intended deployment without writing the sequence of programming commands to create the deployment.) If your environment meets the prerequisites and you're familiar with using [ARM templates](../azure-resource-manager/templates/overview.md), select the **Deploy to Azure** button. The template will open in the Azure portal. The following table details default parameters and their descriptions: ::: zone pivot="platform-windows" Run the code below to deploy a .NET framework app on Windows using Azure CLI. 
-Replace <abbr title="Valid characters are `a-z`, `0-9`, and `-`."> \<app-name> </abbr> with a globally unique app name. To learn other <abbr title="You can also use the Azure portal, Azure PowerShell, and REST API.">deployment methods</abbr>, see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). You can find more [Azure App Service template samples here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Sites). +Replace \<app-name> (Valid characters are `a-z`, `0-9`, and `-`.) with a globally unique app name. To learn other deployment methods (You can also use the Azure portal, Azure PowerShell, and REST API.), see [Deploy templates](../azure-resource-manager/templates/deploy-powershell.md). You can find more [Azure App Service template samples here](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Sites). ```azurecli-interactive az group create --name myResourceGroup --location "southcentralus" && az deployment group create --resource-group myResourceGroup --parameters webAppN <summary>What's the code doing?</summary> <p>The commands do the following actions:</p> <ul>-<li>Create a default <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr>.</li> -<li>Create a default <abbr title="The plan that specifies the location, size, and features of the web server farm that hosts your app.">App Service plan</abbr>.</li> -<li><a href="/cli/azure/webapp#az-webapp-create">Create an <abbr title="The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.">App Service app</abbr></a> with the specified name.</li> +<li>Create a default resource group (A logical container for related Azure resources that you can manage as a unit.).</li> +<li>Create a default App Service plan (The plan that specifies the location, size, and features of the web server farm that hosts your 
app.).</li> +<li><a href="/cli/azure/webapp#az-webapp-create">Create an App Service app (The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.)</a> with the specified name.</li> </ul> </details> ::: zone pivot="platform-windows" <details> <summary>How do I deploy a different language stack?</summary>-To deploy a different language stack, update <abbr title="This template is compatible with .NET Core, .NET Framework, PHP, Node.js, and Static HTML apps.">language parameter</abbr> with appropriate values. For Java, see <a href="/azure/app-service/quickstart-java-uiex">Create Java app</a>. +To deploy a different language stack, update language parameter (This template is compatible with .NET Core, .NET Framework, PHP, Node.js, and Static HTML apps.) with appropriate values. For Java, see <a href="/azure/app-service/quickstart-java-uiex">Create Java app</a>. | Parameters | Type | Default value | Description | ||||-| When no longer needed, [delete the resource group](../azure-resource-manager/man ## Next steps -- [Deploy from local Git](deploy-local-git.md)-- [ASP.NET Core with SQL Database](tutorial-dotnetcore-sqldb-app.md)-- [Python with Postgres](tutorial-python-postgresql-app.md)-- [PHP with MySQL](tutorial-php-mysql-app.md)-- [Connect to Azure SQL database with Java](/azure/azure-sql/database/connect-query-java?toc=%2fazure%2fjava%2ftoc.json)+- [Deploy from local Git](deploy-local-git.md) +- [ASP.NET Core with SQL Database](tutorial-dotnetcore-sqldb-app.md) +- [Python with Postgres](tutorial-python-postgresql-app.md) +- [PHP with MySQL](tutorial-php-mysql-app.md) +- [Connect to Azure SQL database with Java](/azure/azure-sql/database/connect-query-java?toc=%2fazure%2fjava%2ftoc.json) |
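The deployment command shown in the row above is truncated. A hedged sketch of the full form follows; it assumes a local copy of the template saved as `azuredeploy.json` and a `webAppName` template parameter, both of which are assumptions rather than the article's exact script:

```azurecli
# Create a resource group, then deploy a local copy of the ARM template.
# <app-name> must be a globally unique app name.
az group create --name myResourceGroup --location "southcentralus"

az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --parameters webAppName=<app-name>
```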
app-service | Quickstart Dotnetcore Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore-uiex.md | -In this quickstart, you'll learn how to create and deploy your first ASP.NET Core web app to <abbr title="An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.">Azure App Service</abbr>. App Service supports .NET 5.0 apps. +In this quickstart, you'll learn how to create and deploy your first ASP.NET Core web app to Azure App Service (An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.). App Service supports .NET 5.0 apps. -When you're finished, you'll have an Azure <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr>, consisting of an <abbr title="The plan that specifies the location, size, and features of the web server farm that hosts your app.">App Service plan</abbr> and an <abbr title="The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.">App Service app</abbr> with a deployed sample ASP.NET Core application. +When you're finished, you'll have an Azure resource group (A logical container for related Azure resources that you can manage as a unit.) consisting of an App Service plan (The plan that specifies the location, size, and features of the web server farm that hosts your app.) and an App Service app (The representation of your web app, which contains your app code, DNS hostnames, certificates, and related resources.) with a deployed sample ASP.NET Core application. <hr/> ## 1. Prepare your environment -- **Get an Azure account** with an active <abbr title="The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.">subscription</abbr>. 
[Create an account for free](https://azure.microsoft.com/free/dotnet/).+- **Get an Azure account** with an active subscription (The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.). [Create an account for free](https://azure.microsoft.com/free/dotnet/). - **Install <a href="https://www.visualstudio.com/downloads/" target="_blank">Visual Studio 2019</a>** with the **ASP.NET and web development** workload. <details> Advance to the next article to learn how to create a .NET Core app and connect i ::: zone-end ::: zone pivot="platform-linux"-This quickstart shows how to create a [.NET Core](/aspnet/core/) app on <abbr title="App Service on Linux provides a highly scalable, self-patching web hosting service using the Linux operating system.">App Service on Linux</abbr>. You create the app using the [Azure CLI](/cli/azure/get-started-with-azure-cli), and you use Git to deploy the .NET Core code to the app. +This quickstart shows how to create a [.NET Core](/aspnet/core/) app on App Service on Linux (App Service on Linux provides a highly scalable, self-patching web hosting service using the Linux operating system.). You create the app using the [Azure CLI](/cli/azure/get-started-with-azure-cli), and you use Git to deploy the .NET Core code to the app. <hr/> |
app-service | Quickstart Java Uiex | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java-uiex.md | There are also IDE versions of this article. Check out [Azure Toolkit for Intell Before you begin, you must have the following: -+ An <abbr title="The profile that maintains billing information for Azure usage.">Azure account</abbr> with an active <abbr title="The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.">subscription</abbr>. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). ++ An Azure account (The profile that maintains billing information for Azure usage.) with an active subscription (The basic organizational structure in which you manage resources in Azure, typically associated with an individual or department within an organization.). [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). + The [Azure CLI](/cli/azure/install-azure-cli). Property | Required | Description | Version ||| `<schemaVersion>` | false | Specify the version of the configuration schema. Supported values are: `v1`, `v2`. | 1.5.2 `<subscriptionId>` | false | Specify the subscription ID. | 0.1.0+-`<resourceGroup>` | true | Azure <abbr title="A logical container for related Azure resources that you can manage as a unit.">resource group</abbr> for your Web App. | 0.1.0+ +`<resourceGroup>` | true | Azure resource group (A logical container for related Azure resources that you can manage as a unit.) for your Web App. | 0.1.0+ `<appName>` | true | The name of your Web App. | 0.1.0+ `<region>` | true | Specifies the region where your Web App will be hosted; the default value is **westeurope**. 
All valid regions at [Supported Regions](https://azure.microsoft.com/global-infrastructure/services/?products=app-service) section. | 0.1.0+ `<pricingTier>` | false | The pricing tier for your Web App. The default value is **P1V2** for production workload, while **B2** is the recommended minimum for Java dev/test. [Learn more](https://azure.microsoft.com/pricing/details/app-service/linux/)| 0.1.0+ |
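The Maven properties in the table above are normally generated and consumed by the Azure Web App Maven plugin. A hedged sketch of the typical workflow follows; the plugin version shown is illustrative, not a pinned requirement of the article:

```bash
# Interactively generate the plugin configuration (resourceGroup, appName,
# region, pricingTier, and so on) in the project's pom.xml.
mvn com.microsoft.azure:azure-webapp-maven-plugin:2.12.0:config

# Build the app and deploy it to App Service using that configuration.
mvn clean package azure-webapp:deploy
```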
app-service | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-java.md | keywords: azure, app service, web app, windows, linux, java, maven, quickstart ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a ms.devlang: java Previously updated : 03/08/2023 Last updated : 08/31/2023 -zone_pivot_groups: app-service-platform-environment +zone_pivot_groups: app-service-java-hosting adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021 adobe-target-experience: Experience B adobe-target-content: ./quickstart-java-uiex # Quickstart: Create a Java app on Azure App Service ::: zone-end ::: zone-end --- ::: zone-end |
app-service | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-python.md | Title: 'Quickstart: Deploy a Python (Django or Flask) web app to Azure' description: Get started with Azure App Service by deploying your first Python app to Azure App Service. Last updated 07/26/2023--++ ms.devlang: python |
app-service | Quickstart Wordpress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md | To complete this quickstart, you need an Azure account with an active subscripti 1. Select the **Advanced** tab. If you're unfamiliar with an [Azure CDN](../cdn/cdn-overview.md), [Azure Front Door](../frontdoor/front-door-overview.md), or [Blob Storage](../storage/blobs/storage-blobs-overview.md), then clear the checkboxes. For more details on the Content Distribution options, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html). :::image type="content" source="./media/quickstart-wordpress/08-wordpress-advanced-settings.png" alt-text="Screenshot of WordPress Advanced Settings.":::+ + > [!NOTE] + > The WordPress app requires a virtual network with an address space of /23 at minimum. 1. Select the **Review + create** tab. After validation runs, select the **Create** button at the bottom of the page to create the WordPress site. |
app-service | Reference App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md | The following environment variables are related to the [push notifications](/pre | `WEBSITE_PUSH_TAGS_DYNAMIC` | Read-only. Contains a list of tags in the notification registration that were added automatically. | >[!NOTE]-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. <!-- ## WellKnownAppSettings |
app-service | Resources Kudu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/resources-kudu.md | Title: Kudu service overview description: Learn about the engine that powers continuous deployment in App Service and its features.++ Last updated 03/17/2021 |
app-service | Scenario Secure App Access Microsoft Graph As App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-app.md | Last updated 04/05/2023 ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities. |
app-service | Scenario Secure App Access Microsoft Graph As User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-microsoft-graph-as-user.md | Last updated 06/28/2023 ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user. |
app-service | Scenario Secure App Access Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md | Last updated 07/31/2023 ms.devlang: csharp, azurecli-+ #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities. |
app-service | Scenario Secure App Authentication App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-authentication-app-service.md | |
app-service | Scenario Secure App Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-overview.md | |
app-service | Cli Continuous Deployment Vsts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md | This sample script creates an app in App Service with its related resources, and Create the following variables containing your repository information. ```azurecli-gitrepo=<Replace with your Visual Studio Team Services repo URL> -token=<Replace with a Visual Studio Team Services personal access token> +gitrepo=<Replace with your Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) repo URL> +token=<Replace with an Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) personal access token> ``` -Configure continuous deployment from Visual Studio Team Services. The `--git-token` parameter is required only once per Azure account (Azure remembers the token). +Configure continuous deployment from Azure DevOps Services (formerly Visual Studio Team Services, or VSTS). The `--git-token` parameter is required only once per Azure account (Azure remembers the token). ```azurecli az webapp deployment source config --name $webapp --resource-group $resourceGroup \ |
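The configuration command in the row above is cut off at the line continuation. A typical full invocation looks like the following sketch; `--repo-url`, `--branch`, and `--git-token` are the commonly documented parameters, and `master` is a placeholder branch name:

```azurecli
# Wire the app to the Azure DevOps repository for continuous deployment.
az webapp deployment source config \
  --name $webapp \
  --resource-group $resourceGroup \
  --repo-url $gitrepo \
  --branch master \
  --git-token $token
```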
app-service | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
app-service | Troubleshoot Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-diagnostic-logs.md | |
app-service | Tutorial Auth Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md | keywords: app service, azure app service, authN, authZ, secure, security, multi- ms.devlang: csharp Last updated 3/08/2023-+ zone_pivot_groups: app-service-platform-windows-linux # Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps |
app-service | Tutorial Connect App Access Microsoft Graph As App Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-app-javascript.md | Last updated 03/14/2023 ms.devlang: javascript-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph by using managed identities. |
app-service | Tutorial Connect App Access Microsoft Graph As User Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-microsoft-graph-as-user-javascript.md | Last updated 03/08/2022 ms.devlang: csharp-+ #Customer intent: As an application developer, I want to learn how to access data in Microsoft Graph for a signed-in user. |
app-service | Tutorial Connect App Access Sql Database As User Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-sql-database-as-user-dotnet.md | |
app-service | Tutorial Connect App Access Storage Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-access-storage-javascript.md | Last updated 07/31/2023 ms.devlang: javascript, azurecli-+ #Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities. |
app-service | Tutorial Connect App App Graph Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-app-app-graph-javascript.md | keywords: app service, azure app service, authN, authZ, secure, security, multi- ms.devlang: javascript Last updated 3/13/2023-+ zone_pivot_groups: app-service-platform-windows-linux # Requires non-internal subscription - internal subscriptions doesn't provide permission to correctly configure AAD apps |
app-service | Tutorial Connect Msi Azure Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md | keywords: azure app service, web app, security, msi, managed service identity, m ms.devlang: csharp,java,javascript,python Last updated 04/12/2022-+ # Tutorial: Connect to Azure databases from App Service without secrets using a managed identity |
app-service | Tutorial Connect Msi Key Vault Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md | |
app-service | Tutorial Connect Msi Key Vault Php | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-php.md | |
app-service | Tutorial Connect Msi Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md | |
app-service | Tutorial Connect Msi Sql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md | description: Secure Azure SQL Database connectivity with managed identity from a ms.devlang: csharp Last updated 04/01/2023-+ # Tutorial: Connect to SQL Database from .NET App Service without secrets using a managed identity You're now ready to develop and debug your app with the SQL Database as the back > It is replaced with the new **Azure Identity client library** available for .NET, Java, TypeScript and Python and should be used for all new development. > Information about how to migrate to `Azure Identity` can be found here: [AppAuthentication to Azure.Identity Migration Guidance](/dotnet/api/overview/azure/app-auth-migration). -The steps you follow for your project depends on whether you're using [Entity Framework](/ef/ef6/) (default for ASP.NET) or [Entity Framework Core](/ef/core/) (default for ASP.NET Core). +The steps you follow for your project depend on whether you're using [Entity Framework Core](/ef/core/) (default for ASP.NET Core) or [Entity Framework](/ef/ef6/) (default for ASP.NET). ++# [Entity Framework Core](#tab/efcore) ++1. In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient): ++ ```powershell + Install-Package Microsoft.Data.SqlClient -Version 5.1.0 + ``` ++1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. 
In *appsettings.json*, replace the value of the `MyDbConnection` connection string with: ++ ```json + "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;" + ``` ++ > [!NOTE] + > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI. + > ++ That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token. ++1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. # [Entity Framework](#tab/ef) The steps you follow for your project depends on whether you're using [Entity Fr 1. Type `Ctrl+F5` to run the app again. The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. -# [Entity Framework Core](#tab/efcore) --1. 
In Visual Studio, open the Package Manager Console and add the NuGet package [Microsoft.Data.SqlClient](https://www.nuget.org/packages/Microsoft.Data.SqlClient): -- ```powershell - Install-Package Microsoft.Data.SqlClient -Version 5.1.0 - ``` --1. In the [ASP.NET Core and SQL Database tutorial](tutorial-dotnetcore-sqldb-app.md), the `MyDbConnection` connection string in *appsettings.json* isn't used at all yet. The local environment and the Azure environment both get connection strings from their respective environment variables in order to keep connection secrets out of the source file. But now with Active Directory authentication, there are no more secrets. In *appsettings.json*, replace the value of the `MyDbConnection` connection string with: -- ```json - "Server=tcp:<server-name>.database.windows.net;Authentication=Active Directory Default; Database=<database-name>;" - ``` -- > [!NOTE] - > The [Active Directory Default](/sql/connect/ado-net/sql/azure-active-directory-authentication#using-active-directory-default-authentication) authentication type can be used both on your local machine and in Azure App Service. The driver attempts to acquire a token from Azure Active Directory using various means. If the app is deployed, it gets a token from the app's managed identity. If the app is running locally, it tries to get a token from Visual Studio, Visual Studio Code, and Azure CLI. - > -- That's everything you need to connect to SQL Database. When you debug in Visual Studio, your code uses the Azure AD user you configured in [2. Set up your dev environment](#2-set-up-your-dev-environment). You'll set up SQL Database later to allow connection from the managed identity of your App Service app. The `DefaultAzureCredential` class caches the token in memory and retrieves it from Azure AD just before expiration. You don't need any custom code to refresh the token. --1. Type `Ctrl+F5` to run the app again. 
The same CRUD app in your browser is now connecting to the Azure SQL Database directly, using Azure AD authentication. This setup lets you run database migrations from Visual Studio. - -- ## 4. Use managed identity connectivity |
app-service | Tutorial Connect Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-overview.md | description: Your app service may need to connect to other Azure services such a Last updated 02/16/2022+ # Securely connect to Azure services and databases from Azure App Service |
app-service | Tutorial Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md | description: A step-by-step guide to build a custom Linux or Windows image, push Last updated 11/29/2022 + keywords: azure app service, web app, linux, windows, docker, container zone_pivot_groups: app-service-containers-windows-linux |
app-service | Tutorial Dotnetcore Sqldb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md | Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row::: :::column span="2":::- **Step 1.** In the Azure portal: + **Step 1:** In the Azure portal: 1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows. + **Step 2:** In the **Create Web App + Database** page, fill out the form as follows. 1. *Resource Group* → Select **Create new** and use a name of **msdocs-core-sql-tutorial**. 1. *Region* → Any Azure region near you. 1. *Name* → **msdocs-core-sql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: + **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: - **Resource group** → The container for all the created resources. - **App Service plan** → Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. 
- **App Service** → Represents your app and runs in the App Service plan. The creation wizard generated connection strings for the SQL database and the Re :::row::: :::column span="2":::- **Step 1.** In the App Service page, in the left menu, select Configuration. + **Step 1:** In the App Service page, in the left menu, select Configuration. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-get-connection-string-1.png"::: The creation wizard generated connection strings for the SQL database and the Re :::row-end::: :::row::: :::column span="2":::- **Step 2.** + **Step 2:** 1. Scroll to the bottom of the page and find **AZURE_SQL_CONNECTIONSTRING** in the **Connection strings** section. This string was generated from the new SQL database by the creation wizard. To set up your application, this name is all you need. 1. Also, find **AZURE_REDIS_CONNECTIONSTRING** in the **Application settings** section. This string was generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need. 1. If you want, you can select the **Edit** button to the right of each setting and see or copy its value. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row::: :::column span="2":::- **Step 1.** In a new browser window: + **Step 1:** In a new browser window: 1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore](https://github.com/Azure-Samples/msdocs-app-service-sqldb-dotnetcore). 1. Select **Fork**. In this step, you'll configure GitHub deployment using GitHub Actions. 
It's just :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the App Service page, in the left menu, select **Deployment Center**. + **Step 2:** In the App Service page, in the left menu, select **Deployment Center**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-2.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 3.** In the Deployment Center page: + **Step 3:** In the Deployment Center page: 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 4.** Back the GitHub page of the forked sample, open Visual Studio Code in the browser by pressing the `.` key. + **Step 4:** Back the GitHub page of the forked sample, open Visual Studio Code in the browser by pressing the `.` key. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-4.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 5.** In Visual Studio Code in the browser: + **Step 5:** In Visual Studio Code in the browser: 1. Open *DotNetCoreSqlDb/appsettings.json* in the explorer. 1. 
Change the connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`, which matches the connection string created in App Service earlier. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 6.** + **Step 6:** 1. Open *DotNetCoreSqlDb/Program.cs* in the explorer. 1. In the `options.UseSqlServer` method, change the connection string name `MyDbConnection` to `AZURE_SQL_CONNECTIONSTRING`. This is where the connection string is used by the sample application. 1. Remove the `builder.Services.AddDistributedMemoryCache();` method and replace it with the following code. It changes your code from using an in-memory cache to the Redis cache in Azure, and it does so by using `AZURE_REDIS_CONNECTIONSTRING` from earlier. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 7.** + **Step 7:** 1. Open *.github/workflows/main_msdocs-core-sql-XYZ* in the explorer. This file was created by the App Service create wizard. 1. Under the `dotnet publish` step, add a step to install the [Entity Framework Core tool](/ef/core/cli/dotnet) with the command `dotnet tool install -g dotnet-ef`. 1. Under the new step, add another step to generate a database [migration bundle](/ef/core/managing-schemas/migrations/applying?tabs=dotnet-core-cli#bundles) in the deployment package: `dotnet ef migrations bundle --runtime linux-x64 -p DotNetCoreSqlDb/DotNetCoreSqlDb.csproj -o ${{env.DOTNET_ROOT}}/myapp/migrate`. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 8.** + **Step 8:** 1. Select the **Source Control** extension. 1. In the textbox, type a commit message like `Configure DB & Redis & add migration bundle`. 1. Select **Commit and Push**. In this step, you'll configure GitHub deployment using GitHub Actions. 
It's just :::row-end::: :::row::: :::column span="2":::- **Step 9.** Back in the Deployment Center page in the Azure portal: + **Step 9:** Back in the Deployment Center page in the Azure portal: 1. Select **Logs**. A new deployment run is already started from your committed changes. 1. In the log item for the deployment run, select the **Build/Deploy Logs** entry with the latest timestamp. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 10.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes a few minutes. + **Step 10:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes a few minutes. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-10.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-deploy-sample-code-10.png"::: With the SQL Database protected by the virtual network, the easiest way to run R :::row::: :::column span="2":::- **Step 1.** Back in the App Service page, in the left menu, select **SSH**. + **Step 1:** Back in the App Service page, in the left menu, select **SSH**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." 
lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-generate-db-schema-1.png"::: With the SQL Database protected by the virtual network, the easiest way to run R :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the SSH terminal: + **Step 2:** In the SSH terminal: 1. Run `cd /home/site/wwwroot`. Here are all your deployed files. 1. Run the migration bundle that's generated by the GitHub workflow with `./migrate`. If it succeeds, App Service is connecting successfully to the SQL Database. Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. With the SQL Database protected by the virtual network, the easiest way to run R :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end::: With the SQL Database protected by the virtual network, the easiest way to run R :::row-end::: :::row::: :::column span="2":::- **Step 2.** Add a few tasks to the list. + **Step 2:** Add a few tasks to the list. Congratulations, you're running a secure data-driven ASP.NET Core app in Azure App Service. :::column-end::: :::column::: Azure App Service captures all messages logged to the console to assist you in d :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. :::column-end::: Azure App Service captures all messages logged to the console to assist you in d :::row-end::: :::row::: :::column span="2":::- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. + **Step 2:** From the left menu, select **Log stream**. 
You see the logs for your app, including platform logs and logs from inside the container. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-stream-diagnostic-logs-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row::: :::column span="2":::- **Step 1.** In the search bar at the top of the Azure portal: + **Step 1:** In the search bar at the top of the Azure portal: 1. Enter the resource group name. 1. Select the resource group. :::column-end::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the resource group page, select **Delete resource group**. + **Step 2:** In the resource group page, select **Delete resource group**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-dotnetcore-sqldb-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-dotnetcore-sqldb-app/azure-portal-clean-up-resources-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 3.** + **Step 3:** 1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end::: |
app-service | Tutorial Java Quarkus Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md | |
app-service | Tutorial Java Spring Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md | |
app-service | Tutorial Java Tomcat Connect Managed Identity Postgresql Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-tomcat-connect-managed-identity-postgresql-database.md | |
app-service | Tutorial Nodejs Mongodb App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md | Last updated 09/06/2022 ms.role: developer ms.devlang: javascript-+++ # Deploy a Node.js + MongoDB web app to Azure Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row::: :::column span="2":::- **Step 1.** In the Azure portal: + **Step 1:** In the Azure portal: 1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows. + **Step 2:** In the **Create Web App + Database** page, fill out the form as follows. 1. *Resource Group* → Select **Create new** and use a name of **msdocs-expressjs-mongodb-tutorial**. 1. *Region* → Any Azure region near you. 1. *Name* → **msdocs-expressjs-mongodb-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: + **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: - **Resource group** → The container for all the created resources. 
- **App Service plan** → Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** → Represents your app and runs in the App Service plan. The creation wizard generated the MongoDB URI for you already, but your app need :::row::: :::column span="2":::- **Step 1.** In the App Service page, in the left menu, select Configuration. + **Step 1:** In the App Service page, in the left menu, select Configuration. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-get-connection-string-1.png"::: The creation wizard generated the MongoDB URI for you already, but your app need :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Application settings** tab of the **Configuration** page, create a `DATABASE_NAME` setting: + **Step 2:** In the **Application settings** tab of the **Configuration** page, create a `DATABASE_NAME` setting: 1. Select **New application setting**. 1. In the **Name** field, enter *DATABASE_NAME*. 1. In the **Value** field, enter the automatically generated database name from the creation wizard, which looks like *msdocs-expressjs-mongodb-XYZ-database*. The creation wizard generated the MongoDB URI for you already, but your app need :::row-end::: :::row::: :::column span="2":::- **Step 3.** + **Step 3:** 1. Scroll to the bottom of the page and select the connection string **MONGODB_URI**. It was generated by the creation wizard. 1. In the **Value** field, select the **Copy** button and paste the value in a text file for the next step. It's in the [MongoDB connection string URI format](https://www.mongodb.com/docs/manual/reference/connection-string/). 1. Select **Cancel**. 
The creation wizard generated the MongoDB URI for you already, but your app need :::row-end::: :::row::: :::column span="2":::- **Step 4.** + **Step 4:** 1. Using the same steps in **Step 2**, create an app setting named *DATABASE_URL* and set the value to the one you copied from the `MONGODB_URI` connection string (i.e. `mongodb://...`). 1. In the menu bar at the top, select **Save**. 1. When prompted, select **Continue**. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row::: :::column span="2":::- **Step 1.** In a new browser window: + **Step 1:** In a new browser window: 1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app](https://github.com/Azure-Samples/msdocs-nodejs-mongodb-azure-sample-app). 1. Select **Fork**. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. + **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-2.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 3.** In Visual Studio Code in the browser, open *config/connection.js* in the explorer. + **Step 3:** In Visual Studio Code in the browser, open *config/connection.js* in the explorer. In the `getConnectionInfo` function, see that the app settings you created earlier for the MongoDB connection are used (`DATABASE_URL` and `DATABASE_NAME`). 
:::column-end::: :::column::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**. + **Step 4:** Back in the App Service page, in the left menu, select **Deployment Center**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-4.png" alt-text="A screenshot showing how to open the deployment center in App Service." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-4.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 5.** In the Deployment Center page: + **Step 5:** In the Deployment Center page: 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 6.** In the Deployment Center page: + **Step 6:** In the Deployment Center page: 1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes. + **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. 
Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-7.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-deploy-sample-code-7.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 2.** Add a few tasks to the list. + **Step 2:** Add a few tasks to the list. Congratulations, you're running a secure data-driven Node.js app in Azure App Service. :::column-end::: :::column::: Azure App Service captures all messages logged to the console to assist you in d :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. :::column-end::: Azure App Service captures all messages logged to the console to assist you in d :::row-end::: :::row::: :::column span="2":::- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. + **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. 
:::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-stream-diagnostic-logs-2.png"::: Azure App Service provides a web-based diagnostics console named [Kudu](./resour :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **Advanced Tools**. 1. Select **Go**. You can also navigate directly to `https://<app-name>.scm.azurewebsites.net`. :::column-end::: Azure App Service provides a web-based diagnostics console named [Kudu](./resour :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the Kudu page, select **Deployments**. + **Step 2:** In the Kudu page, select **Deployments**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-2.png" alt-text="A screenshot of the main page in the Kudu SCM app showing the different information available about the hosting environment." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-2.png"::: Azure App Service provides a web-based diagnostics console named [Kudu](./resour :::row-end::: :::row::: :::column span="2":::- **Step 3.** Go back to the Kudu homepage and select **Site wwwroot**. + **Step 3:** Go back to the Kudu homepage and select **Site wwwroot**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-4.png" alt-text="A screenshot showing site wwwroot selected." 
lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-inspect-kudu-4.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row::: :::column span="2":::- **Step 1.** In the search bar at the top of the Azure portal: + **Step 1:** In the search bar at the top of the Azure portal: 1. Enter the resource group name. 1. Select the resource group. :::column-end::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the resource group page, select **Delete resource group**. + **Step 2:** In the resource group page, select **Delete resource group**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-nodejs-mongodb-app/azure-portal-clean-up-resources-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 3.** + **Step 3:** 1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end::: |
app-service | Tutorial Php Mysql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md | Title: 'Tutorial: PHP app with MySQL and Redis' description: Learn how to get a PHP app working in Azure, with connection to a MySQL database and a Redis cache in Azure. Laravel is used in the tutorial.-++ ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Last updated 06/30/2023-+ # Tutorial: Deploy a PHP, MySQL, and Redis app to Azure App Service Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row::: :::column span="2":::- **Step 1.** In the Azure portal: + **Step 1:** In the Azure portal: 1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows. + **Step 2:** In the **Create Web App + Database** page, fill out the form as follows. 1. *Resource Group* → Select **Create new** and use a name of **msdocs-laravel-mysql-tutorial**. 1. *Region* → Any Azure region near you. 1. *Name* → **msdocs-laravel-mysql-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: + **Step 3:** The deployment takes a few minutes to complete. 
Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: - **Resource group** → The container for all the created resources. - **App Service plan** → Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** → Represents your app and runs in the App Service plan. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row::: :::column span="2":::- **Step 1.** In the App Service page, in the left menu, select **Configuration**. + **Step 1:** In the App Service page, in the left menu, select **Configuration**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-php-mysql-app/azure-portal-get-connection-string-1.png"::: Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 2.** + **Step 2:** 1. Find app settings that begin with **AZURE_MYSQL_**. They were generated from the new MySQL database by the creation wizard. 1. Also, find app settings that begin with **AZURE_REDIS_**. They were generated from the new Redis cache by the creation wizard. To set up your application, this name is all you need. 1. If you want, you can select the **Edit** button to the right of each setting and see or copy its value. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 3.** In the **Application settings** tab of the **Configuration** page, create a `CACHE_DRIVER` setting: + **Step 3:** In the **Application settings** tab of the **Configuration** page, create a `CACHE_DRIVER` setting: 1. Select **New application setting**. 1. 
In the **Name** field, enter *CACHE_DRIVER*. 1. In the **Value** field, enter *redis*. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 4.** Using the same steps in **Step 3**, create the following app settings: + **Step 4:** Using the same steps in **Step 3**, create the following app settings: - **MYSQL_ATTR_SSL_CA**: Use */home/site/wwwroot/ssl/DigiCertGlobalRootCA.crt.pem* as the value. This app setting points to the path of the [TLS/SSL certificate you need to access the MySQL server](../mysql/flexible-server/how-to-connect-tls-ssl.md#download-the-public-ssl-certificate). It's included in the sample repository for convenience. - **LOG_CHANNEL**: Use *stderr* as the value. This setting tells Laravel to pipe logs to stderr, which makes it available to the App Service logs. - **APP_DEBUG**: Use *true* as the value. It's a [Laravel debugging variable](https://laravel.com/docs/10.x/errors#configuration) that enables debug mode pages. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row::: :::column span="2":::- **Step 1.** In a new browser window: + **Step 1:** In a new browser window: 1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/laravel-tasks](https://github.com/Azure-Samples/laravel-tasks). 1. Select **Fork**. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. + **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub." 
lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-2.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 3.** In Visual Studio Code in the browser, open *config/database.php* in the explorer. Find the `mysql` section and make the following changes: + **Step 3:** In Visual Studio Code in the browser, open *config/database.php* in the explorer. Find the `mysql` section and make the following changes: 1. Replace `DB_HOST` with `AZURE_MYSQL_HOST`. 1. Replace `DB_DATABASE` with `AZURE_MYSQL_DBNAME`. 1. Replace `DB_USERNAME` with `AZURE_MYSQL_USERNAME`. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 4.** In *config/database.php* scroll to the Redis `cache` section and make the following changes: + **Step 4:** In *config/database.php* scroll to the Redis `cache` section and make the following changes: 1. Replace `REDIS_HOST` with `AZURE_REDIS_HOST`. 1. Replace `REDIS_PASSWORD` with `AZURE_REDIS_PASSWORD`. 1. Replace `REDIS_PORT` with `AZURE_REDIS_PORT`. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 5.** + **Step 5:** 1. Select the **Source Control** extension. 1. In the textbox, type a commit message like `Configure DB & Redis variables`. 1. Select **Commit and Push**. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 6.** Back in the App Service page, in the left menu, select **Deployment Center**. + **Step 6:** Back in the App Service page, in the left menu, select **Deployment Center**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png" alt-text="A screenshot showing how to open the deployment center in App Service." 
lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-6.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 7.** In the Deployment Center page: + **Step 7:** In the Deployment Center page: 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 8.** In the Deployment Center page: + **Step 8:** In the Deployment Center page: 1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 9.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes. + **Step 9:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 15 minutes. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-9.png" alt-text="A screenshot showing a GitHub run in progress." lightbox="./media/tutorial-php-mysql-app/azure-portal-deploy-sample-code-9.png"::: The creation wizard puts the MySQL database server behind a private endpoint, so :::row::: :::column span="2":::- **Step 1.** Back in the App Service page, in the left menu, select **SSH**. 
+ **Step 1:** Back in the App Service page, in the left menu, select **SSH**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-generate-db-schema-1.png"::: The creation wizard puts the MySQL database server behind a private endpoint, so :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the SSH terminal: + **Step 2:** In the SSH terminal: 1. Run `cd /home/site/wwwroot`. Here are all your deployed files. 1. Run `php artisan migrate --force`. If it succeeds, App Service is connecting successfully to the MySQL database. Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. The creation wizard puts the MySQL database server behind a private endpoint, so :::row::: :::column span="2":::- **Step 1.** + **Step 1:** 1. From the left menu, select **Configuration**. 1. Select the **General settings** tab. :::column-end::: The creation wizard puts the MySQL database server behind a private endpoint, so :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the General settings tab: + **Step 2:** In the General settings tab: 1. In the **Startup Command** box, enter the following command: *cp /home/site/wwwroot/default /etc/nginx/sites-available/default && service nginx reload*. 1. Select **Save**. The command replaces the Nginx configuration file in the PHP container and restarts Nginx. This configuration ensures that the same change is made to the container each time it starts. The creation wizard puts the MySQL database server behind a private endpoint, so :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **Overview**. 1. Select the URL of your app. 
You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end::: The creation wizard puts the MySQL database server behind a private endpoint, so :::row-end::: :::row::: :::column span="2":::- **Step 2.** Add a few tasks to the list. + **Step 2:** Add a few tasks to the list. Congratulations, you're running a secure data-driven PHP app in Azure App Service. :::column-end::: :::column::: Azure App Service captures all messages logged to the console to assist you in d :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. :::column-end::: Azure App Service captures all messages logged to the console to assist you in d :::row-end::: :::row::: :::column span="2":::- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. + **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-stream-diagnostic-logs-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row::: :::column span="2":::- **Step 1.** In the search bar at the top of the Azure portal: + **Step 1:** In the search bar at the top of the Azure portal: 1. Enter the resource group name. 1. Select the resource group. :::column-end::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the resource group page, select **Delete resource group**. 
+ **Step 2:** In the resource group page, select **Delete resource group**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-php-mysql-app/azure-portal-clean-up-resources-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 3.** + **Step 3:** 1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. :::column-end::: |
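The PHP tutorial above maps Laravel's `DB_*` variables to the `AZURE_MYSQL_*` app settings generated by the creation wizard. As a language-neutral illustration of that mapping (in Python, since Laravel consumes these settings directly through *config/database.php*; the `AZURE_MYSQL_PASSWORD` name and port 3306 are assumptions not listed in the steps):

```python
import os

def mysql_dsn_from_env():
    """Assemble a MySQL DSN from the AZURE_MYSQL_* app settings created by
    the Web App + Database wizard. AZURE_MYSQL_PASSWORD and the default
    port are assumptions for illustration."""
    host = os.environ["AZURE_MYSQL_HOST"]
    dbname = os.environ["AZURE_MYSQL_DBNAME"]
    user = os.environ["AZURE_MYSQL_USERNAME"]
    password = os.environ["AZURE_MYSQL_PASSWORD"]
    return f"mysql://{user}:{password}@{host}:3306/{dbname}"
```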
app-service | Tutorial Python Postgresql App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md | description: Create a Python Django or Flask web app with a PostgreSQL database ms.devlang: python Last updated 02/28/2023--zone_pivot_groups: deploy-python-web-app-postgresql +++ # Deploy a Python (Django or Flask) web app with PostgreSQL in Azure In this tutorial, you'll deploy a data-driven Python web app (**[Django](https:/ * An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). * Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) - ## Sample application Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row::: :::column span="2":::- **Step 1.** In the Azure portal: + **Step 1:** In the Azure portal: 1. Enter "web app database" in the search bar at the top of the Azure portal. 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Create Web App + Database** page, fill out the form as follows. + **Step 2:** In the **Create Web App + Database** page, fill out the form as follows. 1. *Resource Group* → Select **Create new** and use a name of **msdocs-python-postgres-tutorial**. 1. *Region* → Any Azure region near you. 1. 
*Name* → **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps :::row-end::: :::row::: :::column span="2":::- **Step 3.** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: + **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: - **Resource group** → The container for all the created resources. - **App Service plan** → Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. - **App Service** → Represents your app and runs in the App Service plan. The creation wizard generated the connectivity variables for you already as [app :::row::: :::column span="2":::- **Step 1.** In the App Service page, in the left menu, select Configuration. + **Step 1:** In the App Service page, in the left menu, select Configuration. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png"::: The creation wizard generated the connectivity variables for you already as [app :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable. 
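The `AZURE_POSTGRESQL_CONNECTIONSTRING` app setting mentioned in Step 2 reaches the app as a single environment variable. A minimal sketch of splitting such a value into parts, assuming a libpq-style `key=value` format (the exact format of the generated value is an assumption here; check the setting's value in the portal):

```python
def parse_connection_string(raw):
    """Naive parser for a libpq-style 'key=value key=value' string.
    Assumes no spaces inside values; sufficient only as an illustration
    of how a single connection-string variable carries all parts."""
    return dict(part.split("=", 1) for part in raw.split())
```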
+ **Step 2:** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png"::: The creation wizard generated the connectivity variables for you already as [app :::row-end::: :::row::: :::column span="2":::- **Step 3.** In a terminal or command prompt, run the following Python script to generate a unique secret: `python -c 'import secrets; print(secrets.token_hex())'`. Copy the output value to use in the next step. + **Step 3:** In a terminal or command prompt, run the following Python script to generate a unique secret: `python -c 'import secrets; print(secrets.token_hex())'`. Copy the output value to use in the next step. :::column-end::: :::column::: :::column-end::: :::row-end::: :::row::: :::column span="2":::- **Step 4.** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. Paste the value from the previous step. Select **OK**. + **Step 4:** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. Paste the value from the previous step. Select **OK**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the SECRET_KEY app setting in the Azure portal."
lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png"::: The creation wizard generated the connectivity variables for you already as [app :::row-end::: :::row::: :::column span="2":::- **Step 5.** Select **Save**. + **Step 5:** Select **Save**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row::: :::column span="2":::- **Step 1.** In a new browser window: + **Step 1:** In a new browser window: 1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app). 1. Select **Fork**. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. + **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-2.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer. 
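The `SECRET_KEY` value stored in the steps above comes from the tutorial's `python -c 'import secrets; print(secrets.token_hex())'` one-liner. The same call as a small self-contained function, with the length property noted for a sanity check:

```python
import secrets

def make_secret_key(nbytes=32):
    """Generate a hex secret, as in the tutorial's one-liner:
    python -c 'import secrets; print(secrets.token_hex())'
    secrets.token_hex(n) returns 2*n hexadecimal characters drawn from
    a cryptographically strong random source."""
    return secrets.token_hex(nbytes)
```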
+ **Step 3:** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer. See the environment variables being used in the production environment, including the app settings that you saw in the configuration page. :::column-end::: :::column::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**. + **Step 4:** Back in the App Service page, in the left menu, select **Deployment Center**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-4.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 5.** In the Deployment Center page: + **Step 5:** In the Deployment Center page: 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 6.** In the Deployment Center page: + **Step 6:** In the Deployment Center page: 1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. 
The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes. + **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png" alt-text="A screenshot showing a GitHub run in progress (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-flask-7.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row::: :::column span="2":::- **Step 1.** In a new browser window: + **Step 1:** In a new browser window: 1. Sign in to your GitHub account. 1. Navigate to [https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app). 1. Select **Fork**. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. + **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-2.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. 
It's just :::row-end::: :::row::: :::column span="2":::- **Step 3.** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer. + **Step 3:** In Visual Studio Code in the browser, open *azureproject/production.py* in the explorer. See the environment variables being used in the production environment, including the app settings that you saw in the configuration page. :::column-end::: :::column::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 4.** Back in the App Service page, in the left menu, select **Deployment Center**. + **Step 4:** Back in the App Service page, in the left menu, select **Deployment Center**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-4.png"::: In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 5.** In the Deployment Center page: + **Step 5:** In the Deployment Center page: 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. 1. Sign in to your GitHub account and follow the prompt to authorize Azure. 1. In **Organization**, select your account. In this step, you'll configure GitHub deployment using GitHub Actions. It's just :::row-end::: :::row::: :::column span="2":::- **Step 6.** In the Deployment Center page: + **Step 6:** In the Deployment Center page: 1. Select **Logs**. A deployment run is already started. 1. In the log item for the deployment run, select **Build/Deploy Logs**. :::column-end::: In this step, you'll configure GitHub deployment using GitHub Actions. 
It's just :::row-end::: :::row::: :::column span="2":::- **Step 7.** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes. + **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png" alt-text="A screenshot showing a GitHub run in progress (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-deploy-sample-code-django-7.png"::: With the PostgreSQL database protected by the virtual network, the easiest way t :::row::: :::column span="2":::- **Step 1.** Back in the App Service page, in the left menu, select **SSH**. + **Step 1:** Back in the App Service page, in the left menu, select **SSH**. 1. Select **Go**. :::column-end::: :::column::: With the PostgreSQL database protected by the virtual network, the easiest way t :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the SSH terminal, run `flask db upgrade`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations). + **Step 2:** In the SSH terminal, run `flask db upgrade`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations). Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. :::column-end::: :::column::: With the PostgreSQL database protected by the virtual network, the easiest way t :::row::: :::column span="2":::- **Step 1.** Back in the App Service page, in the left menu, select **SSH**. 
+ **Step 1:** Back in the App Service page, in the left menu, select **SSH**. 1. Select **Go**. :::column-end::: :::column::: With the PostgreSQL database protected by the virtual network, the easiest way t :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations). + **Step 2:** In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations). Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. :::column-end::: :::column::: With the PostgreSQL database protected by the virtual network, the easiest way t :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **Overview**. 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end::: With the PostgreSQL database protected by the virtual network, the easiest way t :::row-end::: :::row::: :::column span="2":::- **Step 2.** Add a few restaurants to the list. + **Step 2:** Add a few restaurants to the list. Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL. :::column-end::: :::column::: Azure App Service captures all messages output to the console to help you diagno :::row::: :::column span="2":::- **Step 1.** In the App Service page: + **Step 1:** In the App Service page: 1. From the left menu, select **App Service logs**. 1. Under **Application logging**, select **File System**. 1. In the top menu, select **Save**. 
Azure App Service captures all messages output to the console to help you diagno :::row-end::: :::row::: :::column span="2":::- **Step 2.** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. + **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-stream-diagnostic-logs-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row::: :::column span="2":::- **Step 1.** In the search bar at the top of the Azure portal: + **Step 1:** In the search bar at the top of the Azure portal: 1. Enter the resource group name. 1. Select the resource group. :::column-end::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the resource group page, select **Delete resource group**. + **Step 2:** In the resource group page, select **Delete resource group**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-clean-up-resources-2.png"::: When you're finished, you can delete all of the resources from your Azure subscr :::row-end::: :::row::: :::column span="2":::- **Step 3.** + **Step 3:** 1. Enter the resource group name to confirm your deletion. 1. Select **Delete**. 
:::column-end::: If you can't connect to the SSH session, then the app itself has failed to start. If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database. ---## Provision and deploy using the Azure Developer CLI --Sample Python application templates using the Flask and Django frameworks are provided for this tutorial. The [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) greatly streamlines the process of provisioning application resources and deploying code on Azure. For a more step-by-step approach using the Azure portal and other tools, toggle to the **Azure portal** approach at the top of the page. --The Azure Developer CLI (azd) provides end-to-end support for project initialization, provisioning, deploying, monitoring and scaffolding a CI/CD pipeline to run against real Azure resources. You can use `azd` to provision and deploy the resources for the sample application in an automated and streamlined way. --Follow the steps below to set up the Azure Developer CLI and provision and deploy the sample application: --1. Install the Azure Developer CLI. For a full list of supported installation options and tools, visit the [installation guide](/azure/developer/azure-developer-cli/install-azd). -- ### [Windows](#tab/windows) -- ```azdeveloper - powershell -ex AllSigned -c "Invoke-RestMethod 'https://aka.ms/install-azd.ps1' | Invoke-Expression" - ``` -- ### [macOS/Linux](#tab/mac-linux) -- ```azdeveloper - curl -fsSL https://aka.ms/install-azd.sh | bash - ``` -- --1. Run the `azd init` command to initialize the `azd` app template. Include the `--template` parameter to specify the name of an existing `azd` template you wish to use.
More information about working with templates is available on the [choose an `azd` template](/azure/developer/azure-developer-cli/azd-templates) page. -- ### [Flask](#tab/flask) -- For this tutorial, Flask users should specify the [Python (Flask) web app with PostgreSQL](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app.git) template. - - ```bash - azd init --template msdocs-flask-postgresql-sample-app - ``` - - ### [Django](#tab/django) -- For this tutorial, Django users should specify the [Python (Django) web app with PostgreSQL](https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git) template. -- ```bash - azd init --template msdocs-django-postgresql-sample-app - ``` --1. Run the `azd auth login` command to sign in to Azure. -- ```bash - azd auth login - ``` --1. Run the `azd up` command to provision the necessary Azure resources and deploy the app code. The `azd up` command will also prompt you to select the desired subscription and location to deploy to. -- ```bash - azd up - ``` --1. When the `azd up` command finishes running, the URL for your deployed web app is printed in the console. Select the URL, or copy and paste it into your browser to explore the running app and verify that it is working correctly. All of the Azure resources and application code were set up for you by the `azd up` command. -- The name of the resource group that was created is also displayed in the console output. Locate the resource group in the Azure portal to see all of the provisioned resources.
-- :::image type="content" border="False" source="./media/tutorial-python-postgresql-app/azd-resources-small.png" lightbox="./media/tutorial-python-postgresql-app/azd-resources.png" alt-text="A screenshot showing the resources deployed by the Azure Developer CLI.":::  --The Azure Developer CLI also enables you to configure your application to use a CI/CD pipeline for deployments, set up monitoring functionality, and even remove the provisioned resources if you want to tear everything down. For more information about these additional workflows, visit the project [README](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/blob/main/README.md). --## Explore the completed azd project template workflow --The sections ahead review the steps that `azd` handled for you in more depth. You can explore this workflow to better understand the requirements for deploying your own apps to Azure. When you ran `azd up`, the Azure Developer CLI completed the following steps: --> [!NOTE] -> You can also use the steps outlined in the **Azure portal** version of this flow to gain additional insights into the tasks that `azd` completed for you. --### 1. Cloned and initialized the project --The `azd init` command cloned the sample app project template to your machine. The project template includes the following components: --* **Source code**: The code and assets for a Flask or Django web app that can be used for local development or deployed to Azure. -* **Bicep files**: Infrastructure as code (IaC) files that are used by `azd` to create the necessary resources in Azure. -* **Configuration files**: Essential configuration files such as `azure.yaml` that are used by `azd` to provision, deploy and wire resources together to produce a fully fledged application. --### 2.
Provisioned the Azure resources --The `azd up` command created all of the resources for the sample application in Azure using the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project template. [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep) is a declarative language used to manage Infrastructure as Code in Azure. Some of the key resources and configurations created by the template include: --* **Resource group**: A resource group was created to hold all of the other provisioned Azure resources. The resource group keeps your resources well organized and easier to manage. The name of the resource group is based on the environment name you specified during the `azd up` initialization process. -* **Azure Virtual Network**: A virtual network was created to enable the provisioned resources to securely connect and communicate with one another. Related configurations such as setting up a private DNS zone link were also applied. -* **Azure App Service plan**: An App Service plan was created to host App Service instances. App Service plans define what compute resources are available for one or more web apps. -* **Azure App Service**: An App Service instance was created in the new App Service plan to host and run the deployed application. In this case, a Linux instance was created and configured to run Python apps. Additional configurations were also applied to the app service, such as setting the Postgres connection string and secret keys. -* **Azure Database for PostgreSQL**: A Postgres database and server were created for the app hosted on App Service to connect to. The required admin user, network and connection settings were also configured. -* **Azure Application Insights**: Application Insights was set up and configured for the app hosted on the App Service. This service enables detailed telemetry and monitoring for your application.
--You can inspect the Bicep files in the [`infra`](https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app/tree/main/infra) folder of the project to understand how each of these resources were provisioned in more detail. The `resources.bicep` file defines most of the different services created in Azure. For example, the App Service plan and App Service web app instance were created and connected using the following Bicep code: --### [Flask](#tab/flask) --```yaml -resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = { - name: '${prefix}-service-plan' - location: location - tags: tags - sku: { - name: 'B1' - } - properties: { - reserved: true - } -} --resource web 'Microsoft.Web/sites@2022-03-01' = { - name: '${prefix}-app-service' - location: location - tags: union(tags, { 'azd-service-name': 'web' }) - kind: 'app,linux' - properties: { - serverFarmId: appServicePlan.id - siteConfig: { - alwaysOn: true - linuxFxVersion: 'PYTHON|3.10' - ftpsState: 'Disabled' - appCommandLine: 'startup.sh' - } - httpsOnly: true - } - identity: { - type: 'SystemAssigned' - } -``` --### [Django](#tab/django) --```yml -resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' = { - name: '${prefix}-service-plan' - location: location - tags: tags - sku: { - name: 'B1' - } - properties: { - reserved: true - } -} --resource web 'Microsoft.Web/sites@2022-03-01' = { - name: '${prefix}-app-service' - location: location - tags: union(tags, { 'azd-service-name': 'web' }) - kind: 'app,linux' - properties: { - serverFarmId: appServicePlan.id - siteConfig: { - alwaysOn: true - linuxFxVersion: 'PYTHON|3.10' - ftpsState: 'Disabled' - appCommandLine: 'startup.sh' - } - httpsOnly: true - } - identity: { - type: 'SystemAssigned' - } --``` ----The Azure Database for PostgreSQL was also created using the following Bicep: --```yml -resource postgresServer 'Microsoft.DBforPostgreSQL/flexibleServers@2022-01-20-preview' = { - location: location - tags: tags - name: pgServerName - 
sku: { - name: 'Standard_B1ms' - tier: 'Burstable' - } - properties: { - version: '12' - administratorLogin: 'postgresadmin' - administratorLoginPassword: databasePassword - storage: { - storageSizeGB: 128 - } - backup: { - backupRetentionDays: 7 - geoRedundantBackup: 'Disabled' - } - network: { - delegatedSubnetResourceId: virtualNetwork::databaseSubnet.id - privateDnsZoneArmResourceId: privateDnsZone.id - } - highAvailability: { - mode: 'Disabled' - } - maintenanceWindow: { - customWindow: 'Disabled' - dayOfWeek: 0 - startHour: 0 - startMinute: 0 - } - } -- dependsOn: [ - privateDnsZoneLink - ] -} -``` --### 3. Deployed the application --The `azd up` command also deployed the sample application code to the provisioned Azure resources. The Developer CLI understands how to deploy different parts of your application code to different services in Azure using the `azure.yaml` file at the root of the project. The `azure.yaml` file specifies the app source code location, the type of app, and the Azure Service that should host that app. --Consider the following `azure.yaml` file. These configurations tell the Azure Developer CLI that the Python code that lives at the root of the project should be deployed to the created App Service. --### [Flask](#tab/flask) --```yml -name: flask-postgresql-sample-app -metadata: - template: flask-postgresql-sample-app@0.0.1-beta -- web: - project: . - language: py - host: appservice -``` --### [Django](#tab/django) --```yml -name: django-postgresql-sample-app -metadata: - template: django-postgresql-sample-app@0.0.1-beta -- web: - project: . - language: py - host: appservice -``` ----## Remove the resources --Once you are finished experimenting with your sample application, you can run the `azd down` command to remove the app from Azure. Removing resources helps to avoid unintended costs or unused services in your Azure subscription. 
--```bash -azd down -``` -- ## Frequently asked questions - [How much does this setup cost?](#how-much-does-this-setup-cost) |
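The migration troubleshooting earlier in this change notes that the app depends on the `AZURE_POSTGRESQL_CONNECTIONSTRING` app setting. As a rough illustration of how an app can consume such a setting, a production settings module might parse it as shown below. The libpq-style `key=value` format and the sample values are assumptions for illustration only; the setting created by the deployment may use a different shape.

```python
import os

# Hypothetical value for illustration; the real app setting is created by the
# deployment and its exact format may differ from this libpq-style string.
os.environ.setdefault(
    "AZURE_POSTGRESQL_CONNECTIONSTRING",
    "dbname=restaurants host=example.postgres.database.azure.com port=5432 user=demo password=changeme",
)

def parse_connection_string(value):
    """Split a space-separated key=value connection string into a dict."""
    return dict(pair.split("=", 1) for pair in value.split())

conn = parse_connection_string(os.environ["AZURE_POSTGRESQL_CONNECTIONSTRING"])
print(conn["host"])  # the database host the app would connect to
```

If the setting is missing or malformed, a parser like this fails immediately, which surfaces the same class of problem the troubleshooting note describes for the migrate command.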
app-service | Tutorial Ruby Postgres App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-ruby-postgres-app.md | description: Learn how to get a Linux Ruby app working in Azure App Service, wit ms.devlang: ruby Last updated 06/18/2020-+ # Build a Ruby and Postgres app in Azure App Service on Linux |
app-service | Tutorial Secure Domain Certificate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-secure-domain-certificate.md | Title: 'Tutorial: Secure app with a custom domain and certificate' description: Learn how to secure your brand with App Service using a custom domain and enabling App Service managed certificate. ++ Last updated 01/31/2023 You need to scale your app up to **Basic** tier. **Basic** tier fulfills the min :::row::: :::column span="2":::- **Step 1.** In the Azure portal: + **Step 1:** In the Azure portal: 1. Enter the name of your app in the search bar at the top. 1. Select your named resource with the type **App Service**. :::column-end::: You need to scale your app up to **Basic** tier. **Basic** tier fulfills the min :::row-end::: :::row::: :::column span="2":::- **Step 2.** In your app's management page: + **Step 2:** In your app's management page: 1. In the left navigation, select **Scale up (App Service plan)**. 1. Select the checkbox for **Basic B1**. 1. Select **Select**. For more information on app scaling, see [Scale up an app in Azure App Service]( :::row::: :::column span="2":::- **Step 1.** In your app's management page: + **Step 1:** In your app's management page: 1. In the left menu, select **Custom domains**. 1. Select **Add custom domain**. :::column-end::: For more information on app scaling, see [Scale up an app in Azure App Service]( :::row-end::: :::row::: :::column span="2":::- **Step 2.** In the **Add custom domain** dialog: + **Step 2:** In the **Add custom domain** dialog: 1. For **Domain provider**, select **All other domain services**. 1. For **TLS/SSL certificate**, select **App Service Managed Certificate**. 1. For Domain, specify a fully qualified domain name you want based on the domain you own. For example, if you own `contoso.com`, you can use *www.contoso.com*. 
For each custom domain in App Service, you need two DNS records with your domain :::row::: :::column span="2":::- **Step 1.** Back in the **Add custom domain** dialog in the Azure portal, select **Validate**. + **Step 1:** Back in the **Add custom domain** dialog in the Azure portal, select **Validate**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-secure-domain-certificate/configure-custom-domain-validate.png" alt-text="A screenshot showing how to validate your DNS record settings in the Add a custom domain dialog." lightbox="./media/tutorial-secure-domain-certificate/configure-custom-domain-validate.png" border="true"::: For each custom domain in App Service, you need two DNS records with your domain :::row-end::: :::row::: :::column span="2":::- **Step 2.** If the **Domain validation** section shows green check marks next to both domain records, then you've configured them correctly. Select **Add**. If it shows any red X, fix any errors in the DNS record settings in your domain provider's website. + **Step 2:** If the **Domain validation** section shows green check marks next to both domain records, then you've configured them correctly. Select **Add**. If it shows any red X, fix any errors in the DNS record settings in your domain provider's website. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-secure-domain-certificate/configure-custom-domain-add.png" alt-text="A screenshot showing the Add button activated after validation." lightbox="./media/tutorial-secure-domain-certificate/configure-custom-domain-add.png" border="true"::: For each custom domain in App Service, you need two DNS records with your domain :::row-end::: :::row::: :::column span="2":::- **Step 3.** You should see the custom domain added to the list. You may also see a red X with **No binding**. Wait a few minutes for App Service to create the managed certificate for your custom domain.
When the process is complete, the red X becomes a green check mark with **Secured**. + **Step 3:** You should see the custom domain added to the list. You may also see a red X with **No binding**. Wait a few minutes for App Service to create the managed certificate for your custom domain. When the process is complete, the red X becomes a green check mark with **Secured**. :::column-end::: :::column::: :::image type="content" source="./media/tutorial-secure-domain-certificate/add-custom-domain-complete.png" alt-text="A screenshot showing the custom domains page with the new secured custom domain." lightbox="./media/tutorial-secure-domain-certificate/add-custom-domain-complete.png" border="true"::: See [Add a private certificate to your app](configure-ssl-certificate.md) and [S - [Map an existing custom DNS name to Azure App Service](app-service-web-tutorial-custom-domain.md) - [Purchase an App Service domain](manage-custom-dns-buy-domain.md) - [Add a private certificate to your app](configure-ssl-certificate.md)-- [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)+- [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md) |
app-service | Tutorial Send Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md | description: Learn how to invoke business processes from your App Service app. S Last updated 04/08/2020 ms.devlang: csharp, javascript, php, python, ruby-+ # Tutorial: Send email and invoke other business processes from App Service |
app-service | Web Sites Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/web-sites-monitor.md | ms.assetid: d273da4e-07de-48e0-b99d-4020d84a425e Last updated 06/29/2023 + |
app-service | Webjobs Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md | description: Learn how to use WebJobs to run background tasks in Azure App Servi ms.assetid: af01771e-54eb-4aea-af5f-f883ff39572b Last updated 7/30/2023--++ #Customer intent: As a web developer, I want to leverage background tasks to keep my application running smoothly. adobe-target: true |
application-gateway | Application Gateway Backend Health Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-backend-health-troubleshooting.md | To increase the timeout value, follow these steps: 4. If you're using Azure default DNS, check with your domain name registrar about whether proper A record or CNAME record mapping has been completed. 5. If the domain is private or internal, try to resolve it from a VM in the same virtual network. If you can resolve it, restart Application Gateway and check again. To restart Application Gateway, you need to [stop](/powershell/module/az.network/stop-azapplicationgateway) and [start](/powershell/module/az.network/start-azapplicationgateway) by using the PowerShell commands described in these linked resources. -### Updates to the DNS entries of the backend pool --**Message:** The backend health status could not be retrieved. This happens when an NSG/UDR/Firewall on the application gateway subnet is blocking traffic on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case of the v2 SKU or if the FQDN configured in the backend pool could not be resolved to an IP address. To learn more visit - https://aka.ms/UnknownBackendHealth. --**Cause:** Application Gateway resolves the DNS entries for the backend pool at time of startup and doesn't update them dynamically while running. --**Resolution:** --Application Gateway must be restarted after any modification to the backend server DNS entries to begin to use the new IP addresses. This operation can be completed via Azure PowerShell or Azure CLI. 
--#### Azure PowerShell --``` -# Get Azure Application Gateway -$appgw=Get-AzApplicationGateway -Name <appgw_name> -ResourceGroupName <rg_name> - -# Stop the Azure Application Gateway -Stop-AzApplicationGateway -ApplicationGateway $appgw - -# Start the Azure Application Gateway -Start-AzApplicationGateway -ApplicationGateway $appgw -``` --#### Azure CLI --``` -# Stop the Azure Application Gateway -az network application-gateway stop -n <appgw_name> -g <rg_name> --# Start the Azure Application Gateway -az network application-gateway start -n <appgw_name> -g <rg_name> -``` - ### TCP connect error **Message:** Application Gateway could not connect to the backend. Check that the backend responds on the port used for the probe. Also check whether any NSG/UDR/Firewall is blocking access to the Ip and port of this backend. OR </br> **Solution:** To resolve this issue, verify that the certificate on your server was created properly. For example, you can use [OpenSSL](https://www.openssl.org/docs/manmaster/man1/verify.html) to verify the certificate and its properties and then try reuploading the certificate to the Application Gateway HTTP settings. -## Backend health status: unknown +## Backend health status: Unknown ++### Updates to the DNS entries of the backend pool ++**Message:** The backend health status could not be retrieved. This happens when an NSG/UDR/Firewall on the application gateway subnet is blocking traffic on ports 65503-65534 in case of v1 SKU, and ports 65200-65535 in case of the v2 SKU or if the FQDN configured in the backend pool could not be resolved to an IP address. To learn more visit - https://aka.ms/UnknownBackendHealth. ++**Cause:** For FQDN (Fully Qualified Domain Name)-based backend targets, the Application Gateway caches and uses the last-known-good IP address if it fails to get a response for the subsequent DNS lookup. A PUT operation on a gateway in this state would clear its DNS cache altogether. 
As a result, there won't be any destination address that the gateway can reach. ++**Resolution:** +Check and fix the DNS servers to ensure they serve a response for the given FQDN's DNS lookup. You must also check that the DNS servers are reachable through your application gateway's virtual network. +### Other reasons If the backend health is shown as Unknown, the portal view will resemble the following screenshot: ![Application Gateway backend health - Unknown](./media/application-gateway-backend-health-troubleshooting/appgwunknown.png) |
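The resolution above amounts to verifying that the backend FQDN actually resolves from the application gateway's network. As a first local sanity check, the sketch below uses Python's standard resolver (not a gateway-side diagnostic; the host names are placeholders) to test whether a name returns any addresses:

```python
import socket

def can_resolve(fqdn):
    """Return the addresses the system resolver finds for fqdn, or None if the lookup fails."""
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(fqdn, None)})
    except socket.gaierror:
        return None

print(can_resolve("localhost"))                # loopback addresses
print(can_resolve("backend.example.invalid"))  # reserved .invalid TLD never resolves
```

Keep in mind that success on your workstation doesn't prove the gateway's own DNS servers can resolve the name; run the equivalent check from a VM inside the application gateway's virtual network.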
application-gateway | How Application Gateway Works | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/how-application-gateway-works.md | When an application gateway sends the original request to the backend server, it > - **Contains an internally resolvable FQDN or a private IP address**, the application gateway routes the request to the backend server by using its instance private IP addresses. > - **Contains an external endpoint or an externally resolvable FQDN**, the application gateway routes the request to the backend server by using its frontend public IP address. If the subnet contains [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md), the application gateway will route the request to the service via its private IP address. DNS resolution is based on a private DNS zone or custom DNS server, if configured, or it uses the default Azure-provided DNS. If there isn't a frontend public IP address, one is assigned for the outbound external connectivity. +### Backend server DNS resolution ++When a backend pool's server is configured with a Fully Qualified Domain Name (FQDN), Application Gateway performs a DNS lookup to get the domain name's IP address(es). The IP value is stored in your application gateway's cache to enable it to reach the targets faster when serving incoming requests. ++The Application Gateway retains this cached information for a period equal to that DNS record's TTL (time to live) and performs a fresh DNS lookup once the TTL expires. If a gateway detects a change in IP address for its subsequent DNS query, it will start routing the traffic to this updated destination. If problems occur, such as the DNS lookup failing to receive a response or the record no longer existing, the gateway continues to use the last-known-good IP address(es). This ensures minimal impact on the data path.
++> [!IMPORTANT] +> * When using custom DNS servers with Application Gateway's Virtual Network, it is crucial that all servers are identical and respond consistently with the same DNS values. +> * Users of on-premises custom DNS servers must ensure connectivity to Azure DNS through [Azure DNS Private Resolver](../dns/private-resolver-hybrid-dns.md) (recommended) or a DNS forwarder VM when using a private DNS zone for a private endpoint. + ### Modifications to the request Application gateway inserts six additional headers to all requests before it forwards the requests to the backend. These headers are x-forwarded-for, x-forwarded-port, x-forwarded-proto, x-original-host, x-original-url, and x-appgw-trace-id. The format of the x-forwarded-for header is a comma-separated list of IP:port. |
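The TTL-and-fallback behavior described in this change — cache resolved addresses for the record's TTL, re-resolve on expiry, and keep serving the last-known-good addresses when a lookup fails — can be modeled in a short sketch. This is an illustrative model of the documented behavior, not Application Gateway's actual implementation:

```python
import time

class LastKnownGoodCache:
    """Caches resolved addresses for a TTL and serves stale ones if re-resolution fails."""

    def __init__(self, resolver, clock=time.monotonic):
        self.resolver = resolver  # callable: fqdn -> (addresses, ttl_seconds); raises OSError on failure
        self.clock = clock
        self.cache = {}           # fqdn -> (addresses, expiry_time)

    def resolve(self, fqdn):
        entry = self.cache.get(fqdn)
        if entry and self.clock() < entry[1]:
            return entry[0]       # still within the record's TTL
        try:
            addresses, ttl = self.resolver(fqdn)
            self.cache[fqdn] = (addresses, self.clock() + ttl)
            return addresses      # fresh lookup succeeded (possibly new addresses)
        except OSError:
            if entry:
                return entry[0]   # lookup failed: fall back to last-known-good
            raise                 # nothing cached to fall back on
```

Note that clearing such a cache while the resolver is failing leaves no address to fall back on, which matches the troubleshooting guidance earlier in this digest about fixing the DNS servers themselves.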
application-gateway | Monitor Application Gateway Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md | sslEnabled_s | Does the client request have SSL enabled| ## See Also <!-- replace below with the proper link to your main monitoring service article -->-- See [Monitoring Azure Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.+- See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway. - See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources. |
application-gateway | Tutorial Ingress Controller Add On Existing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md | You'll now deploy a new application gateway, to simulate having an existing appl ```azurecli-interactive az network public-ip create -n myPublicIp -g myResourceGroup --allocation-method Static --sku Standard az network vnet create -n myVnet -g myResourceGroup --address-prefix 10.0.0.0/16 --subnet-name mySubnet --subnet-prefix 10.0.0.0/24 -az network application-gateway create -n myApplicationGateway -l eastus -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet --priority 100 +az network application-gateway create -n myApplicationGateway -g myResourceGroup --sku Standard_v2 --public-ip-address myPublicIp --vnet-name myVnet --subnet mySubnet --priority 100 ``` > [!NOTE] |
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
automation | Automation Manage Send Joblogs Log Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-manage-send-joblogs-log-analytics.md | Title: Forward Azure Automation job data to Azure Monitor logs description: This article tells how to send job status and runbook job streams to Azure Monitor logs. Previously updated : 03/10/2022- Last updated : 08/28/2023++ # Forward Azure Automation diagnostic logs to Azure Monitor |
automation | Automation Runbook Output And Messages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-output-and-messages.md | Title: Configure runbook output and message streams description: This article tells how to implement error handling logic and describes output and message streams in Azure Automation runbooks. Previously updated : 11/03/2020 Last updated : 08/28/2023 The following table briefly describes each stream with its behavior in the Azure The output stream is used for the output of objects created by a script or workflow when it runs correctly. Azure Automation primarily uses this stream for objects to be consumed by parent runbooks that call the [current runbook](automation-child-runbooks.md). When a parent [calls a runbook inline](automation-child-runbooks.md#call-a-child-runbook-by-using-inline-execution), the child returns data from the output stream to the parent. -Your runbook uses the output stream to communicate general information to the client only if it is never called by another runbook. As a best practice, however, you runbooks should typically use the [verbose stream](#write-output-to-verbose-stream) to communicate general information to the user. +Your runbook uses the output stream to communicate general information to the client only if it's never called by another runbook. As a best practice, however, your runbooks should typically use the [verbose stream](#write-output-to-verbose-stream) to communicate general information to the user. Have your runbook write data to the output stream using [Write-Output](/powershell/module/microsoft.powershell.utility/write-output). Alternatively, you can put the object on its own line in the script. $object ### Handle output from a function -When a runbook function writes to the output stream, the output is passed back to the runbook. If the runbook assigns that output to a variable, the output is not written to the output stream. 
Writing to any other streams from within the function writes to the corresponding stream for the runbook. Consider the following sample PowerShell Workflow runbook. +When a runbook function writes to the output stream, the output is passed back to the runbook. If the runbook assigns that output to a variable, the output isn't written to the output stream. Writing to any other streams from within the function writes to the corresponding stream for the runbook. Consider the following sample PowerShell Workflow runbook. ```powershell Workflow Test-Runbook The following are examples of output data types: #### Declare output data type in a workflow -A workflow specifies the data type of its output using the [OutputType attribute](/powershell/module/microsoft.powershell.core/about/about_functions_outputtypeattribute). This attribute has no effect during runtime, but it provides you an indication at design time of the expected output of the runbook. As the tool set for runbooks continues to evolve, the importance of declaring output data types at design time increases. Therefore it's a best practice to include this declaration in any runbooks that you create. +A workflow specifies the data type of its output using the [OutputType attribute](/powershell/module/microsoft.powershell.core/about/about_functions_outputtypeattribute). This attribute has no effect during runtime, but it provides you with an indication at design time of the expected output of the runbook. As the tool set for runbooks continues to evolve, the importance of declaring output data types at design time increases. Therefore it's a best practice to include this declaration in any runbooks that you create. The following sample runbook outputs a string object and includes a declaration of its output type. If your runbook outputs an array of a certain type, then you should still specify the type as opposed to an array of the type. 
To declare an output type in a graphical or graphical PowerShell Workflow runboo > [!NOTE] > After you enter a value in the **Output Type** field in the Input and Output properties pane, be sure to click outside the control so that it recognizes your entry. -The following example shows two graphical runbooks to demonstrate the Input and Output feature. Applying the modular runbook design model, you have one runbook as the Authenticate Runbook template managing authentication with Azure using the Run As account. The second runbook, which normally performs core logic to automate a given scenario, in this case executes the Authenticate Runbook template. It displays the results to your Test output pane. Under normal circumstances, you would have this runbook do something against a resource leveraging the output from the child runbook. +The following example shows two graphical runbooks to demonstrate the Input and Output feature. Applying the modular runbook design model, you have one runbook as the Authenticate Runbook template managing authentication with Azure using [Managed identities](automation-security-overview.md#managed-identities). The second runbook, which normally performs core logic to automate a given scenario, in this case executes the Authenticate Runbook template. It displays the results to your Test output pane. Under normal circumstances, you would have this runbook do something against a resource leveraging the output from the child runbook. -Here is the basic logic of the **AuthenticateTo-Azure** runbook.<br> ![Authenticate Runbook Template Example](media/automation-runbook-output-and-messages/runbook-authentication-template.png). +Here's the basic logic of the **AuthenticateTo-Azure** runbook.<br> ![Authenticate Runbook Template Example](media/automation-runbook-output-and-messages/runbook-authentication-template.png). 
-The runbook includes the output type `Microsoft.Azure.Commands.Profile.Models.PSAzureContext`, which returns the authentication profile properties.<br> ![Runbook Output Type Example](media/automation-runbook-output-and-messages/runbook-input-and-output-add-blade.png) +The runbook includes the output type `Microsoft.Azure.Commands.Profile.Models.PSAzureProfile`, which returns the authentication profile properties.<br> ![Runbook Output Type Example](media/automation-runbook-output-and-messages/runbook-input-and-output-add-blade.png) -While this runbook is straightforward, there is one configuration item to call out here. The last activity executes the `Write-Output` cmdlet to write profile data to a variable using a PowerShell expression for the `Inputobject` parameter. This parameter is required for `Write-Output`. +While this runbook is straightforward, there's one configuration item to call out here. The last activity executes the `Write-Output` cmdlet to write profile data to a variable using a PowerShell expression for the `Inputobject` parameter. This parameter is required for `Write-Output`. The second runbook in this example, named **Test-ChildOutputType**, simply defines two activities.<br> ![Example Child Output Type Runbook](media/automation-runbook-output-and-messages/runbook-display-authentication-results-example.png) -The first activity calls the **AuthenticateTo-Azure** runbook. The second activity runs the `Write-Verbose` cmdlet with **Data source** set to **Activity output**. Also, **Field path** is set to **Context.Subscription.SubscriptionName**, the context output from the **AuthenticateTo-Azure** runbook.<br> ![Write-Verbose Cmdlet Parameter Data Source](media/automation-runbook-output-and-messages/runbook-write-verbose-parameters-config.png) +The first activity calls the **AuthenticateTo-Azure** runbook. The second activity runs the `Write-Verbose` cmdlet with **Data source** set to **Activity output**. 
Also, **Field path** is set to **Context.Subscription.Name**, the context output from the **AuthenticateTo-Azure** runbook. + The resulting output is the name of the subscription.<br> ![Test-ChildOutputType Runbook Results](media/automation-runbook-output-and-messages/runbook-test-childoutputtype-results.png) Write-Error -Message "This is an error message that will stop the runbook becaus ### Write output to debug stream -Azure Automation uses the debug message stream for interactive users. By default Azure Automation does not capture any debug stream data, only output, error, and warning data are captured as well as verbose data if the runbook is configured to capture it. +Azure Automation uses the debug message stream for interactive users. By default, Azure Automation doesn't capture any debug stream data; only output, error, and warning data are captured, as well as verbose data if the runbook is configured to capture it. In order to capture debug stream data, you have to perform two actions in your runbooks: Write-Verbose -Message "This is a verbose message." You can use the **Configure** tab of the Azure portal to configure a runbook to log progress records. The default setting is to not log the records, to maximize performance. In most cases, you should keep the default setting. Turn on this option only to troubleshoot or debug a runbook. -If you enable progress record logging, your runbook writes a record to job history before and after each activity runs. Testing a runbook does not display progress messages even if the runbook is configured to log progress records. +If you enable progress record logging, your runbook writes a record to job history before and after each activity runs. Testing a runbook doesn't display progress messages even if the runbook is configured to log progress records. 
> [!NOTE] > The [Write-Progress](/powershell/module/microsoft.powershell.utility/write-progress) cmdlet is not valid in a runbook, since this cmdlet is intended for use with an interactive user. For more information about configuring integration with Azure Monitor Logs to co ## Next steps +* For sample queries, see [Sample queries for job logs and job streams](automation-manage-send-joblogs-log-analytics.md#job-streams) * To work with runbooks, see [Manage runbooks in Azure Automation](manage-runbooks.md).-* If you are unfamiliar with PowerShell scripting, see [PowerShell](/powershell/scripting/overview) documentation. +* If you're unfamiliar with PowerShell scripting, see [PowerShell](/powershell/scripting/overview) documentation. * For the Azure Automation PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation). |
automation | Automation Use Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md | description: This article tells how to use Azure AD within Azure Automation as t Last updated 05/26/2023 -+ # Use Azure AD to authenticate to Azure |
automation | Context Switching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/context-switching.md | description: This article explains context switching and how to avoid runbook is Previously updated : 09/27/2021 Last updated : 08/18/2023 #Customer intent: As a developer, I want to understand Azure context so that I can avoid error when running multiple runbooks. While you may not come across an issue if you don't follow these recommendations `The subscription named <subscription name> cannot be found.` ```error-Get-AzVM : The client '<automation-runas-account-guid>' with object id '<automation-runas-account-guid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '. +Get-AzVM : The client '<clientid>' with object id '<objectid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '. ErrorCode: AuthorizationFailed StatusCode: 403 ReasonPhrase: Forbidden Operation |
automation | Delete Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md | After the Automation account is successfully unlinked from the workspace, perfor To delete your Automation account linked to a Log Analytics workspace in support of Update Management, Change Tracking and Inventory, and/or Start/Stop VMs during off-hours, perform the following steps. -### Step 1. Delete the solution from the linked workspace +### Step 1: Delete the solution from the linked workspace # [Azure portal](#tab/azure-portal) Remove-AzMonitorLogAnalyticsSolution -ResourceGroupName "resourceGroupName" -Nam -### Step 2. Unlink workspace from Automation account +### Step 2: Unlink workspace from Automation account There are two options for unlinking the Log Analytics workspace from your Automation account. You can perform this process from the Automation account or from the linked workspace. To unlink from the workspace, perform the following steps. While it attempts to unlink the Automation account, you can track the progress under **Notifications** from the menu. -### Step 3. Delete Automation account +### Step 3: Delete Automation account After the Automation account is successfully unlinked from the workspace, perform the steps in the [standalone Automation account](#delete-a-standalone-automation-account) section to delete the account. |
automation | Manage Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-office-365.md | description: This article tells how to use Azure Automation to manage Office 365 Last updated 11/05/2020 + # Manage Office 365 services To publish and then schedule your runbook, see [Manage runbooks in Azure Automat * For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md). * For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md). * If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).-* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview). +* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview). |
automation | Manage Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-runbooks.md | Title: Manage runbooks in Azure Automation description: This article tells how to manage runbooks in Azure Automation. Previously updated : 06/29/2023 Last updated : 08/28/2023 foreach ($item in $output) { ## Next steps +* For sample queries, see [Sample queries for job logs and job streams](automation-manage-send-joblogs-log-analytics.md#job-streams) * To learn details of runbook management, see [Runbook execution in Azure Automation](automation-runbook-execution.md). * To prepare a PowerShell runbook, see [Edit textual runbooks in Azure Automation](automation-edit-textual-runbook.md). * To troubleshoot issues with runbook execution, see [Troubleshoot runbook issues](troubleshoot/runbooks.md). |
automation | Manage Sql Server In Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/manage-sql-server-in-automation.md | + + Title: Manage databases in Azure SQL databases using Azure Automation +description: This article explains on how to use Azure SQL server database using a system assigned managed identity in Azure Automation. + Last updated : 06/26/2023++++# Manage databases in Azure SQL database using Azure Automation ++This article describes the procedure to connect and manage databases in Azure SQL database using Azure Automation's [system-assigned managed identity](enable-managed-identity-for-automation.md). With Azure Automation, you can manage databases in Azure SQL Database by using the [latest Az PowerShell cmdlets](https://learn.microsoft.com/powershell/module/) that are available in [Azure Az PowerShell](https://learn.microsoft.com/powershell/azure/new-azureps-module-az?view=azps-10.2.0). ++Azure Automation has these Azure Az PowerShell cmdlets available out of the box, so that you can perform all the SQL database management tasks within the service. You can also pair these cmdlets in Azure Automation with the cmdlets of other Azure services to automate complex tasks across Azure services and across third-party systems. ++Azure Automation can also issue T-SQL (Transact SQL) commands against the SQL servers using PowerShell. ++To run the commands against the database, you need to do the following: +- Ensure that Automation account has a system-assigned managed identity. +- Provide the appropriate permissions to the Automation managed identity. +- Configure the SQL server to utilize Azure Active Directory authentication. +- Create a user on the SQL server that maps to the Automation account managed identity. +- Create a runbook to connect and execute the commands. 
+- (Optional) If the SQL server is protected by a firewall, create a Hybrid Runbook Worker (HRW), install the SQL modules on that server, and add the HRW IP address to the allowlist on the firewall.
+
+## Connect to Azure SQL database using System-assigned Managed identity
+
+To allow access from the Automation system managed identity to the Azure SQL database, follow these steps:
+
+1. If the Automation system managed identity is **OFF**, do the following:
+    1. Sign in to the [Azure portal](https://portal.azure.com).
+    1. Go to your Automation account.
+    1. In the Automation account page, under **Account Settings**, select **Identity**.
+    1. Under the **System assigned** tab, set the **Status** to **ON**.
+
+    :::image type="content" source="./media/manage-sql-server-in-automation/system-assigned-managed-identity-status-on-inline.png" alt-text="Screenshot of setting the status to ON for System assigned managed identity." lightbox="./media/manage-sql-server-in-automation/system-assigned-managed-identity-status-on-expanded.png":::
+
+1. After the system-assigned managed identity is **ON**, grant the account the required access by using these steps:
+    1. In the **Automation account | Identity** page, **System assigned** tab, under permissions, select **Azure role assignments**.
+    1. In the Azure role assignments page, select **+Add role assignment (preview)**.
+    1. In **Add role assignment (preview)**, set the **Scope** to *SQL*, select the **Subscription** and **Resource** from the drop-downs, select a **Role** with the minimum required permissions, and then select **Save**.
+
+    :::image type="content" source="./media/manage-sql-server-in-automation/add-role-assignment-inline.png" alt-text="Screenshot of adding role assignment when the system assigned managed identity's status is set to ON." lightbox="./media/manage-sql-server-in-automation/add-role-assignment-expanded.png":::
+
+1. 
Configure the SQL server for Active Directory authentication by using these steps:
+    1. Go to [Azure portal](https://portal.azure.com) home page and select **SQL servers**.
+    1. In the **SQL server** page, under **Settings**, select **Azure Active Directory**.
+    1. Select **Set admin** to configure SQL server for AD authentication.
+
+1. Add authentication on the SQL side by using these steps:
+    1. Go to [Azure portal](https://portal.azure.com) home page and select **SQL servers**.
+    1. In the **SQL server** page, under **Settings**, select **SQL Databases**.
+    1. Select your database to go to the SQL database page, select **Query editor (preview)**, and execute the following two queries:
+        - `CREATE USER "AutomationAccount" FROM EXTERNAL PROVIDER WITH OBJECT_ID = 'ObjectID'`
+        - `EXEC sp_addrolemember 'db_owner', "AutomationAccount"`
+        - AutomationAccount - replace with your Automation account's name.
+        - ObjectID - replace with the object (principal) ID of your system-assigned managed identity from step 1. 
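Put together as a single script, the two statements above can be sketched as follows (the account name and object ID shown are placeholders, not values from this article; substitute your own):

```sql
-- Map the Automation account's system-assigned managed identity to a database user.
-- "MyAutomationAccount" and the OBJECT_ID value below are placeholders (assumptions).
CREATE USER [MyAutomationAccount] FROM EXTERNAL PROVIDER
    WITH OBJECT_ID = '00000000-0000-0000-0000-000000000000';

-- Grant that user ownership of the database so runbooks can manage it.
EXEC sp_addrolemember 'db_owner', 'MyAutomationAccount';
```

Run the script in the Query editor while signed in as the Azure AD admin you set in the previous step; it requires a live Azure SQL database, so it can't be validated offline.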
+
+## Sample code
+
+### Connection to Azure SQL Server
+
+ ```powershell
+ if ($($env:computerName) -eq "Client") {"Runbook running on Azure Client sandbox"} else {"Runbook running on " + $env:computerName}
+ Disable-AzContextAutosave -Scope Process
+ Connect-AzAccount -Identity
+ $token = (Get-AzAccessToken -ResourceUrl https://database.windows.net).Token
+ Invoke-Sqlcmd -ServerInstance azuresqlserverxyz.database.windows.net -Database MyDBxyz -AccessToken $token -Query 'select * from TableXYZ'
+ ```
+
+### Check account permissions on the SQL side
+
+```sql
+SELECT roles.[name] AS role_name, members.[name] AS [user_name]
+FROM sys.database_role_members
+JOIN sys.database_principals roles ON database_role_members.role_principal_id = roles.principal_id
+JOIN sys.database_principals members ON database_role_members.member_principal_id = members.principal_id
+ORDER BY roles.[name], members.[name]
+```
+
+> [!NOTE]
+> When a SQL server is running behind a firewall, you must run the Azure Automation runbook on a machine in your own network. Ensure that you configure this machine as a Hybrid Runbook Worker so that the IP address or network is not blocked by the firewall. For more information on how to configure a machine as a Hybrid Worker, see [create a hybrid worker](extension-based-hybrid-runbook-worker-install.md).
+
+### Use Hybrid worker
+When you use a Hybrid worker, the modules that your runbook uses must be installed locally from an elevated PowerShell prompt. For example, `Install-Module Az.Accounts` and `Install-Module SqlServer`. To find the required module names, run a command on each cmdlet and then check the source. 
For example, to check the module name for the cmdlet `Connect-AzAccount`, which is part of the Az.Accounts module, run the command: `Get-Command Connect-AzAccount`
+
+> [!NOTE]
+> We recommend that you add the following code at the top of any runbook that's intended to run on a Hybrid worker: `if ($($env:computerName) -eq "CLIENT") {"Runbook running on Azure CLIENT"} else {"Runbook running on " + $env:computerName}`. The code lets you see which node the runbook is running on; if you accidentally run it in the Azure cloud instead of on the Hybrid worker, it helps you determine why the runbook didn't work.
+
+## Next steps
+
+* For details of credential use, see [Manage credentials in Azure Automation](shared-resources/credentials.md).
+* For information about modules, see [Manage modules in Azure Automation](shared-resources/modules.md).
+* If you need to start a runbook, see [Start a runbook in Azure Automation](start-runbooks.md).
+* For PowerShell details, see [PowerShell Docs](/powershell/scripting/overview). |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/overview.md | Azure Automation supports management throughout the lifecycle of your infrastruc - Collect and store information about Azure resources. - Perform SQL monitoring checks & reporting. - Check website availability. -* **Dev/test automation scenarios** - Start and start resources, scale resources, etc. +* **Dev/test automation scenarios** - Stop and start resources, scale resources, etc. * **Governance related automation** - Automatically apply or update tags, locks, etc. * **Azure Site Recovery** - orchestrate pre/post scripts defined in a Site Recovery DR workflow. * **Azure Virtual Desktop** - orchestrate scaling of VMs or start/stop VMs based on utilization. You can review the prices associated with Azure Automation on the [pricing](http ## Next steps > [!div class="nextstepaction"]-> [Create an Automation account](./quickstarts/create-azure-automation-account-portal.md) +> [Create an Automation account](./quickstarts/create-azure-automation-account-portal.md) |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
automation | Python Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md | Title: Manage Python 2 packages in Azure Automation description: This article tells how to manage Python 2 packages in Azure Automation. Previously updated : 10/29/2021 Last updated : 08/21/2023 For information on managing Python 3 packages, see [Manage Python 3 packages](./ ## Import packages -1. In your Automation account, select **Python packages** under **Shared Resources**. Click **+ Add a Python package**. +1. In your Automation account, select **Python packages** under **Shared Resources**. Select **+ Add a Python package**. :::image type="content" source="media/python-packages/add-python-package.png" alt-text="Screenshot of the Python packages page shows Python packages in the left menu and Add a Python package highlighted."::: For information on managing Python 3 packages, see [Manage Python 3 packages](./ :::image type="content" source="media/python-packages/upload-package.png" alt-text="Screenshot shows the Add Python Package page with an uploaded tar.gz file selected."::: -After a package has been imported, it's listed on the **Python packages** page in your Automation account. To remove a package, select the package and click **Delete**. +After a package has been imported, it's listed on the **Python packages** page in your Automation account. To remove a package, select the package and select **Delete**. :::image type="content" source="media/python-packages/package-list.png" alt-text="Screenshot shows the Python 2.7.x packages page after a package has been imported."::: ## Import packages with dependencies -Azure automation doesn't resolve dependencies for Python packages during the import process. There are two ways to import a package with all its dependencies. Only one of the following steps needs to be used to import the packages into your Automation account. 
+Azure Automation doesn't resolve dependencies for Python packages during the import process. There are two ways to import a package with all its dependencies. Only one of the following steps needs to be used to import the packages into your Automation account. ### Manually download Once the packages are downloaded, you can import them into your automation accou ### Runbook - To obtain a runbook, [import Python 2 packages from pypi into Azure Automation account](https://github.com/azureautomation/import-python-2-packages-from-pypi-into-azure-automation-account) from the Azure Automation GitHub organization into your Automation account. Make sure the Run Settings are set to **Azure** and start the runbook with the parameters. The runbook requires a Run As account for the Automation account to work. For each parameter make sure you start it with the switch as seen in the following list and image: + To obtain a runbook, [import Python 2 packages from pypi into Azure Automation account](https://github.com/azureautomation/import-python-2-packages-from-pypi-into-azure-automation-account) from the Azure Automation GitHub organization into your Automation account. Make sure the Run Settings are set to **Azure** and start the runbook with the parameters. Ensure that a managed identity is enabled for your Automation account and has Automation Contributor access so that the package import succeeds. For each parameter, make sure you start it with the switch as shown in the following list and image: * -s \<subscriptionId\> * -g \<resourceGroup\> 
For example, use of the `Azure` parameter downloads all Azure modules and all dependencies (about 105). --After the runbook is complete, you can check the **Python packages** under **Shared Resources** in your Automation account to verify that the package has been imported correctly. +The runbook allows you to specify what package to download. For example, use of the `Azure` parameter downloads all Azure modules and all dependencies (about 105). After the runbook is complete, you can check the **Python packages** under **Shared Resources** in your Automation account to verify that the package has been imported correctly. ## Use a package in a runbook -With a package imported, you can use it in a runbook. The following example uses the [Azure Automation utility package](https://github.com/azureautomation/azure_automation_utility). This package makes it easier to use Python with Azure Automation. To use the package, follow the instructions in the GitHub repository and add it to the runbook. For example, you can use `from azure_automation_utility import get_automation_runas_credential` to import the function for retrieving the Run As account. +With a package imported, you can use it in a runbook. 
Add the following code to list all the resource groups in an Azure subscription: ```python-import azure.mgmt.resource -import automationassets -from azure_automation_utility import get_automation_runas_credential --# Authenticate to Azure using the Azure Automation RunAs service principal -runas_connection = automationassets.get_automation_connection("AzureRunAsConnection") -azure_credential = get_automation_runas_credential() --# Intialize the resource management client with the RunAs credential and subscription -resource_client = azure.mgmt.resource.ResourceManagementClient( - azure_credential, - str(runas_connection["SubscriptionId"])) --# Get list of resource groups and print them out -groups = resource_client.resource_groups.list() -for group in groups: - print group.name +#!/usr/bin/env python +import os +import requests +# printing environment variables +endPoint = os.getenv('IDENTITY_ENDPOINT') + "?resource=https://management.azure.com/" +identityHeader = os.getenv('IDENTITY_HEADER') +payload = {} +headers = { + 'X-IDENTITY-HEADER': identityHeader, + 'Metadata': 'True' +} +response = requests.request("GET", endPoint, headers=headers, data=payload) +print response.text ``` -> [!NOTE] -> The Python `automationassets` package is not available on pypi.org, so it's not available for import onto a Windows machine. - ## Develop and test runbooks offline To develop and test your Python 2 runbooks offline, you can use the [Azure Automation Python emulated assets](https://github.com/azureautomation/python_emulated_assets) module on GitHub. This module allows you to reference your shared resources such as credentials, variables, connections, and certificates. ## Next steps -To prepare a Python runbook, see [Create a Python runbook](./learn/automation-tutorial-runbook-textual-python-3.md). +To prepare a Python runbook, see [Create a Python runbook](./learn/automation-tutorial-runbook-textual-python-3.md). |
automation | Create Azure Automation Account Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/create-azure-automation-account-portal.md | Title: Quickstart - Create an Azure Automation account using the portal description: This quickstart helps you to create a new Automation account using Azure portal. Previously updated : 04/12/2023 Last updated : 08/28/2023 -+ #Customer intent: As an administrator, I want to create an Automation account so that I can further use the Automation services. |
automation | Runbook Input Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md | Title: Configure runbook input parameters in Azure Automation description: This article tells how to configure runbook input parameters, which allow data to be passed to a runbook when it's started. Previously updated : 05/26/2023 Last updated : 08/18/2023 You can configure input parameters for PowerShell, PowerShell Workflow, graphica You assign values to the input parameters for a runbook when you start it. You can start a runbook from the Azure portal, a web service, or PowerShell. You can also start one as a child runbook that is called inline in another runbook. -### Configure input parameters in PowerShell runbooks +## Configure input parameters in PowerShell runbooks PowerShell and PowerShell Workflow runbooks in Azure Automation support input parameters that are defined through the following properties. To illustrate the configuration of input parameters for a graphical runbook, let A graphical runbook uses these major runbook activities: -* Configuration of the Azure Run As account to authenticate with Azure. +* Authentication with Azure by using the managed identity configured for the Automation account. * Definition of a [Get-AzVM](/powershell/module/az.compute/get-azvm) cmdlet to get VM properties. * Use of the [Write-Output](/powershell/module/microsoft.powershell.utility/write-output) activity to output the VM names. |
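Python runbooks in Azure Automation receive their input parameter values positionally through `sys.argv`. A minimal sketch of unpacking two such parameters — the parameter names `resource_group` and `vm_name` are hypothetical, chosen only to mirror the graphical-runbook example above:

```python
def parse_runbook_params(argv):
    # argv[0] is the script path; Automation appends the input
    # parameter values after it, in the order they were supplied.
    # resource_group and vm_name are hypothetical parameter names.
    resource_group = argv[1]
    vm_name = argv[2]
    return resource_group, vm_name


# Simulated invocation; in a real runbook, pass sys.argv instead.
group, vm = parse_runbook_params(["runbook.py", "my-rg", "my-vm"])
```
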
automation | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
automation | Hybrid Runbook Worker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/hybrid-runbook-worker.md | description: This article tells how to troubleshoot and resolve issues that aris Last updated 04/26/2023 -+ # Troubleshoot agent-based Hybrid Runbook Worker issues in Automation |
automation | Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md | Title: Troubleshoot Azure Automation runbook issues description: This article tells how to troubleshoot and resolve issues with Azure Automation runbooks. Previously updated : 03/06/2023 Last updated : 08/18/2023 It fails with the following error: ### Cause -The naming convention is not being followed. Ensure that your runbook name starts with a letter and can contain letters, numbers, underscores, and dashes. The naming convention requirements are now being enforced starting with the Az module version 1.9 through the portal and cmdlets. +Code that was introduced in [1.9.0](https://www.powershellgallery.com/packages/Az.Automation/1.9.0) version of the Az.Automation module verifies the names of the runbooks to start and incorrectly flags runbooks with multiple "-" characters or with an "_" character in the name as invalid. ### Workaround -We recommend that you follow the runbook naming convention or revert to [1.8.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module where the naming convention isn't enforced. +We recommend that you revert to [1.8.0 version](https://www.powershellgallery.com/packages/Az.Automation/1.8.0) of the module. +### Resolution ++Currently, we are working to deploy a fix to address this issue. ## Diagnose runbook issues To determine what's wrong, follow these steps: 1. If the error appears to be transient, try adding retry logic to your authentication routine to make authenticating more robust. ```powershell- # Get the connection "AzureRunAsConnection" - $connectionName = "AzureRunAsConnection" - $servicePrincipalConnection = Get-AutomationConnection -Name $connectionName - $logonAttempt = 0 $logonResult = $False To determine what's wrong, follow these steps: $LogonAttempt++ #Logging in to Azure... 
$connectionResult = Connect-AzAccount `- -ServicePrincipal ` - -Tenant $servicePrincipalConnection.TenantId ` - -ApplicationId $servicePrincipalConnection.ApplicationId ` - -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint - Start-Sleep -Seconds 30 } ``` The runbook isn't using the correct context when running. This may be because th You may see errors like this one: ```error-Get-AzVM : The client '<automation-runas-account-guid>' with object id '<automation-runas-account-guid>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '. +Get-AzVM : The client '<client-id>' with object id '<object-id>' does not have authorization to perform action 'Microsoft.Compute/virtualMachines/read' over scope '/subscriptions/<subcriptionIdOfSubscriptionWichDoesntContainTheVM>/resourceGroups/REsourceGroupName/providers/Microsoft.Compute/virtualMachines/VMName '. ErrorCode: AuthorizationFailed StatusCode: 403 ReasonPhrase: Forbidden Operation To use a service principal with Azure Resource Manager cmdlets, see [Creating se Your runbook fails with an error similar to the following example: ```error-Exception: A task was canceled. +Exception: A task was cancelled. ``` ### Cause |
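The troubleshooting guidance above wraps the sign-in call in a loop with a pause between attempts. A generic sketch of that retry pattern — in Python rather than PowerShell, with `flaky_login` standing in for the real authentication call:

```python
import time


def with_retries(action, attempts=3, delay_seconds=1):
    # Retry a flaky call a fixed number of times, pausing between
    # tries, mirroring the loop-and-Start-Sleep pattern above.
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as exc:
            last_error = exc
            time.sleep(delay_seconds)
    raise last_error


# Demo: a stand-in login that fails twice, then succeeds.
calls = {"count": 0}


def flaky_login():
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("transient auth failure")
    return "logged in"


result = with_retries(flaky_login, attempts=5, delay_seconds=0)
```
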
automation | Shared Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/shared-resources.md | Title: Troubleshoot Azure Automation shared resource issues description: This article tells how to troubleshoot and resolve issues with Azure Automation shared resources. Previously updated : 01/27/2021 Last updated : 08/24/2023 To create or update a Run As account, you must have appropriate [permissions](.. If the problem is because of a lock, verify that the lock can be removed. Then go to the resource that is locked in the Azure portal, right-click the lock, and select **Delete**. +> [!NOTE] +> Azure Automation Run As accounts will retire on **September 30, 2023** and will be replaced with managed identities. Ensure that you start migrating your runbooks to use [managed identities](../automation-security-overview.md#managed-identities). For sample scripts that help you migrate your runbooks from Run As accounts to managed identities before **September 30, 2023**, see [Migrate from an existing Run As account to a managed identity](../migrate-run-as-accounts-managed-identity.md#sample-scripts). ++ ### <a name="iphelper"></a>Scenario: You receive the error "Unable to find an entry point named 'GetPerAdapterInfo' in DLL 'iplpapi.dll'" when executing a runbook #### Issue |
automation | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md | -> - Automation Update management relies on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. [Update management center (Preview)](../../update-center/overview.md) (UMC) is the v2 version of Automation Update management and the future of Update management in Azure. UMC is a native service in Azure and does not rely on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md). -> - Guidance for migrating from Automation Update management to Update management center will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to UMC. +> - Automation Update management relies on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. [Azure Update Manager (preview)](../../update-center/overview.md) (AUM) is the v2 version of Automation Update management and the future of Update management in Azure. AUM is a native service in Azure and does not rely on [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../../azure-monitor/agents/agents-overview.md). 
+> - Guidance for migrating from Automation Update management to Azure Update Manager (preview) will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrating to the Azure Monitor agent until migration guidance is provided for Azure Update Manager; otherwise, Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to Azure Update Manager (preview). You can use Update Management in Azure Automation to manage operating system updates for your Windows and Linux virtual machines in Azure, for physical machines or VMs in on-premises environments, and in other cloud environments. You can quickly assess the status of available updates and manage the process of installing required updates for your machines reporting to Update Management. |
automation | Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/plan-deployment.md | -## Step 1 - Automation account +## Step 1: Automation account Update Management is an Azure Automation feature, and therefore requires an Automation account. You can use an existing Automation account in your subscription, or create a new account dedicated only for Update Management and no other Automation features. -## Step 2 - Azure Monitor Logs +## Step 2: Azure Monitor Logs Update Management depends on a Log Analytics workspace in Azure Monitor to store assessment and update status log data collected from managed machines. Integration with Log Analytics also enables detailed analysis and alerting in Azure Monitor. You can use an existing workspace in your subscription, or create a new one dedicated only for Update Management. If you are new to Azure Monitor Logs and the Log Analytics workspace, you should review the [Design a Log Analytics workspace](../../azure-monitor/logs/workspace-design.md) deployment guide. -## Step 3 - Supported operating systems +## Step 3: Supported operating systems Update Management supports specific versions of the Windows Server and Linux operating systems. Before you enable Update Management, confirm that the target machines meet the [operating system requirements](operating-system-requirements.md). -## Step 4 - Log Analytics agent +## Step 4: Log Analytics agent The [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) for Windows and Linux is required to support Update Management. The agent is used for both data collection, and the Automation system Hybrid Runbook Worker role to support Update Management runbooks used to manage the assessment and update deployments on the machine. For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../. 
If your IT security policies do not allow machines on the network to connect to the internet, you can set up a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) and then configure the machine to connect through the gateway to Azure Automation and Azure Monitor. -## Step 6 - Permissions +## Step 6: Permissions To create and manage update deployments, you need specific permissions. To learn about these permissions, see [Role-based access - Update Management](../automation-role-based-access-control.md#update-management-permissions). -## Step 7 - Windows Update Agent +## Step 7: Windows Update Agent Azure Automation Update Management relies on the Windows Update Agent to download and install Windows updates. There are specific group policy settings that are used by Windows Update Agent (WUA) on machines to connect to Windows Server Update Services (WSUS) or Microsoft Update. These group policy settings are also used to successfully scan for software update compliance, and to automatically update the software updates. To review our recommendations, see [Configure Windows Update settings for Update Management](configure-wuagent.md). -## Step 8 - Linux repository +## Step 8: Linux repository VMs created from the on-demand Red Hat Enterprise Linux (RHEL) images available in Azure Marketplace are registered to access the Red Hat Update Infrastructure (RHUI) that's deployed in Azure. Any other Linux distribution must be updated from the distribution's online file repository by using methods supported by that distribution. To classify updates on Red Hat Enterprise version 6, you need to install the yum-security plugin. On Red Hat Enterprise Linux 7, the plugin is already a part of yum itself and there's no need to install anything. For more information, see the following Red Hat [knowledge article](https://access.redhat.com/solutions/10021). 
-## Step 9 - Plan deployment targets +## Step 9: Plan deployment targets Update Management allows you to target updates to a dynamic group representing Azure or non-Azure machines, so you can ensure that specific machines always get the right updates at the most convenient times. A dynamic group is resolved at deployment time and is based on the following criteria: |
azure-app-configuration | Concept Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-geo-replication.md | To add once these links become available: Each replica created will add extra charges. Reference the [App Configuration pricing page](https://azure.microsoft.com/pricing/details/app-configuration/) for details. As an example, if your origin is a standard tier configuration store and you have five replicas, you would be charged the rate of six standard tier configuration stores for your system, but each replica's isolated quota and requests are included in this charge. +## Monitoring ++To offer insights into the characteristics of the geo-replication feature, App Configuration provides a metric named **Replication Latency**. The replication latency metric describes how long it takes for data to replicate from one region to another. ++For more information on the replication latency metric and other App Configuration metrics, see [Monitoring App Configuration data reference](./monitor-app-configuration-reference.md). + ## Next steps > [!div class="nextstepaction"] |
azure-app-configuration | Monitor App Configuration Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration-reference.md | Resource Provider and Type: [App Configuration Platform Metrics](../azure-monito | Http Incoming Request Duration | Milliseconds | Server side duration of an Http Request | | Throttled Http Request Count | Count | Throttled requests are Http requests that receive a response with a status code of 429 | | Daily Storage Usage | Percent | Represents the amount of storage in use as a percentage of the maximum allowance. This metric is updated at least once daily. |+| Replication Latency | Milliseconds | Represents the average time it takes for a replica to be consistent with the current state. | For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). App Configuration has the following dimensions associated with its metr | Http Incoming Request Duration | The supported dimensions are the **HttpStatusCode**, **AuthenticationScheme**, and **Endpoint** of each request. **AuthenticationScheme** can be filtered by AAD or HMAC authentication. | | Throttled Http Request Count | The **Endpoint** of each request is included as a dimension. | | Daily Storage Usage | This metric does not have any dimensions. |+| Replication Latency | The **Endpoint** of the replica that data was replicated to is included as a dimension. | For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics). |
azure-app-configuration | Monitor App Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/monitor-app-configuration.md | You can analyze metrics for App Configuration with metrics from other Azure serv * Http Incoming Request Duration * Throttled Http Request Count (Http status code 429 Responses) * Daily Storage Usage+* Replication Latency In the portal, navigate to the **Metrics** section and select the **Metric Namespaces** and **Metrics** you want to analyze. This screenshot shows you the metrics view when selecting **Http Incoming Request Count** for your configuration store. |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-app-configuration | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
azure-arc | Create Data Controller Using Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md | You can use an online tool to base64 encode your desired username and password o PowerShell ```console-[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('<your string to encode here>')) +[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) #Example-#[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('example')) +#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) ``` |
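The change in the two rows above swaps `[System.Text.Encoding]::Unicode` (UTF-16LE) for `[System.Text.Encoding]::UTF8`, which changes the base64 string you get: Kubernetes decodes secret data as UTF-8, so the UTF-16LE form would embed NUL bytes in the stored credential. A Python cross-check (not part of either original article) of the two encodings for the sample string `example`:

```python
import base64


def b64_utf8(text):
    # Matches [System.Text.Encoding]::UTF8 in the updated PowerShell snippet.
    return base64.b64encode(text.encode("utf-8")).decode("ascii")


def b64_utf16le(text):
    # Matches the older [System.Text.Encoding]::Unicode (UTF-16LE) call.
    return base64.b64encode(text.encode("utf-16-le")).decode("ascii")


utf8_value = b64_utf8("example")      # "ZXhhbXBsZQ=="
utf16_value = b64_utf16le("example")  # "ZQB4AGEAbQBwAGwAZQA=" (note NUL bytes)
```

The differing outputs show why the two PowerShell calls are not interchangeable when populating a Kubernetes secret.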
azure-arc | Create Postgresql Server Kubernetes Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-postgresql-server-kubernetes-native-tools.md | To create a PostgreSQL server using Kubernetes tools, you will need to have the ## Overview -To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the _postgresqls_ custom resource definitions. +To create a PostgreSQL server, you need to create a Kubernetes secret to store your postgres administrator login and password securely and a PostgreSQL server custom resource based on the `postgresqls` custom resource definitions. ## Create a yaml file You can use an online tool to base64 encode your desired username and password o PowerShell ```console-[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('<your string to encode here>')) +[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('<your string to encode here>')) #Example-#[Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes('example')) +#[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes('example')) ``` |
azure-arc | Agent Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md | Title: "Upgrade Azure Arc-enabled Kubernetes agents" Previously updated : 09/09/2022 Last updated : 08/28/2023 description: "Control agent upgrades for Azure Arc-enabled Kubernetes" az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest With automatic upgrade enabled, the agent polls Azure hourly to check for a newer version. When a newer version becomes available, it triggers a Helm chart upgrade for the Azure Arc agents. +> [!IMPORTANT] +> Be sure you allow [connectivity to all required endpoints](network-requirements.md). In particular, connectivity to `dl.k8s.io` is required for automatic upgrades. + To opt out of automatic upgrade, specify the `--disable-auto-upgrade` parameter while connecting the cluster to Azure Arc. The following command connects a cluster to Azure Arc with auto-upgrade disabled: Azure Arc-enabled Kubernetes follows the standard [semantic versioning scheme](h While the schedule may vary, a new minor version of Azure Arc-enabled Kubernetes agents is released approximately once per month. -The following command upgrades the agent to version 1.8.14: +The following command manually upgrades the agent to version 1.8.14: ```azurecli az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.8.14 |
azure-arc | Conceptual Agent Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-agent-overview.md | Title: "Azure Arc-enabled Kubernetes agent overview" Previously updated : 12/07/2022 Last updated : 08/24/2023 description: "Learn about the Azure Arc agents deployed on the Kubernetes clusters when connecting them to Azure Arc." description: "Learn about the Azure Arc agents deployed on the Kubernetes cluste [Azure Arc-enabled Kubernetes](overview.md) provides a centralized, consistent control plane to manage policy, governance, and security across Kubernetes clusters in different environments. -Azure Arc agents are deployed on Kubernetes clusters when you [connect them to Azure Arc](quickstart-connect-cluster.md), This article provides an overview of these agents. +Azure Arc agents are deployed on Kubernetes clusters when you [connect them to Azure Arc](quickstart-connect-cluster.md). This article provides an overview of these agents. ## Deploy agents to your cluster Most on-premises datacenters enforce strict network rules that prevent inbound communication on the network boundary firewall. Azure Arc-enabled Kubernetes works with these restrictions by not requiring inbound ports on the firewall. Azure Arc agents require outbound communication to a [set list of network endpoints](network-requirements.md). +This diagram provides a high-level view of Azure Arc components. Kubernetes clusters in on-premises datacenters or different clouds are connected to Azure through the Azure Arc agents. This allows the clusters to be managed in Azure using management tools and Azure services. The clusters can also be accessed through offline management tools. + :::image type="content" source="media/architectural-overview.png" alt-text="Diagram showing an architectural overview of the Azure Arc-enabled Kubernetes agents." 
lightbox="media/architectural-overview.png"::: The following high-level steps are involved in [connecting a Kubernetes cluster to Azure Arc](quickstart-connect-cluster.md): The following high-level steps are involved in [connecting a Kubernetes cluster ## Next steps * Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md).+* View release notes to see [details about the latest agent versions](release-notes.md). * Learn about [upgrading Azure Arc-enabled Kubernetes agents](agent-upgrade.md). * Learn more about creating connections between your cluster and a Git repository as a [configuration resource with Azure Arc-enabled Kubernetes](./conceptual-configurations.md). |
azure-arc | Conceptual Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/conceptual-gitops-flux2.md | GitOps is enabled in an Azure Arc-enabled Kubernetes or AKS cluster as a `Micros ### Version support -The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. --Starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023), ARM64-based clusters are supported. +The most recent version of the Flux v2 extension (`microsoft.flux`) and the two previous versions (N-2) are supported. We generally recommend that you use the [most recent version](extensions-release.md#flux-gitops) of the extension. Starting with `microsoft.flux` version 1.7.0, ARM64-based clusters are supported. > [!NOTE] > If you have been using Flux v1, we recommend [migrating to Flux v2](conceptual-gitops-flux2.md#migrate-from-flux-v1) as soon as possible. |
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 04/17/2023 Last updated : 08/22/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." For more information, see [Tutorial: Deploy applications using GitOps with Flux The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. -### 1.7.3 (April 2023) +> [!IMPORTANT] +> Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0). -Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) +### 1.7.6 (August 2023) -- source-controller: v0.36.1-- kustomize-controller: v0.35.1-- helm-controller: v0.31.2-- notification-controller: v0.33.0-- image-automation-controller: v0.31.0-- image-reflector-controller: v0.26.1+> [!NOTE] +> We have started to roll out this release across regions. We'll remove this note once version 1.7.6 is available to all supported regions. 
++Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) ++- source-controller: v1.0.1 +- kustomize-controller: v1.0.1 +- helm-controller: v0.35.0 +- notification-controller: v1.0.0 +- image-automation-controller: v0.35.0 +- image-reflector-controller: v0.29.1 Changes made for this version: -- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2)-- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true`-- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled) +- Configurations with `ssh` authentication type were intermittently failing to reconcile with GitHub due to an updated [RSA SSH host key](https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/). This release updates the SSH key entries to match the ones mentioned in [GitHub's SSH key fingerprints documentation](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/githubs-ssh-key-fingerprints). 
-### 1.7.0 (March 2023) +### 1.7.5 (August 2023) -Flux version: [Release v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0) +Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) -- source-controller: v0.34.0-- kustomize-controller: v0.33.0-- helm-controller: v0.29.0-- notification-controller: v0.31.0-- image-automation-controller: v0.29.0-- image-reflector-controller: v0.24.0+- source-controller: v1.0.1 +- kustomize-controller: v1.0.1 +- helm-controller: v0.35.0 +- notification-controller: v1.0.0 +- image-automation-controller: v0.35.0 +- image-reflector-controller: v0.29.1 Changes made for this version: -- Upgrades Flux to [v0.39.0](https://github.com/fluxcd/flux2/releases/tag/v0.39.0)-- Flux extension is now supported on ARM64-based clusters+- Upgrades Flux to [v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1) +- Promotes some APIs to v1. This change should not affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v.1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0). +- Adds support for [Helm drift detection](tutorial-use-gitops-flux2.md#helm-drift-detection) and [OOM watch](tutorial-use-gitops-flux2.md#helm-oom-watch). -### 1.6.4 (February 2023) +### 1.7.4 (June 2023) ++Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) ++- source-controller: v0.36.1 +- kustomize-controller: v0.35.1 +- helm-controller: v0.31.2 +- notification-controller: v0.33.0 +- image-automation-controller: v0.31.0 +- image-reflector-controller: v0.26.1 Changes made for this version: -- Disabled extension reconciler (which attempts to restore the Flux extension if it fails). 
This resolves a potential bug where, if the reconciler is unable to recover a failed Flux extension and `prune` is set to `true`, the extension and deployed objects may be deleted.+- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` will be set to `true` for all Flux configurations, and `postBuild` will be null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55)) -### 1.6.3 (December 2022) +- Adds support for optional properties [`waitForReconciliation`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1299C14-L1299C35) and [`reconciliationWaitDuration`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1304). -Flux version: [Release v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0) + By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. 
Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment will wait for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72)) ++### 1.7.3 (April 2023) ++Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) -- source-controller: v0.32.1-- kustomize-controller: v0.31.0-- helm-controller: v0.27.0-- notification-controller: v0.29.0-- image-automation-controller: v0.27.0-- image-reflector-controller: v0.23.0+- source-controller: v0.36.1 +- kustomize-controller: v0.35.1 +- helm-controller: v0.31.2 +- notification-controller: v0.33.0 +- image-automation-controller: v0.31.0 +- image-reflector-controller: v0.26.1 Changes made for this version: -- Upgrades Flux to [v0.37.0](https://github.com/fluxcd/flux2/releases/tag/v0.37.0)-- Adds exception for [aad-pod-identity in flux extension](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-azure-ad-pod-identity-enabled)-- Enables reconciler for flux extension+- Upgrades Flux to [v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.41.2) +- Fixes issue causing resources that were deployed as part of Flux configuration to persist even when the configuration was deleted with prune flag set to `true` +- Kubelet identity support for image-reflector-controller by [installing the microsoft.flux extension in a cluster with kubelet identity enabled](troubleshooting.md#flux-v2installing-the-microsoftflux-extension-in-a-cluster-with-kubelet-identity-enabled) ## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes |
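The `wait`, `postBuild`, `waitForReconciliation`, and `reconciliationWaitDuration` properties described in the Flux extension release notes above can be sketched as a fragment of a `fluxConfigurations` request body. This is an illustrative sketch modeled on the 2023-05-01 API version linked above, not a definitive payload; the repository URL, kustomization name, substitution values, and wait duration are assumptions:

```json
{
  "properties": {
    "sourceKind": "GitRepository",
    "namespace": "cluster-config",
    "gitRepository": {
      "url": "https://github.com/contoso/gitops-repo",
      "repositoryRef": { "branch": "main" }
    },
    "kustomizations": {
      "infra": {
        "path": "./infrastructure",
        "wait": true,
        "postBuild": {
          "substitute": { "cluster_env": "prod" }
        }
      }
    },
    "waitForReconciliation": true,
    "reconciliationWaitDuration": "PT10M"
  }
}
```

With `waitForReconciliation` set to `true` as above, an ARM deployment of this resource would not report success until `complianceState` reaches a terminal state or the wait duration elapses.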
azure-arc | Monitor Gitops Flux 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/monitor-gitops-flux-2.md | Title: Monitor GitOps (Flux v2) status and activity Previously updated : 08/11/2023 Last updated : 08/17/2023 description: Learn how to monitor status, compliance, resource consumption, and reconciliation activity for GitOps with Flux v2. Follow these steps to import dashboards that let you monitor Flux extension depl > [!NOTE] > These steps describe the process for importing the dashboard to [Azure Managed Grafana](/azure/managed-grafana/overview). You can also [import this dashboard to any Grafana instance](https://grafana.com/docs/grafana/latest/dashboards/manage-dashboards/#import-a-dashboard). With this option, a service principal must be used; managed identity is not supported for data connection outside of Azure Managed Grafana. -1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Reader** level permissions. You can check your access by going to **Access control (IAM)** on the Grafana instance. -1. If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it a Reader role on the subscription(s): +1. Create an Azure Managed Grafana instance by using the [Azure portal](/azure/managed-grafana/quickstart-managed-grafana-portal) or [Azure CLI](/azure/managed-grafana/quickstart-managed-grafana-cli). Ensure that you're able to access Grafana by selecting its endpoint on the Overview page. You need at least **Grafana Editor** level permissions to view and edit dashboards. You can check your access by going to **Access control (IAM)** on the Grafana instance. +1. 
If you're using a managed identity for the Azure Managed Grafana instance, follow these steps to assign it the **Monitoring Reader** role on the subscription(s): 1. In the Azure portal, navigate to the subscription that you want to add. 1. Select **Access control (IAM)**. 1. Select **Add role assignment**.- 1. Select the **Reader** role, then select **Next**. + 1. Select the **Monitoring Reader** role, then select **Next**. 1. On the **Members** tab, select **Managed identity**, then choose **Select members**. 1. From the **Managed identity** list, select the subscription where you created your Azure Managed Grafana Instance. Then select **Azure Managed Grafana** and the name of your Azure Managed Grafana instance. 1. Select **Review + Assign**. - If you're using a service principal, grant the **Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.) + If you're using a service principal, grant the **Monitoring Reader** role to the service principal that you'll use for your data source connection. Follow these same steps, but select **User, group, or service principal** in the **Members** tab, then select your service principal. (If you aren't using Azure Managed Grafana, you must use a service principal for data connection access.) 1. [Create the Azure Monitor Data Source connection](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) in your Azure Managed Grafana instance. This connection lets the dashboard access Azure Resource Graph data. 1. Download the [GitOps Flux - Application Deployments Dashboard](https://github.com/Azure/fluxv2-grafana-dashboards/blob/main/dashboards/GitOps%20Flux%20-%20Application%20Deployments%20Dashboard.json). |
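The portal role-assignment steps above also have an Azure CLI equivalent. A minimal sketch, assuming you've already looked up the Grafana managed identity's principal ID (both values below are placeholders):

```azurecli
# Grant the Grafana instance's managed identity read access to monitoring
# data across the subscription. Placeholder IDs; find the principal ID on
# the Grafana instance's Identity page in the portal.
az role assignment create \
  --assignee "<managed-identity-principal-id>" \
  --role "Monitoring Reader" \
  --scope "/subscriptions/<subscription-id>"
```

For a service principal, pass its appId (or object ID) as the `--assignee` value instead.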
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/network-requirements.md | Title: Azure Arc-enabled Kubernetes network requirements description: Learn about the networking requirements to connect Kubernetes clusters to Azure Arc. Previously updated : 03/07/2023 Last updated : 08/15/2023 |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 # |
azure-arc | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/release-notes.md | Title: "What's new with Azure Arc-enabled Kubernetes" Previously updated : 05/23/2023 Last updated : 08/21/2023 description: "Learn about the latest releases of Arc-enabled Kubernetes." Azure Arc-enabled Kubernetes is updated on an ongoing basis. To stay up to date > > We generally recommend using the most recent versions of the agents. The [version support policy](agent-upgrade.md#version-support-policy) covers the most recent version and the two previous versions (N-2). +## July 2023 ++### Arc agents - Version 1.12.5 ++- Alpine base image powering our Arc agent containers has been updated from 3.7.12 to 3.18.0 + ## May 2023 ### Arc agents - Version 1.11.7 |
azure-arc | System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md | Title: "Azure Arc-enabled Kubernetes system requirements" Previously updated : 04/27/2023 Last updated : 08/28/2023 description: Learn about the system requirements to connect Kubernetes clusters to Azure Arc. For a multi-node Kubernetes cluster environment, pods can get scheduled on diffe ## Management tool requirements -Connecting a cluster to Azure Arc requires [Helm 3](https://helm.sh/docs/intro/install), version 3.7.0 or earlier. --You'll also need to use either Azure CLI or Azure PowerShell. +To connect a cluster to Azure Arc, you'll need to use either Azure CLI or Azure PowerShell. For Azure CLI: For Azure PowerShell: Install-Module -Name Az.ConnectedKubernetes ``` +> [!NOTE] +> When you deploy the Azure Arc agents to a cluster, Helm v. 3.6.3 will be installed in the `.azure` folder of the deployment machine. This [Helm 3](https://helm.sh/docs/) installation is only used for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine. + ## Azure AD identity requirements To connect your cluster to Azure Arc, you must have an Azure AD identity (user or service principal) which can be used to log in to [Azure CLI](/cli/azure/authenticate-azure-cli) or [Azure PowerShell](/powershell/azure/authenticate-azureps) and connect your cluster to Azure Arc. |
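Once the management tool and Azure AD identity requirements above are met, connecting a cluster is a two-step flow. A minimal sketch with placeholder names (the resource group is assumed to already exist):

```azurecli
# Sign in with an Azure AD identity (user or service principal) that can
# create resources in the target resource group, then connect the cluster
# in the current kubeconfig context to Azure Arc.
az login
az connectedk8s connect --name <cluster-name> --resource-group <resource-group>
```

This is the step during which the Arc agents are deployed and the bundled Helm installation described in the note above is placed in the `.azure` folder of the deployment machine.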
azure-arc | Tutorial Use Gitops Flux2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md | Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 06/29/2023 Last updated : 08/16/2023 To deploy applications using GitOps with Flux v2, you need: #### For Azure Arc-enabled Kubernetes clusters -* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023). +* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops). [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then ensure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server). False whl k8s-extension C:\Users\somename\.azure\c #### For Azure Arc-enabled Kubernetes clusters -* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#170-march-2023). +* An Azure Arc-enabled Kubernetes connected cluster that's up and running. ARM64-based clusters are supported starting with [`microsoft.flux` version 1.7.0](extensions-release.md#flux-gitops). [Learn how to connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). If you need to connect through an outbound proxy, then ensure you [install the Arc agents with proxy settings](./quickstart-connect-cluster.md?tabs=azure-cli#connect-using-an-outbound-proxy-server). 
spec: When you use this annotation, the deployed HelmRelease is patched with the reference to the configured source. Currently, only `GitRepository` source is supported. +### Helm drift detection ++[Drift detection for Helm releases](https://fluxcd.io/flux/components/helm/helmreleases/#drift-detection ) isn't enabled by default. Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm drift detection by running the following command: ++```azurecli +az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.detectDrift=true +``` ++### Helm OOM watch ++Starting with [`microsoft.flux` v1.7.5](extensions-release.md#flux-gitops), you can enable Helm OOM watch. For more information, see [Enable Helm near OOM detection](https://fluxcd.io/flux/cheatsheets/bootstrap/#enable-helm-near-oom-detection). ++Be sure to review potential [remediation strategies](https://fluxcd.io/flux/components/helm/helmreleases/#configuring-failure-remediation) and apply them as needed when enabling this feature. ++To enable OOM watch, run the following command: ++```azurecli +az k8s-extension update --resource-group <resource-group> --cluster-name <cluster-name> --name flux --cluster-type <cluster-type> --config helm-controller.outOfMemoryWatch.enabled=true helm-controller.outOfMemoryWatch.memoryThreshold=70 helm-controller.outOfMemoryWatch.interval=700ms +``` ++If you don't specify values for `memoryThreshold` and `outOfMemoryWatch`, the default memory threshold is set to 95%, with the interval at which to check the memory utilization set to 500 ms. + ## Delete the Flux configuration and extension Use the following commands to delete your Flux configuration and, if desired, the Flux extension itself. For AKS clusters, you can't use the Azure portal to delete the extension. 
Instead: az k8s-extension delete -g <resource-group> -c <cluster-name> -n flux -t managedClusters --yes ``` ++ ## Next steps * Read more about [configurations and GitOps](conceptual-gitops-flux2.md). |
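After enabling drift detection or OOM watch with the update commands above, you can confirm what the extension is actually configured with. A sketch using the extension show command (`--query` is the standard Azure CLI JMESPath argument; `configurationSettings` is the property the `--config` values above are written to; placeholder names as in the commands above):

```azurecli
# Inspect the flux extension's effective configuration settings,
# e.g. helm-controller.detectDrift or helm-controller.outOfMemoryWatch.*.
az k8s-extension show \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  --cluster-type <cluster-type> \
  --name flux \
  --query "configurationSettings"
```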
azure-arc | Workload Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/workload-management.md | To deploy the sample, run the following script: mkdir kalypso && cd kalypso curl -fsSL -o deploy.sh https://raw.githubusercontent.com/microsoft/kalypso/main/deploy/deploy.sh chmod 700 deploy.sh-./deploy.sh -c -p <prefix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2> +./deploy.sh -c -p <prefix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2> ``` This script may take 10-15 minutes to complete. After it's done, it reports the execution result in the output like this: Created AKS clusters in kalypso-rg resource group: > If something goes wrong with the deployment, you can delete the created resources with the following command: > > ```bash-> ./deploy.sh -d -p <preix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2> +> ./deploy.sh -d -p <preix. e.g. kalypso> -o <GitHub org. e.g. eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2> > ``` ### Sample overview With this file, Application Team requests Kubernetes compute resources from the To register the application, open a terminal and use the following script: ```bash-export org=<github org> +export org=<GitHub org> export prefix=<prefix> # clone the control-plane repo spec: branch: dev secretRef: name: repo-secret- url: https://github.com/<github org>/<prefix>-app-gitops + url: https://github.com/<GitHub org>/<prefix>-app-gitops apiVersion: kustomize.toolkit.fluxcd.io/v1beta2 kind: Kustomization When no longer needed, delete the resources that you created. To do so, run the ```bash # In kalypso folder-./deploy.sh -d -p <preix. e.g. kalypso> -o <github org. e.g. eedorenko> -t <github token> -l <azure-location. e.g. westus2> +./deploy.sh -d -p <preix. e.g. kalypso> -o <GitHub org. e.g. 
eedorenko> -t <GitHub token> -l <azure-location. e.g. westus2> ``` ## Next steps |
azure-arc | Network Requirements Consolidated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md | Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 02/01/2023 Last updated : 08/15/2023 Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernete [!INCLUDE [network-requirements](kubernetes/includes/network-requirements.md)] -For an example, see [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](kubernetes/quickstart-connect-cluster.md). +For more information, see [Azure Arc-enabled Kubernetes network requirements](kubernetes/network-requirements.md). ## Azure Arc-enabled data services Connectivity to Arc-enabled server endpoints is required for: [!INCLUDE [network-requirements](servers/includes/network-requirements.md)] -For examples, see [Connected Machine agent network requirements](servers/network-requirements.md)]. +For more information, see [Connected Machine agent network requirements](servers/network-requirements.md). ## Azure Arc resource bridge (preview) This section describes additional networking requirements specific to deploying [!INCLUDE [network-requirements](resource-bridge/includes/network-requirements.md)] +For more information, see [Azure Arc resource bridge (preview) network requirements](resource-bridge/network-requirements.md). + ## Azure Arc-enabled System Center Virtual Machine Manager (preview) Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires: |
azure-arc | Network Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md | Title: Azure Arc resource bridge (preview) network requirements description: Learn about network requirements for Azure Arc resource bridge (preview) including URLs that must be allowlisted. Previously updated : 01/30/2023 Last updated : 08/24/2023 # Azure Arc resource bridge (preview) network requirements This article describes the networking requirements for deploying Azure Arc resou ## Additional network requirements -In addition, resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud). +In addition, Arc resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud). > [!NOTE] > The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md). ## SSL proxy configuration -If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services. +If using a proxy, Arc resource bridge must be configured for proxy so that it can connect to the Azure services. -- To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. +- To configure the Arc resource bridge with proxy, provide the proxy certificate file path during creation of the configuration files. -- The format of the certificate file is *Base-64 encoded X.509 (.CER)*. +- The format of the certificate file is *Base-64 encoded X.509 (.CER)*. -- Only pass the single proxy certificate. If a certificate bundle is passed then the deployment will fail. +- Only pass the single proxy certificate. 
If a certificate bundle is passed, the deployment will fail. -- The proxy server endpoint can't be a .local domain. +- The proxy server endpoint can't be a `.local` domain. -- The proxy server has to be reachable from all IPs within the IP address prefix, including the control plane and appliance VM IPs. +- The proxy server has to be reachable from all IPs within the IP address prefix, including the control plane and appliance VM IPs. -There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: +There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: - SSL certificate for your SSL proxy (so that the management machine and appliance VM trust your proxy FQDN and can establish an SSL connection to it) - SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted. -In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, this may impact your ability to download the required images (~3.5 GB) within the allotted time (90 min). +In order to deploy Arc resource bridge, images need to be downloaded to the management machine and then uploaded to the on-premises private cloud gallery. If your proxy server throttles download speed, you may not be able to download the required images (~3.5 GB) within the allotted time (90 min). ## Exclusion list for no proxy -The following table contains the list of addresses that must be excluded by using the `-noProxy` parameter in the `createconfig` command. 
+If a proxy server is being used, the following table contains the list of addresses that should be excluded from proxy by configuring the `noProxy` settings. | **IP Address** | **Reason for exclusion** | | -- | | | localhost, 127.0.0.1 | Localhost traffic |-| .svc | Internal Kubernetes service traffic (.svc) where _.svc_ represents a wildcard name. This is similar to saying \*.svc, but none is used in this schema. | +| .svc | Internal Kubernetes service traffic (.svc) where *.svc* represents a wildcard name. This is similar to saying \*.svc, but none is used in this schema. | | 10.0.0.0/8 | private network address space | | 172.16.0.0/12 |Private network address space - Kubernetes Service CIDR | | 192.168.0.0/16 | Private network address space - Kubernetes Pod CIDR | The following table contains the list of addresses that must be excluded by usin The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16`. While these default values will work for many networks, you may need to add more subnet ranges and/or names to the exemption list. For example, you may want to exempt your enterprise namespace (.contoso.com) from being directed through the proxy. You can achieve that by specifying the values in the `noProxy` list. +> [!IMPORTANT] +> When listing multiple addresses for the `noProxy` settings, don't add a space after each comma to separate the addresses. The addresses must immediately follow the commas. + ## Next steps - Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details. - Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).----+- View [troubleshooting tips for networking issues](troubleshoot-resource-bridge.md#networking-issues). |
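The comma-spacing rule in the important note above is easy to get wrong. A minimal sketch showing a correctly formatted `noProxy` value (the `.contoso.com` entry is an illustrative assumption, matching the enterprise-namespace example above), with a quick whitespace sanity check:

```bash
# noProxy entries must immediately follow each comma -- no spaces.
NO_PROXY_VALUE="localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.contoso.com"

# Fail fast if the value accidentally contains whitespace.
case "$NO_PROXY_VALUE" in
  *" "*) echo "invalid: contains spaces" ;;
  *) echo "ok" ;;
esac
```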
azure-arc | Troubleshoot Resource Bridge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md | Arc resource bridge consists of an appliance VM that is deployed to the on-premi To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm). + ## Networking issues ### Back-off pulling image error When trying to set the configuration for Arc resource bridge, you may receive an This occurs when a `.local` path is provided for a configuration setting, such as proxy, DNS, datastore or management endpoint (such as vCenter). Arc resource bridge appliance VM uses Azure Linux OS, which doesn't support `.local` by default. A workaround could be to provide the IP address where applicable. + ### Azure Arc resource bridge is unreachable Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services. When deploying the resource bridge on VMware vCenter, you specify the folder in When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permission. To resolve this issue, make sure that your user account has all of the following privileges in VMware vCenter and then try again. 
-``` ++```python "Datastore.AllocateSpace" "Datastore.Browse" "Datastore.DeleteFile" When deploying the resource bridge on VMware Vcenter, you may get an error sayin "Resource.AssignVMToPool" "Resource.HotMigrate" "Resource.ColdMigrate"+"Sessions.ValidateSession" "StorageViews.View" "System.Anonymous" "System.Read" If you don't see your problem here or you can't resolve your issue, try one of t - Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. - [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).+ |
azure-arc | Agent Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md | Download for [Windows](https://download.microsoft.com/download/0/c/7/0c7a484b-e2 Agent version 1.33 contains a fix for [CVE-2023-38176](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2023-38176), a local elevation of privilege vulnerability. Microsoft recommends upgrading all agents to version 1.33 or later to mitigate this vulnerability. Azure Advisor can help you [identify servers that need to be upgraded](https://portal.azure.com/#view/Microsoft_Azure_Expert/RecommendationListBlade/recommendationTypeId/9d5717d2-4708-4e3f-bdda-93b3e6f1715b/recommendationStatus). Learn more about CVE-2023-38176 in the [Security Update Guide](https://msrc.microsoft.com/update-guide/en-US/vulnerability/CVE-2023-38176). +### Known issue ++[azcmagent check](azcmagent-check.md) validates a new endpoint in this release: `<geography>-ats.his.arc.azure.com`. This endpoint is reserved for future use and not required for the Azure Connected Machine agent to operate successfully. However, if you are using a private endpoint, this endpoint will fail the network connectivity check. You can safely ignore this endpoint in the results and should instead confirm that all other endpoints are reachable. ++This endpoint will be removed from `azcmagent check` in a future release. + ### Fixed - Fixed an issue that could cause a VM extension to disappear in Azure Resource Manager if it's installed with the same settings twice. After upgrading to agent version 1.33 or later, reinstall any missing extensions to restore the information in Azure Resource Manager. |
azure-arc | License Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/license-extended-security-updates.md | + + Title: License provisioning guidelines for Extended Security Updates for Windows Server 2012 +description: Learn about license provisioning guidelines for Extended Security Updates for Windows Server 2012 through Azure Arc. Last updated : 08/18/2023++++# License provisioning guidelines for Extended Security Updates for Windows Server 2012 ++Flexibility is critical when enrolling end-of-support infrastructure in Extended Security Updates (ESUs) through Azure Arc to receive critical patches. To provide flexible options across virtualization and disaster recovery scenarios, you must first provision Windows Server 2012 Arc ESU licenses and then link those licenses to your Azure Arc-enabled servers. Provisioning and linking licenses can be done through the Azure portal, ARM templates, CLI, or Azure Policy. ++When provisioning WS2012 ESU licenses, you need to select between virtual core and physical core licensing, choose between Standard and Datacenter editions, and attest to the number of associated cores (broken down by the number of 2-core and 16-core packs). To assist with this license provisioning process, this article provides general guidance and sample customer scenarios for planning your deployment of WS2012 ESUs through Azure Arc. ++## General guidance: Standard vs. Datacenter, Physical vs. Virtual Cores ++### Physical core licensing ++If you choose to license based on physical cores, the licensing requires a minimum of 16 physical cores per license. Most customers choose to license based on physical cores and select Standard or Datacenter edition to match their original Windows Server licensing. While Standard licensing can be applied to up to two virtual machines (VMs), Datacenter licensing has no limit to the number of VMs it can be applied to. 
Depending on the number of VMs covered, it may make sense to opt for the Datacenter license instead of the Standard license. ++### Virtual core licensing ++If you choose to license based on virtual cores, the licensing requires a minimum of eight virtual cores per virtual machine. There are two main scenarios where this model is advisable: ++1. If the VM is running on a third-party host or hyperscaler like AWS, GCP, or OCI. ++1. If the Windows Server was licensed on a virtualization basis. In most cases, customers elect the Standard edition for virtual core-based licenses. ++An additional scenario (scenario 1, below) is a candidate for VM/Virtual core licensing when the WS2012 VMs are running on a newer Windows Server host (that is, Windows Server 2016 or later). ++> [!IMPORTANT] +> In all cases, customers are required to attest to their conformance with SA or SPLA. There is no exception for these requirements. Software Assurance or an equivalent Server Subscription is required for customers to purchase Extended Security Updates on-premises and in hosted environments. Customers will be able to purchase Extended Security Updates via Enterprise Agreement (EA), Enterprise Subscription Agreement (EAS), a Server & Cloud Enrollment (SCE), and Enrollment for Education Solutions (EES). On Azure, customers do not need Software Assurance to get free Extended Security Updates, but Software Assurance or Server Subscription is required to take advantage of the Azure Hybrid Benefit. +> ++## Scenario-based examples: Compliant and Cost-Effective Licensing ++### Scenario 1: Eight modern 32-core hosts (not Windows Server 2012). 
While each of these hosts is running four 8-core VMs, only one VM on each host is running Windows Server 2012 R2 ++In this scenario, you can use virtual core-based licensing to avoid covering the entire host by provisioning eight Windows Server 2012 Standard licenses for eight virtual cores each and linking each of those licenses to the VMs running Windows Server 2012 R2. Alternatively, you could consider consolidating your Windows Server 2012 R2 VMs into two of the hosts to take advantage of physical core-based licensing options. ++### Scenario 2: A branch office with four VMs, each with 8 cores, on a 32-core Windows Server 2012 Standard host ++In this case, you should provision two WS2012 Standard licenses for 16 physical cores each and apply to the four Arc-enabled servers. Alternatively, you could provision four WS2012 Standard licenses for eight virtual cores each and apply individually to the four Arc-enabled servers. ++### Scenario 3: Eight physical servers in retail stores, each a Standard server with eight cores, and there's no virtualization ++In this scenario, you should apply eight WS2012 Standard licenses for 16 physical cores each and link each license to a physical server. Note that the 16 physical core minimum applies to the provisioned licenses. ++### Scenario 4: Multicloud environment with 12 AWS VMs, each of which has 12 cores and is running Windows Server 2012 R2 Standard ++In this scenario, you should apply 12 Windows Server 2012 Standard licenses with 12 virtual cores each, and link individually to each AWS VM. ++### Scenario 5: Customer has already purchased the traditional Windows Server 2012 ESUs through Volume Licensing ++In this scenario, the Azure Arc-enabled servers that have been enrolled in Extended Security Updates through an activated MAK key show as enrolled in ESUs in the Azure portal. You have the flexibility to switch from this key-based traditional ESU model to WS2012 ESUs enabled by Azure Arc between Year 1 and Year 2. 
++### Scenario 6: Migrating or retiring your Azure Arc-enabled servers enrolled in Windows Server 2012 ESUs ++In this scenario, you can deactivate or decommission the ESU Licenses associated with these servers. If only part of the server estate covered by a license no longer requires ESUs, you can modify the ESU license details to reduce the number of associated cores. ++### Scenario 7: 128-core Windows Server 2012 Datacenter server running between 10 and 15 Windows Server 2012 R2 VMs that get provisioned and deprovisioned regularly + +In this scenario, you should provision a Windows Server 2012 Datacenter license associated with 128 physical cores and link this license to the Arc-enabled Windows Server 2012 R2 VMs running on it. The deletion of the underlying VM also deletes the corresponding Arc-enabled server resource, enabling you to link another Arc-enabled server. ++## Next steps ++* Find out more about [planning for Windows Server and SQL Server end of support](https://www.microsoft.com/en-us/windows-server/extended-security-updates) and [getting Extended Security Updates](/windows-server/get-started/extended-security-updates-deploy). ++* Learn about best practices and design patterns through the [Azure Arc landing zone accelerator for hybrid and multicloud](/azure/cloud-adoption-framework/scenarios/hybrid/arc-enabled-servers/eslz-identity-and-access-management). +* Learn more about [Arc-enabled servers](overview.md) and how they work with Azure through the Azure Connected Machine agent. +* Explore options for [onboarding your machines](plan-at-scale-deployment.md) to Azure Arc-enabled servers. |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | If two agents use the same configuration, you will encounter inconsistent behavi Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures. -* Windows Server 2008 R2 SP1, 2012 R2, 2016, 2019, and 2022 +* Windows Server 2008 R2 SP1, 2012, 2012 R2, 2016, 2019, and 2022 * Both Desktop and Server Core experiences are supported * Azure Editions are supported on Azure Stack HCI * Windows 10, 11 (see [client operating system guidance](#client-operating-system-guidance)) |
azure-arc | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
azure-arc | Administer Arc Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md | + + Title: Perform ongoing administration for Arc-enabled VMware vSphere +description: Learn how to perform administrator operations related to Azure Arc-enabled VMware vSphere + Last updated : 08/18/2023+++# Perform ongoing administration for Arc-enabled VMware vSphere ++In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview): ++- Upgrading the Azure Arc resource bridge (preview) +- Updating the credentials +- Collecting logs from the Arc resource bridge ++Each of these operations requires either the SSH key to the resource bridge VM or the kubeconfig that provides access to the Kubernetes cluster on the resource bridge VM. ++## Upgrading the Arc resource bridge ++Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates. ++> [!NOTE] +> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail. ++To upgrade to the latest version of the resource bridge, perform the following steps: ++1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources ++2. Find and delete the old Arc resource bridge **template** from your vCenter ++3. 
Download the script from the portal and update the following section in the script ++ ```powershell + $location = <Azure region of the resources> + + $applianceSubscriptionId = <subscription-id> + $applianceResourceGroupName = <resourcegroup-name> + $applianceName = <resource-bridge-name> + + $customLocationSubscriptionId = <subscription-id> + $customLocationResourceGroupName = <resourcegroup-name> + $customLocationName = <custom-location-name> + + $vCenterSubscriptionId = <subscription-id> + $vCenterResourceGroupName = <resourcegroup-name> + $vCenterName = <vcenter-name-in-azure> + ``` ++4. [Run the onboarding script](quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter ++ ``` powershell-interactive + ./resource-bridge-onboarding-script.ps1 --force + ``` ++5. [Provide the inputs](quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted. ++6. Once the onboarding is successfully completed, the resource bridge is upgraded to the latest version. ++## Updating the vSphere account credentials (using a new password or a new vSphere account after onboarding) ++Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provided during the onboarding to communicate with your vCenter server. These credentials are only persisted locally on the Arc resource bridge VM. ++As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services. You can also use the same steps in case you need to use a different vSphere account after onboarding. You must ensure the new account also has all the [required vSphere permissions](support-matrix-for-arc-enabled-vmware-vsphere.md#required-vsphere-account-privileges). ++There are two different sets of credentials stored on the Arc resource bridge. 
You can use the same account credentials for both. ++- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade. +- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere. ++To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands. Run the commands from a workstation that can access the cluster configuration IP address of the Arc resource bridge locally: ++```azurecli +az account set -s <subscription id> +az arcappliance get-credentials -n <name of the appliance> -g <resource group name> +az arcappliance update-infracredentials vmware --kubeconfig kubeconfig +``` +For more details on the commands, see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). +++To update the credentials used by the VMware cluster extension on the resource bridge, run the following command. It can be run from anywhere with the `connectedvmware` CLI extension installed. ++```azurecli +az connectedvmware vcenter connect --custom-location <name of the custom location> --location <Azure region> --name <name of the vCenter resource in Azure> --resource-group <resource group for the vCenter resource> --username <username for the vSphere account> --password <password to the vSphere account> +``` ++## Collecting logs from the Arc resource bridge ++For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`az arcappliance logs`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command. ++To save the logs to a destination folder, run the following commands. 
These commands need connectivity to the cluster configuration IP address. ++```azurecli +az account set -s <subscription id> +az arcappliance get-credentials -n <name of the appliance> -g <resource group name> +az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory> +``` ++If the Kubernetes cluster on the resource bridge isn't in a functional state, you can use the following commands. These commands require connectivity to the IP address of the Azure Arc resource bridge VM via SSH. ++```azurecli +az account set -s <subscription id> +az arcappliance get-credentials -n <name of the appliance> -g <resource group name> +az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX +``` ++## Next steps ++- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md) +- [Understand disaster recovery operations for resource bridge](recover-from-resource-bridge-deletion.md) |
azure-arc | Browse And Enable Vcenter Resources In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/browse-and-enable-vcenter-resources-in-azure.md | Title: Enable your VMware vCenter resources in Azure description: Learn how to browse your vCenter inventory and represent a subset of your VMware vCenter resources in Azure to enable self-service. Previously updated : 09/28/2021 Last updated : 08/18/2023 # Customer intent: As a VI admin, I want to represent a subset of my vCenter resources in Azure to enable self-service. In this section, you will enable resource pools, networks, and other non-VM reso 1. (Optional) Select **Install guest agent** and then provide the Administrator username and password of the guest operating system. - The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc-enabled VMware vSphere](manage-vmware-vms-in-azure.md). + The guest agent is the [Azure Arc connected machine agent](../servers/agent-overview.md). You can install this agent later by selecting the VM in the VM inventory view on your vCenter and selecting **Enable guest management**. For information on the prerequisites of enabling guest management, see [Manage VMware VMs through Arc-enabled VMware vSphere](perform-vm-ops-through-azure.md). 1. Select **Enable** to start the deployment of the VM represented in Azure. -For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](manage-access-to-arc-vmware-resources.md). +For information on the capabilities enabled by a guest agent, see [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). 
## Next steps -- [Manage access to VMware resources through Azure RBAC](manage-access-to-arc-vmware-resources.md).+- [Manage access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). |
azure-arc | Day2 Operations Resource Bridge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/day2-operations-resource-bridge.md | - Title: Perform ongoing administration for Arc-enabled VMware vSphere -description: Learn how to perform day 2 administrator operations related to Azure Arc-enabled VMware vSphere - Previously updated : 09/15/2022----# Perform ongoing administration for Arc-enabled VMware vSphere --In this article, you'll learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview): --- Upgrading the Azure Arc resource bridge (preview)-- Updating the credentials-- Collecting logs from the Arc resource bridge--Each of these operations requires either SSH key to the resource bridge VM or the kubeconfig that provides access to the Kubernetes cluster on the resource bridge VM. --## Upgrading the Arc resource bridge --Azure Arc-enabled VMware vSphere requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge will be released to include security and feature updates. --> [!NOTE] -> To upgrade the Arc resource bridge VM to the latest version, you will need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations performed through Arc during this time might fail. --To upgrade to the latest version of the resource bridge, perform the following steps: --1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location and vCenter Azure resources --2. Find and delete the old Arc resource bridge **template** from your vCenter --3. 
Download the script from the portal and update the following section in the script -- ```powershell - $location = <Azure region of the resources> - - $applianceSubscriptionId = <subscription-id> - $applianceResourceGroupName = <resourcegroup-name> - $applianceName = <resource-bridge-name> - - $customLocationSubscriptionId = <subscription-id> - $customLocationResourceGroupName = <resourcegroup-name> - $customLocationName = <custom-location-name> - - $vCenterSubscriptionId = <subscription-id> - $vCenterResourceGroupName = <resourcegroup-name> - $vCenterName = <vcenter-name-in-azure> - ``` --4. [Run the onboarding script](quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter -- ``` powershell-interactive - ./resource-bridge-onboarding-script.ps1 --force - ``` --5. [Provide the inputs](quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted. --6. Once the onboarding is successfully completed, the resource bridge is upgraded to the latest version. --## Updating the vSphere account credentials (using a new password or a new vSphere account after onboarding) --Azure Arc-enabled VMware vSphere uses the vSphere account credentials you provided during the onboarding to communicate with your vCenter server. These credentials are only persisted locally on the Arc resource bridge VM. --As part of your security practices, you might need to rotate credentials for your vCenter accounts. As credentials are rotated, you must also update the credentials provided to Azure Arc to ensure the functioning of Azure Arc-enabled VMware services. You can also use the same steps in case you need to use a different vSphere account after onboarding. You must ensure the new account also has all the [required vSphere permissions](support-matrix-for-arc-enabled-vmware-vsphere.md#required-vsphere-account-privileges). --There are two different sets of credentials stored on the Arc resource bridge. 
You can use the same account credentials for both. --- **Account for Arc resource bridge**. This account is used for deploying the Arc resource bridge VM and will be used for upgrade.-- **Account for VMware cluster extension**. This account is used to discover inventory and perform all VM operations through Azure Arc-enabled VMware vSphere--To update the credentials of the account for Arc resource bridge, run the following Azure CLI commands . Run the commands from a workstation that can access cluster configuration IP address of the Arc resource bridge locally: --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance update-infracredentials vmware --kubeconfig kubeconfig -``` -For more details on the commands see [`az arcappliance get-credentials`](/cli/azure/arcappliance#az-arcappliance-get-credentials) and [`az arcappliance update-infracredentials vmware`](/cli/azure/arcappliance/update-infracredentials#az-arcappliance-update-infracredentials-vmware). ---To update the credentials used by the VMware cluster extension on the resource bridge. This command can be run from anywhere with `connectedvmware` CLI extension installed. --```azurecli -az connectedvmware vcenter connect --custom-location <name of the custom location> --location <Azure region> --name <name of the vCenter resource in Azure> --resource-group <resource group for the vCenter resource> --username <username for the vSphere account> --password <password to the vSphere account> -``` --## Collecting logs from the Arc resource bridge --For any issues encountered with the Azure Arc resource bridge, you can collect logs for further investigation. To collect the logs, use the Azure CLI [`Az arcappliance log`](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware) command. --To save the logs to a destination folder, run the following commands. 
These commands need connectivity to cluster configuration IP address. --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance logs vmware --kubeconfig kubeconfig --out-dir <path to specified output directory> -``` --If the Kubernetes cluster on the resource bridge isn't in functional state, you can use the following commands. These commands require connectivity to IP address of the Azure Arc resource bridge VM via SSH --```azurecli -az account set -s <subscription id> -az arcappliance get-credentials -n <name of the appliance> -g <resource group name> -az arcappliance logs vmware --out-dir <path to specified output directory> --ip XXX.XXX.XXX.XXX -``` --## Next steps --- [Troubleshoot common issues related to resource bridge](../resource-bridge/troubleshoot-resource-bridge.md)-- [Understand disaster recovery operations for resource bridge](disaster-recovery.md) |
azure-arc | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/disaster-recovery.md | - Title: Perform disaster recovery operations -description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios. -- Previously updated : 08/16/2022---# Recover from accidental deletion of resource bridge VM --In this article, you'll learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. --## Recovering the Arc resource bridge in case of VM deletion --To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps. --1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources. --2. Find and delete the old Arc resource bridge template from your vCenter. --3. Download the [onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure. -- ```powershell - $location = <Azure region of the resources> - $applianceSubscriptionId = <subscription-id> - $applianceResourceGroupName = <resource-group-name> - $applianceName = <resource-bridge-name> - - $customLocationSubscriptionId = <subscription-id> - $customLocationResourceGroupName = <resource-group-name> - $customLocationName = <custom-location-name> - - $vCenterSubscriptionId = <subscription-id> - $vCenterResourceGroupName = <resource-group-name> - $vCenterName = <vcenter-name-in-azure> - ``` --4. 
[Run the onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter. -- ``` powershell-interactive - ./resource-bridge-onboarding-script.ps1 --force - ``` --5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted. --6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again. --## Next steps --[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md) --If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support: --- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).-- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.-- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md | + + Title: Install Arc agents at scale for your VMware VMs +description: Learn how to enable guest management at scale for Arc-enabled VMware vSphere VMs. + Last updated : 08/21/2023++#Customer intent: As an IT infra admin, I want to install Arc agents to use Azure management services for VMware VMs. +++# Install Arc agents at scale for your VMware VMs ++In this article, you will learn how to install Arc agents at scale for VMware VMs and use Azure management capabilities. ++## Prerequisites ++Ensure the following before you install Arc agents at scale for VMware VMs: ++- The resource bridge must be in a running state. +- The vCenter must be in a connected state. +- The user account must have the permissions listed in the Azure Arc VMware Administrator role. +- All the target machines are: + - Powered on and the resource bridge has network connectivity to the host running the VM. + - Running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). + - Able to connect through the firewall to communicate over the internet, and [these URLs](../servers/network-requirements.md#urls) aren't blocked. ++ > [!NOTE] + > If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo`, and add `<username> ALL=(ALL) NOPASSWD:ALL` at the end of the file. Ensure you replace `<username>`. <br> <br>If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. ++## Install Arc agents at scale from the portal ++An admin can install agents for multiple machines from the Azure portal if the machines share the same administrator credentials. ++1. Navigate to **Azure Arc center** and select **vCenter resource**. ++2. 
Select all the machines and choose the **Enable in Azure** option. ++3. Select the **Enable guest management** checkbox to install Arc agents on the selected machines. ++4. If you want to connect the Arc agent via a proxy, provide the proxy server details. ++5. Provide the administrator username and password for the machines. ++> [!NOTE] +> For Windows VMs, the account must be part of the local administrators group, and for Linux VMs, it must be the root account. +++## Next steps ++[Set up and manage self-service access to VMware resources through Azure RBAC](setup-and-manage-self-service-access.md). |
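For the Linux prerequisite noted above, the passwordless sudo entry can also be kept in a drop-in file under `/etc/sudoers.d/` instead of appending to the main sudoers file. The following is an illustrative configuration fragment, not an official step; the `arcadmin` account name and the drop-in file name are placeholders, and validating with `visudo -c` before relying on the entry is still recommended.

```shell
# Illustrative sudoers drop-in ("arcadmin" and the file name are placeholders).
# Writing to /etc/sudoers.d/ keeps /etc/sudoers itself untouched.
echo 'arcadmin ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/91-arcadmin

# Syntax-check the drop-in before depending on it for unattended installs.
sudo visudo -c -f /etc/sudoers.d/91-arcadmin
```

As the note says, baking this change into the VM template avoids repeating it for every VM created from that template.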
azure-arc | Manage Access To Arc Vmware Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-access-to-arc-vmware-resources.md | - Title: Manage access to VMware resources through Azure Role-Based Access Control -description: Learn how to manage access to your on-premises VMware resources through Azure Role-Based Access Control (RBAC). - Previously updated : 11/08/2021--#Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure ---# Manage access to VMware resources through Azure Role-Based Access Control --Once your VMware vCenter resources have been enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure and allow your teams to deploy and manage VMs. --## Arc-enabled VMware vSphere built-in roles --There are three built-in roles to meet your access control requirements. You can apply these roles to a whole subscription, resource group, or a single resource. --- **Azure Arc VMware Administrator** role - used by administrators--- **Azure Arc VMware Private Cloud User** role - used by anyone who needs to deploy and manage VMs--- **Azure Arc VMware VM Contributor** role - used by anyone who needs to deploy and manage VMs--### Azure Arc VMware Administrator role --The **Azure Arc VMware Administrator** role is a built-in role that provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing Azure Arc-enabled VMware vSphere deployment. --### Azure Arc VMware Private Cloud User role --The **Azure Arc VMware Private Cloud User** role is a built-in role that provides permissions to use the VMware vSphere resources made accessible through Azure. 
Assign this role to any users or groups that need to deploy, update, or delete VMs. --We recommend assigning this role at the individual resource pool (or host or cluster), virtual network, or template with which you want the user to deploy VMs. --### Azure Arc VMware VM Contributor --The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations. Assign this role to any users or groups that need to deploy, update, or delete VMs. --We recommend assigning this role for the subscription or resource group to which you want the user to deploy VMs. --## Assigning the roles to users/groups --1. Go to the [Azure portal](https://portal.azure.com). --2. Search and navigate to the subscription, resource group, or the resource at which scope you want to provide this role. --3. To find the Arc-enabled VMware vSphere resources like resource pools, clusters, hosts, datastores, networks, or virtual machine templates: - 1. navigate to the resource group and select the **Show hidden types** checkbox. - 2. search for *"VMware"*. --4. Click on **Access control (IAM)** in the table of contents on the left. --5. Click on **Add role assignments** on the **Grant access to this resource**. --6. Select the custom role you want to assign (one of **Azure Arc VMware Administrator**, **Azure Arc VMware Private Cloud User**, or **Azure Arc VMware VM Contributor**). --7. Search for the Azure Active Directory (Azure AD) user or group to which you want to assign this role. --8. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission. --9. Repeat the above steps for each scope and role. --## Next steps --- [Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). |
azure-arc | Manage Vmware Vms In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/manage-vmware-vms-in-azure.md | - Title: Manage VMware virtual machines Azure Arc -description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent. - Previously updated : 11/10/2021----# Manage VMware VMs in Azure through Arc-enabled VMware vSphere --In this article, you will learn how to perform various operations on the Azure Arc-enabled VMware vSphere (preview) VMs such as: --- Start, stop, and restart a VM--- Control access and add Azure tags--- Add, remove, and update network interfaces--- Add, remove, and update disks and update VM size (CPU cores, memory)--- Enable guest management--- Install extensions (enabling guest management is required)---To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM. --## Supported extensions and management services --### Windows extensions --|Extension |Publisher |Type | -|-|-|--| -|Custom Script extension |Microsoft.Compute | CustomScriptExtension | -|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent | -|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute | HybridWorkerForWindows| ---### Linux extensions --|Extension |Publisher |Type | -|-|-|--| -|Custom Script extension |Microsoft.Azure.Extensions |CustomScript | -|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux | -|Azure Automation Hybrid Runbook Worker extension (preview) | Microsoft.Compute | HybridWorkerForLinux| --## Enable guest management --Before you can install an extension, you must enable guest management on the VMware VM. --1. 
Make sure your target machine: -- - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). -- - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) are not blocked. -- - has VMware tools installed and running. -- - is powered on and the resource bridge has network connectivity to the host running the VM. -- >[!NOTE] - >If you're using a Linux VM, the account must not prompt for login on sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` to the end of the file. Make sure to replace `<username>`. - > - >If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Search for and select the VMware VM and select **Configuration**. --3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**. -- For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group. --## Install the LogAnalytics extension --1. From your browser, go to the [Azure portal](https://portal.azure.com). --1. Search for and select the VMware VM that you want to install extension. --1. Navigate to **Extensions** and select **Add**. --1. Select the extension you want to install. Based on the extension, you'll need to provide configuration details, such as the workspace ID and primary key for Log Analytics extension. Then select **Review + create**. --The deployment starts the installation of the extension on the selected VM. --## Delete a VM --If you no longer need the VM, you can delete it. --1. From your browser, go to the [Azure portal](https://portal.azure.com). --2. Search for and select the VM you want to delete. --3. 
In the single VM view, select on **Delete**. --4. When prompted, confirm that you want to delete it. -->[!NOTE] ->This also deletes the VM in your VMware vCenter. --## Next steps --[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md) |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md | Title: What is Azure Arc-enabled VMware vSphere (preview)? description: Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 09/15/2022 Last updated : 08/21/2023 # What is Azure Arc-enabled VMware vSphere (preview)? -Azure Arc-enabled VMware vSphere (preview) extends Azure governance and management capabilities to VMware vSphere infrastructure. With Azure Arc-enabled VMware vSphere, you get a consistent management experience across Azure and VMware vSphere infrastructure. +Azure Arc-enabled VMware vSphere (preview) is an [Azure Arc](../overview.md) service that helps you simplify management of hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure. Arc-enabled VMware vSphere (preview) allows you to: -- Perform various VMware virtual machine (VM) lifecycle operations directly from Azure, such as create, start/stop, resize, and delete.+- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Arc at scale. ++- Perform various virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle operations such as start/stop/restart on VMware VMs consistently with Azure. - Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control](../../role-based-access-control/overview.md) (RBAC). 
-- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments. You can also discover and onboard existing VMware VMs to Azure.+- Install the Arc-connected machine agent at scale on VMware VMs to [govern, protect, configure, and monitor](../servers/overview.md#supported-cloud-operations) them. ++- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments. ++## Onboard resources to Azure management at scale ++Azure services such as Microsoft Defender for Cloud, Azure Monitor, Azure Update Management Center, and Azure Policy provide a rich set of capabilities to secure, monitor, patch, and govern off-Azure resources via Arc. ++By using Arc-enabled VMware vSphere's capabilities to discover your VMware estate and install the Arc agent at scale, you can simplify onboarding your entire VMware vSphere estate to these services. ++## Set up self-service access for your teams to use vSphere resources using Azure Arc ++Arc-enabled VMware vSphere extends Azure's control plane (Azure Resource Manager) to VMware vSphere infrastructure. This enables you to use Azure AD-based identity management, granular Azure RBAC, and ARM templates to help your app teams and developers get self-service access to provision and manage VMs on VMware vSphere environment, providing greater agility. ++1. Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure. ++2. Administrators can then use the Azure portal to browse VMware vSphere inventory and register virtual machines resource pools, networks, and templates into Azure. -- Conduct governance and monitoring operations across Azure and VMware VMs by enabling guest management (installing the [Azure Arc-enabled servers Connected Machine agent](../servers/agent-overview.md)).+3. 
Administrators can provide app teams/developers fine-grained permissions on those VMware resources through Azure RBAC. ++4. App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart). ++5. App teams can use ARM templates/Bicep (Infrastructure as Code) to deploy VMs as part of CI/CD pipelines. ## How does it work? -To deliver this experience, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview), which is a virtual appliance, in your vSphere environment. It connects your vCenter Server to Azure. Azure Arc resource bridge (preview) enables you to represent the VMware resources in Azure and do various operations on them. +Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure. -## Supported VMware vSphere versions +When a VMware vCenter Server is connected to Azure, an automatic discovery of the inventory of vSphere resources is performed. This inventory data is continuously kept in sync with the vCenter Server. -Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 6.7, 7 and 8. +All guest OS-based capabilities are provided by enabling guest management (installing the Arc agent) on the VMs. Once guest management is enabled, VM extensions can be installed to use the Azure management capabilities. You can perform virtual hardware operations such as resizing, deleting, adding disks, and power cycling without guest management enabled. -> [!NOTE] -> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. 
If your vCenter has more than 9500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point. +## How is Arc-enabled VMware vSphere different from Arc-enabled Servers? -## Supported scenarios +The easiest way to think of this is as follows: -The following scenarios are supported in Azure Arc-enabled VMware vSphere (preview): +- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there may, in fact, not even be a host hypervisor in some cases. -- Virtualized Infrastructure Administrators/Cloud Administrators can connect a vCenter instance to Azure and browse the VMware virtual machine inventory in Azure.+- Azure Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like those of a regular Azure VM. Azure Arc-enabled VMware vSphere also provides guest operating system management; in fact, it uses the same components as Azure Arc-enabled servers. -- Administrators can use the Azure portal to browse VMware vSphere inventory and register virtual machines resource pools, networks, and templates into Azure. They can also enable guest management on many registered virtual machines at once.+You have the flexibility to start with either option, and incorporate the other one later without any disruption. With both options, you get the same consistent experience. -- Administrators can provide app teams/developers fine-grained permissions on those VMware resources through Azure RBAC.
-- App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).+## Supported VMware vSphere versions -- App teams and administrators can install extensions such as the Log Analytics agent, Custom Script Extension, Dependency Agent, and Azure Automation Hybrid Runbook Worker extension on the virtual machines and do operations supported by the extensions.+Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 6.7, 7, and 8. +> [!NOTE] +> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend using Arc-enabled VMware vSphere with it at this point. ## Supported regions You can use Azure Arc-enabled VMware vSphere (preview) in these supported regions:- - Australia East - Canada Central - East US+- East US 2 +- North Europe - Southeast Asia - UK South - West Europe+- West US 2 +- West US 3 ++For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see the [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page. -For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page ## Data Residency -Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the region the customer deploys the service instance in. +Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the region the customer deploys the service instance in. 
## Next steps -- [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).-- View the [support matrix for Arc-enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md).+- Plan your resource bridge deployment by reviewing the [support matrix for Arc-enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md). +- Once ready, [connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). - Try out Arc-enabled VMware vSphere by using the [Azure Arc Jumpstart](https://azurearcjumpstart.io/azure_arc_jumpstart/azure_arc_vsphere/). |
azure-arc | Perform Vm Ops Through Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md | + + Title: Perform VM operations on VMware VMs through Azure +description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent. + Last updated : 08/18/2023+++# Manage VMware VMs in Azure through Arc-enabled VMware vSphere ++In this article, you will learn how to perform various operations on the Azure Arc-enabled VMware vSphere (preview) VMs such as: ++- Start, stop, and restart a VM ++- Control access and add Azure tags ++- Add, remove, and update network interfaces ++- Add, remove, and update disks and update VM size (CPU cores, memory) ++- Enable guest management ++- Install extensions (enabling guest management is required) +++To perform guest OS operations on Arc-enabled VMs, you must enable guest management on the VMs. When you enable guest management, the Arc Connected Machine Agent is installed on the VM. ++## Supported extensions and management services ++### Windows extensions ++|Extension |Publisher |Type | +|-|-|--| +|Custom Script extension |Microsoft.Compute | CustomScriptExtension | +|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |MicrosoftMonitoringAgent | +|Azure Automation Hybrid Runbook Worker extension (preview) |Microsoft.Compute | HybridWorkerForWindows| +++### Linux extensions ++|Extension |Publisher |Type | +|-|-|--| +|Custom Script extension |Microsoft.Azure.Extensions |CustomScript | +|Log Analytics agent |Microsoft.EnterpriseCloud.Monitoring |OmsAgentForLinux | +|Azure Automation Hybrid Runbook Worker extension (preview) | Microsoft.Compute | HybridWorkerForLinux| ++## Enable guest management ++Before you can install an extension, you must enable guest management on the VMware VM. ++1. 
Make sure your target machine: ++ - is running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). ++ - is able to connect through the firewall to communicate over the internet and these [URLs](../servers/network-requirements.md#urls) are not blocked. ++ - has VMware Tools installed and running. ++ - is powered on and the resource bridge has network connectivity to the host running the VM. ++ >[!NOTE] + >If you're using a Linux VM, the account must not be prompted for a password when running sudo commands. To override the prompt, from a terminal, run `sudo visudo` and add `<username> ALL=(ALL) NOPASSWD:ALL` to the end of the file. Make sure to replace `<username>`. + > + >If your VM template has these changes incorporated, you won't need to do this for the VM created from that template. ++1. From your browser, go to the [Azure portal](https://portal.azure.com). ++2. Search for and select the VMware VM and select **Configuration**. ++3. Select **Enable guest management** and provide the administrator username and password to enable guest management. Then select **Apply**. ++ For Linux, use the root account, and for Windows, use an account that is a member of the Local Administrators group. ++## Install the Log Analytics extension ++1. From your browser, go to the [Azure portal](https://portal.azure.com). ++1. Search for and select the VMware VM on which you want to install the extension. ++1. Navigate to **Extensions** and select **Add**. ++1. Select the extension you want to install. Based on the extension, you'll need to provide configuration details, such as the workspace ID and primary key for the Log Analytics extension. Then select **Review + create**. ++The deployment starts the installation of the extension on the selected VM. ++## Delete a VM ++If you no longer need the VM, you can delete it. ++1. From your browser, go to the [Azure portal](https://portal.azure.com). ++2. Search for and select the VM you want to delete. ++3.
In the single VM view, select **Delete**. ++4. When prompted, confirm that you want to delete it. ++>[!NOTE] +>This also deletes the VM in your VMware vCenter. ++## Next steps ++[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md) |
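The sudoers change described in the guest-management prerequisites above (the `sudo visudo` edit) can also be scripted when you prepare a VM template. This is only a sketch: `arcadmin` is a placeholder username, and it assumes your distribution's `/etc/sudoers` includes the conventional `#includedir /etc/sudoers.d` directive.

```shell
# Placeholder username (assumption): replace with the account used to
# enable guest management.
ARC_USER="arcadmin"
SUDOERS_LINE="${ARC_USER} ALL=(ALL) NOPASSWD:ALL"

# A drop-in file under /etc/sudoers.d is equivalent to appending the line
# via 'sudo visudo', assuming the default '#includedir /etc/sudoers.d'.
if [ "$(id -u)" -eq 0 ] && command -v visudo >/dev/null 2>&1; then
  printf '%s\n' "$SUDOERS_LINE" > "/etc/sudoers.d/90-${ARC_USER}"
  chmod 0440 "/etc/sudoers.d/90-${ARC_USER}"
  # Validate the syntax before relying on it.
  visudo -cf "/etc/sudoers.d/90-${ARC_USER}"
else
  echo "Run as root on the target VM; would install: ${SUDOERS_LINE}"
fi
```

Baking this into the VM template means, as the note above says, that VMs created from the template don't need the change applied individually.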
azure-arc | Quick Start Create A Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md | Title: Create a virtual machine on VMware vCenter using Azure Arc description: In this quickstart, you'll learn how to create a virtual machine on VMware vCenter using Azure Arc Previously updated : 09/29/2021 Last updated : 08/18/2023 # Customer intent: As a self-service user, I want to provision a VM using vCenter resources through Azure so that I can deploy my code Once your administrator has connected a VMware vCenter to Azure, represented VMw ## Next steps -- [Perform operations on VMware VMs in Azure](manage-vmware-vms-in-azure.md)+- [Perform operations on VMware VMs in Azure](perform-vm-ops-through-azure.md) |
azure-arc | Recover From Resource Bridge Deletion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md | + + Title: Perform disaster recovery operations +description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios. ++ Last updated : 08/18/2023+++# Recover from accidental deletion of resource bridge VM ++In this article, you'll learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail. ++## Recovering the Arc resource bridge in case of VM deletion ++To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps. ++1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and vCenter Azure resources. ++2. Find and delete the old Arc resource bridge template from your vCenter. ++3. Download the [onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure. ++ ```powershell + $location = <Azure region of the resources> + $applianceSubscriptionId = <subscription-id> + $applianceResourceGroupName = <resource-group-name> + $applianceName = <resource-bridge-name> + + $customLocationSubscriptionId = <subscription-id> + $customLocationResourceGroupName = <resource-group-name> + $customLocationName = <custom-location-name> + + $vCenterSubscriptionId = <subscription-id> + $vCenterResourceGroupName = <resource-group-name> + $vCenterName = <vcenter-name-in-azure> + ``` ++4. 
[Run the onboarding script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#run-the-script) again with the `--force` parameter. ++ ``` powershell-interactive + ./resource-bridge-onboarding-script.ps1 --force + ``` ++5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted. ++6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again. ++## Next steps ++[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md) ++If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support: ++- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html). +- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. +- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
azure-arc | Setup And Manage Self Service Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/setup-and-manage-self-service-access.md | + + Title: Set up and manage self-service access to VMware resources through Azure RBAC +description: Learn how to manage access to your on-premises VMware resources through Azure Role-Based Access Control (RBAC). + Last updated : 08/21/2023+# Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure +++# Set up and manage self-service access to VMware resources ++Once your VMware vSphere resources are enabled in Azure, the final step in setting up a self-service experience for your teams is to provide them with access. This article describes how to use built-in roles to manage granular access to VMware resources through Azure Role-based Access Control (RBAC) and allow your teams to deploy and manage VMs. ++## Prerequisites ++- Your vCenter must be connected to Azure Arc. +- Your vCenter resources such as Resourcepools/clusters/hosts, networks, templates, and datastores must be Arc-enabled. +- You must have the User Access Administrator or Owner role at the scope (resource group/subscription) to assign roles to other users. +++## Provide access to use Arc-enabled vSphere resources ++To provision VMware VMs and change their size, add disks, change network interfaces, or delete them, your users need to have permissions on the compute, network, storage, and VM template resources that they will use. These permissions are provided by the built-in **Azure Arc VMware Private Cloud User** role. ++You must assign this role on each individual resource pool (or cluster or host), network, datastore, and template that a user or group needs to access. ++1. Go to the [**VMware vCenters (preview)** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter). ++2.
Search and select your vCenter. ++3. Navigate to **Resourcepools/clusters/hosts** in the **vCenter inventory** section in the table of contents. ++4. Find and select the resourcepool (or cluster or host). This takes you to the Arc resource representing the resourcepool. ++5. Select **Access control (IAM)** in the table of contents. ++6. Select **Add role assignments** under **Grant access to this resource**. ++7. Select the **Azure Arc VMware Private Cloud User** role and select **Next**. ++8. Select **Select members** and search for the Azure Active Directory (Azure AD) user or group to which you want to provide access. ++9. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission. ++10. Select **Review + assign** to complete the role assignment. ++11. Repeat steps 4-10 for each datastore, network, and VM template that you want to provide access to. ++If you have organized your vSphere resources into a resource group, you can provide the same role at the resource group scope. ++Your users now have access to VMware vSphere cloud resources. However, your users will also need to have permissions on the subscription/resource group where they would like to deploy and manage VMs. ++## Provide access to subscription or resource group where VMs will be deployed ++In addition to having access to VMware vSphere resources through the **Azure Arc VMware Private Cloud User** role, your users must have permissions on the subscription and resource group where they deploy and manage VMs. ++The **Azure Arc VMware VM Contributor** role is a built-in role that provides permissions to conduct all VMware virtual machine operations. ++1. Go to the [Azure portal](https://portal.azure.com/). ++2. Search and navigate to the subscription or resource group to which you want to provide access. ++3. Select **Access control (IAM)** in the table of contents on the left. ++4. Select **Add role assignments** under **Grant access to this resource**. 
++5. Select the **Azure Arc VMware VM Contributor** role and select **Next**. ++6. Select **Select members**, and search for the Azure Active Directory (Azure AD) user or group to which you want to provide access. ++7. Select the Azure AD user or group name. Repeat this for each user or group to which you want to grant this permission. ++8. Select **Review + assign** to complete the role assignment. +++## Next steps ++[Create a VM using Azure Arc-enabled vSphere](quick-start-create-a-vm.md). |
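The portal role assignments described above can also be done with the Azure CLI. A sketch only, assuming an authenticated `az` session; the subscription ID, resource group, and assignee below are placeholder values, not names from the article:

```shell
# Placeholder values (assumptions): replace with your own.
SUB="00000000-0000-0000-0000-000000000000"
RG="contoso-vmware-rg"
SCOPE="/subscriptions/${SUB}/resourceGroups/${RG}"

# 'az role assignment create' performs the same assignment as the portal
# steps above, here at resource-group scope. Guarded so the snippet is a
# no-op where the Azure CLI is not installed or not signed in.
if command -v az >/dev/null 2>&1; then
  az role assignment create \
    --assignee "appteam@contoso.com" \
    --role "Azure Arc VMware VM Contributor" \
    --scope "$SCOPE"
else
  echo "Azure CLI not found; run this from an authenticated az session."
fi
```

Assigning at resource-group scope, as the article notes, avoids repeating the assignment per individual resource.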
azure-arc | Support Matrix For Arc Enabled Vmware Vsphere | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md | Title: Support matrix for Azure Arc-enabled VMware vSphere (preview) + Title: Plan for deployment description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more. Previously updated : 10/21/2022- Last updated : 08/18/2023 # Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere. |
azure-arc | Switch To New Preview Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md | + + Title: Switch to the new preview version +description: Learn to switch to the new preview version and use its capabilities + Last updated : 08/22/2023+# Customer intent: As a VI admin, I want to switch to the new preview version of Arc-enabled VMware vSphere (preview) and leverage the associated capabilities +++# Switch to the new preview version ++On August 21, 2023, we rolled out major changes to the Azure Arc-enabled VMware vSphere preview. We are now announcing a new preview. By switching to the new preview version, you can use all the Azure management services that are available for Arc-enabled Servers. ++> [!NOTE] +> If you're new to Arc-enabled VMware vSphere (preview), you will be able to leverage the new capabilities by default. To get started with the preview, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md). +++## Switch to the new preview version (Existing preview customer) ++If you're an existing **Azure Arc-enabled VMware vSphere** customer, follow these steps for VMs that are already enabled in Azure to switch to the new preview version: ++>[!Note] +>If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc). ++1. From your browser, go to the vCenters blade on [Azure Arc Center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource. ++2. Select all the virtual machines that are Azure-enabled with the older preview version. ++3. Select **Remove from Azure**.
++ :::image type="content" source="media/switch-to-new-preview-version/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-preview-version/vm-inventory-view-expanded.png"::: ++4. After successful removal from Azure, enable the same resources again in Azure. ++5. Once the resources are re-enabled, the VMs are automatically switched to the new preview version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**. ++ :::image type="content" source="media/switch-to-new-preview-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-preview-version/new-vm-browse-view-expanded.png"::: + +## Next steps ++[Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script). |
azure-arc | Troubleshoot Guest Management Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md | + + Title: Troubleshoot Guest Management Issues +description: Learn how to troubleshoot guest management issues for Arc-enabled VMware vSphere. + Last updated : 08/18/2023+# Customer intent: As a VI admin, I want to understand the troubleshooting process for guest management issues. ++# Troubleshoot Guest Management for Linux VMs ++This article provides information on how to troubleshoot and resolve the issues that may occur while you enable guest management on Arc-enabled VMware vSphere virtual machines. ++## Troubleshoot issues while enabling Guest Management on a domain-joined Linux VM ++**Error message**: Enabling Guest Management on a domain-joined Linux VM fails with the error message **InvalidGuestLogin: Failed to authenticate to the system with the credentials**. ++**Resolution**: Before you enable Guest Management on a domain-joined Linux VM using Active Directory credentials, follow these steps to set the configuration on the VM: ++1. In the SSSD configuration file (typically, */etc/sssd/sssd.conf*), add the following under the section for the domain: ++ [domain/contoso.com] + ad_gpo_map_batch = +vmtoolsd ++2. After making the changes to the SSSD configuration, restart the SSSD process. If SSSD is running as a system process, run `sudo systemctl restart sssd` to restart it. ++### Additional information ++According to the [sssd man page](https://jhrozek.fedorapeople.org/sssd/1.13.4/man/sssd-ad.5.html), the `ad_gpo_map_batch` parameter is: ++A comma-separated list of Pluggable Authentication Module (PAM) service names for which GPO-based access control is evaluated based on the BatchLogonRight and DenyBatchLogonRight policy settings.
++It's possible to add another PAM service name to the default set by using **+service_name** or to explicitly remove a PAM service name from the default set by using **-service_name**. For example, to replace a default PAM service name for this sign in (for example, **crond**) with a custom PAM service name (for example, **my_pam_service**), use this configuration: ++`ad_gpo_map_batch = +my_pam_service, -crond` ++Default: The default set of PAM service names includes: ++- crond: ++ `vmtoolsd` PAM is enabled for SSSD evaluation. For any request coming through VMware tools, SSSD will be invoked since VMware tools use this PAM for authenticating to the Linux Guest VM. ++#### References ++- [Invoke-VMScript to an domain joined Ubuntu VM](https://communities.vmware.com/t5/VMware-PowerCLI-Discussions/Invoke-VMScript-to-an-domain-joined-Ubuntu-VM/td-p/2257554). +++## Troubleshoot issues while enabling Guest Management on RHEL-based Linux VMs ++Applies to: ++- RedHat Linux +- CentOS +- Rocky Linux +- Oracle Linux +- SUSE Linux +- SUSE Linux Enterprise Server +- Alma Linux +- Fedora +++**Error message**: Provisioning of the resource failed with Code: `AZCM0143`; Message: `install_linux_azcmagent.sh: installation error`. ++**Workaround** ++Before you enable the guest agent, follow these steps on the VM: ++1. Create file `vmtools_unconfined_rpm_script_kcs5347781.te` using the following: ++ `policy_module(vmtools_unconfined_rpm_script_kcs5347781, 1.0) + gen_require(` + type vmtools_unconfined_t; + ') + optional_policy(` + rpm_transition_script(vmtools_unconfined_t,system_r) + ')` ++2. Install the package to build the policy module: ++ `sudo yum -y install selinux-policy-devel` ++3. Compile the module: ++ `make -f /usr/share/selinux/devel/Makefile vmtools_unconfined_rpm_script_kcs5347781.pp` ++4. 
Install the module: ++ `sudo semodule -i vmtools_unconfined_rpm_script_kcs5347781.pp` ++### Additional information ++Track the issue through [BZ 1872245 - [VMware][RHEL 8] vmtools is not able to install rpms](https://bugzilla.redhat.com/show_bug.cgi?id=1872245). ++When a command is executed using `vmrun`, the context of the `yum` or `rpm` command is `vmtools_unconfined_t`. ++When `yum` or `rpm` executes scriptlets, the context changes to `rpm_script_t`, which is currently denied because of the missing rule in the SELinux policy. ++#### References ++- [Executing yum/rpm commands using VMware tools facility (vmrun) fails in error when packages have scriptlets](https://access.redhat.com/solutions/5347781). ++## Next steps ++If you don't see your problem here or you can't resolve your issue, try one of the following channels for support: ++- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html). ++- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. ++- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). |
azure-cache-for-redis | Cache Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md | By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-mon >[!NOTE] >In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md). >+ ### Advisor recommendations The **Advisor recommendations** section on the left displays recommendations for your cache. During normal operations, no recommendations are displayed. Further information can be found in **Recommendations** in the working pane. You can monitor these metrics in the [Monitoring](cache-how-to-monitor.md) section of the Resource menu. -Each pricing tier has different limits for client connections, memory, and bandwidth. If your cache approaches maximum capacity for these metrics over a sustained period of time, a recommendation is created. For more information about the metrics and limits reviewed by the **Recommendations** tool, see the following table: - | Azure Cache for Redis metric | More information | | | | | Network bandwidth usage |[Cache performance - available bandwidth](./cache-planning-faq.yml#azure-cache-for-redis-performance) | New Azure Cache for Redis instances are configured with the following default Re | `maxmemory-samples` |3 |To save memory, LRU and minimal TTL algorithms are approximated algorithms instead of precise algorithms. By default Redis checks three keys and picks the one that was used least recently. | | `lua-time-limit` |5,000 |Max execution time of a Lua script in milliseconds. If the maximum execution time is reached, Redis logs that a script is still in execution after the maximum allowed time, and starts to reply to queries with an error. | | `lua-event-limit` |500 |Max size of script event queue. 
|-| `client-output-buffer-limit` `normalclient-output-buffer-limit` `pubsub` |0 0 032mb 8 mb 60 |The client output buffer limits can be used to force disconnection of clients that aren't reading data from the server fast enough for some reason. A common reason is that a Pub/Sub client can't consume messages as fast as the publisher can produce them. For more information, see [https://redis.io/topics/clients](https://redis.io/topics/clients). | +| `client-output-buffer-limit normal` / `client-output-buffer-limit pubsub` |`0 0 0` / `32mb 8mb 60` |The client output buffer limits can be used to force disconnection of clients that aren't reading data from the server fast enough for some reason. A common reason is that a Pub/Sub client can't consume messages as fast as the publisher can produce them. For more information, see [https://redis.io/topics/clients](https://redis.io/topics/clients). | <a name="databases"></a> Configuration and management of Azure Cache for Redis instances is managed by Mi - ACL - BGREWRITEAOF - BGSAVE-- CLUSTER - Cluster write commands are disabled, but read-only Cluster commands are permitted.+- CLUSTER - Cluster write commands are disabled, but read-only cluster commands are permitted. - CONFIG - DEBUG - MIGRATE - PSYNC - REPLICAOF+- REPLCONF - Azure cache for Redis instances don't allow customers to add external replicas. This [command](https://redis.io/commands/replconf/) is normally only sent by servers. - SAVE - SHUTDOWN - SLAVEOF For more information about Redis commands, see [https://redis.io/commands](https - [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-) - [Monitor Azure Cache for Redis](cache-how-to-monitor.md)+ |
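As an aside to the `client-output-buffer-limit` row above, the `<hard> <soft> <soft-seconds>` triples such as `0 0 0` (normal clients) and `32mb 8mb 60` (pub/sub clients) can be illustrated with a small sketch. This is a simplified model for explanation only, not Redis or Azure source code:

```python
# Simplified model of Redis-style client-output-buffer-limit handling.
# A value of 0 means "no limit"; the hard limit disconnects immediately,
# the soft limit disconnects only after soft_seconds of sustained overage.

UNITS = {"kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}

def parse_size(text):
    """Parse '32mb' -> 33554432; bare integers are taken as bytes."""
    text = text.lower()
    for suffix, factor in UNITS.items():
        if text.endswith(suffix):
            return int(text[: -len(suffix)]) * factor
    return int(text)

def should_disconnect(limit, buffer_bytes, seconds_over_soft):
    """Apply the hard/soft output-buffer rules to one client."""
    hard, soft, soft_seconds = limit
    if hard and buffer_bytes >= hard:       # hard limit: drop at once
        return True
    if soft and buffer_bytes >= soft:       # soft limit: drop after grace period
        return seconds_over_soft >= soft_seconds
    return False

pubsub_limit = tuple(parse_size(t) for t in "32mb 8mb 60".split())
normal_limit = tuple(parse_size(t) for t in "0 0 0".split())  # 0 0 0 = unlimited
```

With `0 0 0`, no client is ever disconnected for buffer growth, which matches the default for normal clients described above.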
azure-cache-for-redis | Cache How To Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md | The following list contains answers to commonly asked questions about Azure Cach - [After scaling, do I have to change my cache name or access keys?](#after-scaling-do-i-have-to-change-my-cache-name-or-access-keys) - [How does scaling work?](#how-does-scaling-work) - [Do I lose data from my cache during scaling?](#do-i-lose-data-from-my-cache-during-scaling)+- [Can I use all the features of Premium tier after scaling?](#can-i-use-all-the-features-of-premium-tier-after-scaling) - [Is my custom databases setting affected during scaling?](#is-my-custom-databases-setting-affected-during-scaling) - [Is my cache be available during scaling?](#is-my-cache-be-available-during-scaling) - [Are there scaling limitations with geo-replication?](#are-there-scaling-limitations-with-geo-replication) No, your cache name and keys are unchanged during a scaling operation. - When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved. - When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy. + ### Can I use all the features of Premium tier after scaling? ++No, some features can only be set when you create a cache in Premium tier, and are not available after scaling. ++These features cannot be added after you create the Premium cache: ++- VNet injection +- Adding zone redundancy +- Using multiple replicas per primary ++To use any of these features, you must create a new cache instance in the Premium tier. 
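The `allkeys-lru` eviction mentioned above (and the `maxmemory-samples` setting in the configuration article) approximates LRU by sampling a few keys rather than scanning all of them. A toy illustration of the idea — a simplified model, not the actual Redis implementation:

```python
import random

# Toy sketch of approximated LRU eviction: sample `samples` keys
# (cf. maxmemory-samples, default 3) and evict the least recently used
# of the sample. Sampling every key degenerates to exact LRU.

def evict_one(last_used, samples=3, rng=random):
    """last_used maps key -> last-access timestamp; evict oldest of a sample."""
    candidates = rng.sample(list(last_used), min(samples, len(last_used)))
    victim = min(candidates, key=last_used.__getitem__)
    del last_used[victim]
    return victim
```

Raising the sample count improves eviction accuracy at the cost of more CPU per eviction, which is why Redis defaults to a small sample.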
+ ### Is my custom databases setting affected during scaling? If you configured a custom value for the `databases` setting during cache creation, keep in mind that some pricing tiers have different [databases limits](cache-configure.md#databases). Here are some considerations when scaling in this scenario: You can connect to your cache using the same [endpoints](cache-configure.md#prop ### Can I directly connect to the individual shards of my cache? -The clustering protocol requires the client to make the correct shard connections, so the client should make share connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the redis-cli utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) on [https://redis.io](https://redis.io) in the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial). +The clustering protocol requires the client to make the correct shard connections, so the client should make shard connections for you. With that said, each shard consists of a primary/replica cache pair, collectively known as a cache instance. You can connect to these cache instances using the Redis-CLI utility in the [unstable](https://redis.io/download) branch of the Redis repository at GitHub. This version implements basic support when started with the `-c` switch. For more information, see [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) on [https://redis.io](https://redis.io) in the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial). You need to use the `-p` switch to specify the correct port to connect to. 
Use the [CLUSTER NODES](https://redis.io/commands/cluster-nodes/) command to determine the exact ports used for the primary and replica nodes. The following port ranges are used: You need to use the `-p` switch to specify the correct port to connect to. Use t ### Can I configure clustering for a previously created cache? -Yes. First, ensure that your cache is premium by scaling it up. Next, you can see the cluster configuration options, including an option to enable cluster. Change the cluster size after the cache is created, or after you have enabled clustering for the first time. +Yes. First, ensure that your cache is in the Premium tier by scaling it up. Next, you can see the cluster configuration options, including an option to enable cluster. Change the cluster size after the cache is created, or after you have enabled clustering for the first time. >[!IMPORTANT]->You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves *differently* than a cache of the same size with *no* clustering. +>You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves _differently_ than a cache of the same size with _no_ clustering. All Enterprise and Enterprise Flash tier caches are always clustered. Unlike Basic, Standard, and Premium tier caches, Enterprise and Enterprise Flash ## Next steps - [Configure your maxmemory-reserved setting](cache-best-practices-memory-management.md#configure-your-maxmemory-reserved-setting)-- [[Best practices for scaling](cache-best-practices-scale.md)]+- [Best practices for scaling](cache-best-practices-scale.md) |
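The `CLUSTER NODES` port discovery mentioned earlier can be sketched in a few lines. The reply format below is the standard Redis one; the sample reply itself is invented for illustration, so the IDs, addresses, and slot ranges are placeholders:

```python
# Hypothetical parser for a CLUSTER NODES reply, extracting the port to
# pass to redis-cli with the -c and -p switches. Each reply line is:
#   <id> <ip:port@cluster-bus-port> <flags> <master-id> ... <slots>

def parse_cluster_nodes(reply):
    """Return a list of (host, port, role) tuples from a CLUSTER NODES reply."""
    nodes = []
    for line in reply.strip().splitlines():
        fields = line.split()
        addr, flags = fields[1], fields[2]
        hostport = addr.split("@", 1)[0]        # strip the cluster bus port
        host, port = hostport.rsplit(":", 1)
        role = "master" if "master" in flags else "replica"
        nodes.append((host, int(port), role))
    return nodes

# Invented sample reply for demonstration purposes.
sample = (
    "07c3 10.0.0.4:13000@23000 myself,master - 0 0 1 connected 0-8191\n"
    "a9f1 10.0.0.5:13001@23001 slave 07c3 0 0 1 connected\n"
)
```

You could then run `redis-cli -c -p <port>` against any of the returned primary ports.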
azure-cache-for-redis | Cache How To Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md | Before you upgrade, check the Redis version of a cache by selecting **Properties ## Upgrade using Azure CLI -To upgrade a cache from 4 to 6 using the Azure CLI, use the following command: +To upgrade a cache that isn't using a private endpoint from 4 to 6 by using the Azure CLI, use the following command: ```azurecli-interactive az redis update --name cacheName --resource-group resourceGroupName --set redisVersion=6 ``` +### Private Endpoint ++If Private Endpoint is enabled on the cache, use the command that is appropriate based on whether `PublicNetworkAccess` is enabled or disabled: ++If `PublicNetworkAccess` is enabled: ++```azurecli + az redis update --name <cacheName> --resource-group <resourceGroupName> --set publicNetworkAccess=Enabled redisVersion=6 +``` ++If `PublicNetworkAccess` is disabled: ++```azurecli +az redis update --name <cacheName> --resource-group <resourceGroupName> --set publicNetworkAccess=Disabled redisVersion=6 +``` + ## Upgrade using PowerShell To upgrade a cache from 4 to 6 using PowerShell, use the following command: |
azure-cache-for-redis | Cache Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-private-link.md | -# Azure Cache for Redis with Azure Private Link +# What is Azure Cache for Redis with Azure Private Link? -In this article, you'll learn how to create a virtual network and an Azure Cache for Redis instance with a private endpoint using the Azure portal. You'll also learn how to add a private endpoint to an existing Azure Cache for Redis instance. +In this article, you learn how to create a virtual network and an Azure Cache for Redis instance with a private endpoint using the Azure portal. You also learn how to add a private endpoint to an existing Azure Cache for Redis instance. Azure Private Endpoint is a network interface that connects you privately and securely to Azure Cache for Redis powered by Azure Private Link. You can restrict public access to the private endpoint of your cache by disablin ## Create a private endpoint with a new Azure Cache for Redis instance -In this section, you'll create a new Azure Cache for Redis instance with a private endpoint. +In this section, you create a new Azure Cache for Redis instance with a private endpoint. ### Create a virtual network for your new cache In this section, you'll create a new Azure Cache for Redis instance with a priva | **Subscription** | Drop down and select your subscription. | The subscription under which to create this virtual network. | | **Resource group** | Drop down and select a resource group, or select **Create new** and enter a new resource group name. | Name for the resource group in which to create your virtual network and other resources. By putting all your app resources in one resource group, you can easily manage or delete them together. | | **Name** | Enter a virtual network name. 
| The name must: begin with a letter or number; end with a letter, number, or underscore; and contain only letters, numbers, underscores, periods, or hyphens. |- | **Region** | Drop down and select a region. | Select a [region](https://azure.microsoft.com/regions/) near other services that will use your virtual network. | + | **Region** | Drop down and select a region. | Select a [region](https://azure.microsoft.com/regions/) near other services that use your virtual network. | 5. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page. In this section, you'll create a new Azure Cache for Redis instance with a priva ### Create an Azure Cache for Redis instance with a private endpoint -To create a cache instance, follow these steps. +To create a cache instance, follow these steps: 1. Go back to the Azure portal homepage or open the sidebar menu, then select **Create a resource**. 1. On the **New** page, select **Databases** and then select **Azure Cache for Redis**. - :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis."::: + :::image type="content" source="media/cache-private-link/2-select-cache.png" alt-text="Select Azure Cache for Redis."::: 1. On the **New Redis Cache** page, configure the settings for your new cache. To create a cache instance, follow these steps. 1. Select the **Add** button to create your private endpoint. - :::image type="content" source="media/cache-private-link/3-add-private-endpoint.png" alt-text="In networking, add a private endpoint."::: + :::image type="content" source="media/cache-private-link/3-add-private-endpoint.png" alt-text="In networking, add a private endpoint."::: 1. On the **Create a private endpoint** page, configure the settings for your private endpoint with the virtual network and subnet you created in the last section and select **OK**. 
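The virtual network naming rule quoted above (begin with a letter or number; end with a letter, number, or underscore; contain only letters, numbers, underscores, periods, or hyphens) can be captured as a regular expression. This sketch models only the character rules, not any length limits:

```python
import re

# Regex sketch of the VNet name rule from the table above:
#   first char:  letter or digit
#   middle:      letters, digits, underscores, periods, hyphens
#   last char:   letter, digit, or underscore
VNET_NAME = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9_.\-]*[A-Za-z0-9_])?$")

def is_valid_vnet_name(name):
    return VNET_NAME.fullmatch(name) is not None
```

A client-side check like this can catch invalid names before submitting the portal form or an ARM deployment.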
In this section, you'll add a private endpoint to an existing Azure Cache for Re ### Create a virtual network for your existing cache -To create a virtual network, follow these steps. +To create a virtual network, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Create a resource**. To create a virtual network, follow these steps. ### Create a private endpoint -To create a private endpoint, follow these steps. +To create a private endpoint, follow these steps: 1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions. To create a private endpoint, follow these steps. 1. In the **Resource** tab, select your subscription, choose the resource type as `Microsoft.Cache/Redis`, and then select the cache you want to connect the private endpoint to. 1. Select the **Next: Configuration** button at the bottom of the page.+ 1. Select the **Next: Virtual Network** button at the bottom of the page.+ 1. In the **Configuration** tab, select the virtual network and subnet you created in the previous section.+ 1. In the **Virtual Network** tab, select the virtual network and subnet you created in the previous section.+ 1. Select the **Next: Tags** button at the bottom of the page. 1. Optionally, in the **Tags** tab, enter the name and value if you wish to categorize the resource. For more information, see [Azure services DNS zone configuration](../private-lin ### How do I verify if my private endpoint is configured correctly? - Go to **Overview** in the Resource menu on the portal. You see the **Host name** for your cache in the working pane. Run a command like `nslookup <hostname>` from within the VNet that is linked to the private endpoint to verify that the command resolves to the private IP address for the cache. +Go to **Overview** in the Resource menu on the portal. You see the **Host name** for your cache in the working pane. 
Run a command like `nslookup <hostname>` from within the VNet that is linked to the private endpoint to verify that the command resolves to the private IP address for the cache. + :::image type="content" source="media/cache-private-link/cache-private-ip-address.png" alt-text="In the Azure portal, private endpoint D N S settings."::: ### How can I change my private endpoint to be disabled or enabled from public network access? There's a `publicNetworkAccess` flag that is `Disabled` by default. When set to `Enabled`, this flag allows both public and private endpoint access to the cache. When set to `Disabled`, it allows only private endpoint access. You can set the value to `Disabled` or `Enabled` in the Azure portal or with a RESTful API PATCH request. -To change the value in the Azure portal, follow these steps. --1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions. +To change the value in the Azure portal, follow these steps: -1. Select the cache instance you want to change the public network access value. + 1. In the Azure portal, search for **Azure Cache for Redis**. Then, press enter or select it from the search suggestions. -1. On the left side of the screen, select **Private Endpoint**. + 1. Select the cache instance for which you want to change the public network access value. -1. Select the **Enable public network access** button. + 1. On the left side of the screen, select **Private Endpoint**. + 1. Select the **Enable public network access** button. 
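The `nslookup` verification from the FAQ above can also be scripted. This is a local sketch, not part of the article: it only classifies addresses, and the hostname in the comment is a placeholder example:

```python
import ipaddress

# Companion to the nslookup check described earlier: after resolving the
# cache host name from inside the VNet, confirm the answer is a private
# (RFC 1918) address rather than a public one.

def is_private(ip_text):
    return ipaddress.ip_address(ip_text).is_private

# Inside the VNet you would resolve for real, e.g.:
#   import socket
#   ip = socket.gethostbyname("contoso.redis.cache.windows.net")  # placeholder name
# Here we only classify example answers.
```

If the resolved address is public, the private endpoint DNS configuration likely isn't in effect for the client doing the lookup.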
+ To change the value through a RESTful API PATCH request, see below and edit the value to reflect which flag you want for your cache.--```http -PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01 -{ "properties": { - "publicNetworkAccess":"Disabled" - } -} -``` + + ```http + PATCH https://management.azure.com/subscriptions/{subscription}/resourceGroups/{resourcegroup}/providers/Microsoft.Cache/Redis/{cache}?api-version=2020-06-01 + { "properties": { + "publicNetworkAccess":"Disabled" + } + } + + ``` + For more information, see [Redis - Update](/rest/api/redis/Redis/Update?tabs=HTTP). ### How can I migrate my VNet injected cache to a Private Link cache? Control the traffic by using NSG rules for outbound traffic on source clients. D It's only linked to your VNet. Because it's not in your VNet, NSG rules don't need to be modified for dependent endpoints. -## Next steps +## Related content - To learn more about Azure Private Link, see the [Azure Private Link documentation](../private-link/private-link-overview.md). - To compare various network isolation options for your cache, see [Azure Cache for Redis network isolation options documentation](cache-network-isolation.md). |
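The PATCH request shown earlier can be assembled programmatically before sending it with your HTTP client of choice. This sketch only builds the URL and body (nothing is sent), and the subscription, resource group, and cache names are placeholders:

```python
import json

# Build the PATCH URL and JSON body for toggling publicNetworkAccess,
# matching the request shape shown in the article.

def public_network_access_patch(subscription, resource_group, cache, enabled):
    url = (
        "https://management.azure.com/subscriptions/{}/resourceGroups/{}"
        "/providers/Microsoft.Cache/Redis/{}?api-version=2020-06-01"
    ).format(subscription, resource_group, cache)
    body = {"properties": {"publicNetworkAccess": "Enabled" if enabled else "Disabled"}}
    return url, json.dumps(body)
```

The returned pair can be passed to any authenticated HTTP client that supports the PATCH verb against the Azure management endpoint.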
azure-cache-for-redis | Cache Tutorial Functions Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md | -#CustomerIntent: As a < type of user >, I want < what? > so that < why? >. Last updated : 08/24/2023+#CustomerIntent: As a developer, I want an introductory example of using Azure Cache for Redis triggers with Azure Functions so that I can understand how to use the functions with a Redis cache. This tutorial shows how to implement basic triggers with Azure Cache for Redis a In this tutorial, you learn how to: > [!div class="checklist"]-> * Set up the necessary tools. -> * Configure and connect to a cache. -> * Create an Azure function and deploy code to it. -> * Confirm the logging of triggers. +> +> - Set up the necessary tools. +> - Configure and connect to a cache. +> - Create an Azure function and deploy code to it. +> - Confirm the logging of triggers. ## Prerequisites Creating the cache can take a few minutes. You can move to the next section whil 1. On the **Azure** tab, create a new function app by selecting the lightning bolt icon in the upper right of the **Workspace** tab. +1. Select **Create function...**. + :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-add-resource.png" alt-text="Screenshot that shows the icon for adding a new function from VS Code."::: 1. Select the folder that you created to start the creation of a new Azure Functions project. You get several on-screen prompts. Select: dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease 1. Go to your cache in the Azure portal, and then: 1. On the resource menu, select **Advanced settings**.+ 1. Scroll down to the **notify-keyspace-events** box and enter **KEA**. **KEA** is a configuration string that enables keyspace notifications for all keys and events.
For more information on keyspace configuration strings, see the [Redis documentation](https://redis.io/docs/manual/keyspace-notifications/).+ 1. Select **Save** at the top of the window. :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-keyspace-notifications.png" alt-text="Screenshot of advanced settings for Azure Cache for Redis in the portal."::: dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease 1. Create a new Azure function: 1. Go back to the **Azure** tab and expand your subscription.+ 1. Right-click **Function App**, and then select **Create Function App in Azure (Advanced)**. :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-create-function-app.png" alt-text="Screenshot of selections for creating a function app in VS Code."::: dotnet add package Microsoft.Azure.WebJobs.Extensions.Redis --prerelease :::image type="content" source="media/cache-tutorial-functions-getting-started/cache-log-stream.png" alt-text="Screenshot of a log stream for a function app resource on the resource menu." lightbox="media/cache-tutorial-functions-getting-started/cache-log-stream.png"::: -## Next step ++## Related content -> [!div class="nextstepaction"] -> [Create serverless event-based architectures by using Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md) +- [Overview of Azure functions for Azure Cache for Redis](/azure/azure-functions/functions-bindings-cache?tabs=in-process&pivots=programming-language-csharp) +- [Build a write-behind cache by using Azure Functions](cache-tutorial-write-behind.md) |
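The **KEA** value entered above is a flag string; each character switches on part of the keyspace-notification machinery. A minimal interpreter for the three flags the tutorial uses (the full flag set is larger; see the Redis keyspace-notification documentation it links to):

```python
# Minimal, simplified reading of a notify-keyspace-events flag string
# such as "KEA". Only the flags relevant to the tutorial are modeled.

def describe_notify_flags(flags):
    return {
        "keyspace_channel": "K" in flags,  # publish on __keyspace@<db>__:<key>
        "keyevent_channel": "E" in flags,  # publish on __keyevent@<db>__:<event>
        "all_classes": "A" in flags,       # A = alias enabling every event class
    }
```

With `KEA`, both notification channels are active for every event class, which is why the tutorial's pub/sub trigger sees `SET` events.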
azure-cache-for-redis | Cache Tutorial Write Behind | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-write-behind.md | -#CustomerIntent: As a < type of user >, I want < what? > so that < why? >. Last updated : 08/24/2023+#CustomerIntent: As a developer, I want a practical example of using Azure Cache for Redis triggers with Azure Functions so that I can write applications that tie together a Redis cache and a database like Azure SQL. Every new item or new price written to the cache is then reflected in a SQL tabl In this tutorial, you learn how to: > [!div class="checklist"]-> * Configure a database, trigger, and connection strings. -> * Validate that triggers are working. -> * Deploy code to a function app. +> +> - Configure a database, trigger, and connection strings. +> - Validate that triggers are working. +> - Deploy code to a function app. ## Prerequisites In this tutorial, you learn how to: - Completion of the previous tutorial, [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md), with these resources provisioned: - Azure Cache for Redis instance - Azure Functions instance+ - A working knowledge of using Azure SQL - Visual Studio Code (VS Code) environment set up with NuGet packages installed ## Create and configure a new SQL database The SQL database is the backing database for this example. You can create a SQL database through the Azure portal or through your preferred method of automation. +For more information on creating a SQL database, see [Quickstart: Create a single database - Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart). + This example uses the portal: 1. Enter a database name and select **Create new** to create a new server to hold the database. 
+ :::image type="content" source="media/cache-tutorial-write-behind/cache-create-sql.png" alt-text="Screenshot of creating an Azure SQL resource."::: + 1. Select **Use SQL authentication** and enter an admin sign-in and password. Be sure to remember these credentials or write them down. When you're deploying a server in production, use Azure Active Directory (Azure AD) authentication instead. + :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-authentication.png" alt-text="Screenshot of the authentication information for an Azure SQL resource."::: + 1. Go to the **Networking** tab and choose **Public endpoint** as a connection method. Select **Yes** for both firewall rules that appear. This endpoint allows access from your Azure function app. + :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-networking.png" alt-text="Screenshot of the networking setting for an Azure SQL resource."::: + 1. After validation finishes, select **Review + create** and then **Create**. The SQL database starts to deploy. -1. After deployment finishes, go to the resource in the Azure portal and select the **Query editor** tab. Create a new table called *inventory* that holds the data you'll write to it. Use the following SQL command to make a new table with two fields: +1. After deployment finishes, go to the resource in the Azure portal and select the **Query editor** tab. Create a new table called _inventory_ that holds the data you'll write to it. Use the following SQL command to make a new table with two fields: - `ItemName` lists the name of each item. - `Price` stores the price of the item. This example uses the portal: ); ``` -1. After the command finishes running, expand the *Tables* folder and verify that the new table was created. + :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-query-table.png" alt-text="Screenshot showing the creation of a table in Query Editor of an Azure SQL resource."::: ++1. 
After the command finishes running, expand the _Tables_ folder and verify that the new table was created. ## Configure the Redis trigger -First, make a copy of the same VS Code project that you used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as *RedisWriteBehindTrigger*, and open it in VS Code. +First, make a copy of the same VS Code project that you used in the previous tutorial. Copy the folder from the previous tutorial under a new name, such as _RedisWriteBehindTrigger_, and open it in VS Code. In this example, you use the [pub/sub trigger](cache-how-to-functions.md#redispubsubtrigger) to trigger on `keyevent` notifications. The goals of the example are: - Trigger every time a `SET` event occurs. A `SET` event happens when either new keys are written to the cache instance or the value of a key is changed. - After a `SET` event is triggered, access the cache instance to find the value of the new key.-- Determine if the key already exists in the *inventory* table in the SQL database.+- Determine if the key already exists in the _inventory_ table in the SQL database. - If so, update the value of that key. - If not, write a new row with the key and its value. To configure the trigger: 1. Import the `System.Data.SqlClient` NuGet package to enable communication with the SQL database. Go to the VS Code terminal and use the following command: - ```dos - dotnet add package System.Data.SqlClient + ```terminal + dotnet add package System.Data.SqlClient ``` -1. Copy and paste the following code in *redisfunction.cs* to replace the existing code: +1. Copy and paste the following code in _redisfunction.cs_ to replace the existing code: ```csharp using Microsoft.Extensions.Logging; To configure the trigger: ## Configure connection strings -You need to update the *local.settings.json* file to include the connection string for your SQL database. Add an entry in the `Values` section for `SQLConnectionString`. 
Your file should look like this example: +You need to update the _local.settings.json_ file to include the connection string for your SQL database. Add an entry in the `Values` section for `SQLConnectionString`. Your file should look like this example: ```json { You need to update the *local.settings.json* file to include the connection stri } ``` -You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically. +To find the Redis connection string, go to the resource menu in the Azure Cache for Redis resource. The string is in the **Access Keys** area of **Settings**. -To find the Redis connection string, go to the resource menu in the Azure Cache for Redis resource. The string is in the **Access Keys** area. +To find the SQL database connection string, go to the resource menu in the SQL database resource. Under **Settings**, select **Connection strings**, and then select the **ADO.NET** tab. +The string is in the **ADO.NET (SQL authentication)** area. -To find the SQL database connection string, go to the resource menu in the SQL database resource, and then select the **ADO.NET** tab. The string is in the **Connection strings** area. +You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically. > [!IMPORTANT] > This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information. To find the SQL database connection string, go to the resource menu in the SQL d ## Build and run the project 1. In VS Code, go to the **Run and debug tab** and run the project.+ 1. Go back to your Azure Cache for Redis instance in the Azure portal, and select the **Console** button to enter the Redis console. 
Try using some `SET` commands: - `SET apple 5.25` To find the SQL database connection string, go to the resource menu in the SQL d Confirm that the items written to your Azure Cache for Redis instance appear here. + :::image type="content" source="media/cache-tutorial-write-behind/cache-sql-query-result.png" alt-text="Screenshot showing the information has been copied to SQL from the cache instance."::: + ## Deploy the code to your function app +This tutorial builds on the previous tutorial. For more information, see [Deploy code to an Azure function](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started#deploy-code-to-an-azure-function). + 1. In VS Code, go to the **Azure** tab. 1. Find your subscription and expand it. Then, find the **Function App** section and expand that. To find the SQL database connection string, go to the resource menu in the SQL d ## Add connection string information +This tutorial builds on the previous tutorial. For more information on the `redisConnectionString`, see [Add connection string information](/azure/azure-cache-for-redis/cache-tutorial-functions-getting-started#add-connection-string-information). + 1. Go to your function app in the Azure portal. On the resource menu, select **Configuration**. 1. Select **New application setting**. For **Name**, enter **SQLConnectionString**. For **Value**, enter your connection string. If you ever want to clear the SQL database table without deleting it, you can us TRUNCATE TABLE [dbo].[inventory] ``` + ## Summary This tutorial and [Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) show how to use Azure Cache for Redis to trigger Azure function apps. They also show how to use Azure Cache for Redis as a write-behind cache with Azure SQL Database. Using Azure Cache for Redis with Azure Functions is a powerful combination that can solve many integration and performance problems. 
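The write-behind upsert the tutorial's trigger performs can be sketched locally. This stand-in uses stdlib `sqlite3` instead of Azure SQL and plain function calls instead of the Azure Functions trigger, but the logic mirrors the goals stated earlier: on each `SET` event, update the `inventory` row if the key exists, otherwise insert it:

```python
import sqlite3

# Local stand-in for the write-behind trigger: upsert (ItemName, Price)
# into the inventory table on each simulated SET event.

def write_behind(conn, item, price):
    cur = conn.execute(
        "UPDATE inventory SET Price = ? WHERE ItemName = ?", (price, item)
    )
    if cur.rowcount == 0:  # key not present yet -> insert a new row
        conn.execute(
            "INSERT INTO inventory (ItemName, Price) VALUES (?, ?)", (item, price)
        )
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (ItemName TEXT PRIMARY KEY, Price REAL)")
write_behind(conn, "apple", 5.25)
write_behind(conn, "apple", 4.50)  # a second SET updates rather than duplicates
```

The same update-or-insert shape is what the C# trigger code performs against Azure SQL, just driven by real `keyevent` notifications.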
## Related content -- [Create serverless event-based architectures by using Azure Cache for Redis and Azure Functions (preview)](cache-how-to-functions.md)-- [Build a write-behind cache by using Azure Functions](cache-tutorial-write-behind.md)+- [Overview of Azure functions for Azure Cache for Redis](/azure/azure-functions/functions-bindings-cache?tabs=in-process&pivots=programming-language-csharp) +- [Tutorial: Get started with Azure Functions triggers in Azure Cache for Redis](cache-tutorial-functions-getting-started.md) |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-cache-for-redis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
azure-functions | Add Bindings Existing Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/add-bindings-existing-function.md | Title: Connect functions to other Azure services description: Learn how to add bindings that connect to other Azure services to an existing function in your Azure Functions project. Previously updated : 04/29/2020 Last updated : 08/18/2023 +zone_pivot_groups: programming-languages-set-functions #Customer intent: As a developer, I need to know how to add a binding to an existing function so that I can integrate external services to my function. When you create a function, language-specific trigger code is added in your proj ## Local development -When you develop functions locally, you need to update the function code to add bindings. Using Visual Studio Code can make it easier to add bindings to a function. --### Visual Studio Code --When you use Visual Studio Code to develop your function and your function uses a function.json file, the Azure Functions extension can automatically add a binding to an existing function.json file. To learn more, see [Add input and output bindings](functions-develop-vs-code.md#add-input-and-output-bindings). +When you develop functions locally, you need to update the function code to add bindings. For languages that use function.json, [Visual Studio Code](#visual-studio-code) provides tooling to add bindings to a function. ### Manually add bindings based on examples -When adding a binding to an existing function, you'll need update both the function code and the function.json configuration file, if used by your language. Both .NET class library and Java functions use attributes instead of function.json, so you'll need to update that instead. +When adding a binding to an existing function, you need to add binding-specific attributes to the function definition in code. 
+When adding a binding to an existing function, you need to add binding-specific annotations to the function definition in code. +When adding a binding to an existing function, you need to update the function code and add a definition to the function.json configuration file. +When adding a binding to an existing function, you need to update the function definition, depending on your model: ++#### [v2](#tab/python-v2) +You need to add binding-specific annotations to the function definition in code. +#### [v1](#tab/python-v1) +You need to update the function code and add a definition to the function.json configuration file. ++ Use the following table to find examples of specific binding types that you can use to guide you in updating an existing function. First, choose the language tab that corresponds to your project. [!INCLUDE [functions-bindings-code-example-chooser](../../includes/functions-bindings-code-example-chooser.md)] +### Visual Studio Code ++When you use Visual Studio Code to develop your function and your function uses a function.json file, the Azure Functions extension can automatically add a binding to an existing function.json file. To learn more, see [Add input and output bindings](functions-develop-vs-code.md#add-input-and-output-bindings). + ## Azure portal When you develop your functions in the [Azure portal](https://portal.azure.com), you add input and output bindings in the **Integrate** tab for a given function. The new bindings are added to either the function.json file or to the method attributes, depending on your language. The following articles show examples of how to add bindings to an existing function in the portal: |
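As a sketch of the v1-model step described above, a queue output binding entry added to *function.json* might look like the following. The queue name `outqueue`, the binding name `msg`, and the `AzureWebJobsStorage` connection setting are illustrative choices, not values the change above mandates:

```python
import json

# A sketch of a queue output binding entry for function.json (Python v1 model).
# "outqueue" and the connection setting name are illustrative values.
binding = {
    "type": "queue",
    "direction": "out",
    "name": "msg",
    "queueName": "outqueue",
    "connection": "AzureWebJobsStorage",
}

# function.json holds a list of bindings alongside the trigger definition.
function_json = json.dumps(
    {"scriptFile": "__init__.py", "bindings": [binding]}, indent=2
)
print(function_json)
```

In the v2 model, the same binding would instead be expressed as a decorator on the function definition in code, so no *function.json* edit is needed.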
azure-functions | Configure Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/configure-monitoring.md | Update-AzFunctionAppSetting -Name MyAppName -ResourceGroupName MyResourceGroupNa > [!NOTE] > Overriding the `host.json` through changing app settings will restart your function app.+> App settings that contain a period aren't supported when running on Linux in an Elastic Premium plan or a Dedicated (App Service) plan. In these hosting environments, you should continue to use the *host.json* file. ## Monitor function apps using Health check |
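The note above concerns overriding *host.json* values through app settings, which use a double-underscore form because setting names can't reliably contain periods on Linux plans. As a small sketch of that naming convention (assuming the documented `AzureFunctionsJobHost__` prefix), a dot-separated *host.json* path maps to an app-setting name like this:

```python
def host_json_override_name(path: str) -> str:
    """Map a dot-separated host.json path to the app-setting name that
    overrides it, using the AzureFunctionsJobHost__ double-underscore form."""
    return "AzureFunctionsJobHost__" + path.replace(".", "__")

# → AzureFunctionsJobHost__logging__logLevel__default
print(host_json_override_name("logging.logLevel.default"))
```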
azure-functions | Create First Function Arc Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-arc-cli.md | On your local computer: # [C\#](#tab/csharp) + [.NET 6.0 SDK](https://dotnet.microsoft.com/download)-+ [Azure Functions Core Tools version 4.x](functions-run-local.md?tabs=v4%2Ccsharp#install-the-azure-functions-core-tools) + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later # [JavaScript](#tab/nodejs) + [Node.js](https://nodejs.org/) version 18. Node.js version 14 is also supported.-+ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cnode#install-the-azure-functions-core-tools). + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later # [Python](#tab/python) + [Python versions that are supported by Azure Functions](supported-languages.md#languages-by-runtime-version)-+ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cpython#install-the-azure-functions-core-tools) + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later # [PowerShell](#tab/powershell) + [PowerShell 7](/powershell/scripting/install/installing-powershell-core-on-windows)-+ [Azure Functions Core Tools version 4.x.](functions-run-local.md?tabs=v4%2Cpowershell#install-the-azure-functions-core-tools) + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later + PowerShell 7 requires version 1.2.5 of the connectedk8s Azure CLI extension, or a later version. It also requires version 0.1.3 of the appservice-kube Azure CLI extension, or a later version. Make sure you install the correct version of both of these extensions as you complete this quickstart article. [!INCLUDE [functions-arc-create-environment](../../includes/functions-arc-create-environment.md)] |
azure-functions | Create First Function Cli Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md | Before you begin, you must have the following: + [.NET 6.0 SDK](https://dotnet.microsoft.com/download). -+ [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x. - + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) [version 2.4](/cli/azure/release-notes-azure-cli#april-21-2020) or later. Before you begin, you must have the following: You also need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -### Prerequisite check --Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources: --# [Azure CLI](#tab/azure-cli) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ Run `dotnet --list-sdks` to check that the required versions are installed. --+ Run `az --version` to check that the Azure CLI version is 2.4 or later. --+ Run `az login` to sign in to Azure and verify an active subscription. --# [Azure PowerShell](#tab/azure-powershell) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ Run `dotnet --list-sdks` to check that the required versions are installed. --+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. --+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription. -- ## Create a local function project |
azure-functions | Create First Function Cli Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md | Before you begin, you must have the following: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x. - + The [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. + The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 or 11. The `JAVA_HOME` environment variable must be set to the install location of the correct version of the JDK. + [Apache Maven](https://maven.apache.org), version 3.0 or above. -### Prerequisite check --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ Run `az --version` to check that the Azure CLI version is 2.4 or later. --+ Run `az login` to sign in to Azure and verify an active subscription. ## Create a local function project |
azure-functions | Create First Function Cli Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md | Before you begin, you must have the following prerequisites: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x. --+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above - + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. Before you begin, you must have the following prerequisites: + [Node.js](https://nodejs.org/) version 18 or above. ::: zone-end -### Prerequisite check --Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources: --# [Azure CLI](#tab/azure-cli) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. --+ Run `az --version` to check that the Azure CLI version is 2.4 or later. --+ Run `az login` to sign in to Azure and verify an active subscription. --# [Azure PowerShell](#tab/azure-powershell) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. ::: zone pivot="nodejs-model-v4" -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ++ Make sure you install version v4.0.5095 of the Core Tools, or a later version. ::: zone-end -+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. 
--+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription. --- ## Create a local function project In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. |
azure-functions | Create First Function Cli Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md | Before you begin, you must have the following: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x. - + One of the following tools for creating Azure resources: + The Azure [Az PowerShell module](/powershell/azure/install-azure-powershell) version 9.4.0 or later. Before you begin, you must have the following: + [PowerShell 7.2](/powershell/scripting/install/installing-powershell-core-on-windows) -### Prerequisite check --Verify your prerequisites, which depend on whether you are using Azure CLI or Azure PowerShell for creating Azure resources: --# [Azure CLI](#tab/azure-cli) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ Run `az --version` to check that the Azure CLI version is 2.4 or later. --+ Run `az login` to sign in to Azure and verify an active subscription. --# [Azure PowerShell](#tab/azure-powershell) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ Run `(Get-Module -ListAvailable Az).Version` and verify version 9.4.0 or later. --+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription. -- ## Create a local function project |
azure-functions | Create First Function Cli Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-python.md | Title: Create a Python function from the command line - Azure Functions description: Learn how to create a Python function from the command line, then publish the local project to serverless hosting in Azure Functions. Previously updated : 07/15/2023 Last updated : 08/07/2023 ms.devlang: python Before you begin, you must have the following requirements in place: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](functions-run-local.md#v2) version 4.x. - + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. Before you begin, you must have the following requirements in place: [!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)] + ## <a name="create-venv"></a>Create and activate a virtual environment In a suitable folder, run the following commands to create and activate a virtual environment named `.venv`. Make sure that you're using Python 3.9, 3.8, or 3.7, which are supported by Azure Functions. py -m venv .venv You run all subsequent commands in this activated virtual environment. -## Create a local function project +## Create a local function -In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. +In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. 
All functions in a project share the same local and hosting configurations. +In this section, you create a function project that contains a single function. 1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime. ```console In Azure Functions, a function project is a container for one or more individual `func new` creates a subfolder matching the function name that contains a code file appropriate to the project's chosen language and a configuration file named *function.json*. ::: zone-end ::: zone pivot="python-mode-decorators" +In this section, you create a function project and add an HTTP triggered function. + 1. Run the `func init` command as follows to create a functions project in a folder named *LocalFunctionProj* with the specified runtime and the specified programming model version. ```console In Azure Functions, a function project is a container for one or more individual This folder contains various files for the project, including configuration files named [*local.settings.json*](functions-develop-local.md#local-settings-file) and [*host.json*](functions-host-json.md). Because *local.settings.json* can contain secrets downloaded from Azure, the file is excluded from source control by default in the *.gitignore* file. -1. The file `function_app.py` can include all functions within your project. To start with, there's already an HTTP function stored in the file. +1. The file `function_app.py` can include all functions within your project. 
Open this file and replace the existing contents with the following code that adds an HTTP triggered function named `HttpExample`: ```python import azure.functions as func app = func.FunctionApp() - @app.function_name(name="HttpTrigger1") + @app.function_name(name="HttpExample") @app.route(route="hello") def test_function(req: func.HttpRequest) -> func.HttpResponse:- return func.HttpResponse("HttpTrigger1 function processed a request!") + return func.HttpResponse("HttpExample function processed a request!") ``` 1. Open the local.settings.json project file and verify that the `AzureWebJobsFeatureFlags` setting has a value of `EnableWorkerIndexing`. This is required for Functions to interpret your project correctly as the Python v2 model. You'll add this same setting to your application settings after you publish your project to Azure. In the previous example, replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME ## Verify in Azure -Run the following command to view near real-time [streaming logs](functions-run-local.md#enable-streaming-logs) in Application Insights in the Azure portal. +Run the following command to view near real-time streaming logs in Application Insights in the Azure portal. ```console func azure functionapp logstream <APP_NAME> --browser |
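The verification step above requires `AzureWebJobsFeatureFlags` to be set to `EnableWorkerIndexing` in *local.settings.json* for the Python v2 model. A minimal sketch of patching that setting programmatically (the sample `Values` entries are illustrative, not a complete settings file):

```python
def ensure_worker_indexing(settings: dict) -> dict:
    """Return a copy of a local.settings.json document with the
    AzureWebJobsFeatureFlags value required by the Python v2 model."""
    updated = dict(settings)
    values = dict(updated.get("Values", {}))
    values["AzureWebJobsFeatureFlags"] = "EnableWorkerIndexing"
    updated["Values"] = values
    return updated

# Illustrative local.settings.json content.
local_settings = {
    "IsEncrypted": False,
    "Values": {"FUNCTIONS_WORKER_RUNTIME": "python"},
}
patched = ensure_worker_indexing(local_settings)
print(patched["Values"]["AzureWebJobsFeatureFlags"])
```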
azure-functions | Create First Function Cli Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md | Before you begin, you must have the following prerequisites: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 4.x. -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version v4.0.5095 or above - + One of the following tools for creating Azure resources: + [Azure CLI](/cli/azure/install-azure-cli) version 2.4 or later. Before you begin, you must have the following prerequisites: + [TypeScript](https://www.typescriptlang.org/) version 4+. ::: zone-end --### Prerequisite check --Verify your prerequisites, which depend on whether you're using Azure CLI or Azure PowerShell for creating Azure resources: --# [Azure CLI](#tab/azure-cli) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. --+ Run `az --version` to check that the Azure CLI version is 2.4 or later. --+ Run `az login` to sign in to Azure and verify an active subscription. --# [Azure PowerShell](#tab/azure-powershell) --+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version 4.x. ::: zone pivot="nodejs-model-v4" -+ In a terminal or command window, run `func --version` to check that the Azure Functions Core Tools are version v4.0.5095 or above. ++ Make sure you install version v4.0.5095 of the Core Tools, or a later version. ::: zone-end -+ Run `(Get-Module -ListAvailable Az).Version` and verify version 5.0 or later. 
--+ Run `Connect-AzAccount` to sign in to Azure and verify an active subscription. --- ## Create a local function project In Azure Functions, a function project is a container for one or more individual functions that each responds to a specific trigger. All functions in a project share the same local and hosting configurations. In this section, you create a function project that contains a single function. |
azure-functions | Create First Function Vs Code Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md | Before you get started, make sure you have the following requirements in place: [!INCLUDE [functions-requirements-visual-studio-code-csharp](../../includes/functions-requirements-visual-studio-code-csharp.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in C#. Later in this article, you'll publish your function code to Azure. After checking that the function runs correctly on your local computer, it's tim ## Next steps -You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to either Azure Cosmos DB or Azure Queue Storage. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp). +You have used [Visual Studio Code](functions-develop-vs-code.md?tabs=csharp) to create a function app with a simple HTTP-triggered function. In the next article, you expand that function by connecting to one of the core Azure storage services. To learn more about connecting to other Azure services, see [Add bindings to an existing function in Azure Functions](add-bindings-existing-function.md?tabs=csharp). 
> [!div class="nextstepaction"] > [Connect to Azure Cosmos DB](functions-add-output-binding-cosmos-db-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process) > [Connect to Azure Queue Storage](functions-add-output-binding-storage-queue-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process) > [Connect to Azure SQL](functions-add-output-binding-azure-sql-vs-code.md?pivots=programming-language-csharp&tabs=isolated-process) -[Azure Functions Core Tools]: functions-run-local.md -[Azure Functions extension for Visual Studio Code]: https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions |
azure-functions | Create First Function Vs Code Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-java.md | Title: Create a Java function using Visual Studio Code - Azure Functions description: Learn how to create a Java function, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/22/2022 Last updated : 08/03/2023 adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B Before you get started, make sure you have the following requirements in place: [!INCLUDE [functions-requirements-visual-studio-code-java](../../includes/functions-requirements-visual-studio-code-java.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in Java. Later in this article, you'll publish your function code to Azure. |
azure-functions | Create First Function Vs Code Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-node.md | Title: Create a JavaScript function using Visual Studio Code - Azure Functions description: Learn how to create a JavaScript function, then publish the local Node.js project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 02/06/2023 Last updated : 08/03/2023 adobe-target: true adobe-target-activity: DocsExpΓÇô386541ΓÇôA/BΓÇôEnhanced-Readability-QuickstartsΓÇô2.19.2021 adobe-target-experience: Experience B Before you get started, make sure you have the following requirements in place: [!INCLUDE [functions-requirements-visual-studio-code-node-v4](../../includes/functions-requirements-visual-studio-code-node-v4.md)] ::: zone-end + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in JavaScript. Later in this article, you publish your function code to Azure. |
azure-functions | Create First Function Vs Code Other | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-other.md | Title: Create a function in Go or Rust using Visual Studio Code - Azure Functions description: Learn how to create a Go function as an Azure Functions custom handler, then publish the local project to serverless hosting in Azure Functions using the Azure Functions extension in Visual Studio Code. Previously updated : 06/22/2022 Last updated : 08/03/2023 ms.devlang: golang, rust Before you get started, make sure you have the following requirements in place: + The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 3.x. Use the `func --version` command to check that it is correctly installed. - + [Go](https://go.dev/doc/install), latest version recommended. Use the `go version` command to check your version. # [Rust](#tab/rust) Before you get started, make sure you have the following requirements in place: + The [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. -+ The [Azure Functions Core Tools](./functions-run-local.md#v2) version 3.x. Use the `func --version` command to check that it is correctly installed. - + Rust toolchain using [rustup](https://www.rust-lang.org/tools/install). Use the `rustc --version` command to check your version. + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions custom handlers project. Later in this article, you'll publish your function code to Azure. |
azure-functions | Create First Function Vs Code Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-powershell.md | Before you get started, make sure you have the following requirements in place: [!INCLUDE [functions-requirements-visual-studio-code-powershell](../../includes/functions-requirements-visual-studio-code-powershell.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in PowerShell. Later in this article, you'll publish your function code to Azure. |
azure-functions | Create First Function Vs Code Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md | Before you begin, make sure that you have the following requirements in place: + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) version 4.x. -+ The [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools), version 4.0.4785 or a later version. + Python versions that are [supported by Azure Functions](supported-languages.md#languages-by-runtime-version). For more information, see [How to install Python](https://wiki.python.org/moin/BeginnersGuide/Download). + [Visual Studio Code](https://code.visualstudio.com/) on one of the [supported platforms](https://code.visualstudio.com/docs/supporting/requirements#_platforms). Before you begin, make sure that you have the following requirements in place: [!INCLUDE [functions-x86-emulation-on-arm64-note](../../includes/functions-x86-emulation-on-arm64-note.md)] + ## <a name="create-an-azure-functions-project"></a>Create your local project In this section, you use Visual Studio Code to create a local Azure Functions project in Python. Later in this article, you'll publish your function code to Azure. |
azure-functions | Dotnet Isolated In Process Differences | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-in-process-differences.md | Use the following table to compare feature and functional differences between th <sup>3</sup> C# Script functions also run in-process and use the same libraries as in-process class library functions. For more information, see the [Azure Functions C# script (.csx) developer reference](functions-reference-csharp.md). -<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, support from some extensions is currently in preview, and Service Bus triggers do not yet support message settlement scenarios. +<sup>4</sup> Service SDK types include types from the [Azure SDK for .NET](/dotnet/azure/sdk/azure-sdk-for-dotnet) such as [BlobClient](/dotnet/api/azure.storage.blobs.blobclient). For the isolated process model, Service Bus triggers do not yet support message settlement scenarios. <sup>5</sup> ASP.NET Core types are not supported for .NET Framework. |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | For some service-specific binding types, binding data can be provided using type | Dependency | Version requirement | |-|-|-|[Microsoft.Azure.Functions.Worker]| For **Generally Available** extensions in the table below: 1.18.0 or later<br/>For extensions that have **preview support**: 1.15.0-preview1 | -|[Microsoft.Azure.Functions.Worker.Sdk]|For **Generally Available** extensions in the table below: 1.13.0 or later<br/>For extensions that have **preview support**: 1.11.0-preview1 | +|[Microsoft.Azure.Functions.Worker]| 1.18.0 or later | +|[Microsoft.Azure.Functions.Worker.Sdk]| 1.13.0 or later | When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`. Each trigger and binding extension also has its own minimum version requirement, | [Azure Service Bus][servicebus-sdk-types] | **Generally Available**<sup>2</sup> | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | | [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>3</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | -| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Preview support** | _SDK types not recommended.<sup>1</sup>_ | +| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ | | [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ | [blob-sdk-types]: 
./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types You can configure your isolated process application to emit logs directly [Appli ```dotnetcli dotnet add package Microsoft.ApplicationInsights.WorkerService-dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights --prerelease +dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights ``` You then need to call to `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file: |
azure-functions | Durable Functions Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md | Make sure to choose your Durable Functions development language at the top of th > [!IMPORTANT] > This article supports both Python v1 and Python v2 programming models for Durable Functions. -> The Python v2 programming model is currently in preview. ## Python v2 programming model Durable Functions provides preview support of the new [Python v2 programming mod Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) isn't currently supported for the v2 model with Durable Functions. You'll instead need to manage your extensions manually as follows: -1. Remove the `extensionBundle` section of your `host.json` as described in [this Functions article](../functions-run-local.md#install-extensions). +1. Remove the `extensionBundle` section of your `host.json` file. -1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview. +1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview. For more information, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). ::: zone-end |
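Step 1 above asks you to remove the `extensionBundle` section from *host.json* before installing the Durable Functions extension manually. A minimal sketch of doing that edit programmatically (the sample *host.json* content, including the bundle version range, is illustrative):

```python
import json
import pathlib
import tempfile

# Illustrative host.json content with an extensionBundle section.
HOST_JSON = """{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[3.*, 4.0.0)"
  }
}"""

# Write a sample host.json, then strip the extensionBundle section in place.
with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "host.json"
    path.write_text(HOST_JSON)
    doc = json.loads(path.read_text())
    doc.pop("extensionBundle", None)  # safe even if the section is absent
    path.write_text(json.dumps(doc, indent=2))
    result = json.loads(path.read_text())

print(result)
```

After this edit, the `func extensions install` command from step 2 manages the extension directly instead of the bundle.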
azure-functions | Quickstart Mssql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-mssql.md | -Durable Functions supports several [storage providers](durable-functions-storage-providers.md), also known as "backends", for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this article, we walk through how to configure a Durable Functions app to utilize the [MSSQL storage provider](durable-functions-storage-providers.md#mssql). +Durable Functions supports several [storage providers](durable-functions-storage-providers.md), also known as _backends_, for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this article, we walk through how to configure a Durable Functions app to utilize the [MSSQL storage provider](durable-functions-storage-providers.md#mssql). > [!NOTE] > The MSSQL backend was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub state so that users get the benefits of modern, enterprise-grade DBMS infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation. If this isn't the case, we suggest you start with one of the following articles, > [!NOTE] > If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. -You'll need to install the latest version of the MSSQL storage provider Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project. 
+You need to install the latest version of the MSSQL storage provider Extension on NuGet, which for .NET means adding a reference to it in your `.csproj` file and building the project. You can also use the [`dotnet add package`](/dotnet/core/tools/dotnet-add-package) command to add extension packages. The Extension package to install depends on the .NET worker you're using: - For the _in-process_ .NET worker, install [`Microsoft.DurableTask.SqlServer.AzureFunctions`](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions). You can install the Extension using the following [Azure Functions Core Tools CL func extensions install --package <package name depending on your worker model> --version <latest version> ``` -For more information on installing Azure Functions Extensions via the Core Tools CLI, see [this guide](../functions-run-local.md#install-extensions). +For more information on installing Azure Functions Extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). ## Set up your Database For more information on installing Azure Functions Extensions via the Core Tools As the MSSQL backend is designed for portability, you have several options to set up your backing database. For example, you can set up an on-premises SQL Server instance, use a fully managed [Azure SQL DB](/azure/azure-sql/database/sql-database-paas-overview), or use any other SQL Server-compatible hosting option. -You can also do local, offline development with [SQL Server Express](https://www.microsoft.com/sql-server/sql-server-downloads) on your local Windows machine or use [SQL Server Docker image](https://hub.docker.com/_/microsoft-mssql-server) running in a Docker container. For ease of setup, we will focus on the latter. 
+You can also do local, offline development with [SQL Server Express](https://www.microsoft.com/sql-server/sql-server-downloads) on your local Windows machine or use [SQL Server Docker image](https://hub.docker.com/_/microsoft-mssql-server) running in a Docker container. For ease of setup, this article focuses on the latter. ### Set up your local Docker-based SQL Server -To run these steps, you will need a [Docker](https://www.docker.com/products/docker-desktop/) installation on your local machine. Below are PowerShell commands that you can use to set up a local SQL Server database on Docker. Note that PowerShell can be installed on Windows, macOS, or Linux using the installation instructions [here](/powershell/scripting/install/installing-powershell). +To run these steps, you need a [Docker](https://www.docker.com/products/docker-desktop/) installation on your local machine. Below are PowerShell commands that you can use to set up a local SQL Server database on Docker. Note that PowerShell can be installed on Windows, macOS, or Linux using the installation instructions [here](/powershell/scripting/install/installing-powershell). ```powershell # primary parameters docker run --name mssql-server -e 'ACCEPT_EULA=Y' -e "SA_PASSWORD=$pw" -e "MSSQL docker exec -d mssql-server /opt/mssql-tools/bin/sqlcmd -S . -U sa -P "$pw" -Q "CREATE DATABASE [$dbname] COLLATE $collation" ``` -After running these commands, you should have a local SQL Server running on Docker and listening on port `1443`. If port `1443` conflicts with another service, you can re-run these commands after changing the variable `$port` to a different value. +After running these commands, you should have a local SQL Server running on Docker and listening on port `1433`. If port `1433` conflicts with another service, you can rerun these commands after changing the variable `$port` to a different value. 
> [!NOTE] > To stop and delete a running container, you may use `docker stop <containerName>` and `docker rm <containerName>` respectively. You may use these commands to re-create your container, and to stop it after you're done with this quickstart. For more assistance, try `docker --help`. To validate your database installation, you can query for your new SQL database docker exec -it mssql-server /opt/mssql-tools/bin/sqlcmd -S . -U sa -P "$pw" -Q "SELECT name FROM sys.databases" ``` -If the database setup completed successfully, you should see the name of your created database (for example, "DurableDB") in the command-line output. +If the database setup completed successfully, you should see the name of your created database (for example, `DurableDB`) in the command-line output. ```bash name DurableDB ### Add your SQL connection string to local.settings.json -The MSSQL backend needs a connection string to your database. How to obtain a connection string largely depends on your specific MSSQL Server provider. Please review the documentation of your specific provider for information on how to obtain a connection string. +The MSSQL backend needs a connection string to your database. How to obtain a connection string largely depends on your specific MSSQL Server provider. Review the documentation of your specific provider for information on how to obtain a connection string. -If you used Docker commands above without changing any parameters, your connection string should be: +Using the previous Docker commands, without changing any parameters, your connection string should be: ``` Server=localhost,1433;Database=DurableDB;User Id=sa;Password=yourStrong(!)Password; Below is an example `local.settings.json` assigning the default Docker-based SQL ### Update host.json -Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. We'll also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. 
We'll set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one doesn't already exist, with collation `Latin1_General_100_BIN2_UTF8`. +Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. You must also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. Set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one doesn't already exist, with collation `Latin1_General_100_BIN2_UTF8`. ```json { Edit the storage provider section of the `host.json` file so it sets the `type` } ``` -The snippet above is a fairly *minimal* `host.json` example. Later, you may want to consider [additional parameters](https://microsoft.github.io/durabletask-mssql/#/quickstart?id=hostjson-configuration). +The snippet above is a fairly *minimal* `host.json` example. Later, you may want to consider [other parameters](https://microsoft.github.io/durabletask-mssql/#/quickstart?id=hostjson-configuration). ### Test locally InstanceID RuntimeStatus CreatedTime ## Run your app on Azure -To run your app in Azure, you will need a publicly accessible SQL Server instance. You can obtain one by creating an Azure SQL database. +To run your app in Azure, you'll need a publicly accessible SQL Server instance. You can obtain one by creating an Azure SQL database. ### Create an Azure SQL database You can follow [these](/azure/azure-sql/database/single-database-create-quicksta > [!NOTE] > Microsoft offers a [12-month free Azure subscription account](https://azure.microsoft.com/free/) if you're exploring Azure for the first time. -You may obtain your Azure SQL database's connection string by navigating to the database's blade in the Azure portal. Then, under Settings, select "Connection strings" and obtain the "ADO.NET" connection string. Make sure to provide your password in the template provided. 
+You may obtain your Azure SQL database's connection string by navigating to the database's blade in the Azure portal. Then, under **Settings**, select **Connection strings** and obtain the **ADO.NET** connection string. Make sure to provide your password in the template provided. Below is an example of the portal view for obtaining the Azure SQL connection string. ![An Azure connection string as found in the portal](./media/quickstart-mssql/mssql-azure-db-connection-string.png) -In the Azure portal, the connection string will have the database's password removed: it is replaced with `{your_password}`. Replace that segment with the password you used to create the database earlier in this section. If you forgot your password, you may reset it by navigating to the database's blade in the Azure portal, selecting your *Server name* in the "Essentials" view, and clicking "Reset password" in the resulting page. Below are some guiding images. +In the Azure portal, the connection string has the database's password removed: it's replaced with `{your_password}`. Replace that segment with the password you used to create the database earlier in this section. If you forgot your password, you may reset it by navigating to the database's blade in the Azure portal, selecting your *Server name* in the **Essentials** view, and then selecting **Reset password** in the resulting page. Below are some guiding images. ![The Azure SQL database view, with the Server name option highlighted](./media/quickstart-mssql/mssql-azure-reset-pass-1.png) In the Azure portal, the connection string will have the database's password rem ### Add connection string as an application setting -You need to add your database's connection string as an application setting. To do this through the Azure portal, first go to your Azure Functions App view. Then go under "Configuration", select "New application setting", and there you can assign "SQLDB_Connection" to map to a publicly accessible connection string. 
Below are some guiding images. +You need to add your database's connection string as an application setting. To do this through the Azure portal, first go to your Azure Functions App view. Then under **Configuration**, select **New application setting**, where you assign **SQLDB_Connection** to map to a publicly accessible connection string. Below are some guiding images. ![On the DB blade, go to Configuration, then click new application setting.](./media/quickstart-mssql/mssql-azure-environment-variable-1.png) ![Enter your connection string setting name, and its value.](./media/quickstart-mssql/mssql-azure-environment-variable-2.png) |
azure-functions | Quickstart Netherite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md | You can install the Extension using the following [Azure Functions Core Tools CL func extensions install --package <package name depending on your worker model> --version <latest version> ``` -For more information on installing Azure Functions Extensions via the Core Tools CLI, see [this guide](../functions-run-local.md#install-extensions). +For more information on installing Azure Functions Extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). ## Configure local.settings.json for local development |
azure-functions | Quickstart Python Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md | Azure Functions Core Tools lets you run an Azure Functions project on your local :::image type="content" source="media/quickstart-python-vscode/functions-f5.png" alt-text="Screenshot of Azure local output."::: 5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration.++5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), to send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`hello_orchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/hello_orchestrator`. ++ The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. -6. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. 
The request queries the orchestration instance for its status. You should eventually get a response that shows the instance has completed and includes the outputs or results of the durable function. It looks like: |
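Instead of pasting `statusQueryGetUri` into a browser, the start response can be consumed from a script. The Python sketch below parses an illustrative payload — the instance ID and the field names other than `statusQueryGetUri` are assumptions for demonstration, shaped like the URLs the quickstart mentions:

```python
import json

# Illustrative start-response payload; only statusQueryGetUri is referenced
# in the quickstart, the other fields here are assumed for the example.
start_response = json.loads("""
{
  "id": "abc123",
  "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123",
  "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/abc123/terminate"
}
""")

# Extract the status URL; a script could then issue a GET request against it
# (with urllib or similar) to poll until the runtime status is "Completed".
status_url = start_response["statusQueryGetUri"]
print(status_url)
```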
azure-functions | Functions Add Output Binding Cosmos Db Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md | app = func.FunctionApp() @app.function_name(name="HttpTrigger1") @app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS) @app.queue_output(arg_name="msg", queue_name="outqueue", connection="AzureWebJobsStorage")-@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database", collection_name="my-container" connection_string_setting="CosmosDbConnectionString") +@app.cosmos_db_output(arg_name="outputDocument", database_name="my-database", collection_name="my-container", connection_string_setting="CosmosDbConnectionString") def test_function(req: func.HttpRequest, msg: func.Out[func.QueueMessage], outputDocument: func.Out[func.Document]) -> func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') |
azure-functions | Functions Add Output Binding Storage Queue Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-storage-queue-vs-code.md | Because you're using a Queue storage output binding, you must have the Storage b Your project has been configured to use [extension bundles](functions-bindings-register.md#extension-bundles), which automatically installs a predefined set of extension packages. -Extension bundles usage is enabled in the *host.json* file at the root of the project, which appears as follows: +Extension bundles are already enabled in the *host.json* file at the root of the project, which should look like the following example: Now, you can add the storage output binding to your project. |
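For reference, extension bundles are declared in *host.json* roughly as follows — the bundle `id` is the standard one, but the version range shown here is only an example and may differ from your project:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```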
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | Controls the timeout, in seconds, when connected to streaming logs. The default |-|-| |SCM_LOGSTREAM_TIMEOUT|`1800`| -The above sample value of `1800` sets a timeout of 30 minutes. To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs). +The above sample value of `1800` sets a timeout of 30 minutes. For more information, see [Enable streaming execution logs in Azure Functions](streaming-logs.md). ## WEBSITE\_CONTENTAZUREFILECONNECTIONSTRING |
azure-functions | Functions Bindings Cache Trigger Redislist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redislist.md | The `RedisListTrigger` pops new elements from a list and surfaces those entries | Lists | Yes | Yes | Yes | > [!IMPORTANT]-> Redis triggers are not currently supported on Azure Functions Consumption plan. +> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). > ## Example The following sample polls the key `listTest` at a localhost Redis instance at ` ::: zone-end ::: zone pivot="programming-language-javascript" -### [v3](#tab/javasscript-v1) +### [v3](#tab/node-v3) -Each sample uses the same `index.js` file, with binding data in the `function.json` file. +This sample uses the same `index.js` file, with binding data in the `function.json` file. Here's the `index.js` file: From `function.json`, here's the binding data: } ``` -### [v4](#tab/javascript-v2) +### [v4](#tab/node-v4) The JavaScript v4 programming model example isn't available in preview. The JavaScript v4 programming model example isn't available in preview. ::: zone-end ::: zone pivot="programming-language-powershell" -Each sample uses the same `run.ps1` file, with binding data in the `function.json` file. +This sample uses the same `run.ps1` file, with binding data in the `function.json` file. Here's the `run.ps1` file: From `function.json`, here's the binding data: ::: zone-end ::: zone pivot="programming-language-python" -Each sample uses the same `__init__.py` file, with binding data in the `function.json` file. +This sample uses the same `__init__.py` file, with binding data in the `function.json` file. ### [v1](#tab/python-v1) |
azure-functions | Functions Bindings Cache Trigger Redispubsub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redispubsub.md | This sample listens to any `keyevent` notifications for the delete command [`DEL ::: zone-end ::: zone pivot="programming-language-javascript" -### [v3](#tab/javasscript-v1) +### [v3](#tab/node-v3) -Each sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs. +This sample uses the same `index.js` file, with binding data in the `function.json` file determining on which channel the trigger occurs. Here's the `index.js` file: Here's binding data to listen to `keyevent` notifications for the delete command "scriptFile": "index.js" } ```-### [v4](#tab/javascript-v2) +### [v4](#tab/node-v4) The JavaScript v4 programming model example isn't available in preview. The JavaScript v4 programming model example isn't available in preview. ::: zone-end ::: zone pivot="programming-language-powershell" -Each sample uses the same `run.ps1` file, with binding data in the `function.json` file determining on which channel the trigger occurs. +This sample uses the same `run.ps1` file, with binding data in the `function.json` file determining on which channel the trigger occurs. Here's the `run.ps1` file: Here's binding data to listen to `keyevent` notifications for the delete command The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model). -Each sample uses the same `__init__.py` file, with binding data in the `function.json` file determining on which channel the trigger occurs. 
+This sample uses the same `__init__.py` file, with binding data in the `function.json` file determining on which channel the trigger occurs. Here's the `__init__.py` file: |
azure-functions | Functions Bindings Cache Trigger Redisstream | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache-trigger-redisstream.md | The `RedisStreamTrigger` reads new entries from a stream and surfaces those elem | Streams | Yes | Yes | Yes | > [!IMPORTANT]-> Redis triggers are not currently supported on Azure Functions Consumption plan. +> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). > ## Example The isolated process examples aren't available in preview. ::: zone-end ::: zone pivot="programming-language-javascript" -### [v3](#tab/javasscript-v1) +### [v3](#tab/node-v3) -Each sample uses the same `index.js` file, with binding data in the `function.json` file. +This sample uses the same `index.js` file, with binding data in the `function.json` file. Here's the `index.js` file: From `function.json`, here's the binding data: } ``` -### [v4](#tab/javascript-v2) +### [v4](#tab/node-v4) The JavaScript v4 programming model example isn't available in preview. The JavaScript v4 programming model example isn't available in preview. ::: zone-end ::: zone pivot="programming-language-powershell" -Each sample uses the same `run.ps1` file, with binding data in the `function.json` file. +This sample uses the same `run.ps1` file, with binding data in the `function.json` file. Here's the `run.ps1` file: From `function.json`, here's the binding data: The Python v1 programming model requires you to define bindings in a separate _function.json_ file in the function folder. For more information, see the [Python developer guide](functions-reference-python.md?pivots=python-mode-configuration#programming-model). -Each sample uses the same `__init__.py` file, with binding data in the `function.json` file. +This sample uses the same `__init__.py` file, with binding data in the `function.json` file. Here's the `__init__.py` file: |
azure-functions | Functions Bindings Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cache.md | You can integrate Azure Cache for Redis and Azure Functions to build functions t |Streams | Yes | Yes | Yes | > [!IMPORTANT]-> Redis triggers are not currently supported on consumption functions. +> Redis triggers aren't currently supported for functions running in the [Consumption plan](consumption-plan.md). > ::: zone pivot="programming-language-csharp" |
azure-functions | Functions Bindings Cosmosdb V2 Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-input.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example This section contains examples that require version 3.x of Azure Cosmos DB exten The examples refer to a simple `ToDoItem` type: <a id="queue-trigger-look-up-id-from-json-isolated"></a> The examples refer to a simple `ToDoItem` type: The following example shows a function that retrieves a single document. The function is triggered by a JSON message in the storage queue. The queue trigger parses the JSON into an object of type `ToDoItemLookup`, which contains the ID and partition key value to retrieve. That ID and partition key value are used to return a `ToDoItem` document from the specified database and collection. |
azure-functions | Functions Bindings Cosmosdb V2 Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-output.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Cosmosdb V2 Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2-trigger.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Event Grid Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid-trigger.md | See the [Example section](#example) for complete examples. ## Usage +The Event Grid trigger uses a webhook HTTP request, which can be configured using the same [*host.json* settings as the HTTP Trigger](functions-bindings-http-webhook.md#hostjson-settings). + ::: zone pivot="programming-language-csharp" The parameter type supported by the Event Grid trigger depends on the Functions runtime version, the extension package version, and the C# modality used. |
azure-functions | Functions Bindings Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-grid.md | Functions version 1.x doesn't support isolated worker process. To use the isolat :::zone-end +## host.json settings ++The Event Grid trigger uses a webhook HTTP request, which can be configured using the same [*host.json* settings as the HTTP Trigger](functions-bindings-http-webhook.md#hostjson-settings). + ## Next steps * If you have questions, submit an issue to the team [here](https://github.com/Azure/azure-sdk-for-net/issues) |
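Because the Event Grid trigger rides on a webhook HTTP request, the relevant *host.json* knobs live under the `extensions.http` section, the same one the HTTP trigger uses. A minimal sketch with illustrative values (your limits may differ):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "routePrefix": "api",
      "maxOutstandingRequests": 200,
      "maxConcurrentRequests": 100
    }
  }
}
```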
azure-functions | Functions Bindings Event Hubs Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end |
azure-functions | Functions Bindings Example | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-example.md | - Title: Azure Functions trigger and binding example -description: Learn to configure Azure Function bindings --- Previously updated : 02/08/2022---# Azure Functions trigger and binding example --This article demonstrates how to configure a [trigger and bindings](./functions-triggers-bindings.md) in an Azure Function. --Suppose you want to write a new row to Azure Table storage whenever a new message appears in Azure Queue storage. This scenario can be implemented using an Azure Queue storage trigger and an Azure Table storage output binding. --Here's a *function.json* file for this scenario. --```json -{ - "bindings": [ - { - "type": "queueTrigger", - "direction": "in", - "name": "order", - "queueName": "myqueue-items", - "connection": "MY_STORAGE_ACCT_APP_SETTING" - }, - { - "type": "table", - "direction": "out", - "name": "$return", - "tableName": "outTable", - "connection": "MY_TABLE_STORAGE_ACCT_APP_SETTING" - } - ] -} -``` --The first element in the `bindings` array is the Queue storage trigger. The `type` and `direction` properties identify the trigger. The `name` property identifies the function parameter that receives the queue message content. The name of the queue to monitor is in `queueName`, and the connection string is in the app setting identified by `connection`. --The second element in the `bindings` array is the Azure Table Storage output binding. The `type` and `direction` properties identify the binding. The `name` property specifies how the function provides the new table row, in this case by using the function return value. The name of the table is in `tableName`, and the connection string is in the app setting identified by `connection`. 
--To view and edit the contents of *function.json* in the Azure portal, click the **Advanced editor** option on the **Integrate** tab of your function. --> [!NOTE] -> The value of `connection` is the name of an app setting that contains the connection string, not the connection string itself. Bindings use connection strings stored in app settings to enforce the best practice that *function.json* does not contain service secrets. --# [C# script](#tab/csharp) --Here's C# script code that works with this trigger and binding. Notice that the name of the parameter that provides the queue message content is `order`; this name is required because the `name` property value in *function.json* is `order` --```cs -#r "Newtonsoft.Json" --using Microsoft.Extensions.Logging; -using Newtonsoft.Json.Linq; --// From an incoming queue message that is a JSON object, add fields and write to Table storage -// The method return value creates a new row in Table Storage -public static Person Run(JObject order, ILogger log) -{ - return new Person() { - PartitionKey = "Orders", - RowKey = Guid.NewGuid().ToString(), - Name = order["Name"].ToString(), - MobileNumber = order["MobileNumber"].ToString() }; -} - -public class Person -{ - public string PartitionKey { get; set; } - public string RowKey { get; set; } - public string Name { get; set; } - public string MobileNumber { get; set; } -} -``` --# [C# class library](#tab/csharp-class-library) --In a class library, the same trigger and binding information — queue and table names, storage accounts, function parameters for input and output — is provided by attributes instead of a function.json file. 
Here's an example: --```csharp -public static class QueueTriggerTableOutput -{ - [FunctionName("QueueTriggerTableOutput")] - [return: Table("outTable", Connection = "MY_TABLE_STORAGE_ACCT_APP_SETTING")] - public static Person Run( - [QueueTrigger("myqueue-items", Connection = "MY_STORAGE_ACCT_APP_SETTING")]JObject order, - ILogger log) - { - return new Person() { - PartitionKey = "Orders", - RowKey = Guid.NewGuid().ToString(), - Name = order["Name"].ToString(), - MobileNumber = order["MobileNumber"].ToString() }; - } -} --public class Person -{ - public string PartitionKey { get; set; } - public string RowKey { get; set; } - public string Name { get; set; } - public string MobileNumber { get; set; } -} -``` --# [JavaScript](#tab/javascript) ----The same *function.json* file can be used with a JavaScript function: --```javascript -// From an incoming queue message that is a JSON object, add fields and write to Table Storage -module.exports = async function (context, order) { - order.PartitionKey = "Orders"; - order.RowKey = generateRandomId(); -- context.bindings.order = order; -}; --function generateRandomId() { - return Math.random().toString(36).substring(2, 15) + - Math.random().toString(36).substring(2, 15); -} -``` ----You now have a working function that is triggered by an Azure Queue and outputs data to Azure Table storage. --## Next steps --> [!div class="nextstepaction"] -> [Azure Functions binding expression patterns](./functions-bindings-expressions-patterns.md) |
azure-functions | Functions Bindings Http Webhook Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end |
azure-functions | Functions Bindings Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-register.md | The following table lists the currently available version ranges of the default For compiled C# class library projects ([in-process](functions-dotnet-class-library.md) and [isolated worker process](dotnet-isolated-process-guide.md)), you install the NuGet packages for the extensions that you need as you normally would. For examples see either the [Visual Studio Code developer guide](functions-develop-vs-code.md?tabs=csharp#install-binding-extensions) or the [Visual Studio developer guide](functions-develop-vs.md#add-bindings). -For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. To learn more, see [Install extensions](functions-run-local.md#install-extensions). +For non-.NET languages and C# script, when you can't use extension bundles you need to manually install required binding extensions in your local project. The easiest way is to use Azure Functions Core Tools. For more information, see [func extensions install](functions-core-tools-reference.md#func-extensions-install). For portal-only development, you need to manually create an extensions.csproj file in the root of your function app. To learn more, see [Manually install extensions](functions-how-to-use-azure-function-app-settings.md#manually-install-extensions). |
azure-functions | Functions Bindings Service Bus Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-output.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example public static string ServiceBusOutput([HttpTrigger] dynamic input, ILogger log) The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue: + |
azure-functions | Functions Bindings Service Bus Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus-trigger.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example public static void Run( The following example shows a [C# function](dotnet-isolated-process-guide.md) that receives a Service Bus queue message, logs the message, and sends a message to a different Service Bus queue: + |
azure-functions | Functions Bindings Signalr Service Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-signalr-service-trigger.md | The `Function_App_URL` can be found on Function App's Overview page and The `API If you want to use more than one Function App together with one SignalR Service, upstream can also support complex routing rules. Find more details at [Upstream settings](../azure-signalr/concept-upstream.md). -### Step by step sample +### Step-by-step sample You can follow the sample in GitHub to deploy a chat room on Function App with SignalR Service trigger binding and upstream feature: [Bidirectional chat room sample](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/BidirectionChat) |
azure-functions | Functions Bindings Storage Blob Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-input.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Storage Blob Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-output.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Storage Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob-trigger.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Storage Queue Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-output.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Storage Queue Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-queue-trigger.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Bindings Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md | Functions version 1.x doesn't support isolated worker process. To use the isolat [ITableEntity]: /dotnet/api/azure.data.tables.itableentity [TableClient]: /dotnet/api/azure.data.tables.tableclient-[TableEntity]: /dotnet/api/azure.data.tables.tableentity [CloudTable]: /dotnet/api/microsoft.azure.cosmos.table.cloudtable Functions version 1.x doesn't support isolated worker process. To use the isolat [Microsoft.Azure.Cosmos.Table]: /dotnet/api/microsoft.azure.cosmos.table [Microsoft.WindowsAzure.Storage.Table]: /dotnet/api/microsoft.windowsazure.storage.table -[NuGet package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage [storage-4.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/4.0.5-[storage-5.x]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Storage/5.0.0 [table-api-package]: https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Tables/ [extension bundle]: ./functions-bindings-register.md#extension-bundles |
azure-functions | Functions Bindings Timer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md | The Python v1 programming model requires you to define bindings in a separate *f This article supports both programming models. -> [!IMPORTANT] -> The Python v2 programming model is currently in preview. ::: zone-end ## Example |
azure-functions | Functions Container Apps Hosting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md | Azure Functions currently supports the following methods of deployment to Azure + Azure Pipeline tasks + ARM templates + [Bicep templates](https://github.com/Azure/azure-functions-on-container-apps/tree/main/samples/Biceptemplates)-+ Azure Functions core tools ++ [Azure Functions Core Tools](functions-run-local.md#deploy-containers) To learn how to create and deploy a function app container to Container Apps using the Azure CLI, see [Create your first containerized functions on Azure Container Apps](functions-deploy-container-apps.md). |
azure-functions | Functions Core Tools Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md | Title: Azure Functions Core Tools reference description: Reference documentation that supports the Azure Functions Core Tools (func.exe). Previously updated : 07/30/2023 Last updated : 08/20/2023 # Azure Functions Core Tools reference When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with | **`--force`** | Initialize the project even when there are existing files in the project. This setting overwrites existing files with the same name. Other files in the project folder aren't affected. | | **`--language`** | Initializes a language-specific project. Currently supported when `--worker-runtime` set to `node`. Options are `typescript` and `javascript`. You can also use `--worker-runtime javascript` or `--worker-runtime typescript`. | | **`--managed-dependencies`** | Installs managed dependencies. Currently, only the PowerShell worker runtime supports this functionality. |+| **`--model`** | Sets the desired programming model for a target language when more than one model is available. Supported options are `V1` and `V2` for Python and `V3` and `V4` for Node.js. For more information, see the [Python developer guide](functions-reference-python.md#programming-model) and the [Node.js developer guide](functions-reference-node.md), respectively. | | **`--source-control`** | Controls whether a git repository is created. By default, a repository isn't created. When `true`, a repository is created. | | **`--worker-runtime`** | Sets the language runtime for the project. Supported values are: `csharp`, `dotnet`, `dotnet-isolated`, `javascript`,`node` (JavaScript), `powershell`, `python`, and `typescript`. For Java, use [Maven](functions-reference-java.md#create-java-functions). To generate a language-agnostic project with just the project files, use `custom`. 
When not set, you're prompted to choose your runtime during initialization. |-| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net6.0` (default), `net7.0`, and `net48`. | +| **`--target-framework`** | Sets the target framework for the function app project. Valid only with `--worker-runtime dotnet-isolated`. Supported values are: `net6.0` (default), `net7.0`, and `net48` (.NET Framework 4.8). | | > [!NOTE] Creates a new function in the current project based on a template. func new ``` +When you run `func new` without the `--template` option, you're prompted to choose a template. In version 1.x, you're also required to choose the language. + The `func new` action supports the following options: | Option | Description | To learn more, see [Create a function](functions-run-local.md#create-func). *Version 1.x only.* -Enables you to invoke a function directly, which is similar to running a function using the **Test** tab in the Azure portal. This action is only supported in version 1.x. For later versions, use `func start` and [call the function endpoint directly](functions-run-local.md#passing-test-data-to-a-function). +Enables you to invoke a function directly, which is similar to running a function using the **Test** tab in the Azure portal. This action is only supported in version 1.x. For later versions, use `func start` and [call the function endpoint directly](functions-run-local.md#run-a-local-function). ```command func run func start | **`--timeout`** | The timeout for the Functions host to start, in seconds. Default: 20 seconds.| | **`--useHttps`** | Bind to `https://localhost:{port}` rather than to `http://localhost:{port}`. By default, this option creates a trusted certificate on your computer.| -With the project running, you can [verify individual function endpoints](functions-run-local.md#passing-test-data-to-a-function). 
+With the project running, you can [verify individual function endpoints](functions-run-local.md#run-a-local-function). # [v1.x](#tab/v1) In version 1.x, you can also use the [`func run`](#func-run) command to run a sp Gets settings from a specific function app. ```command-func azure functionapp fetch-app-settings <APP_NAME> +func azure functionapp fetch-app-settings <APP_NAME> ``` -For an example, see [Get your storage connection strings](functions-run-local.md#get-your-storage-connection-strings). +For more information, see [Download application settings](functions-run-local.md#download-application-settings). -Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](#func-settings-encrypt). +Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](functions-run-local.md#encrypt-the-local-settings-file). ## func azure functionapp list-functions The `deploy` action supports the following options: | | -- | | **`--browser`** | Open Azure Application Insights Live Stream for the function app in the default browser. | -To learn more, see [Enable streaming logs](functions-run-local.md#enable-streaming-logs). +For more information, see [Enable streaming execution logs in Azure Functions](streaming-logs.md). ## func azure functionapp publish The following publish options apply, based on version: | Option | Description | | | -- |-| **`--access-token`** | Let's you use a specific access token when performing authenticated azure actions. | +| **`--access-token`** | Lets you use a specific access token when performing authenticated azure actions. | | **`--access-token-stdin `** | Reads a specific access token from a standard input. 
Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). | | **`--additional-packages`** | List of packages to install when building native dependencies. For example: `python3-dev libevent-dev`. | | **`--build`**, **`-b`** | Performs build action when deploying to a Linux function app. Accepts: `remote` and `local`. | The following publish options apply, based on version: | **`--no-build`** | Project isn't built during publishing. For Python, `pip install` isn't performed. | | **`--nozip`** | Turns the default `Run-From-Package` mode off. | | **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|-| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). | +| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](#func-azure-storage-fetch-connection-string). | | **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. | | **`--slot`** | Optional name of a specific slot to which to publish. | | **`--subscription`** | Sets the default subscription to use. 
The following publish options apply, based on version: | Option | Description | | | -- | | **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used.|-| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](functions-run-local.md#get-your-storage-connection-strings). | +| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using the Microsoft Azure Storage Emulator, first change the app setting to an [actual storage connection](#func-azure-storage-fetch-connection-string). | Gets the connection string for the specified Azure Storage account. func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME> ``` +For more information, see [Download a storage connection string](functions-run-local.md#download-a-storage-connection-string). + ## func azurecontainerapps deploy Deploys a containerized function app to an Azure Container Apps environment. Both the storage account used by the function app and the environment must already exist. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). The following deployment options apply: | Option | Description | | | -- |-| **`--access-token`** | Let's you use a specific access token when performing authenticated azure actions. | +| **`--access-token`** | Lets you use a specific access token when performing authenticated Azure actions. | | **`--access-token-stdin `** | Reads a specific access token from a standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token).
| | **`--environment`** | The name of an existing Container Apps environment.| | **`--image-build`** | When set to `true`, skips the local Docker build. | To learn more, see the [Durable Functions documentation](./durable/durable-funct ## func extensions install -Installs Functions extensions in a non-C# class library project. +Manually installs Functions extensions in a non-.NET project or in a C# script project. -When possible, you should instead use extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles). --For compiled C# projects (both in-process and isolated worker process), instead use standard NuGet package installation methods, such as `dotnet add package`. +```command +func extensions install --package Microsoft.Azure.WebJobs.Extensions.<EXTENSION> --version <VERSION> +``` The `install` action supports the following options: The `install` action supports the following options: | **`--source`** | NuGet feed source when not using NuGet.org.| | **`--version`** | Extension package version. | -No action is taken when an extension bundle is defined in your host.json file. When you need to manually install extensions, you must first remove the bundle definition. For more information, see [Install extensions](functions-run-local.md#install-extensions). 
+The following example installs version 5.0.1 of the Event Hubs extension in the local project: ++```command +func extensions install --package Microsoft.Azure.WebJobs.Extensions.EventHubs --version 5.0.1 +``` ++The following considerations apply when using `func extensions install`: +++ For compiled C# projects (both in-process and isolated worker process), instead use standard NuGet package installation methods, such as `dotnet add package`.+++ To manually install extensions using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed.+++ When possible, you should instead use [extension bundles](functions-bindings-register.md#extension-bundles). The following are some reasons why you might need to install extensions manually:++ + You need to access a specific version of an extension not available in a bundle. + + You need to access a custom extension not available in a bundle. + + You need to access a specific combination of extensions not available in a single bundle. +++ Before you can manually install extensions, you must first remove the [`extensionBundle`](functions-host-json.md#extensionbundle) object from the host.json file that defines the bundle. No action is taken when an extension bundle is already set in your [host.json file](functions-host-json.md#extensionbundle).+++ The first time you explicitly install an extension, a .NET project file named extensions.csproj is added to the root of your app project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file. ## func extensions sync |
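The diff above notes that the first explicit extension install adds an extensions.csproj file to the project root. As a sketch only, a file of this kind might look like the following; the exact contents that Core Tools generates can differ, and the package name and version here simply match the Event Hubs example in the commit:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- One PackageReference per manually installed binding extension -->
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.EventHubs" Version="5.0.1" />
  </ItemGroup>
</Project>
```

Each subsequent `func extensions install` adds or updates a `PackageReference` entry here, which is why the bundle definition must be removed first: the two mechanisms would otherwise conflict.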
azure-functions | Functions Develop Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md | The way in which you develop functions on your local computer depends on your [l Each of these local development environments lets you create function app projects and use predefined function templates to create new functions. Each uses the Core Tools so that you can test and debug your functions against the real Functions runtime on your own machine just as you would any other app. You can also publish your function app project from any of these environments to Azure. -## Local settings file +## Local project files ++A Functions project directory contains the following files in the project root folder, regardless of language: ++| File name | Description | +| | | +| host.json | To learn more, see the [host.json reference](functions-host-json.md). | +| local.settings.json | Settings used by Core Tools when running locally, including app settings. To learn more, see [local settings file](#local-settings-file). | +| .gitignore | Prevents the local.settings.json file from being accidentally published to a Git repository. To learn more, see [local settings file](#local-settings-file).| +| .vscode\extensions.json | Settings file used when opening the project folder in Visual Studio Code. | ++Other files in the project depend on your language and specific functions. For more information, see the developer guide for your language. ++### Local settings file The local.settings.json file stores app settings and settings used by local development tools. Settings in the local.settings.json file are used only when you're running your project locally. When you publish your project to Azure, be sure to also add any required settings to the app settings for the function app. |
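The local.settings.json file described in the row above can be sketched with a minimal example; the values shown are placeholders (the storage value assumes the local storage emulator, and the worker runtime is just one of the supported options):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python"
  }
}
```

Because this file commonly holds connection strings, the generated .gitignore excludes it, which is why the row pairs the two files together.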
azure-functions | Functions Event Grid Blob Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-grid-blob-trigger.md | description: This tutorial shows how to create a low-latency, event-driven trigg Previously updated : 3/1/2021 Last updated : 8/22/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers #Customer intent: As an Azure Functions developer, I want learn how to create an Event Grid-based trigger on a Blob Storage container so that I can get a more rapid response to changes in the container. When you create a Blob Storage-triggered function using Visual Studio Code, you |Prompt|Action| |--|--| |**Select a language**| Select `C#`. |- |**Select a .NET runtime**| Select `.NET 6.0 LTS`. Event-driven blob triggers aren't yet supported when running in an isolated worker process. | + |**Select a .NET runtime**| Select `.NET 6.0 Isolated LTS` for running in an [isolated worker process](dotnet-isolated-process-guide.md) or `.NET 6.0 LTS` for [in-process](functions-dotnet-class-library.md). | |**Select a template for your project's first function**| Select `Azure Blob Storage trigger`. | |**Provide a function name**| Enter `BlobTriggerEventGrid`. | |**Provide a namespace** | Enter `My.Functions`. 
| To use the Event Grid-based Blob Storage trigger, your function requires at leas ::: zone pivot="programming-language-csharp" To upgrade your project with the required extension version, in the Terminal window, run the following command: [dotnet add package](/dotnet/core/tools/dotnet-add-package) -<!# [In-process](#tab/in-process) --> +# [Isolated process](#tab/isolated-process) ```bash-dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.1 +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs --version 6.1.0 ```-<!# [Isolated process](#tab/isolated-process) +# [In-process](#tab/in-process) ```bash-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version 5.0.0 +dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.1.3 ``` >+ ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java" 1. Open the host.json project file, and inspect the `extensionBundle` element. -1. If `extensionBundle.version` isn't at least `3.3.0 `, replace `extensionBundle` with the following version: +1. 
If `extensionBundle.version` isn't at least `3.3.0 `, replace `extensionBundle` with the latest: ```json "extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle",- "version": "[3.3.0, 4.0.0)" + "version": "[4.0.0, 5.0.0)" } ``` dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version ::: zone pivot="programming-language-csharp" In the BlobTriggerEventGrid.cs file, add `Source = BlobTriggerSource.EventGrid` to the parameters for the Blob trigger attribute, for example:- ++# [Isolated process](#tab/isolated-process) +```csharp +[Function("BlobTriggerCSharp")] +public async Task Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "<NAMED_STORAGE_CONNECTION>")] Stream myBlob, string name, FunctionContext executionContext) +{ + var logger = executionContext.GetLogger("BlobTriggerCSharp"); + logger.LogInformation($"C# Blob trigger function Processed blob\n Name: {name} \n Size: {myBlob.Length} Bytes"); +} +``` +# [In-process](#tab/in-process) ```csharp [FunctionName("BlobTriggerCSharp")] public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTriggerSource.EventGrid, Connection = "<NAMED_STORAGE_CONNECTION>")]Stream myBlob, string name, ILogger log) public static void Run([BlobTrigger("samples-workitems/{name}", Source = BlobTri log.LogInformation($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes"); } ```++ ::: zone-end ::: zone pivot="programming-language-python" After you create the function, in the function.json configuration file, add `"source": "EventGrid"` to the `myBlob` binding, for example: Event Grid validates the endpoint URL when you create an event subscription in t When your function runs locally, the default endpoint used for an event-driven blob storage trigger looks like the following URL: +# [Isolated process](#tab/isolated-process) +```http 
+http://localhost:7071/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid +``` +# [In-process](#tab/in-process) ```http http://localhost:7071/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid ```++ ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java" ```http The endpoint used in the event subscription is made up of three different parts, | | | | Prefix and server name | When your function runs locally, the server name with an `https://` prefix comes from the **Forwarding** URL generated by *ngrok*. In the localhost URL, the *ngrok* URL replaces `http://localhost:7071`. When running in Azure, you'll instead use the published function app server, which is usually in the form `https://<FUNCTION_APP_NAME>.azurewebsites.net`. | | Path | The path portion of the endpoint URL comes from the localhost URL copied earlier, and looks like `/runtime/webhooks/blobs` for a Blob Storage trigger. The path for an Event Grid trigger would be `/runtime/webhooks/EventGrid` | -| Query string | The `functionName=BlobTriggerEventGrid` parameter in the query string sets the name of the function that handles the event. For functions other than C#, the function name is qualified by `Host.Functions.`. If you used a different name for your function, you'll need to change this value. An access key isn't required when running locally. When running in Azure, you'll also need to include a `code=` parameter in the URL, which contains a key that you can get from the portal. | +| Query string | The `functionName` parameter in the query string sets the name of the function that handles the event. For all languages, including .NET isolated worker process, use `functionName=Host.Functions.BlobTriggerEventGrid`; for .NET in-process only, use `functionName=BlobTriggerEventGrid`. If you used a different name for your function, you'll need to change this value.
An access key isn't required when running locally. When running in Azure, you'll also need to include a `code=` parameter in the URL, which contains a key that you can get from the portal. | The following screenshot shows an example of how the final endpoint URL should look when using a Blob Storage trigger named `BlobTriggerEventGrid`: ::: zone pivot="programming-language-csharp" +# [Isolated process](#tab/isolated-process) + ![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection-qualified.png) +# [In-process](#tab/in-process) ![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection.png)++ ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java" ![Endpoint selection](./media/functions-event-grid-blob-trigger/functions-event-grid-local-dev-event-subscription-endpoint-selection-qualified.png) An event subscription, powered by Azure Event Grid, raises events based on chang | **Name** | *myBlobLocalNgrokEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. | | **Event Schema** | **Event Grid Schema** | Use the default schema for events. | | **System Topic Name** | *samples-workitems-blobs* | Name for the topic, which represents the container. The topic is created with the first subscription, and you'll use it for future event subscriptions. |- | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted*| + | **Filter to Event Types** | *Blob Created*| | **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. | | **Endpoint** | Your ngrok-based URL endpoint | Use the ngrok-based URL endpoint that you determined earlier. 
| You'll include this value in the query string of new endpoint URL. Create a new endpoint URL for the Blob Storage trigger based on the following example: ::: zone pivot="programming-language-csharp" +# [Isolated process](#tab/isolated-process) +```http +https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY> +``` +# [In-process](#tab/in-process) ```http https://<FUNCTION_APP_NAME>.azurewebsites.net/runtime/webhooks/blobs?functionName=BlobTriggerEventGrid&code=<BLOB_EXTENSION_KEY> ```++ ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-java" ```http This time, you'll include the filter on the event subscription so that only JPEG | | - | -- | | **Name** | *myBlobAzureEventSub* | Name that identifies the event subscription. You can use the name to quickly find the event subscription. | | **Event Schema** | **Event Grid Schema** | Use the default schema for events. |- | **Filter to Event Types** | *Blob Created*<br/>*Blob Deleted*| + | **Filter to Event Types** | *Blob Created*| | **Endpoint Type** | **Web Hook** | The blob storage trigger uses a web hook endpoint. You would use Azure Functions for an Event Grid trigger. | | **Endpoint** | Your new Azure-based URL endpoint | Use the URL endpoint that you built, which includes the key value. | |
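The endpoint table in this row describes the URL as three parts: server name, the fixed `/runtime/webhooks/blobs` path, and the query string. A small Python sketch of how those parts combine (the `blob_webhook_endpoint` helper is hypothetical, purely to illustrate the assembly):

```python
from urllib.parse import urlencode, urlunsplit

def blob_webhook_endpoint(host: str, function_name: str, key: str = "") -> str:
    # Server name + fixed blob-webhook path + query string, per the table above.
    query = {"functionName": function_name}
    if key:  # the code= access key is only needed when the app runs in Azure
        query["code"] = key
    return urlunsplit(("https", host, "/runtime/webhooks/blobs", urlencode(query), ""))

print(blob_webhook_endpoint("myapp.azurewebsites.net",
                            "Host.Functions.BlobTriggerEventGrid", "abc123"))
# → https://myapp.azurewebsites.net/runtime/webhooks/blobs?functionName=Host.Functions.BlobTriggerEventGrid&code=abc123
```

When running locally behind ngrok, the host would instead be the ngrok forwarding host and no `code=` key is required, matching the local endpoint shown earlier in the row.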
azure-functions | Functions How To Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-custom-container.md | You should also consider [enabling continuous deployment](#enable-continuous-dep :::zone pivot="azure-functions,container-apps" ## Azure portal create using containers -When you create a function app in the [Azure portal](https://portal.azure.com), you can choose to deploy the function app from an image in a container registry. To learn how to create a containerized function app in a container registry, see[Creating your function app in a container](#creating-your-function-app-in-a-container). +When you create a function app in the [Azure portal](https://portal.azure.com), you can choose to deploy the function app from an image in a container registry. To learn how to create a containerized function app in a container registry, see [Creating your function app in a container](#creating-your-function-app-in-a-container). The following steps create and deploy an existing containerized function app from a container registry. |
azure-functions | Functions How To Use Azure Function App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md | Connection strings, environment variables, and other application settings are de ## Get started in the Azure portal + 1. To begin, sign in to the [Azure portal] using your Azure account. In the search bar at the top of the portal, enter the name of your function app and select it from the list. 2. Under **Settings** in the left pane, select **Configuration**. You can use either the Azure portal or Azure CLI commands to migrate a function + Migration isn't supported on Linux. + The source plan and the target plan must be in the same resource group and geographical region. For more information, see [Move an app to another App Service plan](../app-service/app-service-plan-manage.md#move-an-app-to-another-app-service-plan). + The specific CLI commands depend on the direction of the migration.-+ Downtime in your function executions occur as the function app is migrated between plans. ++ Downtime in your function executions occurs as the function app is migrated between plans. + State and other app-specific content is maintained, since the same Azure Files share is used by the app both before and after migration. ### Migration in the portal In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your ## Manually install extensions -C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, the recommended way to install extensions is either by [using extension bundles](functions-bindings-register.md#extension-bundles) or by [using Azure Functions Core Tools](functions-run-local.md#install-extensions) locally. 
If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the host.json file. +C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, you should [use extension bundles](functions-bindings-register.md#extension-bundles). If you must manually install extensions, you can do so by [using Azure Functions Core Tools](./functions-core-tools-reference.md#func-extensions-install) locally. If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the host.json file. This same process works for any other file you need to add to your app. > [!IMPORTANT]-> When possible, you shouldn't edit files directly in your function app in Azure. We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](functions-run-local.md#install-extensions) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods). +> When possible, you shouldn't edit files directly in your function app in Azure. 
We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](./functions-core-tools-reference.md#func-extensions-install) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods). The Functions editor built into the Azure portal lets you update your function code and configuration (function.json) files directly in the portal. |
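For reference, a manually created *extensions.csproj* file might look like the following minimal sketch. The package reference shown here is an illustrative assumption; include whichever binding extension packages your functions actually require:

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Illustrative example only: reference the binding extensions your functions need. -->
    <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="5.3.0" />
  </ItemGroup>
</Project>
```

After saving this file in the site root, Core Tools or the site's build process restores and builds the project to install the referenced extension libraries.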
azure-functions | Functions Reference Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-csharp.md | Title: Azure Functions C# script developer reference description: Understand how to develop Azure Functions using C# script. Previously updated : 09/15/2022 Last updated : 08/15/2023 # Azure Functions C# script (.csx) developer reference The way that both binding extension packages and other NuGet packages are added By default, the [supported set of Functions extension NuGet packages](functions-triggers-bindings.md#supported-bindings) is made available to your C# script function app by using extension bundles. To learn more, see [Extension bundles](functions-bindings-register.md#extension-bundles). -If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [Install extensions](functions-run-local.md#install-extensions). +If for some reason you can't use extension bundles in your project, you can also use the Azure Functions Core Tools to install extensions based on bindings defined in the function.json files in your app. When using Core Tools to register extensions, make sure to use the `--csx` option. To learn more, see [func extensions install](functions-core-tools-reference.md#func-extensions-install). By default, Core Tools reads the function.json files and adds the required packages to an *extensions.csproj* C# class library project file in the root of the function app's file system (wwwroot). Because Core Tools uses dotnet.exe, you can use it to add any NuGet package reference to this extensions file. During installation, Core Tools builds the extensions.csproj to install the required libraries. 
Here's an example *extensions.csproj* file that adds a reference to *Microsoft.ProjectOxford.Face* version *1.1.0*: The following table lists the .NET attributes for each binding type and the pack > | Storage table | [`Microsoft.Azure.WebJobs.TableAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs), [`Microsoft.Azure.WebJobs.StorageAccountAttribute`](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/StorageAccountAttribute.cs) | | > | Twilio | [`Microsoft.Azure.WebJobs.TwilioSmsAttribute`](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.Twilio/TwilioSMSAttribute.cs) | `#r "Microsoft.Azure.WebJobs.Extensions.Twilio"` | +## Convert a C# script app to a C# project ++The easiest way to convert a C# script function app to a compiled C# class library project is to start with a new project. You can then, for each function, migrate the code and configuration from each run.csx file and function.json file in a function folder to a single new .cs class library code file. For example, when you have a C# script function named `HelloWorld` you'll have two files: `HelloWorld/run.csx` and `HelloWorld/function.json`. For this function, you create a code file named `HelloWorld.cs` in your new class library project. ++If you are using C# scripting for portal editing, you can [download the app content to your local machine](./deployment-zip-push.md#download-your-function-app-files). Choose the **Site content** option instead of **Content and Visual Studio project**. You don't need to generate a project, and don't include application settings in the download. You're defining a new development environment, and this environment shouldn't have the same permissions as your hosted app environment. 
++These instructions show you how to convert C# script functions (which run in-process with the Functions host) to C# class library functions that run in an [isolated worker process](dotnet-isolated-process-guide.md). ++1. Complete the **Create a functions app project** section from your preferred quickstart: + + ### [Azure CLI](#tab/azure-cli) + [Create a C# function in Azure from the command line](create-first-function-cli-csharp.md#create-a-local-function-project) + ### [Visual Studio](#tab/vs) + [Create your first C# function in Azure using Visual Studio](functions-create-your-first-function-visual-studio.md#create-a-function-app-project) + ### [Visual Studio Code](#tab/vs-code) + [Create your first C# function in Azure using Visual Studio Code](create-first-function-vs-code-csharp.md#create-an-azure-functions-project) ++ + +1. If your original C# script code includes an `extensions.csproj` file or any `function.proj` files, copy the package references from these files and add them to the new project's `.csproj` file in the same `ItemGroup` as the Functions core dependencies. ++ >[!TIP] + >Conversion provides a good opportunity to update to the latest versions of your dependencies. Doing so may require additional code changes in a later step. ++1. Copy the contents of the original `host.json` file into the new project's `host.json` file, except for the `extensionBundle` section (compiled C# projects don't use [extension bundles](functions-bindings-register.md#extension-bundles) and you must explicitly add references to all extensions used by your functions). When merging host.json files, remember that the [`host.json`](./functions-host-json.md) schema is versioned, with most apps using version 2.0. The contents of the `extensions` section can differ based on specific versions of the binding extensions used by your functions. See individual extension reference articles to learn how to correctly configure the host.json for your specific versions. ++1. 
For any [shared files referenced by a `#load` directive](#reusing-csx-code), create a new `.cs` file for each of these shared references. It's simplest to create a new `.cs` file for each shared class definition. If there are static methods without a class, you need to define new classes for these methods. ++1. Perform the following tasks for each `<FUNCTION_NAME>` folder in your original project: ++ 1. Create a new file named `<FUNCTION_NAME>.cs`, replacing `<FUNCTION_NAME>` with the name of the folder that defined your C# script function. You can create a new function code file from one of the trigger-specific templates in the following way: + ### [Azure CLI](#tab/azure-cli) + Using the `func new --name <FUNCTION_NAME>` command and choosing the correct trigger template at the prompt. + ### [Visual Studio](#tab/vs) + Following [Add a function to your project](functions-develop-vs.md?tabs=isolated-process#add-a-function-to-your-project) in the Visual Studio guide. + ### [Visual Studio Code](#tab/vs-code) + Following [Add a function to your project](functions-develop-vs-code.md?tabs=isolated-process#add-a-function-to-your-project) in the Visual Studio Code guide. + + + 1. Copy the `using` statements from your `run.csx` file and add them to the new file. You do not need any `#r` directives. + 1. For any `#load` statement in your `run.csx` file, add a new `using` statement for the namespace you used for the shared code. + 1. In the new file, define a class for your function under the namespace you are using for the project. + 1. Create a new method named `RunHandler` or something similar. This new method serves as the new entry point for the function. + 1. Copy the static method that represents your function, along with any functions it calls, from `run.csx` into your new class as a second method. From the new method you created in the previous step, call into this static method. 
This indirection step is helpful for navigating any differences as you continue the upgrade. You can keep the original method exactly the same and simply control its inputs from the new context. You may need to create parameters on the new method which you then pass into the static method call. After you have confirmed that the migration has worked as intended, you can remove this extra level of indirection. + 1. For each binding in the `function.json` file, add the corresponding attribute to your new method. To quickly find binding examples, see [Manually add bindings based on examples](add-bindings-existing-function.md?tabs=csharp). + 1. Add any extension packages required by the bindings to your project, if you haven't already done so. + +1. Recreate any application settings required by your app in the `Values` collection of the [local.settings.json file](functions-develop-local.md#local-settings-file). + +1. Verify that your project runs locally: ++ ### [Azure CLI](#tab/azure-cli) + Use `func start` to run your app from the command line. For more information, see [Run functions locally](functions-run-local.md#start). + ### [Visual Studio](#tab/vs) + Follow the [Run functions locally](functions-develop-vs.md?tabs=isolated-process#run-functions-locally) section of the Visual Studio guide. + ### [Visual Studio Code](#tab/vs-code) + Follow the [Run functions locally](functions-develop-vs-code.md?tabs=csharp#run-functions-locally) section of the Visual Studio Code guide. + + + +1. Publish your project to a new function app in Azure: ++ ### [Azure CLI](#tab/azure-cli) + [Create your Azure resources](create-first-function-cli-csharp.md#create-supporting-azure-resources-for-your-function) and deploy the code project to Azure by using the `func azure functionapp publish <APP_NAME>` command. For more information, see [Deploy project files](functions-run-local.md#project-file-deployment). 
+ ### [Visual Studio](#tab/vs) + Follow the [Publish to Azure](functions-develop-vs.md?tabs=isolated-process#publish-to-azure) section of the Visual Studio guide. + ### [Visual Studio Code](#tab/vs-code) + Follow the [Create Azure resources](functions-develop-vs-code.md?tabs=csharp#publish-to-azure) section of the Visual Studio Code guide. + + + +### Example function conversion ++This section shows an example of the migration for a single function. ++The original function in C# scripting has two files: +- `HelloWorld/function.json` +- `HelloWorld/run.csx` ++The contents of `HelloWorld/function.json` are: ++```json +{ + "bindings": [ + { + "authLevel": "FUNCTION", + "name": "req", + "type": "httpTrigger", + "direction": "in", + "methods": [ + "get", + "post" + ] + }, + { + "name": "$return", + "type": "http", + "direction": "out" + } + ] +} +``` ++The contents of `HelloWorld/run.csx` are: ++```csharp +#r "Newtonsoft.Json" ++using System.Net; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++public static async Task<IActionResult> Run(HttpRequest req, ILogger log) +{ + log.LogInformation("C# HTTP trigger function processed a request."); ++ string name = req.Query["name"]; ++ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + dynamic data = JsonConvert.DeserializeObject(requestBody); + name = name ?? data?.name; ++ string responseMessage = string.IsNullOrEmpty(name) + ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." + : $"Hello, {name}. 
This HTTP triggered function executed successfully."; ++ return new OkObjectResult(responseMessage); +} +``` ++After migrating to the isolated worker model with ASP.NET Core integration, these are replaced by a single `HelloWorld.cs`: ++```csharp +using System.Net; +using Microsoft.Azure.Functions.Worker; +using Microsoft.AspNetCore.Http; +using Microsoft.AspNetCore.Mvc; +using Microsoft.Extensions.Logging; +using Microsoft.AspNetCore.Routing; +using Microsoft.Extensions.Primitives; +using Newtonsoft.Json; ++namespace MyFunctionApp +{ + public class HelloWorld + { + private readonly ILogger _logger; ++ public HelloWorld(ILoggerFactory loggerFactory) + { + _logger = loggerFactory.CreateLogger<HelloWorld>(); + } ++ [Function("HelloWorld")] + public async Task<IActionResult> RunHandler([HttpTrigger(AuthorizationLevel.Function, "get", "post")] HttpRequest req) + { + return await Run(req, _logger); + } ++ // From run.csx + public static async Task<IActionResult> Run(HttpRequest req, ILogger log) + { + log.LogInformation("C# HTTP trigger function processed a request."); ++ string name = req.Query["name"]; ++ string requestBody = await new StreamReader(req.Body).ReadToEndAsync(); + dynamic data = JsonConvert.DeserializeObject(requestBody); + name = name ?? data?.name; ++ string responseMessage = string.IsNullOrEmpty(name) + ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response." + : $"Hello, {name}. This HTTP triggered function executed successfully."; ++ return new OkObjectResult(responseMessage); + } + } +} +``` + ## Binding configuration and examples +This section contains references and examples for defining triggers and bindings in C# script. + ### Blob trigger The following table explains the binding configuration properties for C# script that you set in the *function.json* file. 
The following table explains the binding configuration properties for C# script |**connection** | The name of an app setting or setting collection that specifies how to connect to Azure Blobs. See [Connections](./functions-bindings-storage-blob-trigger.md#connections).| -The following example shows a blob trigger binding in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources). +The following example shows a blob trigger definition in a *function.json* file and code that uses the binding. The function writes a log when a blob is added or updated in the `samples-workitems` [container](../storage/blobs/storage-blobs-introduction.md#blob-storage-resources). Here's the binding data in the *function.json* file: The following table explains the binding configuration properties for C# script |function.json property | Description| ||-|-|**type** | Must be set to "timerTrigger". This property is set automatically when you create the trigger in the Azure portal.| -|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. | +|**type** | Must be set to `timerTrigger`. This property is set automatically when you create the trigger in the Azure portal.| +|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | The name of the variable that represents the timer object in function code. | |**schedule**| A [CRON expression](./functions-bindings-timer.md#ncrontab-expressions) or a [TimeSpan](./functions-bindings-timer.md#timespan) value. A `TimeSpan` can be used only for a function app that runs on an App Service Plan. 
You can put the schedule expression in an app setting and set this property to the app setting name wrapped in **%** signs, as in this example: "%ScheduleAppSetting%". | |**runOnStartup**| If `true`, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. *Use with caution.* **runOnStartup** should rarely, if ever, be set to `true`, especially in production. | The following table explains the trigger configuration properties that you set i |**connection** | The name of an app setting or setting collection that specifies how to connect to Event Hubs. See [Connections](./functions-bindings-event-hubs-trigger.md#connections).| -The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script functionthat uses the binding. The function logs the message body of the Event Hubs trigger. +The following example shows an Event Hubs trigger binding in a *function.json* file and a C# script function that uses the binding. The function logs the message body of the Event Hubs trigger. The following examples show Event Hubs binding data in the *function.json* file for Functions runtime version 2.x and later versions. The following table explains the binding configuration properties that you set i |function.json property | Description| ||-| |**type** | Must be set to `serviceBusTrigger`. This property is set automatically when you create the trigger in the Azure portal.|-|**direction** | Must be set to "in". This property is set automatically when you create the trigger in the Azure portal. | +|**direction** | Must be set to `in`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | The name of the variable that represents the queue or topic message in function code. | |**queueName**| Name of the queue to monitor. 
Set only if monitoring a queue, not for a topic. |**topicName**| Name of the topic to monitor. Set only if monitoring a topic, not for a queue.| The following table explains the binding configuration properties that you set i |function.json property | Description| |||-|-|**type** |Must be set to "serviceBus". This property is set automatically when you create the trigger in the Azure portal.| -|**direction** | Must be set to "out". This property is set automatically when you create the trigger in the Azure portal. | +|**type** |Must be set to `serviceBus`. This property is set automatically when you create the trigger in the Azure portal.| +|**direction** | Must be set to `out`. This property is set automatically when you create the trigger in the Azure portal. | |**name** | The name of the variable that represents the queue or topic message in function code. Set to "$return" to reference the function return value. | |**queueName**|Name of the queue. Set only if sending queue messages, not for a topic. |**topicName**|Name of the topic. Set only if sending topic messages, not for a queue.| |
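Following the pattern used for the other bindings in this article, the Service Bus output binding properties above are set in the *function.json* file. Here's a sketch of such a binding definition for a queue; the `name`, `queueName`, and `connection` values are illustrative assumptions:

```json
{
  "bindings": [
    {
      "type": "serviceBus",
      "direction": "out",
      "name": "outputSbMsg",
      "queueName": "myqueue",
      "connection": "MyServiceBusConnection"
    }
  ]
}
```

In the C# script function, setting the variable named `outputSbMsg` (or returning a value when `name` is `$return`) sends the message to the queue.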
azure-functions | Functions Reference Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-node.md | When running on Windows, the Node.js version is set by the [`WEBSITE_NODE_DEFAUL # [Linux](#tab/linux) -When running on Windows, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI. +When running on Linux, the Node.js version is set by the [linuxfxversion](./functions-app-settings.md#linuxfxversion) site setting. This setting can be updated using the Azure CLI. |
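As a sketch of how that `linuxFxVersion` site setting can be updated with the Azure CLI (the app and resource group names are placeholders, and the supported `Node|<VERSION>` values depend on your runtime version):

```console
az functionapp config set --name <APP_NAME> --resource-group <RESOURCE_GROUP> --linux-fx-version "Node|18"
```

Restart the app after changing the setting so the new Node.js version takes effect.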
azure-functions | Functions Run Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md | Title: Work with Azure Functions Core Tools -description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you run them on Azure Functions. + Title: Develop Azure Functions locally using Core Tools +description: Learn how to code and test Azure Functions from the command prompt or terminal on your local computer before you deploy them to run on Azure Functions. ms.assetid: 242736be-ec66-4114-924b-31795fd18884 Previously updated : 07/30/2023 Last updated : 08/24/2023 zone_pivot_groups: programming-languages-set-functions -# Work with Azure Functions Core Tools +# Develop Azure Functions locally using Core Tools -Azure Functions Core Tools lets you develop and test your functions on your local computer. Core Tools includes a version of the same runtime that powers Azure Functions. This runtime means your local functions run as they would in Azure and can connect to live Azure services during local development and debugging. You can even deploy your code project to Azure using Core Tools. ---Core Tools can be used with all [supported languages](supported-languages.md). Select your language at the top of the article. +Azure Functions Core Tools lets you develop and test your functions on your local computer. When you're ready, you can also use Core Tools to deploy your code project to Azure and work with application settings. ::: zone pivot="programming-language-csharp"+>You're viewing the C# version of this article. Make sure to select your preferred Functions programming language at the top of the article. + If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-csharp.md). ::: zone-end ::: zone pivot="programming-language-java"+>You're viewing the Java version of this article. 
Make sure to select your preferred Functions programming language at the top of the article. + If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-java.md). ::: zone-end ::: zone pivot="programming-language-javascript"+>You're viewing the JavaScript version of this article. Make sure to select your preferred Functions programming language at the top of the article. + If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-node.md). ::: zone-end ::: zone pivot="programming-language-powershell"+>You're viewing the PowerShell version of this article. Make sure to select your preferred Functions programming language at the top of the article. + If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-powershell.md). ::: zone-end ::: zone pivot="programming-language-python"+>You're viewing the Python version of this article. Make sure to select your preferred Functions programming language at the top of the article. + If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-python.md). ::: zone-end ::: zone pivot="programming-language-typescript"+>You're viewing the TypeScript version of this article. Make sure to select your preferred Functions programming language at the top of the article. + If you want to get started right away, complete the [Core Tools quickstart article](create-first-function-cli-typescript.md). ::: zone-end -Core Tools enables the integrated local development and debugging experience for your functions provided by both Visual Studio and Visual Studio Code. 
--## Prerequisites --To be able to publish to Azure from Core Tools, you must have one of the following Azure tools installed locally: --+ [Azure CLI](/cli/azure/install-azure-cli) -+ [Azure PowerShell](/powershell/azure/install-azure-powershell) --These tools are required to authenticate with your Azure account from your local computer. --## <a name="v2"></a>Core Tools versions --Major versions of Azure Functions Core Tools are linked to specific major versions of the Azure Functions runtime. For example, version 4.x of Core Tools supports version 4.x of the Functions runtime. This is the recommended major version of both the Functions runtime and Core Tools. You can find the latest Core Tools release version on [this release page](https://github.com/Azure/azure-functions-core-tools/releases/latest). --Run the following command to determine the version of your current Core Tools installation: -```command -func --version -``` +For help with version-related issues, see [Core Tools versions](#v2). -Unless otherwise noted, the examples in this article are for version 4.x. --The following considerations apply to Core Tools versions: --+ You can only install one version of Core Tools on a given computer. --+ Version 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of life (EOL). For more information, see [Azure Functions runtime versions overview](functions-versions.md). -+ Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today. +## Create your local project +> [!IMPORTANT] +> For Python, you must run Core Tools commands in a virtual environment. 
For more information, see [Quickstart: Create a Python function in Azure from the command line](create-first-function-cli-python.md#create-venv). ::: zone-end+In the terminal window or from a command prompt, run the following command to create a project in the `MyProjFolder` folder: -## Install the Azure Functions Core Tools --The recommended way to install Core Tools depends on the operating system of your local development computer. --### [Windows](#tab/windows) --The following steps use a Windows installer (MSI) to install Core Tools v4.x. For more information about other package-based installers, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#windows). --Download and run the Core Tools installer, based on your version of Windows: --- [v4.x - Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2174087) (Recommended. [Visual Studio Code debugging](functions-develop-vs-code.md#debugging-functions-locally) requires 64-bit.)-- [v4.x - Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2174159)--If you previously used Windows installer (MSI) to install Core Tools on Windows, you should uninstall the old version from Add Remove Programs before installing the latest version. --If you need to install version 1.x of the Core Tools, see the [GitHub repository](https://github.com/Azure/azure-functions-core-tools/blob/v1.x/README.md#installing) for more information. --### [macOS](#tab/macos) ---The following steps use Homebrew to install the Core Tools on macOS. --1. Install [Homebrew](https://brew.sh/), if it's not already installed. --1. 
Install the Core Tools package: -- ```bash - brew tap azure/functions - brew install azure-functions-core-tools@4 - # if upgrading on a machine that has 2.x or 3.x installed: - brew link --overwrite azure-functions-core-tools@4 - ``` -### [Linux](#tab/linux) --The following steps use [APT](https://wiki.debian.org/Apt) to install Core Tools on your Ubuntu/Debian Linux distribution. For other Linux distributions, see the [Core Tools readme](https://github.com/Azure/azure-functions-core-tools/blob/v4.x/README.md#linux). --1. Install the Microsoft package repository GPG key, to validate package integrity: -- ```bash - curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg - sudo mv microsoft.gpg /etc/apt/trusted.gpg.d/microsoft.gpg - ``` --1. Set up the APT source list before doing an APT update. -- ##### Ubuntu -- ```bash - sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/repos/microsoft-ubuntu-$(lsb_release -cs)-prod $(lsb_release -cs) main" > /etc/apt/sources.list.d/dotnetdev.list' - ``` -- ##### Debian -- ```bash - sudo sh -c 'echo "deb [arch=amd64] https://packages.microsoft.com/debian/$(lsb_release -rs | cut -d'.' -f 1)/prod $(lsb_release -cs) main" > /etc/apt/sources.list.d/dotnetdev.list' - ``` --1. Check the `/etc/apt/sources.list.d/dotnetdev.list` file for one of the appropriate Linux version strings in the following table: -- | Linux distribution | Version | - | -- | - | - | Debian 11 | `bullseye` | - | Debian 10 | `buster` | - | Debian 9 | `stretch` | - | Ubuntu 22.04 | `jammy` | - | Ubuntu 20.04 | `focal` | - | Ubuntu 19.04 | `disco` | - | Ubuntu 18.10 | `cosmic` | - | Ubuntu 18.04 | `bionic` | - | Ubuntu 17.04 | `zesty` | - | Ubuntu 16.04/Linux Mint 18 | `xenial` | --1. Start the APT source update: -- ```bash - sudo apt-get update - ``` --1. 
Install the Core Tools package: -- ```bash - sudo apt-get install azure-functions-core-tools-4 - ``` ----When upgrading to the latest version of Core Tools, you should use the same package manager as the original installation to perform the upgrade. Visual Studio and Visual Studio Code may also install Azure Functions Core Tools, depending on your specific tools installation. --## Create a local Functions project --A Functions project directory contains the following files and folders, regardless of language: --| File name | Description | -| | | -| host.json | To learn more, see the [host.json reference](functions-host-json.md). | -| local.settings.json | Settings used by Core Tools when running locally, including app settings. To learn more, see [local settings](#local-settings). | -| .gitignore | Prevents the local.settings.json file from being accidentally published to a Git repository. To learn more, see [local settings](#local-settings)| -| .vscode\extensions.json | Settings file used when opening the project folder in Visual Studio Code. | --To learn more about the Functions project folder, see the [Azure Functions developers guide](functions-reference.md#folder-structure). --In the terminal window or from a command prompt, run the following command to create the project and local Git repository: +### [Isolated process](#tab/isolated-process) +```console +func init MyProjFolder --worker-runtime dotnet-isolated ```-func init MyFunctionProj -``` --This example creates a Functions project in a new `MyFunctionProj` folder. You're prompted to choose a default language for your project. -The following considerations apply to project initialization: +By default, this command creates a project that runs in an isolated worker process, separate from the Functions host, on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference. -+ If you don't provide the `--worker-runtime` option in the command, you're prompted to choose your language. For more information, see the [func init reference](functions-core-tools-reference.md#func-init). +### [In-process](#tab/in-process) -+ When you don't provide a project name, the current folder is initialized. +```console +func init MyProjFolder --worker-runtime dotnet +``` -+ If you plan to deploy your project as a function app running in a Linux container, use the `--docker` option to make sure that a Dockerfile is generated for your project. To learn more, see [Create a function app in a local container](functions-create-container-registry.md#create-and-test-the-local-functions-project). If you forget to do this, you can always generate the Dockerfile for the project later by using the `func init --docker-only` command. +This command creates a project that runs on the current [Long-Term Support (LTS) version of .NET Core]. For other .NET versions, create an app that runs in an isolated worker process from the Functions host. -+ Core Tools lets you create function app projects for the .NET runtime as either [in-process](functions-dotnet-class-library.md) or [isolated worker process](dotnet-isolated-process-guide.md) C# class library projects (.csproj). These projects, which can be used with Visual Studio or Visual Studio Code, are compiled during debugging and when publishing to Azure. + -+ Use the `--csx` parameter if you want to work locally with C# script (.csx) files. These files are the same ones you get when you create functions in the Azure portal and when using version 1.x of Core Tools. To learn more, see the [func init reference](functions-core-tools-reference.md#func-init). +For a comparison between the two .NET process models, see the [process mode comparison article](./dotnet-isolated-in-process-differences.md). 
::: zone-end ::: zone pivot="programming-language-java"-+ Java uses a Maven archetype to create the local Functions project, along with your first HTTP triggered function. Instead of using `func init` and `func new`, you should follow the steps in the [Command line quickstart](./create-first-function-cli-java.md). -+ To use a `--worker-runtime` value of `node`, specify the `--language` as `javascript`. -+ You should run all commands, including `func init`, from inside a virtual environment. To learn more, see [Create and activate a virtual environment](create-first-function-cli-python.md#create-venv). -+ To use a `--worker-runtime` value of `node`, specify the `--language` as `typescript`. +Java uses a Maven archetype to create the local project, along with your first HTTP triggered function. Rather than using `func init` and `func new`, you should instead follow the steps in the [Command line quickstart](./create-first-function-cli-java.md). ::: zone-end+### [v4](#tab/node-v4) +```console +func init MyProjFolder --worker-runtime javascript --model V4 +``` +### [v3](#tab/node-v3) +```console +func init MyProjFolder --worker-runtime javascript --model V3 +``` + -## Binding extensions --[Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. To be able to use a specific binding extension, that extension must be installed in the project. --This section doesn't apply to version 1.x of the Functions runtime. In version 1.x, supported binding were included in the core product extension. --For compiled C# project, add references to the specific NuGet packages for the binding extensions required by your functions. C# script (.csx) project should use [extension bundles](functions-bindings-register.md#extension-bundles). -Functions provides _extension bundles_ to make is easy to work with binding extensions in your project. 
Extension bundles, which are versioned and defined in the host.json file, install a complete set of compatible binding extension packages for your app. Your host.json should already have extension bundles enabled. If for some reason you need to add or update the extension bundle in the host.json file, see [Extension bundles](functions-bindings-register.md#extension-bundles). --If you must use a binding extension or an extension version not in a supported bundle, you need to manually install extensions. For such rare scenarios, see [Install extensions](#install-extensions). ---By default, these settings aren't migrated automatically when the project is published to Azure. Use the [`--publish-local-settings` option][func azure functionapp publish] when you publish to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published. +This command creates a JavaScript project that uses the desired [programming model version](functions-reference-node.md). +### [v4](#tab/node-v4) +```console +func init MyProjFolder --worker-runtime typescript --model V4 +``` +### [v3](#tab/node-v3) +```console +func init MyProjFolder --worker-runtime typescript --model V3 +``` + -The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables). -The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-java.md#environment-variables). -The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables). +This command creates a TypeScript project that uses the desired [programming model version](functions-reference-node.md). 
::: zone pivot="programming-language-powershell"-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables). +```console +func init MyProjFolder --worker-runtime powershell +``` ::: zone-end ::: zone pivot="programming-language-python"-The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables). --When no valid storage connection string is set for [`AzureWebJobsStorage`] and a local storage emulator isn't being used, the following error message is shown: --> Missing value for AzureWebJobsStorage in local.settings.json. This is required for all triggers other than HTTP. You can run 'func azure functionapp fetch-app-settings \<functionAppName\>' or specify a connection string in local.settings.json. --### Get your storage connection strings --Even when using the [Azurite storage emulator](functions-develop-local.md#local-storage-emulator) for development, you may want to run locally with an actual storage connection. Assuming you have already [created a storage account](../storage/common/storage-account-create.md), you can get a valid storage connection string in one of several ways: --#### [Portal](#tab/portal) --1. From the [Azure portal], search for and select **Storage accounts**. -- ![Select Storage accounts from Azure portal](./media/functions-run-local/select-storage-accounts.png) - -1. Select your storage account, select **Access keys** in **Settings**, then copy one of the **Connection string** values. 
-- ![Copy connection string from Azure portal](./media/functions-run-local/copy-storage-connection-portal.png) --#### [Core Tools](#tab/azurecli) --From the project root, use one of the following commands to download the connection string from Azure: -- + Download all settings from an existing function app: -- ``` - func azure functionapp fetch-app-settings <FunctionAppName> - ``` -- + Get the Connection string for a specific storage account: -- ``` - func azure storage fetch-connection-string <StorageAccountName> - ``` -- When you aren't already signed in to Azure, you're prompted to do so. These commands overwrite any existing settings in the local.settings.json file. To learn more, see the [`func azure functionapp fetch-app-settings`](functions-core-tools-reference.md#func-azure-functionapp-fetch-app-settings) and [`func azure storage fetch-connection-string`](functions-core-tools-reference.md#func-azure-storage-fetch-connection-string) commands. --#### [Storage Explorer](#tab/storageexplorer) --1. Run [Azure Storage Explorer](https://storageexplorer.com/). --1. In the **Explorer**, expand your subscription, then expand **Storage Accounts**. --1. Select your storage account and copy the primary or secondary connection string. -- ![Copy connection string from Storage Explorer](./media/functions-run-local/storage-explorer.png) -+### [v2](#tab/python-v2) +```console +func init MyProjFolder --worker-runtime python --model V2 +``` +### [v1](#tab/python-v1) +```console +func init MyProjFolder --worker-runtime python +``` -## <a name="create-func"></a>Create a function --To create a function in an existing project, run the following command: +This command creates a Python project that uses the desired [programming model version](functions-reference-python.md#programming-model). -``` -func new -``` +When you run `func init` without the `--worker-runtime` option, you're prompted to choose your project language. 
To learn more about the available options for the `func init` command, see the [`func init`](functions-core-tools-reference.md#func-init) reference. -When you run `func new`, you're prompted to choose a template in the default language of your function app. Next, you're prompted to choose a name for your function. In version 1.x, you're also required to choose the language. +## <a name="create-func"></a>Create a function -You can also specify the function name and template in the `func new` command. The following example uses the `--template` option to create an HTTP trigger named `MyHttpTrigger`: +To add a function to your project, run the `func new` command using the `--template` option to select your trigger template. The following example creates an HTTP trigger named `MyHttpTrigger`: ``` func new --template "Http Trigger" --name MyHttpTrigger This example creates a Queue Storage trigger named `MyQueueTrigger`: func new --template "Azure Queue Storage Trigger" --name MyQueueTrigger ``` -To learn more, see the [`func new`](functions-core-tools-reference.md#func-new) command. +The following considerations apply when adding functions: +++ When you run `func new` without the `--template` option, you're prompted to choose a template.+++ Use the [`func templates list`](./functions-core-tools-reference.md#func-templates-list) command to see the complete list of available templates for your language. +++ When you add a trigger that connects to a service, you'll also need to add an application setting that references a connection string or a managed identity to the local.settings.json file. Using app settings in this way prevents you from having to embed credentials in your code. For more information, see [Work with app settings locally](#local-settings). ++ Core Tools also adds a reference to the specific binding extension to your C# project. 
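To illustrate the app-setting consideration above, here's a sketch of a local.settings.json file for a queue trigger whose binding references a connection setting named `MyStorageConnection` (a hypothetical setting name):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "MyStorageConnection": "<STORAGE_CONNECTION_STRING>"
  }
}
```

Here `UseDevelopmentStorage=true` targets the local Azurite emulator; replace the `<STORAGE_CONNECTION_STRING>` placeholder with a real connection string when testing against live storage.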
-## <a name="start"></a>Run functions locally +To learn more about the available options for the `func new` command, see the [`func new`](functions-core-tools-reference.md#func-new) reference. -To run a Functions project, you run the Functions host from the root directory of your project. The host enables triggers for all functions in the project. Use the following command to run your functions locally: +## Add a binding to your function ++Functions provides a set of service-specific input and output bindings, which make it easier for your function to connect to other Azure services without having to use the service-specific client SDKs. For more information, see [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md). + +To add an input or output binding to an existing function, you must manually update the function definition. +The following considerations apply when adding bindings to a function: ++ For languages that define functions using the _function.json_ configuration file, Visual Studio Code simplifies the process of adding bindings to an existing function definition. For more information, see [Connect functions to Azure services using bindings](add-bindings-existing-function.md#visual-studio-code). ++ When you add bindings that connect to a service, you must also add an application setting that references a connection string or managed identity to the local.settings.json file. For more information, see [Work with app settings locally](#local-settings). ++ When you add a supported binding, the extension should already be installed when your app uses extension bundles. For more information, see [extension bundles](functions-bindings-register.md#extension-bundles).++ When you add a binding that requires a new binding extension, you must also add a reference to that specific binding extension in your C# project.
+For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=csharp#manually-add-bindings-based-on-examples). +For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=java#manually-add-bindings-based-on-examples). +For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=javascript#manually-add-bindings-based-on-examples). +For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=powershell#manually-add-bindings-based-on-examples). +For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=python#manually-add-bindings-based-on-examples). +For more information, including links to example binding code that you can refer to, see [Add bindings to a function](add-bindings-existing-function.md?tabs=typescript#manually-add-bindings-based-on-examples). +++## <a name="start"></a>Start the Functions runtime ++Before you can run or debug the functions in your project, you need to start the Functions host from the root directory of your project. The host enables triggers for all functions in the project. 
Use this command to start the local runtime: ::: zone pivot="programming-language-java" ``` mvn clean package mvn azure-functions:run ``` ::: zone-end ``` func start ``` ::: zone-end -The way you start the host depends on your runtime version: -### [v4.x](#tab/v2) -``` -func start -``` -### [v1.x](#tab/v1) -``` -func host start -``` - ::: zone pivot="programming-language-typescript" ``` npm install npm start This command must be [run in a virtual environment](./create-first-function-cli-python.md). ::: zone-end -When the Functions host starts, it outputs the URL of HTTP-triggered functions, like in the following example: +When the Functions host starts, it outputs a list of functions in the project, including the URLs of any HTTP-triggered functions, like in this example: <pre> Found the following functions: Job host started Http Function MyHttpTrigger: http://localhost:7071/api/MyHttpTrigger </pre> -### Considerations when running locally - Keep in mind the following considerations when running your functions locally: + By default, authorization isn't enforced locally for HTTP endpoints. This means that all local HTTP requests are handled as `authLevel = "anonymous"`. For more information, see the [HTTP binding article](functions-bindings-http-webhook-trigger.md#authorization-keys). You can use the `--enableAuth` option to require authorization when running locally. For more information, see [`func start`](./functions-core-tools-reference.md?tabs=v2#func-start) + While there's local storage emulation available, it's often best to validate your triggers and bindings against live services in Azure. You can maintain the connections to these services in the local.settings.json project file. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). Make sure to keep test and production data separate when testing against live Azure services. -+ You can trigger non-HTTP functions locally without connecting to a live service. 
For more information, see [Non-HTTP triggered functions](#non-http-triggered-functions). ++ You can trigger non-HTTP functions locally without connecting to a live service. For more information, see [Run a local function](./functions-run-local.md?tabs=non-http-trigger#run-a-local-function). + When you include your Application Insights connection information in the local.settings.json file, local log data is written to the specific Application Insights instance. To keep local telemetry data separate from production data, consider using a separate Application Insights instance for development and testing.++ When using version 1.x of the Core Tools, instead use the `func host start` command to start the local runtime. -### Passing test data to a function +## Run a local function -To test your functions locally, you [start the Functions host](#start) and call endpoints on the local server using HTTP requests. The endpoint you call depends on the type of function. +With your local Functions host (func.exe) running, you can now trigger individual functions to run and debug your function code. The way in which you execute an individual function depends on its trigger type. ->[!NOTE] +> [!NOTE] > Examples in this topic use the cURL tool to send HTTP requests from the terminal or a command prompt. You can use a tool of your choice to send HTTP requests to the local server. The cURL tool is available by default on Linux-based systems and Windows 10 build 17063 and later. On older Windows, you must first download and install the [cURL tool](https://curl.haxx.se/). -For more general information on testing functions, see [Strategies for testing your code in Azure Functions](functions-test-a-function.md). 
+### [HTTP trigger](#tab/http-trigger) -#### HTTP and webhook triggered functions --You call the following endpoint to locally run HTTP and webhook triggered functions: +HTTP triggers are started by sending an HTTP request to the local endpoint and port as displayed in the func.exe output, which has this general format: ```-http://localhost:{port}/api/{function_name} +http://localhost:<PORT>/api/<FUNCTION_NAME> ``` -Make sure to use the same server name and port that the Functions host is listening on. You see an endpoint like this in the output generated when starting the Function host. You can call this URL using any HTTP method supported by the trigger. +In this URL template, `<FUNCTION_NAME>` is the name of the function or route and `<PORT>` is the local port on which func.exe is listening. -The following cURL command triggers the `MyHttpTrigger` quickstart function from a GET request with the _name_ parameter passed in the query string. +For example, this cURL command triggers the `MyHttpTrigger` quickstart function from a GET request with the _name_ parameter passed in the query string: ``` curl --get http://localhost:7071/api/MyHttpTrigger?name=Azure%20Rocks ``` -The following example is the same function called from a POST request passing _name_ in the request body: +This example is the same function called from a POST request passing _name_ in the request body, shown for both Bash shell and Windows command line: -##### [Bash](#tab/bash) ```bash curl --request POST http://localhost:7071/api/MyHttpTrigger --data '{"name":"Azure Rocks"}' ```-##### [Cmd](#tab/cmd) + ```cmd curl --request POST http://localhost:7071/api/MyHttpTrigger --data "{'name':'Azure Rocks'}" ```---You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use cURL, Fiddler, Postman, or a similar HTTP testing tool that supports POST requests. 
-#### Non-HTTP triggered functions +The following considerations apply when calling HTTP endpoints locally: -For all functions other than HTTP and Event Grid triggers, you can test your functions locally using REST by calling a special endpoint called an _administration endpoint_. Calling this endpoint with an HTTP POST request on the local server triggers the function. You can call the `functions` administrator endpoint (`http://localhost:{port}/admin/functions/`) to get URLs for all available functions, both HTTP triggered and non-HTTP triggered. ++ You can make GET requests from a browser passing data in the query string. For all other HTTP methods, you must use cURL, Fiddler, Postman, or a similar HTTP testing tool that supports POST requests. -When running your functions in Core Tools, authentication and authorization is bypassed. However, when you try to call the same administrator endpoints on your function app in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). ++ Make sure to use the same server name and port that the Functions host is listening on. You see an endpoint like this in the output generated when starting the Function host. You can call this URL using any HTTP method supported by the trigger. ->[!IMPORTANT] ->Access keys are valuable shared secrets. When used locally, they must be securely stored outside of source control. Because authentication and authorization isn't required by Functions when running locally, you should avoid using and storing access keys unless your scenarios require it. +### [Non-HTTP trigger](#tab/non-http-trigger) -To test Event Grid triggered functions locally, see [Local testing with viewer web app](event-grid-how-tos.md#local-testing-with-viewer-web-app). +There are two ways to execute non-HTTP triggers locally. First, you can connect to live Azure services, such as Azure Storage and Azure Service Bus. 
This directly mirrors the behavior of your function when running in Azure. When using live services, make sure to include the required named connection strings in the [local settings file](#local-settings). During development, you can use a different service connection than in production by setting a different connection string in the local.settings.json file than in your function app settings in Azure. -You can optionally pass test data to the execution in the body of the POST request. This functionality is similar to the **Test** tab in the Azure portal. +Event Grid triggers require extra configuration to run locally. -You call the following administrator endpoint to trigger non-HTTP functions: +You can also run a non-HTTP function locally using REST by calling a special endpoint called an _administrator endpoint_. Use this format to call the `admin` endpoint and trigger a specific non-HTTP function: ```-http://localhost:{port}/admin/functions/{function_name} +http://localhost:<PORT>/admin/functions/<FUNCTION_NAME> ``` -To pass test data to the administrator endpoint of a function, you must supply the data in the body of a POST request message. The message body is required to have the following JSON format: +In this URL template, `<FUNCTION_NAME>` is the name of the function or route and `<PORT>` is the local port on which func.exe is listening. ++You can optionally pass test data to the execution in the body of the POST request. The message body must have this JSON format: ```JSON {- "input": "<trigger_input>" + "input": "<TRIGGER_INPUT>" } ``` -The `<trigger_input>` value contains data in a format expected by the function. The following cURL example is a POST to a `QueueTriggerJS` function. In this case, the input is a string that is equivalent to the message expected to be found in the queue. +The `<TRIGGER_INPUT>` value contains data in a format expected by the function.
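As a sketch of how these pieces fit together (hypothetical function name `QueueTrigger`, default Core Tools port 7071), you can assemble the administrator URL and request body in Bash before posting them:

```bash
# Hypothetical function name and the default local port used by func.exe.
FUNCTION_NAME=QueueTrigger
PORT=7071

# Administrator endpoint for this specific function.
URL="http://localhost:${PORT}/admin/functions/${FUNCTION_NAME}"

# The request body wraps the trigger input in the required "input" property.
BODY='{"input":"sample queue data"}'

echo "POST $URL"
# Uncomment to send the request while the Functions host is running:
# curl --request POST -H "Content-Type:application/json" --data "$BODY" "$URL"
```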
This cURL example is shown for both Bash shell and Windows command line: -##### [Bash](#tab/bash) ```bash curl --request POST -H "Content-Type:application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/QueueTrigger ```-##### [Cmd](#tab/cmd) -```bash ++```cmd curl --request POST -H "Content-Type:application/json" --data "{'input':'sample queue data'}" http://localhost:7071/admin/functions/QueueTrigger ```++The previous examples generate a POST request that passes the string `sample queue data` to a function named `QueueTrigger`, which simulates a message arriving in the queue and triggers the function. ++The following considerations apply when using the administrator endpoint for local testing: +++ You can call the `functions` administrator endpoint (`http://localhost:{port}/admin/functions/`) to return a list of administrator URLs for all available functions, both HTTP triggered and non-HTTP triggered.+++ Authentication and authorization are bypassed when running locally. The same APIs exist in Azure, but when you try to call the same administrator endpoints in Azure, you must provide an access key. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). +++ Access keys are valuable shared secrets. When used locally, they must be securely stored outside of source control. Because authentication and authorization aren't required by Functions when running locally, you should avoid using and storing access keys unless your scenarios require it.+++ Calling an administrator endpoint and passing test data is similar to using the **Test** tab in the Azure portal.++### [Event Grid trigger](#tab/event-grid-trigger) ++Event Grid triggers have specific requirements to enable local testing. For more information, see [Local testing with viewer web app](event-grid-how-tos.md#local-testing-with-viewer-web-app).
+ ## <a name="publish"></a>Publish to Azure The Azure Functions Core Tools supports three types of deployment: | Azure Container Apps | `func azurecontainerapps deploy` | Deploys a containerized function app to an existing Container Apps environment. | | Kubernetes cluster | `func kubernetes deploy` | Deploys your Linux function app as a custom Docker container to a Kubernetes cluster. | -### Authenticating with Azure - You must have either the [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell) installed locally to be able to publish to Azure from Core Tools. By default, Core Tools uses these tools to authenticate with your Azure account. If you don't have these tools installed, you need to instead [get a valid access token](/cli/azure/account#az-account-get-access-token) to use during deployment. You can present an access token using the `--access-token` option in the deployment commands. -### <a name="project-file-deployment"></a>Deploy project files +## <a name="project-file-deployment"></a>Deploy project files ::: zone pivot="programming-language-csharp,programming-language-javascript,programming-language-powershell,programming-language-python,programming-language-typescript" To publish your local code to a function app in Azure, use the [`func azure functionapp publish`](./functions-core-tools-reference.md#func-azure-functionapp-publish) command, as in the following example: The following considerations apply to this kind of deployment: + A [remote build](functions-deployment-technologies.md#remote-build) is performed on compiled projects. This can be controlled by using the [`--no-build` option][func azure functionapp publish].
++ Use the [`--publish-local-settings`][func azure functionapp publish] option to automatically create app settings in your function app based on values in the local.settings.json file. + To publish to a specific named slot in your function app, use the [`--slot` option](functions-core-tools-reference.md#func-azure-functionapp-publish). ::: zone-end -### Azure Container Apps deployment +## Deploy containers ++Core Tools lets you deploy your [containerized function app](functions-create-container-registry.md) to both managed Azure Container Apps environments and Kubernetes clusters that you manage. ++### [Container Apps](#tab/container-apps) -Functions lets you deploy a [containerized function app](functions-create-container-registry.md) to an Azure Container Apps environment. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). Use the following [`func azurecontainerapps deploy`](./functions-core-tools-reference.md#func-azurecontainerapps-deploy) command to deploy an existing container image to a Container Apps environment: +Use the following [`func azurecontainerapps deploy`](./functions-core-tools-reference.md#func-azurecontainerapps-deploy) command to deploy an existing container image to a Container Apps environment: ```command func azurecontainerapps deploy --name <APP_NAME> --environment <ENVIRONMENT_NAME> --storage-account <STORAGE_CONNECTION> --resource-group <RESOURCE_GROUP> --image-name <IMAGE_NAME> [--registry-password] [--registry-server] [--registry-username] ``` -When deploying to an Azure Container Apps environment, the environment and storage account must already exist. You don't need to create a separate function app resource. The storage account connection string you provide is used by the deployed function app. 
+When you deploy to an Azure Container Apps environment, the following considerations apply: -> [!IMPORTANT] -> Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files using `func azurecontainerapps deploy` and don't store them in any publicly accessible source control systems. ++ The environment and storage account must already exist. The storage account connection string you provide is used by the deployed function app.+++ You don't need to create a separate function app resource when deploying to Container Apps. +++ Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files that use `func azurecontainerapps deploy` and don't store them in any publicly accessible source control systems. You can [encrypt the local.settings.json file](#encrypt-the-local-settings-file) for added security. -### Kubernetes cluster +For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md). -Core Tools can also be used to deploy a [containerized function app](functions-create-container-registry.md) to a Kubernetes cluster that you manage. The following [`func kubernetes deploy`](./functions-core-tools-reference.md#func-kubernetes-deploy) command uses the Dockerfile to generate a container in the specified registry and deploy it to the default Kubernetes cluster. +### [Kubernetes cluster](#tab/kubernetes) ++The following [`func kubernetes deploy`](./functions-core-tools-reference.md#func-kubernetes-deploy) command uses the Dockerfile to generate a container in the specified registry and deploy it to the default Kubernetes cluster. ```command func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME> func kubernetes deploy --name <DEPLOYMENT_NAME> --registry <REGISTRY_USERNAME> Azure Functions on Kubernetes using KEDA is an open-source effort that you can use free of cost.
Best-effort support is provided by contributors and from the community. To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes). -## Install extensions + +++The following considerations apply when working with the local settings file: ++ Because the local.settings.json file may contain secrets, such as connection strings, you should never store it in a remote repository. For more information, see [Local settings file](functions-develop-local.md#local-settings-file). You can also [encrypt the local.settings.json file](#encrypt-the-local-settings-file) for added security. +++ By default, local settings aren't migrated automatically when the project is published to Azure. Use the [`--publish-local-settings`][func azure functionapp publish] option when you publish your project files to make sure these settings are added to the function app in Azure. Values in the `ConnectionStrings` section are never published. You can also [upload settings from the local.settings.json file](#upload-local-settings-to-azure) at any time. +++ You can download and overwrite settings in your local.settings.json file with settings from your function app in Azure. For more information, see [Download application settings](#download-application-settings). ::: zone pivot="programming-language-csharp"-> [!NOTE] -> This section only applies to C# script (.csx) projects, which also rely on extension bundles. Compiled C# projects use NuGet extension packages in the regular way. ++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-dotnet-class-library.md#environment-variables).++ The function app settings values can also be read in your code as environment variables.
For more information, see [Environment variables](functions-reference-java.md#environment-variables).++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-node.md#environment-variables).++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-powershell.md#environment-variables).++ The function app settings values can also be read in your code as environment variables. For more information, see [Environment variables](functions-reference-python.md#environment-variables). ::: zone-end -In the rare event you aren't able to use [extension bundles](functions-bindings-register.md#extension-bundles), you can use Core Tools to install the specific extension packages required by your project. The following are some reasons why you might need to install extensions manually: ++ When no valid storage connection string is set for [`AzureWebJobsStorage`](functions-app-settings.md#azurewebjobsstorage) and a local storage emulator isn't being used, an error is shown. You can use Core Tools to [download a specific connection string](#download-a-storage-connection-string) from any of your Azure Storage accounts. -* You need to access a specific version of an extension not available in a bundle. -* You need to access a custom extension not available in a bundle. -* You need to access a specific combination of extensions not available in a single bundle. +### Download application settings -The following considerations apply when manually installing extensions: +From the project root, use the following command to download all application settings from the `myfunctionapp12345` app in Azure: -+ To manually install extensions by using Core Tools, you must have the [.NET 6.0 SDK](https://dotnet.microsoft.com/download) installed. 
+```command +func azure functionapp fetch-app-settings myfunctionapp12345 +``` -+ You can't explicitly install extensions in a function app with extension bundles enabled. First, remove the `extensionBundle` section in *host.json* before explicitly installing extensions. +This command overwrites any existing settings in the local.settings.json file with values from Azure. When not already present, new items are added to the collection. For more information, see the [`func azure functionapp fetch-app-settings`](functions-core-tools-reference.md#func-azure-functionapp-fetch-app-settings) command. -+ The first time you explicitly install an extension, a .NET project file named extensions.csproj is added to the root of your app project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file. +### Download a storage connection string -Use the following command to install a specific extension package at a specific version, in this case the Storage extension: +Core Tools also makes it easy to get the connection string of any storage account to which you have access. From the project root, use the following command to download the connection string from a storage account named `mystorage12345`. ```command-func extensions install --package Microsoft.Azure.WebJobs.Extensions.Storage --version 5.0.0 +func azure storage fetch-connection-string mystorage12345 ``` -You can use this command to install any compatible NuGet package. To learn more, see the [`func extensions install`](functions-core-tools-reference.md#func-extensions-install) command. +This command adds a setting named `mystorage12345_STORAGE` to the local.settings.json file, which contains the connection string for the `mystorage12345` account.
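Downloaded connection strings land alongside your other values in the `Values` collection of the local settings file. As a hedged sketch (the connection string shown is illustrative, not a real account key), the resulting local.settings.json might look like:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "mystorage12345_STORAGE": "DefaultEndpointsProtocol=https;AccountName=mystorage12345;AccountKey=<key>;EndpointSuffix=core.windows.net"
  }
}
```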
For more information, see the [`func azure storage fetch-connection-string`](functions-core-tools-reference.md#func-azure-storage-fetch-connection-string) command. -## Monitoring functions +For improved security during development, consider [encrypting the local.settings.json file](#encrypt-the-local-settings-file). -The recommended way to monitor the execution of your functions is by integrating with Azure Application Insights. You can also stream execution logs to your local computer. To learn more, see [Monitor Azure Functions](functions-monitoring.md). +### Upload local settings to Azure -### Application Insights integration +When you publish your project files to Azure without using the `--publish-local-settings` option, settings in the local.settings.json file aren't set in your function app. You can always rerun the `func azure functionapp publish` command with the `--publish-settings-only` option to upload just the settings without republishing the project files. -Application Insights integration should be enabled when you create your function app in Azure. If for some reason your function app isn't connected to an Application Insights instance, it's easy to do this integration in the Azure portal. To learn more, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration). +The following example uploads just settings from the `Values` collection in the local.settings.json file to the function app in Azure named `myfunctionapp12345`: -### Enable streaming logs +```command +func azure functionapp publish myfunctionapp12345 --publish-settings-only +``` ++### Encrypt the local settings file ++To improve security of connection strings and other valuable data in your local settings, Core Tools lets you encrypt the local.settings.json file. When this file is encrypted, the runtime automatically decrypts the settings when needed the same way it does with application settings in Azure.
You can also decrypt a locally encrypted file to work with the settings. -You can view a stream of log files being generated by your functions in a command-line session on your local computer. +Use the following command to encrypt the local settings file for the project: +```command +func settings encrypt +``` -This type of streaming logs requires that Application Insights integration be enabled for your function app. +Use the following command to decrypt an encrypted local settings file, so that you can work with it: ++```command +func settings decrypt +``` ++When the settings file is encrypted or decrypted, the file's `IsEncrypted` setting also gets updated. ++## Configure binding extensions ++[Functions triggers and bindings](functions-triggers-bindings.md) are implemented as .NET extension (NuGet) packages. To be able to use a specific binding extension, that extension must be installed in the project. ++This section doesn't apply to version 1.x of the Functions runtime. In version 1.x, supported bindings were included in the core product. ++For C# class library projects, add references to the specific NuGet packages for the binding extensions required by your functions. C# script (.csx) projects must use [extension bundles](functions-bindings-register.md#extension-bundles). +Functions provides _extension bundles_ to make it easy to work with binding extensions in your project. Extension bundles, which are versioned and defined in the host.json file, install a complete set of compatible binding extension packages for your app. Your host.json should already have extension bundles enabled. If for some reason you need to add or update the extension bundle in the host.json file, see [Extension bundles](functions-bindings-register.md#extension-bundles). ++If you must use a binding extension or an extension version not in a supported bundle, you need to manually install extensions.
For such rare scenarios, see the [`func extensions install`](./functions-core-tools-reference.md#func-extensions-install) command. ++## <a name="v2"></a>Core Tools versions ++Major versions of Azure Functions Core Tools are linked to specific major versions of the Azure Functions runtime. For example, version 4.x of Core Tools supports version 4.x of the Functions runtime. This version is the recommended major version of both the Functions runtime and Core Tools. You can determine the latest release version of Core Tools in the [Azure Functions Core Tools repository](https://github.com/Azure/azure-functions-core-tools/releases/latest). ++Run the following command to determine the version of your current Core Tools installation: ++```command +func --version +``` ++Unless otherwise noted, the examples in this article are for version 4.x. ++The following considerations apply to Core Tools installations: +++ You can only install one version of Core Tools on a given computer. +++ When upgrading to the latest version of Core Tools, you should use the same method that you used for the original installation to perform the upgrade. For example, if you used an MSI on Windows, uninstall the current MSI and install the latest one. Or if you used npm, rerun the `npm install` command. +++ Versions 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of life (EOL). For more information, see [Azure Functions runtime versions overview](functions-versions.md). ++ Version 1.x of Core Tools is required when using version 1.x of the Functions runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today.
[!INCLUDE [functions-x86-emulation-on-arm64](../../includes/functions-x86-emulation-on-arm64.md)] -If you're using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code). +When using Visual Studio Code, you can integrate Rosetta with the built-in Terminal. For more information, see [Enable emulation in Visual Studio Code](./functions-develop-vs-code.md#enable-emulation-in-visual-studio-code). ## Next steps Learn how to [develop, test, and publish Azure functions by using Azure Function <!-- LINKS --> -[Azure Functions Core Tools]: https://www.npmjs.com/package/azure-functions-core-tools -[Azure portal]: https://portal.azure.com -[Node.js]: https://docs.npmjs.com/getting-started/installing-node#osx-or-windows -[`FUNCTIONS_WORKER_RUNTIME`]: functions-app-settings.md#functions_worker_runtime -[`AzureWebJobsStorage`]: functions-app-settings.md#azurewebjobsstorage [extension bundles]: functions-bindings-register.md#extension-bundles [func azure functionapp publish]: functions-core-tools-reference.md?tabs=v2#func-azure-functionapp-publish-[func init]: functions-core-tools-reference.md?tabs=v2#func-init +++[Long-Term Support (LTS) version of .NET Core]: https://dotnet.microsoft.com/platform/support/policy/dotnet-core#lifecycle |
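The local settings considerations above note that app setting values surface to your function code as environment variables. A minimal Python sketch, assuming a hypothetical setting name `MY_CUSTOM_SETTING` defined in the `Values` collection:

```python
import os

# App settings defined in local.settings.json (when running locally) or in the
# function app (when running in Azure) are exposed to code as environment
# variables. MY_CUSTOM_SETTING is a hypothetical setting name for illustration.
storage_conn = os.environ.get("AzureWebJobsStorage")
custom = os.environ.get("MY_CUSTOM_SETTING", "fallback-value")
print(custom)
```

The same pattern applies in the other worker languages; each language's "Environment variables" reference linked above describes the idiomatic accessor.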
azure-functions | Functions Triggers Bindings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-triggers-bindings.md | Title: Triggers and bindings in Azure Functions description: Learn to use triggers and bindings to connect your Azure Function to online events and cloud-based services.- Previously updated : 05/25/2022 Last updated : 08/14/2023 +zone_pivot_groups: programming-languages-set-functions # Azure Functions triggers and bindings concepts |
azure-functions | Migrate Cosmos Db Version 3 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md | + + Title: Migrate Azure Cosmos DB extension for Azure Functions to version 4.x +description: This article shows you how to upgrade your existing function apps using the Azure Cosmos DB extension version 3.x to be able to use version 4.x of the extension. ++ Last updated : 08/16/2023+zone_pivot_groups: programming-languages-set-functions-lang-workers +++# Migrate function apps from Azure Cosmos DB extension version 3.x to version 4.x ++This article highlights considerations for upgrading your existing Azure Functions applications that use the Azure Cosmos DB extension version 3.x to use the newer [extension version 4.x](./functions-bindings-cosmosdb-v2.md?tabs=extensionv4). Migrating from version 3.x to version 4.x of the Azure Cosmos DB extension introduces breaking changes for your application. ++> [!IMPORTANT] +> On August 31, 2024, the Azure Cosmos DB extension version 3.x will be retired. The extension and all applications using the extension will continue to function, but Azure Cosmos DB will cease to provide further maintenance and support for this extension. We recommend migrating to the latest version 4.x of the extension. ++This article walks you through the process of migrating your function app to run on version 4.x of the Azure Cosmos DB extension. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top). +++## Update the extension version ++.NET Functions use bindings that are installed in the project as NuGet packages. Depending on your Functions process model, the NuGet package to update varies.
++|Functions process model |Azure Cosmos DB extension |Recommended version | +||--|--| +|[In-process model](./functions-dotnet-class-library.md)|[Microsoft.Azure.WebJobs.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.CosmosDB) |>= 4.3.0 | +|[Isolated worker model](./dotnet-isolated-process-guide.md) |[Microsoft.Azure.Functions.Worker.Extensions.CosmosDB](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB)|>= 4.4.1 | ++Update your `.csproj` project file to use the latest extension version for your process model. The following `.csproj` file uses version 4 of the Azure Cosmos DB extension. ++### [In-process model](#tab/in-process) ++```xml +<Project Sdk="Microsoft.NET.Sdk"> + <PropertyGroup> + <TargetFramework>net7.0</TargetFramework> + <AzureFunctionsVersion>v4</AzureFunctionsVersion> + </PropertyGroup> + <ItemGroup> + <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="4.3.0" /> + <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" /> + </ItemGroup> + <ItemGroup> + <None Update="host.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + <None Update="local.settings.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + <CopyToPublishDirectory>Never</CopyToPublishDirectory> + </None> + </ItemGroup> +</Project> +``` ++### [Isolated worker model](#tab/isolated-worker) ++```xml +<Project Sdk="Microsoft.NET.Sdk"> + <PropertyGroup> + <TargetFramework>net7.0</TargetFramework> + <AzureFunctionsVersion>v4</AzureFunctionsVersion> + <OutputType>Exe</OutputType> + </PropertyGroup> + <ItemGroup> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.CosmosDB" Version="4.4.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" /> + </ItemGroup> + <ItemGroup> + <None 
Update="host.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + <None Update="local.settings.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + <CopyToPublishDirectory>Never</CopyToPublishDirectory> + </None> + </ItemGroup> +</Project> +``` +++++## Update the extension bundle ++By default, [extension bundles](./functions-bindings-register.md#extension-bundles) are used by non-.NET function apps to install binding extensions. The Azure Cosmos DB version 4 extension is part of the Microsoft Azure Functions version 4 extension bundle. ++To update your application to use the latest extension bundle, update your `host.json`. The following `host.json` file uses version 4 of the Microsoft Azure Functions extension bundle. ++```json +{ + "version": "2.0", + "extensionBundle": { + "id": "Microsoft.Azure.Functions.ExtensionBundle", + "version": "[4.*, 5.0.0)" + } +} +``` ++++## Rename the binding attributes ++Both [in-process](functions-dotnet-class-library.md) and [isolated process](dotnet-isolated-process-guide.md) C# libraries use the [CosmosDBTriggerAttribute](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.CosmosDB/Trigger/CosmosDBTriggerAttribute.cs) to define the function. ++The following table only includes attributes that were renamed or were removed from the version 3 extension. For a full list of attributes available in the version 4 extension, visit the [attribute reference](./functions-bindings-cosmosdb-v2-trigger.md?tabs=extensionv4#attributes). ++|Version 3 attribute property |Version 4 attribute property |Version 4 attribute description | +|--|--|--| +|**ConnectionStringSetting** |**Connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. 
For more information, see [Connections](./functions-bindings-cosmosdb-v2-trigger.md#connections).| +|**CollectionName** |**ContainerName** | The name of the container being monitored. | +|**LeaseConnectionStringSetting** |**LeaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `Connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.| +|**LeaseCollectionName** |**LeaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used. | +|**CreateLeaseCollectionIfNotExists** |**CreateLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Azure AD identities, if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your function won't be able to start.| +|**LeasesCollectionThroughput** |**LeasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `CreateLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. | +|**LeaseCollectionPrefix** |**LeaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. | +|**UseMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected.
| +|**UseDefaultJsonSerialization** |*Removed* | This attribute is no longer needed as you can fully customize the serialization using built-in support in the [Azure Cosmos DB version 3 .NET SDK](../cosmos-db/nosql/migrate-dotnet-v3.md#customize-serialization). | +|**CheckpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. | +|**CheckpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. | +++## Rename the binding attributes ++Update your binding configuration properties in the `function.json` file. ++The following table only includes attributes that changed or were removed from the version 3.x extension. For a full list of attributes available in the version 4 extension, visit the [attribute reference](./functions-bindings-cosmosdb-v2-trigger.md#attributes). ++|Version 3 attribute property |Version 4 attribute property |Version 4 attribute description | +|--|--|--| +|**connectionStringSetting** |**connection** | The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account being monitored. For more information, see [Connections](./functions-bindings-cosmosdb-v2-trigger.md#connections).| +|**collectionName** |**containerName** | The name of the container being monitored. | +|**leaseConnectionStringSetting** |**leaseConnection** | (Optional) The name of an app setting or setting collection that specifies how to connect to the Azure Cosmos DB account that holds the lease container. <br><br> When not set, the `connection` value is used. This parameter is automatically set when the binding is created in the portal. The connection string for the leases container must have write permissions.| +|**leaseCollectionName** |**leaseContainerName** | (Optional) The name of the container used to store leases. When not set, the value `leases` is used.
| +|**createLeaseCollectionIfNotExists** |**createLeaseContainerIfNotExists** | (Optional) When set to `true`, the leases container is automatically created when it doesn't already exist. The default value is `false`. When using Azure AD identities, if you set the value to `true`, creating containers isn't [an allowed operation](../cosmos-db/nosql/troubleshoot-forbidden.md#non-data-operations-are-not-allowed) and your function won't be able to start.| +|**leasesCollectionThroughput** |**leasesContainerThroughput** | (Optional) Defines the number of Request Units to assign when the leases container is created. This setting is only used when `createLeaseContainerIfNotExists` is set to `true`. This parameter is automatically set when the binding is created using the portal. | +|**leaseCollectionPrefix** |**leaseContainerPrefix** | (Optional) When set, the value is added as a prefix to the leases created in the Lease container for this function. Using a prefix allows two separate Azure Functions to share the same Lease container by using different prefixes. | +|**useMultipleWriteLocations** |*Removed* | This attribute is no longer needed as it's automatically detected. | +|**checkpointInterval**|*Removed* | This attribute has been removed in the version 4 extension. | +|**checkpointDocumentCount** |*Removed* | This attribute has been removed in the version 4 extension. | ++++## Modify your function code ++The Azure Functions extension version 4 is built on top of the Azure Cosmos DB .NET SDK version 3, which removed support for the [`Document` class](../cosmos-db/nosql/migrate-dotnet-v3.md#major-name-changes-from-v2-sdk-to-v3-sdk). Instead of receiving a list of `Document` objects with each function invocation, which you must then deserialize into your own object type, you can now directly receive a list of objects of your own type. ++This example refers to a simple `ToDoItem` type.
++```cs +namespace CosmosDBSamples +{ + // Customize the model with your own desired properties + public class ToDoItem + { + public string id { get; set; } + public string Description { get; set; } + } +} +``` ++Changes to the attribute names must be made directly in the code when defining your Function. ++```cs +using System.Collections.Generic; +using Microsoft.Azure.WebJobs; +using Microsoft.Azure.WebJobs.Host; +using Microsoft.Extensions.Logging; ++namespace CosmosDBSamples +{ + public static class CosmosTrigger + { + [FunctionName("CosmosTrigger")] + public static void Run([CosmosDBTrigger( + databaseName: "databaseName", + containerName: "containerName", + Connection = "CosmosDBConnectionSetting", + LeaseContainerName = "leases", + CreateLeaseContainerIfNotExists = true)]IReadOnlyList<ToDoItem> input, ILogger log) + { + if (input != null && input.Count > 0) + { + log.LogInformation("Documents modified " + input.Count); + log.LogInformation("First document Id " + input[0].id); + } + } + } +} +``` +++## Modify your function code ++After you update your `host.json` to use the correct extension bundle version and modify your `function.json` to use the correct attribute names, there are no further code changes required. +++## Next steps ++- [Run a function when an Azure Cosmos DB document is created or modified (Trigger)](./functions-bindings-cosmosdb-v2-trigger.md) +- [Read an Azure Cosmos DB document (Input binding)](./functions-bindings-cosmosdb-v2-input.md) +- [Save changes to an Azure Cosmos DB document (Output binding)](./functions-bindings-cosmosdb-v2-output.md) |
azure-functions | Migrate Dotnet To Isolated Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md | To upgrade the application, you will: ## Upgrade your local project -The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version. +The section outlines the various changes that you need to make to your local project to move it to the isolated worker model. Some of the steps change based on your target version of .NET. Use the tabs to select the instructions which match your desired version. These steps assume a local C# project; if your app instead uses C# script (`.csx` files), you should [convert to the project model](./functions-reference-csharp.md#convert-a-c-script-app-to-a-c-project) before continuing. > [!TIP]-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. +> If you are moving to an LTS or STS version of .NET, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. ### .csproj file |
azure-functions | Migrate Version 1 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md | zone_pivot_groups: programming-languages-set-functions This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top). +If you are running version 1.x of the runtime in Azure Stack Hub, see [Considerations for Azure Stack Hub](#considerations-for-azure-stack-hub) first. + ## Identify function apps to upgrade Use the following PowerShell script to generate a list of function apps in your subscription that currently target version 1.x: On version 1.x of the Functions runtime, your C# function app targets .NET Frame > [!TIP] > **Unless your app depends on a library or API only available to .NET Framework, we recommend upgrading to .NET 6 on the isolated worker model.** Many apps on version 1.x target .NET Framework only because that is what was available when they were created. Additional capabilities are available to more recent versions of .NET, and if your app is not forced to stay on .NET Framework due to a dependency, you should upgrade. >-> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you. +> Migrating to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. 
If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you. ::: zone-end Migrating a C# function app from version 1.x to version 4.x of the Functions run Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process). > [!TIP]-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. +> If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. ### .csproj file In version 2.x, the following changes were made: * The URL format of Event Grid trigger webhooks has been changed to follow this pattern: `https://{app}/runtime/webhooks/{triggerName}`. +## Considerations for Azure Stack Hub ++[App Service on Azure Stack Hub](/azure-stack/operator/azure-stack-app-service-overview) does not support version 4.x of Azure Functions. When you are planning a migration off of version 1.x in Azure Stack Hub, you can choose one of the following options: ++- Migrate to version 4.x hosted in public cloud Azure Functions using the instructions in this article. Instead of upgrading your existing app, you would create a new app using version 4.x and then deploy your modified project to it. +- Switch to [WebJobs](../app-service/webjobs-create.md) hosted on an App Service plan in Azure Stack Hub. + ## Next steps > [!div class="nextstepaction"] |
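The version 2.x change noted above gives Event Grid trigger webhooks the pattern `https://{app}/runtime/webhooks/{triggerName}`. A minimal sketch of composing a URL from that pattern; the app host and trigger name below are hypothetical examples, not values from the source:

```python
# Hedged sketch: building an Event Grid trigger webhook URL from the
# version 2.x+ pattern https://{app}/runtime/webhooks/{triggerName}.
def eventgrid_webhook_url(app_host: str, trigger_name: str) -> str:
    return f"https://{app_host}/runtime/webhooks/{trigger_name}"

print(eventgrid_webhook_url("myfunctionapp12345.azurewebsites.net", "eventgridtrigger"))
# → https://myfunctionapp12345.azurewebsites.net/runtime/webhooks/eventgridtrigger
```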
azure-functions | Migrate Version 3 Version 4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md | On version 3.x of the Functions runtime, your C# function app targets .NET Core > [!TIP] > **If you're migrating from .NET 5 (on the isolated worker model), we recommend upgrading to .NET 6 on the isolated worker model.** This provides a quick upgrade path with the longest support window from .NET. >-> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. The [.NET Upgrade Assistant] can also handle many of the necessary code changes for you. +> **If you're migrating from .NET Core 3.1 (on the in-process model), we recommend upgrading to .NET 6 on the in-process model.** This provides a quick upgrade path. However, you might also consider upgrading to .NET 6 on the isolated worker model. Switching to the isolated worker model will require additional code changes as part of this migration, but it will give your app [additional benefits](./dotnet-isolated-in-process-differences.md), including the ability to more easily target future versions of .NET. If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can also handle many of the necessary code changes for you. ::: zone-end Upgrading instructions are language dependent. If you don't see your language, c Choose the tab that matches your target version of .NET and the desired process model (in-process or isolated worker process). 
> [!TIP]-> The [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. +> If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections. ### .csproj file |
azure-functions | Run Functions From Deployment Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md | This section provides information about how to run your function app from a pack + When running a function app on Windows, the app setting `WEBSITE_RUN_FROM_PACKAGE = <URL>` gives worse cold-start performance and isn't recommended. + When you specify a URL, you must also [manually sync triggers](functions-deployment-technologies.md#trigger-syncing) after you publish an updated package. + The Functions runtime must have permissions to access the package URL.-+ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../vs-azure-tools-storage-manage-with-storage-explorer.md#generate-a-sas-in-storage-explorer) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package. ++ You shouldn't deploy your package to Azure Blob Storage as a public blob. Instead, use a private container with a [Shared Access Signature (SAS)](../storage/common/storage-sas-overview.md) or [use a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity) to enable the Functions runtime to access the package.++ You must maintain any SAS URLs used for deployment. When a SAS expires, the package can no longer be deployed. In this case, you must generate a new SAS and update the setting in your function app. You can eliminate this management burden by [using a managed identity](#fetch-a-package-from-azure-blob-storage-using-a-managed-identity). + When running on a Premium plan, make sure to [eliminate cold starts](functions-premium-plan.md#eliminate-cold-starts). + When running on a Dedicated plan, make sure you've enabled [Always On](dedicated-plan.md#always-on).
+ You can use the [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload package files to blob containers in your storage account. |
azure-functions | Security Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/security-concepts.md | For example, every function app requires an associated storage account, which is App settings and connection strings are stored encrypted in Azure. They're decrypted only before being injected into your app's process memory when the app starts. The encryption keys are rotated regularly. If you prefer to instead manage the secure storage of your secrets, the app setting should instead be references to Azure Key Vault. -You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. To learn more, see the `IsEncrypted` property in the [local settings file](functions-develop-local.md#local-settings-file). +You can also encrypt settings by default in the local.settings.json file when developing functions on your local computer. For more information, see [Encrypt the local settings file](functions-run-local.md#encrypt-the-local-settings-file). #### Key Vault references By having a separate scm endpoint, you can control deployments and other advance ### Continuous security validation -Since security needs to be considered at every step in the development process, it makes sense to also implement security validations in a continuous deployment environment. This is sometimes called DevSecOps. Using Azure DevOps for your deployment pipeline let's you integrate validation into the deployment process. For more information, see [Learn how to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline). +Since security needs to be considered at every step in the development process, it makes sense to also implement security validations in a continuous deployment environment. This is sometimes called DevSecOps. Using Azure DevOps for your deployment pipeline lets you integrate validation into the deployment process. 
For more information, see [Learn how to add continuous security validation to your CI/CD pipeline](/azure/devops/migrate/security-validation-cicd-pipeline). ## Network security |
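The security-concepts excerpt above recommends replacing plain app settings with references to Azure Key Vault. App Service and Functions express such references with the `@Microsoft.KeyVault(...)` app setting syntax. As a minimal illustrative sketch (the vault and secret names are hypothetical placeholders, not from the article), a small check that a setting value is a Key Vault reference rather than a raw secret:

```python
import re

# App Service / Functions Key Vault references use the @Microsoft.KeyVault(...) syntax,
# either with a SecretUri or with VaultName + SecretName. Names below are placeholders.
KV_REF = re.compile(r"^@Microsoft\.KeyVault\((SecretUri=.+|VaultName=.+;SecretName=.+)\)$")

def is_key_vault_reference(value: str) -> bool:
    """Return True if an app setting value looks like a Key Vault reference."""
    return bool(KV_REF.match(value))

print(is_key_vault_reference(
    "@Microsoft.KeyVault(SecretUri=https://myvault.vault.azure.net/secrets/MySecret/)"
))  # True
print(is_key_vault_reference("plain-connection-string"))  # False
```

A check like this could run in a deployment pipeline as one of the continuous security validations the excerpt describes, flagging settings that still hold raw secrets.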
azure-functions | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/overview.md | Specifying a list of VMs can be used when you need to perform the start and stop - You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). -- Your account has been granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) permission in the subscription.+- To deploy the solution, your account must be granted the [Owner](../../role-based-access-control/built-in-roles.md#owner) permission in the subscription. - Start/Stop VMs v2 is available in all Azure global and US Government cloud regions that are listed in [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=functions) page for Azure Functions. |
azure-functions | Streaming Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/streaming-logs.md | Title: Stream execution logs in Azure Functions description: Learn how you can stream logs for functions in near real time. Previously updated : 9/1/2020 Last updated : 8/21/2023 ms.devlang: azurecli There are two ways to view a stream of log files being generated by your functio * **Built-in log streaming**: the App Service platform lets you view a stream of your application log files. This is equivalent to the output seen when you debug your functions during [local development](functions-develop-local.md) and when you use the **Test** tab in the portal. All log-based information is displayed. For more information, see [Stream logs](../app-service/troubleshoot-diagnostic-logs.md#stream-logs). This streaming method supports only a single instance, and can't be used with an app running on Linux in a Consumption plan. -* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple-instances or on Linux in a Consumption plan. This method uses [sampled data](configure-monitoring.md#configure-sampling). +* **Live Metrics Stream**: when your function app is [connected to Application Insights](configure-monitoring.md#enable-application-insights-integration), you can view log data and other metrics in near real-time in the Azure portal using [Live Metrics Stream](../azure-monitor/app/live-stream.md). Use this method when monitoring functions running on multiple instances; it supports all plan types. This method uses [sampled data](configure-monitoring.md#configure-sampling). 
Log streams can be viewed both in the portal and in most local development environments. -## Portal +## [Portal](#tab/azure-portal) You can view both types of log streams in the portal. -### Built-in log streaming - To view streaming logs in the portal, select the **Platform features** tab in your function app. Then, under **Monitoring**, choose **Log streaming**. ![Enable streaming logs in the portal](./media/functions-monitoring/enable-streaming-logs-portal.png) This connects your app to the log streaming service and application logs are dis ![View streaming logs in the portal](./media/functions-monitoring/streaming-logs-window.png) -### Live Metrics Stream - To view the Live Metrics Stream for your app, select the **Overview** tab of your function app. When you have Application Insights enabled, you see an **Application Insights** link under **Configured features**. This link takes you to the Application Insights page for your app. In Application Insights, select **Live Metrics Stream**. [Sampled log entries](configure-monitoring.md#configure-sampling) are displayed under **Sample Telemetry**. ![View Live Metrics Stream in the portal](./media/functions-monitoring/live-metrics-stream.png) -## Visual Studio Code +## [Visual Studio Code](#tab/vs-code) [!INCLUDE [functions-enable-log-stream-vs-code](../../includes/functions-enable-log-stream-vs-code.md)] -## Core Tools +## [Core Tools](#tab/core-tools) ++Use the [`func azure functionapp logstream` command](functions-core-tools-reference.md#func-azure-functionapp-logstream) to start receiving streaming logs of a specific function app running in Azure, as in this example: ++```bash +func azure functionapp logstream <FunctionAppName> +``` ++>[!NOTE] +>Because built-in log streaming isn't yet enabled for function apps running on Linux in a Consumption plan, you need to instead enable the [Live Metrics Stream](../azure-monitor/app/live-stream.md) to view the logs in near-real time. 
++Use this command to display the Live Metrics Stream in a new browser window. +```bash +func azure functionapp logstream <FunctionAppName> --browser +``` -## Azure CLI +## [Azure CLI](#tab/azure-cli) You can enable streaming logs by using the [Azure CLI](/cli/azure/install-azure-cli). Use the following commands to sign in, choose your subscription, and stream log files: az account set --subscription <subscriptionNameOrId> az webapp log tail --resource-group <RESOURCE_GROUP_NAME> --name <FUNCTION_APP_NAME> ``` -## Azure PowerShell +## [Azure PowerShell](#tab/azure-powershell) You can enable streaming logs by using [Azure PowerShell](/powershell/azure/). For PowerShell, use the [Set-AzWebApp](/powershell/module/az.websites/set-azwebapp) command to enable logging on the function app, as shown in the following snippet: You can enable streaming logs by using [Azure PowerShell](/powershell/azure/). F For more information, see the [complete code example](../app-service/scripts/powershell-monitor.md#sample-script). ++ ## Next steps + [Monitor Azure Functions](functions-monitoring.md) |
azure-government | Azure Services In Fedramp Auditscope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md | This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and | [Cognitive Search](../../search/index.yml) (formerly Azure Search) | ✅ | ✅ | ✅ | ✅ | ✅ | | [Azure AI | [Azure AI -| [Azure AI services containers](../../ai-services/cognitive-services-container-support.md) | ✅ | ✅ | ✅ | ✅ | | +| [Azure AI containers](../../ai-services/cognitive-services-container-support.md) | ✅ | ✅ | ✅ | ✅ | | | [Azure AI | [Azure AI | [Azure AI |
azure-government | Documentation Government Overview Wwps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-wwps.md | Azure Stack Hub and Azure Stack Edge represent key enabling technologies that al ### Azure Stack Hub -[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with the help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Azure AI services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity. +[Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) (formerly Azure Stack) is an integrated system of software and validated hardware that you can purchase from Microsoft hardware partners, deploy in your own data center, and then operate entirely on your own or with help from a managed service provider. With Azure Stack Hub, you're always fully in control of access to your data. Azure Stack Hub can accommodate up to [16 physical servers per Azure Stack Hub scale unit](/azure-stack/operator/azure-stack-overview). 
It represents an extension of Azure, enabling you to provision various IaaS and PaaS services and effectively bring multi-tenant cloud technology to on-premises and edge environments. You can run many types of VM instances, App Services, Containers (including Azure AI containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes you use in Azure. Azure Stack Hub isn't dependent on connectivity to Azure to run deployed applications and enable operations via local connectivity. In addition to Azure Stack Hub, which is intended for on-premises deployment (for example, in a data center), a ruggedized and field-deployable version called [Tactical Azure Stack Hub](https://www.delltechnologies.com/en-us/collaterals/unauth/data-sheets/products/converged-infrastructure/dell-emc-integrated-system-for-azure-stack-hub-tactical-spec-sheet.pdf) is also available to address tactical edge deployments for limited or no connectivity, fully mobile requirements, and harsh conditions requiring military specification solutions. This section addresses common customer questions related to Azure public, privat - **Data storage for regional - **Data storage for non-regional - **Air-gapped (sovereign) cloud deployment:** Why doesn't Microsoft deploy an air-gapped, sovereign, physically isolated cloud instance in every country/region? **Answer:** Microsoft is actively pursuing air-gapped cloud deployments where a business case can be made with governments across the world. However, physical isolation or "air gapping", as a strategy, is diametrically opposed to the strategy of hyperscale cloud. The value propositions of the cloud (rapid feature growth, resiliency, and cost-effective operation) are diminished when the cloud is fragmented and physically isolated. These strategic challenges compound with each extra air-gapped cloud or fragmentation within an air-gapped cloud. 
Whereas an air-gapped cloud might prove to be the right solution for certain customers, it isn't the only available option.-- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country/region by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country/region personnel. You can run many types of VM instances, App Services, Containers (including Azure AI services containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access.+- **Air-gapped (sovereign) cloud customer options:** How can Microsoft support governments who need to operate cloud services completely in-country/region by local security-cleared personnel? What options does Microsoft have for cloud services operated entirely on-premises within customer owned datacenter where government employees exercise sole operational and data access control? **Answer:** You can use [Azure Stack Hub](https://azure.microsoft.com/products/azure-stack/hub/) to deploy a private cloud on-premises managed by your own security-cleared, in-country/region personnel. You can run many types of VM instances, App Services, Containers (including Azure AI containers), Functions, Azure Monitor, Key Vault, Event Hubs, and other services while using the same development tools, APIs, and management processes that you use in Azure. 
With Azure Stack Hub, you have sole control of your data, including storage, processing, transmission, and remote access. - **Local jurisdiction:** Is Microsoft subject to local country/region jurisdiction based on the availability of Azure public cloud service? **Answer:** Yes, Microsoft must comply with all applicable local laws; however, government requests for customer data must also comply with applicable laws. A subpoena or its local equivalent is required to request non-content data. A warrant, court order, or its local equivalent is required for content data. Government requests for customer data follow a strict procedure according to [Microsoft practices for responding to government requests](https://blogs.microsoft.com/datalaw/our-practices/). Every year, Microsoft rejects many law enforcement requests for customer data. Challenges to government requests can take many forms. In many of these cases, Microsoft simply informs the requesting government that it's unable to disclose the requested information and explains the reason for rejecting the request. Where appropriate, Microsoft challenges requests in court. Our [Law Enforcement Request Report](https://www.microsoft.com/corporate-responsibility/law-enforcement-requests-report?rtc=1) and [US National Security Order Report](https://www.microsoft.com/corporate-responsibility/us-national-security-orders-report) are updated every six months and show that most of our customers are never impacted by government requests for data. For example, in the second half of 2019, Microsoft received 39 requests from law enforcement for accounts associated with enterprise cloud customers. Of those requests, only one warrant resulted in disclosure of customer content related to a non-US enterprise customer whose data was stored outside the United States. - **Autarky:** Can Microsoft cloud operations be separated from the rest of Microsoft cloud and connected solely to local government network? 
Are operations possible without external connections to a third party? **Answer:** Yes, depending on the cloud deployment model. - **Public Cloud:** Azure regional datacenters can be connected to your local government network through dedicated private connections such as ExpressRoute. Independent operation without any connectivity to a third party such as Microsoft isn't possible in the public cloud. |
azure-maps | Creator Facility Ontology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md | Learn more about Creator for indoor maps by reading: [structures]: #structure <! REST API Links > [conversion service]: /rest/api/maps/v2/conversion-[dataset]: /rest/api/maps/v20220901preview/dataset +[dataset]: /rest/api/maps/2023-03-01-preview/dataset [GeoJSON Point geometry]: /rest/api/maps/v2/wfs/get-features#geojsonpoint [MultiPolygon]: /rest/api/maps/v2/wfs/get-features?tabs=HTTP#geojsonmultipolygon [Point]: /rest/api/maps/v2/wfs/get-features#geojsonpoint |
azure-maps | Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md | The following example shows how to update a dataset, create a new tileset, and d <!-- REST API Links -> [Alias API]: /rest/api/maps/v2/alias [Conversion service]: /rest/api/maps/v2/conversion-[Creator - map configuration Rest API]: /rest/api/maps/v20220901preview/map-configuration +[Creator - map configuration Rest API]: /rest/api/maps/2023-03-01-preview/map-configuration [Data Upload]: /rest/api/maps/data-v2/update [Dataset Create]: /rest/api/maps/v2/dataset/create [Dataset service]: /rest/api/maps/v2/dataset The following example shows how to update a dataset, create a new tileset, and d [Feature State Update API]: /rest/api/maps/v2/feature-state/update-states [Geofence service]: /rest/api/maps/spatial/postgeofence [Render V2-Get Map Tile API]: /rest/api/maps/render-v2/get-map-tile-[routeset]: /rest/api/maps/v20220901preview/routeset -[Style - Create]: /rest/api/maps/v20220901preview/style/create -[style]: /rest/api/maps/v20220901preview/style +[routeset]: /rest/api/maps/2023-03-01-preview/routeset +[Style - Create]: /rest/api/maps/2023-03-01-preview/style/create +[style]: /rest/api/maps/2023-03-01-preview/style [Tileset Create]: /rest/api/maps/v2/tileset/create [Tileset List]: /rest/api/maps/v2/tileset/list [Tileset service]: /rest/api/maps/v2/tileset-[tileset]: /rest/api/maps/v20220901preview/tileset -[wayfinding path]: /rest/api/maps/v20220901preview/wayfinding/get-path -[wayfinding service]: /rest/api/maps/v20220901preview/wayfinding -[wayfinding]: /rest/api/maps/v20220901preview/wayfinding +[tileset]: /rest/api/maps/2023-03-01-preview/tileset +[wayfinding path]: /rest/api/maps/2023-03-01-preview/wayfinding/get-path +[wayfinding service]: /rest/api/maps/2023-03-01-preview/wayfinding +[wayfinding]: /rest/api/maps/2023-03-01-preview/wayfinding [Web Feature service]: /rest/api/maps/v2/wfs <! learn.microsoft.com Links > |
azure-maps | Creator Onboarding Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md | The following steps demonstrate how to create an indoor map in your Azure Maps a :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-upload.png" alt-text="Screenshot showing the package upload screen of the Azure Maps Creator onboarding tool."::: -<!-- - > [!NOTE] - > If the manifest included in the drawing package is incomplete or contains errors, the onboarding tool will not go directly to the **Review + Create** tab, but instead goes to the tab where you are best able to address the issue. >- 1. Once the package is uploaded, the onboarding tool uses the [Conversion service] to validate the data then convert the geometry and data from the drawing package into a digital indoor map. For more information about the conversion process, see [Convert a drawing package] in the Creator concepts article. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/package-conversion.png" alt-text="Screenshot showing the package conversion screen of the Azure Maps Creator onboarding tool, including the Conversion ID value."::: |
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | Defining text properties enables you to associate text entities that fall inside > * Stair > * Elevator -### Download +### Review + Create -When finished, select the **Download** button to view the manifest. When you finished verifying that it's ready, select the **Download** button to save it locally so that you can include it in the drawing package to import into your Azure Maps Creator resource. +When finished, select the **Create + Download** button to download a copy of the drawing package and start the map creation process. For more information on the map creation process, see [Create indoor map with the onboarding tool]. :::image type="content" source="./media/creator-indoor-maps/onboarding-tool/review-download.png" alt-text="Screenshot showing the manifest JSON."::: You should now have all the DWG drawings prepared to meet Azure Maps Conversion [manifest files]: drawing-requirements.md#manifest-file-1 [wayfinding]: creator-indoor-maps.md#wayfinding-preview [facility level]: drawing-requirements.md#facility-level+[Create indoor map with the onboarding tool]: creator-onboarding-tool.md |
azure-maps | Geocoding Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md | The ability to geocode in a country/region is dependent upon the road data cover | Burkina Faso | | | ✓ | ✓ | ✓ | | Burundi | | | ✓ | ✓ | ✓ | | Cameroon | | | ✓ | ✓ | ✓ |-| Cape Verde | | | ✓ | ✓ | ✓ | +| Cabo Verde | | | ✓ | ✓ | ✓ | | Central African Republic | | | ✓ | ✓ | ✓ | | Chad | | | | ✓ | ✓ | | Congo | | | | ✓ | ✓ | The ability to geocode in a country/region is dependent upon the road data cover | Qatar | ✓ | | ✓ | ✓ | ✓ | | Réunion | ✓ | ✓ | ✓ | ✓ | ✓ | | Rwanda | | | ✓ | ✓ | ✓ |-| Saint Helena | | | | ✓ | ✓ | +| Saint Helena, Ascension, and Tristan da Cunha | | | | ✓ | ✓ | | São Tomé & Príncipe | | | ✓ | ✓ | ✓ | | Saudi Arabia | ✓ | | ✓ | ✓ | ✓ | | Senegal | | | ✓ | ✓ | ✓ | |
azure-maps | How To Create Custom Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md | Select the **Get map configuration list** button to get a list of every map conf :::image type="content" source="./media/creator-indoor-maps/style-editor/select-the-map-configuration.png" alt-text="A screenshot of the open style dialog box in the visual style editor with the Select map configuration drop-down list highlighted."::: > [!NOTE]-> If the map configuration was created as part of a custom style and has a user provided alias, that alias appears in the map configuration drop-down list, otherwise the `mapConfigurationId` appears. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID: +> If the map configuration was created as part of a custom style and has a user provided alias, that alias appears in the map configuration drop-down list, otherwise just the `mapConfigurationId` appears. The default map configuration ID for any given tileset can be found by using the [tileset get] HTTP request and passing in the tileset ID: > > ```http-> https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?2022-09-01-preview +> https://{geography}.atlas.microsoft.com/tilesets/{tilesetId}?2023-03-01-preview > ``` > > The `mapConfigurationId` is returned in the body of the response, for example: Select the **Get map configuration list** button to get a list of every map conf > "defaultMapConfigurationId": "68d74ad9-4f84-99ce-06bb-19f487e8e692" > ``` -Once the map configuration drop-down list is populated with the IDs of all the map configurations in your creator resource, select the desired map configuration, then the drop-down list of style + tileset tuples appears. The *style + tileset* tuples consists of the style alias or ID, followed by the plus (**+**) sign then the `tilesetId`. 
+Once the desired map configuration is selected, the drop-down list of styles appears. Once you've selected the desired style, select the **Load selected style** button. Once you've selected the desired style, select the **Load selected style** butto ||| | 1 | Your Azure Maps account [subscription key] | | 2 | Select the geography of the Azure Maps account. |-| 3 | A list of map configuration aliases. If a given map configuration has no alias, the `mapConfigurationId` is shown instead. | -| 4 | This value is created from a combination of the style and tileset. If the style has an alias it's shown, if not the `styleId` is shown. The `tilesetId` is always shown for the tileset value. | +| 3 | A list of map configuration IDs and aliases. | +| 4 | A list of styles associated with the selected map configuration. | ### Modify style The following table describes the four fields you're presented with. | Property | Description | |-|-| | Style description | A user-defined description for this style. |-| Style alias | An alias that can be used to reference this style.<BR>When referencing programmatically, the style is referenced by the style ID if no alias is provided. | | Map configuration description | A user-defined description for this map configuration. | | Map configuration alias | An alias used to reference this map configuration.<BR>When referencing programmatically, the map configuration is referenced by the map configuration ID if no alias is provided. | Some important things to know about aliases: 1. Can be named using alphanumeric characters (0-9, a-z, A-Z), hyphens (-) and underscores (_).-1. Can be used to reference the underlying object, whether a style or map configuration, in place of that object's ID. 
This is especially important since the style and map configuration can't be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times. +1. Can be used to reference the underlying map configuration, in place of that object's ID. This is especially important since the map configuration can't be updated, meaning every time any changes are saved, a new ID is generated, but the alias can remain the same, making referencing it less error prone after it has been modified multiple times. > [!WARNING]-> Duplicate aliases are not allowed. If the alias of an existing style or map configuration is used, the style or map configuration that alias points to will be overwritten and the existing style or map configuration will be deleted and references to that ID will result in errors. See [map configuration] in the concepts article for more information. +> Duplicate aliases are not allowed. If the alias of an existing map configuration is used, the map configuration that alias points to will be overwritten and the existing map configuration will be deleted and references to that ID will result in errors. For more information, see [map configuration] in the concepts article. Once you have entered values into each required field, select the **Upload map configuration** button to save the style and map configuration data to your Creator resource. +Once you have successfully uploaded your custom styles, you'll see the **Upload complete** dialog showing you the values for Style ID, Map configuration ID, and the map configuration alias. For more information, see [custom styling] and [map configuration]. ++ > [!TIP]-> Make a note of the map configuration `alias` value, it will be required when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. 
+> Make a note of the map configuration alias value, it will be required when you [Instantiate the Indoor Manager] of a Map object when developing applications in Azure Maps. +> Also, make a note of the Style ID, it can be reused for other tilesets. ## Custom categories Now when you select that unit in the map, the pop-up menu has the new layer ID, [categories]: https://atlas.microsoft.com/sdk/javascript/indoor/0.2/categories.json [Creator concepts]: creator-indoor-maps.md [Creators Rest API]: /rest/api/maps-creator/+[custom styling]: creator-indoor-maps.md#custom-styling-preview [Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager [manifest]: drawing-requirements.md#manifest-file-requirements [map configuration]: creator-indoor-maps.md#map-configuration [style editor]: https://azure.github.io/Azure-Maps-Style-Editor [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account-[tileset get]: /rest/api/maps/v20220901preview/tileset/get -[tileset]: /rest/api/maps/v20220901preview/tileset +[tileset get]: /rest/api/maps/2023-03-01-preview/tileset/get +[tileset]: /rest/api/maps/2023-03-01-preview/tileset [unitProperties]: drawing-requirements.md#unitproperties [Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md |
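The custom-styles excerpt above notes that the default map configuration ID for a tileset is returned in the body of the tileset get response as `defaultMapConfigurationId`. A minimal sketch of reading that field from the response body (the response excerpt is illustrative; only the `defaultMapConfigurationId` value is taken from the article, and the tileset ID is a placeholder):

```python
import json

# Hypothetical excerpt of a tileset get response body; only defaultMapConfigurationId
# is taken from the article, the tilesetId value is a placeholder.
response_body = """
{
  "tilesetId": "<tileset-id>",
  "defaultMapConfigurationId": "68d74ad9-4f84-99ce-06bb-19f487e8e692"
}
"""

tileset = json.loads(response_body)
map_config_id = tileset["defaultMapConfigurationId"]
print(map_config_id)  # 68d74ad9-4f84-99ce-06bb-19f487e8e692
```

In practice the body would come from the HTTP response to the tileset get request shown above, rather than a literal string.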
azure-maps | How To Creator Wayfinding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md | To create a routeset: 1. Execute the following **HTTP POST request**: ```http- https://us.atlas.microsoft.com/routesets?api-version=2022-09-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/routesets?api-version=2023-03-01-preview&datasetID={datasetId}&subscription-key={Your-Azure-Maps-Subscription-key} ``` To check the status of the routeset creation process and retrieve the routesetId 1. Execute the following **HTTP GET request**: ```http- https://us.atlas.microsoft.com/routesets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/routesets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} ``` To check the status of the routeset creation process and retrieve the routesetId 1. Copy the value of the **Resource-Location** key from the responses header. It's the resource location URL and contains the `routesetId`: - > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2022-09-01-preview + > https://us.atlas.microsoft.com/routesets/**675ce646-f405-03be-302e-0d22bcfe17e8**?api-version=2023-03-01-preview Make a note of the `routesetId`. It's required in all [wayfinding](#get-a-wayfinding-path) requests and when you [Get the facility ID]. The `facilityId`, a property of the routeset, is a required parameter when searc 1. 
Execute the following **HTTP GET request**: ```http- https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/routesets/{routesetId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} ``` To create a wayfinding query: 1. Execute the following **HTTP GET request** (replace {routesetId} with the routesetId obtained in the [Check the routeset creation status] section and the {facilityId} with the facilityId obtained in the [Get the facility ID] section): ```http- https://us.atlas.microsoft.com/wayfinding/path?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}&routesetid={routeset-ID}&facilityid={facility-ID}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimum-width} + https://us.atlas.microsoft.com/wayfinding/path?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key}&routesetid={routeset-ID}&facilityid={facility-ID}&fromPoint={lat,lon}&fromLevel={from-level}&toPoint={lat,lon}&toLevel={to-level}&minWidth={minimum-width} ``` > [!TIP] The wayfinding service calculates the path through specific intervening points. [wayfinding service]: creator-indoor-maps.md#wayfinding-preview [wayfinding]: creator-indoor-maps.md#wayfinding-preview <! REST API Links >-[routeset]: /rest/api/maps/v20220901preview/routeset -[wayfinding API]: /rest/api/maps/v20220901preview/wayfinding +[routeset]: /rest/api/maps/2023-03-01-preview/routeset +[wayfinding API]: /rest/api/maps/2023-03-01-preview/wayfinding |
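The wayfinding query above takes its parameters as a query string. As a minimal sketch of assembling that request URL (the routeset ID is the example value from the article; the coordinates, levels, width, and facility ID are hypothetical placeholders):

```python
from urllib.parse import urlencode

# Placeholder values: routesetid is the article's example; facilityid comes from the
# "Get the facility ID" step, and the points/levels/width are made-up for illustration.
params = {
    "api-version": "2023-03-01-preview",
    "subscription-key": "<Your-Azure-Maps-Subscription-key>",
    "routesetid": "675ce646-f405-03be-302e-0d22bcfe17e8",
    "facilityid": "<facility-id>",
    "fromPoint": "47.639766,-122.128362",
    "fromLevel": "0",
    "toPoint": "47.639116,-122.128306",
    "toLevel": "1",
    "minWidth": "0.85",
}
url = "https://us.atlas.microsoft.com/wayfinding/path?" + urlencode(params)
print(url)
```

Building the query with `urlencode` also takes care of percent-encoding the comma-separated coordinate pairs.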
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | To create a dataset: 1. Enter the following URL to the dataset service. The request should look like the following URL (replace {udid} with the `udid` obtained in [Check the GeoJSON package upload status] section): ```http- https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&udid={udid}&subscription-key={Your-Azure-Maps-Subscription-key} ``` 1. Copy the value of the `Operation-Location` key in the response header. The `Operation-Location` key is also known as the `status URL` and is required to check the status of the dataset creation process and to get the `datasetId`, which is required to create a tileset. To check the status of the dataset creation process and retrieve the `datasetId` 1. Enter the status URL you copied in [Create a dataset]. The request should look like the following URL: ```http- https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/datasets/operations/{operationId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} ``` 1. In the Header of the HTTP response, copy the value of the unique identifier contained in the `Resource-Location` key. - > `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2022-09-01-preview` + > `https://us.atlas.microsoft.com/datasets/**c9c15957-646c-13f2-611a-1ea7adc75174**?api-version=2023-03-01-preview` See [Next steps] for links to articles to help you complete your indoor map. 
One thing to consider when adding to an existing dataset is how the feature IDs If your original dataset was created from a GeoJSON source and you wish to add another facility created from a drawing package, you can append it to your existing dataset by referencing its `conversionId`, as demonstrated by this HTTP POST request:

```http
-https://us.atlas.microsoft.com/datasets?api-version=2022-09-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId}
+https://us.atlas.microsoft.com/datasets?api-version=2023-03-01-preview&conversionId={conversionId}&outputOntology=facility-2.0&datasetId={datasetId}
```

| Identifier | Description |

Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)

[Creator Long-Running Operation API V2]: creator-long-running-operation-v2.md
[Creator resource]: how-to-manage-creator.md
[Data Upload API]: /rest/api/maps/data-v2/upload
-[Dataset Create API]: /rest/api/maps/v20220901preview/dataset/create
+[Dataset Create API]: /rest/api/maps/2023-03-01-preview/dataset/create
[Dataset Create]: /rest/api/maps/v2/dataset/create
[dataset]: creator-indoor-maps.md#datasets
[Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2 |
azure-maps | How To Secure Spa Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md | Create the web application in Azure AD for users to sign in. The web application 6. Copy the Azure AD app ID and the Azure AD tenant ID from the app registration to use in the Web SDK. Add the Azure AD app registration details and the `x-ms-client-id` from the Azure Map account to the Web SDK. ```javascript- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js" /> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js" /> <script> var map = new atlas.Map("map", { center: [-122.33, 47.64], |
azure-maps | How To Use Indoor Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md | Set the map domain with a prefix matching the location of your Creator resource, For more information, see [Azure Maps service geographic scope]. -Next, instantiate a *Map object* with the map configuration object set to the `alias` or `mapConfigurationId` property of your map configuration, then set your `styleAPIVersion` to `2022-09-01-preview`. +Next, instantiate a *Map object* with the map configuration object set to the `alias` or `mapConfigurationId` property of your map configuration, then set your `styleAPIVersion` to `2023-03-01-preview`. The *Map object* will be used in the next step to instantiate the *Indoor Manager* object. The following code shows you how to instantiate the *Map object* with `mapConfiguration`, `styleAPIVersion` and map domain set: const map = new atlas.Map("map-id", { zoom: 19, mapConfiguration: mapConfiguration,- styleAPIVersion: '2022-09-01-preview' + styleAPIVersion: '2023-03-01-preview' }); ``` When you create an indoor map using Azure Maps Creator, default styles are appli - `mapConfiguration` the ID or alias of the map configuration that defines the custom styles you want to display on the map, use the map configuration ID or alias from step 1. - `style` allows you to set the initial style from your map configuration that is displayed. If not set, the style matching map configuration's default configuration is used. - `zoom` allows you to specify the min and max zoom levels for your map.- - `styleAPIVersion`: pass **'2022-09-01-preview'** (which is required while Custom Styling is in public preview) + - `styleAPIVersion`: pass **'2023-03-01-preview'** (which is required while Custom Styling is in public preview) 7. Next, create the *Indoor Manager* module with *Indoor Level Picker* control instantiated as part of *Indoor Manager* options, optionally set the `statesetId` option. 
Your file should now look similar to the following HTML: <meta name="viewport" content="width=device-width, user-scalable=no" /> <title>Indoor Maps App</title> - <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.css" type="text/css"/> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script src="https://atlas.microsoft.com/sdk/javascript/indoor/0.2/atlas-indoor.min.js"></script> <style> Your file should now look similar to the following HTML: zoom: 19, mapConfiguration: mapConfig,- styleAPIVersion: '2022-09-01-preview' + styleAPIVersion: '2023-03-01-preview' }); const levelControl = new atlas.control.LevelControl({ Learn more about how to add more data to your map: [Drawing package requirements]: drawing-requirements.md [dynamic map styling]: indoor-map-dynamic-styling.md [Indoor Maps dynamic styling]: indoor-map-dynamic-styling.md-[map configuration API]: /rest/api/maps/v20220901preview/map-configuration +[map configuration API]: /rest/api/maps/2023-03-01-preview/map-configuration [map configuration]: creator-indoor-maps.md#map-configuration-[Style Rest API]: /rest/api/maps/v20220901preview/style +[Style Rest API]: /rest/api/maps/2023-03-01-preview/style [style-loader]: https://webpack.js.org/loaders/style-loader [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Tileset List API]: /rest/api/maps/v2/tileset/list |
azure-maps | How To Use Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md | The Azure Maps Web SDK provides a [Map Control] that enables the customization o This article uses the Azure Maps Web SDK; however, the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects].

+> [!IMPORTANT]
+> If you have existing applications incorporating Azure Maps using version 2 of the [Map Control], it is recommended to start using version 3. Version 3 is backward compatible and has several benefits, including [WebGL 2 Compatibility], increased performance, and support for [3D terrain tiles].
+
## Prerequisites

To use the Map Control in a web page, you must have one of the following prerequisites:

You can embed a map in a web page by using the Map Control client-side JavaScrip * Use the globally hosted CDN version of the Azure Maps Web SDK by adding references to the JavaScript and `stylesheet` in the `<head>` element of the HTML file:

```html
- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css">
- <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script>
+ <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css">
+ <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
```

* Load the Azure Maps Web SDK source code locally using the [azure-maps-control] npm package and host it with your app. This package also includes TypeScript definitions.
You can embed a map in a web page by using the Map Control client-side JavaScrip Then add references to the Azure Maps `stylesheet` to the `<head>` element of the file: ```html- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> ``` > [!NOTE] You can embed a map in a web page by using the Map Control client-side JavaScrip <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type="text/javascript"> Here's an example of Azure Maps with the language set to "fr-FR" and the regiona For a list of supported languages and regional views, see [Localization support in Azure Maps]. +## WebGL 2 Compatibility ++Beginning with Azure Maps Web SDK 3.0, the Web SDK includes full compatibility with [WebGL 2], a powerful graphics technology that enables hardware-accelerated rendering in modern web browsers. By using WebGL 2, developers can harness the capabilities of modern GPUs to render complex maps and visualizations more efficiently, resulting in improved performance and visual quality. 
++![Map image showing WebGL 2 Compatibility.](./media/how-to-use-map-control/webgl-2-compatability.png) ++```html +<!DOCTYPE html> +<html lang="en"> + <head> + <meta charset="utf-8" /> + <meta name="viewport" content="width=device-width, user-scalable=no" /> + <title>WebGL2 - Azure Maps Web SDK Samples</title> + <link href=https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css rel="stylesheet"/> + <script src=https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js></script> + <script src="https://unpkg.com/deck.gl@latest/dist.min.js"></script> + <style> + html, + body { + width: 100%; + height: 100%; + padding: 0; + margin: 0; + } + #map { + width: 100%; + height: 100%; + } + </style> + </head> + <body> + <div id="map"></div> + <script> + var map = new atlas.Map("map", { + center: [-122.44, 37.75], + bearing: 36, + pitch: 45, + zoom: 12, + style: "grayscale_light", + // Get an Azure Maps key at https://azuremaps.com/. + authOptions: { + authType: "subscriptionKey", + subscriptionKey: " <Your Azure Maps Key> " + } + }); ++ // Wait until the map resources are ready. + map.events.add("ready", (event) => { + // Create a custom layer to render data points using deck.gl + map.layers.add( + new DeckGLLayer({ + id: "grid-layer", + data: "https://raw.githubusercontent.com/visgl/deck.gl-data/master/website/sf-bike-parking.json", + cellSize: 200, + extruded: true, + elevationScale: 4, + getPosition: (d) => d.COORDINATES, + // GPUGridLayer leverages WebGL2 to perform aggregation on the GPU. 
+ // For more details, see https://deck.gl/docs/api-reference/aggregation-layers/gpu-grid-layer + type: deck.GPUGridLayer + }) + ); + }); ++ // A custom implementation of WebGLLayer + class DeckGLLayer extends atlas.layer.WebGLLayer { + constructor(options) { + super(options.id); + // Create an instance of deck.gl MapboxLayer which is compatible with Azure Maps + // https://deck.gl/docs/api-reference/mapbox/mapbox-layer + this._mbLayer = new deck.MapboxLayer(options); ++ // Create a renderer + const renderer = { + renderingMode: "3d", + onAdd: (map, gl) => { + this._mbLayer.onAdd?.(map["map"], gl); + }, + onRemove: (map, gl) => { + this._mbLayer.onRemove?.(map["map"], gl); + }, + prerender: (gl, matrix) => { + this._mbLayer.prerender?.(gl, matrix); + }, + render: (gl, matrix) => { + this._mbLayer.render(gl, matrix); + } + }; + this.setOptions({ renderer }); + } + } + </script> + </body> +</html> +``` ++## 3D terrain tiles ++Beginning with Azure Maps Web SDK 3.0, developers can take advantage of 3D terrain visualizations. This feature allows you to incorporate elevation data into your maps, creating a more immersive experience for your users. Whether it's visualizing mountain ranges, valleys, or other geographical features, the 3D terrain support brings a new level of realism to your mapping applications. ++The following code example demonstrates how to implement 3D terrain tiles. 
+
+```html
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8" />
+    <meta name="viewport" content="width=device-width, user-scalable=no" />
+    <title>Elevation - Azure Maps Web SDK Samples</title>
+    <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" rel="stylesheet" />
+    <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script>
+    <style>
+      html,
+      body {
+        width: 100%;
+        height: 100%;
+        padding: 0;
+        margin: 0;
+      }
+      #map {
+        width: 100%;
+        height: 100%;
+      }
+    </style>
+  </head>
+
+  <body>
+    <div id="map"></div>
+    <script>
+      var map = new atlas.Map("map", {
+        center: [-121.7269, 46.8799],
+        maxPitch: 85,
+        pitch: 60,
+        zoom: 12,
+        style: "road_shaded_relief",
+        // Get an Azure Maps key at https://azuremaps.com/.
+        authOptions: {
+          authType: "subscriptionKey",
+          subscriptionKey: "<Your Azure Maps Key>"
+        }
+      });
+
+      // Create a tile source for elevation data. For more information on creating
+      // elevation data & services using open data, see https://aka.ms/elevation
+      var elevationSource = new atlas.source.ElevationTileSource("elevation", {
+        url: "<tileSourceUrl>"
+      });
+
+      // Wait until the map resources are ready.
+      map.events.add("ready", (event) => {
+
+        // Add the elevation source to the map.
+        map.sources.add(elevationSource);
+
+        // Enable elevation on the map.
+        map.enableElevation(elevationSource);
+      });
+    </script>
+  </body>
+</html>
+```
+
## Azure Government cloud support

The Azure Maps Web SDK supports the Azure Government cloud. All JavaScript and CSS URLs used to access the Azure Maps Web SDK remain the same. The following tasks need to be done to connect to the Azure Government cloud version of the Azure Maps platform.
For a list of samples showing how to integrate Azure AD with Azure Maps, see: > [!div class="nextstepaction"] > [Azure AD authentication samples](https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples) +[3D terrain tiles]: #3d-terrain-tiles [authentication options]: /javascript/api/azure-maps-control/atlas.authenticationoptions [Authentication with Azure Maps]: azure-maps-authentication.md [Azure Maps & Azure Active Directory Samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples For a list of samples showing how to integrate Azure AD with Azure Maps, see: [AzureMapsControl.Components]: https://github.com/arnaudleclerc/AzureMapsControl.Components [azure-maps-control]: https://www.npmjs.com/package/azure-maps-control [Localization support in Azure Maps]: supported-languages.md+[Map Control]: https://www.npmjs.com/package/azure-maps-control [ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps-[Map Control]: https://www.npmjs.com/package/azure-maps-control +[WebGL 2 Compatibility]: #webgl-2-compatibility +[WebGL 2]: https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API#webgl_2 |
azure-maps | How To Use Spatial Io Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md | You can load the Azure Maps spatial IO module using one of the two options: <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script> <script type='text/javascript'> You can load the Azure Maps spatial IO module using one of the two options: <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> |
azure-maps | Map Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md | In the following code, the first code block creates a map and sets the enter and <head> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type="text/javascript"> |
azure-maps | Migrate From Bing Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md | The following code shows how to load a map with the same view in Azure Maps alon <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; When using a Symbol layer, the data must be added to a data source, and the data <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; Symbol layers in Azure Maps support custom images as well, but the image needs t <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; GeoJSON data can be directly imported in Azure Maps using the `importDataFromUrl <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; In Azure Maps, load the GeoJSON data into a data source and connect the data sou <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; In Azure Maps, georeferenced images can be overlaid using the `atlas.layer.Image <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. 
--> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> In Azure Maps, the drawing tools module needs to be loaded by loading the JavaSc <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add references to the Azure Maps Map Drawing Tools JavaScript and CSS files. --> <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/drawing/0/atlas-drawing.min.css" type="text/css" /> |
azure-maps | Migrate From Bing Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps.md | Learn the details of how to migrate your Bing Maps application with these articl [azure.com]: https://azure.com [Basic snap to road logic]: https://samples.azuremaps.com/?search=Snap%20to%20road&sample=basic-snap-to-road-logic [Choose the right pricing tier in Azure Maps]: choose-pricing-tier.md+[free account]: https://azure.microsoft.com/free/ [free Azure account]: https://azure.microsoft.com/free/ [manage authentication in Azure Maps]: how-to-manage-authentication.md [Microsoft Azure terms of use]: https://www.microsoftvolumelicensing.com/DocumentSearch.aspx?Mode=3&DocumentTypeId=31 |
azure-maps | Migrate From Google Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md | Load a map with the same view in Azure Maps along with a map style control and z <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; For a Symbol layer, add the data to a data source. Attach the data source to the <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; Symbol layers in Azure Maps support custom images as well. First, load the image <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; GeoJSON is the base data type in Azure Maps. Import it into a data source using <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; Directly import GeoJSON data using the `importDataFromUrl` function on the `Data <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map, datasource; Load the GeoJSON data into a data source and connect the data source to a heat m <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; Use the `atlas.layer.ImageLayer` class to overlay georeferenced images. This cla <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. 
-->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <script type='text/javascript'> var map; In Azure Maps, GeoJSON is the main data format used in the web SDK, more spatial <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add reference to the Azure Maps Spatial IO module. --> <script src="https://atlas.microsoft.com/sdk/javascript/spatial/0/atlas-spatial.js"></script> |
azure-maps | Power Bi Visual Add Reference Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md | The following are all settings in the **Format** pane that are available in the |-|| | Reference layer data | The data GeoJSON file to upload to the visual as another layer within the map. The **+ Add local file** button opens a file dialog the user can use to select a GeoJSON file that has a `.json` or `.geojson` file extension. | -> [!NOTE] -> In this preview of the Azure Maps Power BI visual, the reference layer will only load the first 5,000 shape features to the map. This limit will be increased in a future update. ## Styling data in a reference layer |
azure-maps | Release Notes Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md | -## v3 (preview) +## v3 (latest) ++### [3.0.0] (August 18, 2023) ++#### Bug fixes (3.0.0) ++- Fixed zoom control to take into account the `maxBounds` [CameraOptions]. ++- Fixed an issue that mouse positions are shifted after a css scale transform on the map container. ++#### Other changes (3.0.0) ++- Phased out the style definition version `2022-08-05` and switched the default `styleDefinitionsVersion` to `2023-01-01`. ++- Added the `mvc` parameter to encompass the map control version in both definitions and style requests. ++#### Installation (3.0.0) ++The version is available on [npm][3.0.0] and CDN. ++- **NPM:** Refer to the instructions at [azure-maps-control@3.0.0][3.0.0] ++- **CDN:** Reference the following CSS and JavaScript in the `<head>` element of an HTML file: ++ ```html + <link href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.css" rel="stylesheet" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3.0/atlas.min.js"></script> + ``` ### [3.0.0-preview.10] (July 11, 2023) This update is the first preview of the upcoming 3.0.0 release. The underlying [ }) ``` -## v2 (latest) +## v2 ### [2.3.2] (August 11, 2023) Stay up to date on Azure Maps: > [!div class="nextstepaction"] > [Azure Maps Blog] +[3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0 [3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10 [3.0.0-preview.9]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.9 [3.0.0-preview.8]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.8 |
azure-maps | Render Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md | The render coverage tables below list the countries/regions that support Azure M | Liechtenstein | ✓ | | Lithuania | ✓ | | Luxembourg | ✓ |-| Macedonia | ✓ | | Malta | ✓ | | Moldova | ✓ | | Monaco | ✓ | | Montenegro | ✓ | | Netherlands | ✓ |+| North Macedonia | ✓ | | Norway | ✓ | | Poland | ✓ | | Portugal | ✓ | |
azure-maps | Routing Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md | The following tables provide coverage information for Azure Maps routing. | Burkina Faso | ✓ | | | | Burundi | ✓ | | | | Cameroon | ✓ | | |-| Cape Verde | ✓ | | | +| Cabo Verde | ✓ | | | | Central African Republic | ✓ | | | | Chad | ✓ | | | | Congo | ✓ | | | The following tables provide coverage information for Azure Maps routing. | Somalia | ✓ | | | | South Africa | ✓ | ✓ | ✓ | | South Sudan | ✓ | | |-| St. Helena | ✓ | | | +| St. Helena, Ascension, and Tristan da Cunha | ✓ | | | | Sudan | ✓ | | | | Swaziland | ✓ | | | | Syria | ✓ | | | |
azure-maps | Supported Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md | Azure Maps has been localized in a variety of languages across its services. The fol | de-DE | German | ✓ | ✓ | ✓ | ✓ | ✓ | | el-GR | Greek | ✓ | ✓ | ✓ | ✓ | ✓ | | en-AU | English (Australia) | ✓ | ✓ | | | ✓ |-| en-GB | English (Great Britain) | ✓ | ✓ | ✓ | ✓ | ✓ | | en-NZ | English (New Zealand) | ✓ | ✓ | | ✓ | ✓ |+| en-GB | English (United Kingdom) | ✓ | ✓ | ✓ | ✓ | ✓ | | en-US | English (USA) | ✓ | ✓ | ✓ | ✓ | ✓ | | es-419 | Spanish (Latin America) | | ✓ | | | ✓ | | es-ES | Spanish (Spain) | ✓ | ✓ | ✓ | ✓ | ✓ | |
azure-maps | Tutorial Create Store Locator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md | To create the HTML: ```HTML <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> ``` 3. Next, add a reference to the Azure Maps Services module. This module is a JavaScript library that wraps the Azure Maps REST services, making them easy to use in JavaScript. The Services module is useful for powering search functionality. |
azure-maps | Tutorial Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md | After you create a tileset, you can get the `mapConfigurationId` value by using 5. Enter the following URL to the [Tileset service]. Pass in the tileset ID that you obtained in the previous step. ```http- https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2022-09-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} + https://us.atlas.microsoft.com/tilesets/{tilesetId}?api-version=2023-03-01-preview&subscription-key={Your-Azure-Maps-Subscription-key} ``` 6. Select **Send**. For more information, see [Map configuration] in the article about indoor map co [Drawing conversion errors and warnings]: drawing-conversion-error-codes.md [Dataset Create API]: /rest/api/maps/v2/dataset/create [Dataset service]: /rest/api/maps/v2/dataset-[Tileset service]: /rest/api/maps/v20220901preview/tileset -[tileset get]: /rest/api/maps/v20220901preview/tileset/get +[Tileset service]: /rest/api/maps/2023-03-01-preview/tileset +[tileset get]: /rest/api/maps/2023-03-01-preview/tileset/get [Map configuration]: creator-indoor-maps.md#map-configuration |
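The tileset request in the row above is a templated GET URL; when scripting these calls, the query string can be assembled programmatically. A minimal Python sketch (the tileset ID and subscription key are placeholders; the `us.atlas.microsoft.com` host and `2023-03-01-preview` API version are taken from the request shown above):

```python
from urllib.parse import urlencode

def tileset_url(tileset_id: str, subscription_key: str) -> str:
    """Build the Tileset GET URL for the 2023-03-01-preview API version."""
    query = urlencode({
        "api-version": "2023-03-01-preview",
        "subscription-key": subscription_key,
    })
    return f"https://us.atlas.microsoft.com/tilesets/{tileset_id}?{query}"
```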
azure-maps | Tutorial Prioritized Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md | The following steps show you how to create and display the Map control in a web <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> |
azure-maps | Tutorial Route Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md | The following steps show you how to create and display the Map control in a web <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css"> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css"> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> |
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | The Map Control API is a convenient client library. This API allows you to easil <meta charset="utf-8" /> <!-- Add references to the Azure Maps Map control JavaScript and CSS files. -->- <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.css" type="text/css" /> - <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/2/atlas.min.js"></script> + <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> + <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> <!-- Add a reference to the Azure Maps Services Module JavaScript file. --> <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script> |
azure-maps | Weather Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md | Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned | Burkina Faso | ✓ | ✓ | | ✓ | | Burundi | ✓ | ✓ | | ✓ | | Cameroon | ✓ | ✓ | | ✓ |-| Cape Verde | ✓ | ✓ | | ✓ | +| Cabo Verde | ✓ | ✓ | | ✓ | | Central African Republic | ✓ | ✓ | | ✓ | | Chad | ✓ | ✓ | | ✓ | | Comoros | ✓ | ✓ | | ✓ | |
azure-maps | Web Sdk Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md | If self-hosting the Azure Maps Web SDK via the npm module, be sure to use the ca ```json "dependencies": {- "azure-maps-control": "^2.2.6" + "azure-maps-control": "^3.0.0" } ``` |
azure-maps | Web Sdk Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-migration-guide.md | + + Title: The Azure Maps Web SDK v1 migration guide ++description: Find out how to migrate your Azure Maps Web SDK v1 applications to the most recent version of the Web SDK. ++ Last updated : 08/18/2023+++++# The Azure Maps Web SDK v1 migration guide ++Thank you for choosing the Azure Maps Web SDK for your mapping needs. This migration guide helps you transition from version 1 to version 3, allowing you to take advantage of the latest features and enhancements. ++## Understand the changes ++Before you start the migration process, it's important to familiarize yourself with the key changes and improvements introduced in Web SDK v3. Review the [release notes] to grasp the scope of the new features. ++## Updating the Web SDK version ++### CDN ++If you're using CDN ([content delivery network]), update the references to the stylesheet and JavaScript within the `head` element of your HTML files. ++#### v1 ++```html +<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1" type="text/css" /> +<script src="https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1"></script> +``` ++#### v3 ++```html +<link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" /> +<script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js"></script> +``` ++### npm ++If you're using [npm], update to the latest Azure Maps control by running the following command: ++```shell +npm install azure-maps-control@latest +``` ++## Review authentication methods (optional) ++To enhance security, more authentication methods are included in the Web SDK starting in version 2. The new methods include [Azure Active Directory Authentication] and [Shared Key Authentication]. 
For more information about Azure Maps web application security, see [Manage Authentication in Azure Maps]. ++## Testing ++Comprehensive testing is essential during migration. Conduct thorough testing of your application's functionality, performance, and user experience in different browsers and devices. ++## Gradual Rollout ++Consider a gradual rollout strategy for the updated version. Release the migrated version to a smaller group of users or in a controlled environment before making it available to your entire user base. ++By following these steps and considering best practices, you can successfully migrate your application from Azure Maps WebSDK v1 to v3. Embrace the new capabilities and improvements offered by the latest version while ensuring a smooth and seamless transition for your users. For more information, see [Azure Maps Web SDK best practices]. ++## Next steps ++Learn how to add maps to web and mobile applications using the Map Control client-side JavaScript library in Azure Maps: ++> [!div class="nextstepaction"] +> [Use the Azure Maps map control] ++[Azure Active Directory Authentication]: how-to-secure-spa-users.md +[Azure Maps Web SDK best practices]: web-sdk-best-practices.md +[content delivery network]: /azure/cdn/cdn-overview +[Manage Authentication in Azure Maps]: how-to-manage-authentication.md +[npm]: https://www.npmjs.com/package/azure-maps-control +[release notes]: release-notes-map-control.md +[Shared Key Authentication]: how-to-secure-sas-app.md +[Use the Azure Maps map control]: how-to-use-map-control.md |
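For sites with many pages, the v1-to-v3 CDN reference update described in the migration guide above can be scripted. A rough sketch that swaps the exact v1 URLs shown earlier for their v3 equivalents (verify each rewritten file afterward; references with different query strings aren't handled):

```python
# Maps the v1 CDN URLs from the migration guide to their v3 equivalents.
V1_TO_V3 = {
    "https://atlas.microsoft.com/sdk/css/atlas.min.css?api-version=1":
        "https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css",
    "https://atlas.microsoft.com/sdk/js/atlas.min.js?api-version=1":
        "https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js",
}

def migrate_cdn_refs(html: str) -> str:
    """Replace v1 Web SDK CDN references in an HTML string with v3 ones."""
    for v1_url, v3_url in V1_TO_V3.items():
        html = html.replace(v1_url, v3_url)
    return html
```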
azure-monitor | Azure Monitor Agent Extension Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md | We strongly recommend to always update to the latest version, or opt in to the | Release Date | Release notes | Windows | Linux | |:|:|:|:| | July 2023| **Windows** <ul><li>Fix crash when Event Log subscription callback throws errors.</li><li>MetricExtension updated to 2.2023.609.2051</li></ul> |1.18.0| Coming Soon|-| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncomliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</ul></li></ul>|1.17.0 |1.27.4| +| June 2023| **Windows** <ul><li>Add new file path column to custom logs table</li><li>Config setting to disable custom IMDS endpoint in Tenant.json
file</li><li>FluentBit binaries signed with Microsoft customer Code Sign cert</li><li>Minimize number of retries on calls to refresh tokens</li><li>Don't overwrite resource ID with empty string</li><li>AzSecPack updated to version 4.27</li><li>AzureProfiler and AzurePerfCollector updated to version 1.0.0.990</li><li>MetricsExtension updated to version 2.2023.513.10</li><li>Troubleshooter updated to version 1.5.0</li></ul>**Linux** <ul><li>Add new column CollectorHostName to syslog table to identify forwarder/collector machine</li><li>Link OpenSSL dynamically</li><li>**Fixes**<ul><li>Allow uploads soon after AMA start up</li><li>Run LocalSink GC on a dedicated thread to avoid thread pool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for noncompliant RFC 5424 logs</li><li>ASA tenant can fail to start up due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li><li>Fix to remove null characters in agentlauncher.log after log rotation</li><li>Fix for authenticated proxy(1.27.3)</li><li>Fix regression in VM Insights(1.27.4)</li></ul></li></ul>|1.17.0 |1.27.4| | May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription became invalid and would not resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data.
Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](./agents-overview.md)</li><li>Include Ubuntu 22.04 (Jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system Telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li><li>Fix to prevent events folder growing too large and filling the disk</li><li>hot fix (1.26.3) for Syslog</li></ul></li></ul> | 1.16.0.0 | 1.26.2 1.26.3<sup>Hotfix</sup>| | Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0| Coming soon| | Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate logging and continuous tailing of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon | |
azure-monitor | Azure Monitor Agent Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md | For information on how to install Azure Monitor Agent from the Azure portal, see #### [PowerShell](#tab/azure-powershell) -You can install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension. +You can install Azure Monitor Agent on Azure virtual machines and on Azure Arc-enabled servers by using the PowerShell command for adding a virtual machine extension. ### Install on Azure virtual machines Use the following PowerShell commands to install Azure Monitor Agent on Azure vi Set-AzVMExtension -Name AzureMonitorLinuxAgent -ExtensionType AzureMonitorLinuxAgent -Publisher Microsoft.Azure.Monitor -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> -Location <location> -TypeHandlerVersion <version-number> -EnableAutomaticUpgrade $true ``` +### Install on Azure virtual machine scale sets ++Use the [Add-AzVmssExtension](/powershell/module/az.compute/add-azvmssextension) PowerShell cmdlet to install Azure Monitor Agent on Azure virtual machine scale sets. + ### Install on Azure Arc-enabled servers Use the following PowerShell commands to install Azure Monitor Agent on Azure Arc-enabled servers. Use the following CLI commands to install Azure Monitor Agent on Azure virtual m ```azurecli az vm extension set --name AzureMonitorLinuxAgent --publisher Microsoft.Azure.Monitor --ids <vm-resource-id> --enable-auto-upgrade true ```+### Install on Azure virtual machine scale sets ++Use the [az vmss extension set](/cli/azure/vmss/extension) CLI command to install Azure Monitor Agent on Azure virtual machine scale sets.
### Install on Azure Arc-enabled servers Use the following PowerShell commands to uninstall Azure Monitor Agent on Azure ```powershell Remove-AzVMExtension -Name AzureMonitorLinuxAgent -ResourceGroupName <resource-group-name> -VMName <virtual-machine-name> ```+### Uninstall on Azure virtual machine scale sets ++Use the [Remove-AzVmssExtension](/powershell/module/az.compute/remove-azvmssextension) PowerShell cmdlet to uninstall Azure Monitor Agent on Azure virtual machine scale sets. ### Uninstall on Azure Arc-enabled servers Use the following CLI commands to uninstall Azure Monitor Agent on Azure virtual ```azurecli az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorLinuxAgent ```+### Uninstall on Azure virtual machine scale sets ++Use the [az vmss extension delete](/cli/azure/vmss/extension) CLI command to uninstall Azure Monitor Agent on Azure virtual machine scale sets. ### Uninstall on Azure Arc-enabled servers |
azure-monitor | Action Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md | description: Find out how to create and manage action groups. Learn about notifi Last updated 05/02/2023 -+ # Action groups |
azure-monitor | Prometheus Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/prometheus-alerts.md | Last updated 09/15/2022 # Prometheus alerts in Azure Monitor-Prometheus alert rules allow you to define alert conditions, using queries which are written in Prometheus Query Language (Prom QL). The rule queries are applied on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it is fired and would trigger your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule. +As part of [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md), Prometheus alert rules allow you to define alert conditions, using queries written in Prometheus Query Language (PromQL). The rule queries are applied on Prometheus metrics stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). Whenever the alert query results in one or more time series meeting the condition, the alert counts as pending for these metric and label sets. A pending alert becomes active after a user-defined period of time during which all the consecutive query evaluations for the respective time series meet the alert condition. Once an alert becomes active, it's fired and triggers your actions or notifications of choice, as defined in the Azure Action Groups configured in your alert rule.
-> [!NOTE] -> Azure Monitor managed service for Prometheus, including Prometheus metrics, is currently in public preview and does not yet have all of its features enabled. Prometheus metrics are displayed with alerts generated by other types of alert rules, but they currently have a difference experience for creating and managing them. --## Create Prometheus alert rule -Prometheus alert rules are created as part of a Prometheus rule group, which is applied on the [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details. +## Create Prometheus alert rules +Prometheus alert rules are created and managed as part of a Prometheus rule group. See [Azure Monitor managed service for Prometheus rule groups](../essentials/prometheus-rule-groups.md) for details. ## View Prometheus alerts-View fired and resolved Prometheus alerts in the Azure portal with other alert types. Use the following steps to filter on only Prometheus alerts. +You can view fired and resolved Prometheus alerts in the Azure portal together with all other alert types. Use the following steps to filter on only Prometheus alerts. 1. From the **Monitor** menu in the Azure portal, select **Alerts**. 2. If **Monitoring Service** isn't displayed as a filter option, then select **Add Filter** and add it. 3. Set the filter **Monitoring Service** to **Prometheus** to see Prometheus alerts.- :::image type="content" source="media/prometheus-metric-alerts/view-alerts.png" lightbox="media/prometheus-metric-alerts/view-alerts.png" alt-text="Screenshot of a list of alerts in Azure Monitor with a filter for Prometheus alerts."::: 4. 
Click the alert name to view the details of a specific fired/resolved alert.- :::image type="content" source="media/prometheus-metric-alerts/alert-details-grafana.png" lightbox="media/prometheus-metric-alerts/alert-details-grafana.png" alt-text="Screenshot of detail for a Prometheus alert in Azure Monitor."::: +If your rule group is configured with [a specific cluster scope](../essentials/prometheus-rule-groups.md#limiting-rules-to-a-specific-cluster), you can also view alerts fired for this cluster in the cluster's **Alerts** pane. From the cluster menu in the Azure portal, select **Alerts**. You can then filter for the Prometheus monitor service. + ## Explore Prometheus alerts in Grafana 1. In the fired alerts details pane, you can click the **View query in Grafana** link. -2. A browser tab will be opened taking you to the [Azure Managed Grafana](../../managed-grafan) instance connected to your Azure Monitor Workspace. -3. Grafana will be opened in Explore mode, presenting the chart for your alert rule expression query which triggered the alert, around the alert firing time. You can further explore the query in Grafana to identify the reason causing the alert to fire. +2. A browser tab opens, taking you to the [Azure Managed Grafana](../../managed-grafan) instance connected to your Azure Monitor Workspace. +3. Grafana opens in Explore mode, presenting the chart for your alert rule expression query around the alert firing time. You can further explore the query in Grafana to identify the cause of the alert. > [!NOTE] > 1. If there is no Azure Managed Grafana connected to your Azure Monitor Workspace, a link to Grafana will not be available. |
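The pending-to-fired lifecycle described in the row above (an alert fires only after every consecutive evaluation during the user-defined period meets the condition; any failed evaluation resets it) can be illustrated with a small state machine. This sketch illustrates the semantics only; it is not the service implementation:

```python
def alert_state(evaluations: list[bool], for_evaluations: int) -> str:
    """Return 'inactive', 'pending', or 'firing' given a history of rule
    evaluations; the alert fires only once the condition has been met for
    `for_evaluations` consecutive evaluations."""
    consecutive = 0
    for condition_met in evaluations:
        # A single evaluation that misses the condition resets the streak.
        consecutive = consecutive + 1 if condition_met else 0
    if consecutive == 0:
        return "inactive"
    return "firing" if consecutive >= for_evaluations else "pending"
```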
azure-monitor | Smart Detection Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/smart-detection-performance.md | The response time degradation notification tells you: ## Dependency Duration Degradation -Modern applications often adopt a micro services design approach, which in many cases rely heavily on external services. For example, if your application relies on some data platform, or on a critical services provider such as cognitive services. +Modern applications often adopt a microservices design approach and in many cases rely heavily on external services. For example, your application might rely on a data platform or on a critical services provider such as Azure AI services. Example of dependency degradation notification: |
azure-monitor | App Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-insights-overview.md | The [Application Map](app-map.md) allows a high-level, top-down view of the appl To understand the number of Application Insights resources required to cover your application or components across environments, see the [Application Insights deployment planning guide](separate-resources.md). ++Firewall settings must be adjusted for data to reach ingestion endpoints. For more information, see [IP addresses used by Azure Monitor](./ip-addresses.md). + ## How do I use Application Insights? Application Insights is enabled through either [autoinstrumentation](codeless-overview.md) (agent) or by adding the [Application Insights SDK](sdk-support-guidance.md) or [Azure Monitor OpenTelemetry Distro](opentelemetry-enable.md) to your application code. [Many languages](#supported-languages) are supported. The applications could be on Azure, on-premises, or hosted by another cloud. To figure out which type of instrumentation is best for you, see [How do I instrument an application?](#how-do-i-instrument-an-application). Consider starting with the [Application Map](app-map.md) for a high-level view. Two views are especially useful: - [Performance view](tutorial-performance.md): Get deep insights into how your application or API and downstream dependencies are performing. You can also find a representative sample to [explore end to end](transaction-diagnostics.md).-- [Failure view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. The built-in views are helpful to track application health proactively and for reactive root-cause analysis.+- [Failures view](tutorial-runtime-exceptions.md): Understand which components or actions are generating failures and triage errors and exceptions. 
The built-in views are helpful to track application health proactively and for reactive root-cause analysis. [Create Azure Monitor alerts](tutorial-alert.md) to signal potential issues in case your application or components parts deviate from the established baseline. |
azure-monitor | Availability Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md | Title: Review TrackAvailability() test results description: This article explains how to review data logged by TrackAvailability() tests Previously updated : 06/23/2023 Last updated : 08/20/2023 # Review TrackAvailability() test results -This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). +This article explains how to review [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) test results in the Azure portal and query the data using [Log Analytics](../logs/log-analytics-overview.md#overview-of-log-analytics-in-azure-monitor). [Standard tests](availability-standard-tests.md) **should always be used if possible** as they require little investment, no maintenance, and have few prerequisites. ## Prerequisites > [!div class="checklist"] > - [Workspace-based Application Insights resource](create-workspace-resource.md)-> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions. 
-> - Developer expertise capable of authoring custom code for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs +> - Access to the source code of a [function app](../../azure-functions/functions-how-to-use-azure-function-app-settings.md) in Azure Functions +> - Developer expertise capable of authoring [custom code](#basic-code-sample) for [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability), tailored to your specific business needs -> [!NOTE] -> - TrackAvailability() requires that you have made a developer investment in custom code. -> - [Standard tests](availability-standard-tests.md) should always be used if possible as they require little investment and have few prerequisites. +> [!IMPORTANT] +> [TrackAvailability()](/dotnet/api/microsoft.applicationinsights.telemetryclient.trackavailability) requires making a developer investment in writing and maintaining potentially complex custom code. ## Check availability You can use Log Analytics to view your availability results, dependencies, and m :::image type="content" source="media/availability-azure-functions/dependencies.png" alt-text="Screenshot that shows the New Query tab with dependencies limited to 50." lightbox="media/availability-azure-functions/dependencies.png"::: +## Basic code sample ++The following example demonstrates a web availability test that performs a simple URL ping using the `GetStringAsync()` method. ++```csharp +using System.Net.Http; +using System.Threading.Tasks; +using Microsoft.Extensions.Logging; ++public static async Task RunAvailabilityTestAsync(ILogger log) +{ + using (var httpClient = new HttpClient()) + { + // TODO: Replace with your business logic + await httpClient.GetStringAsync("https://www.bing.com/"); + } +} +``` ++For advanced scenarios where the business logic must be adjusted to access the URL, such as obtaining tokens, setting parameters, and other test cases, custom code is necessary.
+ ## Next steps * [Standard tests](availability-standard-tests.md) |
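Stripped of the Application Insights specifics, the C# sample in the row above follows one pattern: run a check, time it, and record success or failure. A language-neutral sketch of that pattern in Python; the check is injected so that token acquisition, request parameters, or other business logic can slot in, and the result fields are illustrative rather than the Application Insights telemetry schema:

```python
import time
from typing import Callable

def run_availability_test(name: str, check: Callable[[], None]) -> dict:
    """Run `check`, timing it and recording success or failure,
    in the spirit of TrackAvailability()."""
    start = time.monotonic()
    try:
        check()  # e.g. an HTTP GET against the monitored endpoint
        success, message = True, "OK"
    except Exception as exc:  # any exception marks the test as failed
        success, message = False, str(exc)
    return {
        "name": name,
        "success": success,
        "duration_ms": (time.monotonic() - start) * 1000,
        "message": message,
    }
```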
azure-monitor | Codeless Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md | -Autoinstrumentation quickly and easily enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). +Autoinstrumentation enables [Application Insights](app-insights-overview.md) to make [telemetry](data-model-complete.md) like metrics, requests, and dependencies available in your [Application Insights resource](create-workspace-resource.md). It provides easy access to experiences such as the [application dashboard](overview-dashboard.md) and [application map](app-map.md). ++If your language and platform are supported, select the corresponding link in the [Supported environments, languages, and resource providers table](#supported-environments-languages-and-resource-providers) for more detailed information. In many cases, autoinstrumentation is enabled by default. ++## What are the autoinstrumentation advantages? > [!div class="checklist"]-> - No code changes are required. -> - [SDK update](sdk-support-guidance.md) overhead is eliminated. -> - Recommended when available. +> - Code changes aren't required. +> - Access to source code isn't required. +> - Configuration changes aren't required. +> - Ongoing [SDK update maintenance](sdk-support-guidance.md) is eliminated. ## Supported environments, languages, and resource providers |
azure-monitor | Java Standalone Profiler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-profiler.md | The Application Insights Java Profiler provides a system for: The Application Insights Java profiler uses the JFR profiler provided by the JVM to record profiling data, allowing users to download the JFR recordings at a later time and analyze them to identify the cause of performance issues. -This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage and Memory consumption. +This data is gathered on demand when trigger conditions are met. The available triggers are thresholds over CPU usage, Memory consumption, and Request (SLA triggers). Request triggers monitor Spans generated by OpenTelemetry and allow the user to configure SLA requirements over the duration of those Spans. When a threshold is reached, a profile of the configured type and duration is gathered and uploaded. This profile is then visible within the performance pane of the associated Application Insights Portal UI. See [Configuring Profile Contents](#configuring-profile-contents) on setting a c For a more detailed description of the various triggers available, see [profiler overview](../profiler/profiler-overview.md). -The ApplicationInsights Java Agent monitors CPU and memory consumption and if it breaches a configured threshold a profile is triggered. Both thresholds are a percentage. +The ApplicationInsights Java Agent monitors CPU, memory, and request duration (such as a business transaction). If it breaches a configured threshold, a profile is triggered. #### Profile now In this scenario, a profile will occur in the following circumstances: - Full garbage collection is executed - The Tenured regions occupancy is above 691 MB after collection +### Request ++SLA triggers are based on OpenTelemetry (otel) and they initiate a profile if certain criteria are fulfilled.
++Each individual trigger configuration is formed as follows: ++- `Name` - A unique identifier for the trigger. +- `Filter` - Filters the requests of interest for the trigger. +- `Aggregation` - Calculates the ratio of requests that breached a given threshold. + - `Threshold` - A value (in milliseconds) above which a request breach is determined. + - `Minimum samples` - The minimum number of samples that must be collected for the aggregation to produce data. This minimum prevents triggering on small sample sizes. + - `Window` - Rolling time window (in milliseconds). +- `Threshold` - The threshold value (percentage) applied to the aggregation output. If this value is exceeded, a profile is initiated. ++For instance, the following scenario would trigger a profile if more than 75% of requests to a specific endpoint (`/users/.*`) take longer than 30 ms within a 60-second window, provided at least 100 samples were gathered. ++ ### Installation The following steps will guide you through enabling the profiling component on the agent and configuring resource limits that will trigger a profile if breached. The following steps will guide you through enabling the profiling component on t 2. Select "Triggers" - 3. Configure the required CPU and Memory thresholds and select Apply. - :::image type="content" source="./media/java-standalone-profiler/cpu-memory-trigger-settings.png" alt-text="Screenshot of trigger settings pane for CPU and Memory triggers."::: + 3. Configure the required CPU, Memory, or Request triggers (if enabled) and select Apply. + :::image type="content" source="./media/java-standalone-profiler/trigger-settings.png" alt-text="Screenshot of the trigger settings pane."::: > [!WARNING] > The Java profiler does not support the "Sampling" trigger. Configuring this will have no effect.
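The Request trigger evaluation described earlier (a breach ratio computed over a rolling window, guarded by a minimum sample count) can be modeled with a small standalone sketch. This is illustrative logic, not the agent's actual implementation; the constants mirror the example scenario above.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch of an SLA request trigger (not the agent's actual code).
// Within a rolling window, track request durations and report a breach once
// enough samples exist and the fraction of slow requests exceeds the trigger
// threshold.
public class SlaTriggerSketch {
    static final long REQUEST_THRESHOLD_MS = 30;  // per-request breach threshold
    static final long WINDOW_MS = 60_000;         // rolling window length
    static final int MINIMUM_SAMPLES = 100;       // guard against small samples
    static final double TRIGGER_RATIO = 0.75;     // 75% of requests breaching

    private final Deque<long[]> window = new ArrayDeque<>(); // {timestamp, duration}

    /** Records one request; returns true when a profile should be triggered. */
    public boolean record(long nowMs, long durationMs) {
        window.addLast(new long[] { nowMs, durationMs });
        // Evict samples that have fallen out of the rolling window.
        while (!window.isEmpty() && nowMs - window.peekFirst()[0] > WINDOW_MS) {
            window.removeFirst();
        }
        if (window.size() < MINIMUM_SAMPLES) {
            return false;
        }
        int breaches = 0;
        for (long[] sample : window) {
            if (sample[1] > REQUEST_THRESHOLD_MS) {
                breaches++;
            }
        }
        return (double) breaches / window.size() > TRIGGER_RATIO;
    }

    public static void main(String[] args) {
        SlaTriggerSketch trigger = new SlaTriggerSketch();
        boolean fired = false;
        // 100 requests in one window: 80 slow (45 ms) and 20 fast (10 ms).
        for (int i = 0; i < 100; i++) {
            fired = trigger.record(i * 100L, (i % 5 == 0) ? 10 : 45);
        }
        System.out.println(fired); // 0.8 breach ratio exceeds 0.75 -> true
    }
}
```

The minimum-sample guard is what keeps a handful of slow requests right after startup from firing a profile prematurely.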
Example configuration: "enabled": true, "cpuTriggeredSettings": "profile-without-env-data", "memoryTriggeredSettings": "profile-without-env-data",- "manualTriggeredSettings": "profile-without-env-data" + "manualTriggeredSettings": "profile-without-env-data", + "enableRequestTriggering": true } } } This value can be one of: - `profile`. Uses the `profile.jfc` jfc configuration that ships with JFR. - A path to a custom jfc configuration file on the file system, for example, `/tmp/myconfig.jfc`. +`enableRequestTriggering` Specifies whether JFR profiling should be triggered based on request configuration. +This value can be one of: ++- `true`. Profiling will be triggered if a request trigger threshold is breached. +- `false` (default value). Profiling will not be triggered by request configuration. + ## Frequently asked questions ### What is Azure Monitor Application Insights Java Profiling?
azure-monitor | Javascript Feature Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/javascript-feature-extensions.md | See a [simple web app with the Click Analytics Autocollection Plug-in enabled](h The following examples show which value is fetched as the `parentId` for different configurations. +The examples show that if `parentDataTag` is defined but the plug-in can't find this tag under the DOM tree, the plug-in uses the `id` of its closest parent element. + ### Example 1 -In example 1, the `parentDataTag` isn't declared and `data-parentid` or `data-*-parentid` isn't defined in any element. +In example 1, the `parentDataTag` isn't declared and `data-parentid` or `data-*-parentid` isn't defined in any element. This example shows a configuration where a value for `parentId` isn't collected. ```javascript export const clickPluginConfigWithUseDefaultContentNameOrId = { export const clickPluginConfigWithUseDefaultContentNameOrId = { }; <div className="test1" data-id="test1parent">- <div>Test1</div> + <div>Test1</div> <div><small>with id, data-id, parent data-id defined</small></div> <Button id="id1" data-id="test1id" variant="info" onClick={trackEvent}>Test1</Button>- </div> +</div> ``` -For clicked element `<Button>`, the value of `parentId` is `"not_specified"`, because `parentDataTag` is not declared and the `data-parentid` or `data-*-parentid` is not defined in any element. +For clicked element `<Button>`, the value of `parentId` is `"not_specified"`, because no `parentDataTag` details are defined and no parent element id is provided within the current element. ### Example 2 -In example 2, `parentDataTag` is declared and `data-parentid` is defined. +In example 2, `parentDataTag` is declared and `data-parentid` is defined. This example shows how parent id details are collected.
```javascript export const clickPluginConfigWithParentDataTag = { export const clickPluginConfigWithParentDataTag = { </div> ``` -For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` definition takes precedence. If the `data-parentid` attribute was defined within the div element with `className="test2"`, the value for `parentId` would still be `parentid2`. +For clicked element `<Button>`, the value of `parentId` is `parentid2`. Even though `parentDataTag` is declared, the `data-parentid` is directly defined within the element. Therefore, this value takes precedence over all other parent ids or id details defined in its parent elements. ### Example 3 -In example 3, `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined. +In example 3, `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined. This example shows how declaring `parentDataTag` can be helpful to collect a value for `parentId` for cases when dynamic elements don't have an `id` or `data-*-id`. ```javascript export const clickPluginConfigWithParentDataTag = { export const clickPluginConfigWithParentDataTag = { </div> </div> ```-For clicked element `<Button>`, because `parentDataTag` is declared and the `data-parentid` or `data-*-parentid` attribute isn't defined, the value of `parentId` is `test6parent`. It's `test6parent` because when `parentDataTag` is declared, the plug-in fetches the value of the `id` or `data-*-id` attribute from the parent HTML element that is closest to the clicked element. Because `data-group="buttongroup1"` is defined, the plug-in finds the `parentId` more efficiently. +For clicked element `<Button>`, the value of `parentId` is `test6parent`, because `parentDataTag` is declared.
This declaration allows the plug-in to traverse the current element tree, so the `id` of its closest parent is used when parent id details aren't directly provided within the current element. With `data-group="buttongroup1"` defined, the plug-in finds the `parentId` more efficiently. If you remove the `data-group="buttongroup1"` attribute, the value of `parentId` is still `test6parent`, because `parentDataTag` is still declared. |
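The resolution order that these three examples describe can be summarized with a small illustrative function. This is a simplified model of the behavior above, not the plug-in's actual implementation:

```javascript
// Illustrative sketch only (not the plug-in's actual code): models the
// parentId lookup order the three examples describe.
function resolveParentId(element, parentDataTagDeclared) {
  // Example 2: a data-parentid defined directly on the element wins.
  if (element.dataset && element.dataset.parentid) {
    return element.dataset.parentid;
  }
  // Example 3: with parentDataTag declared, walk up to the closest
  // parent element that carries an id.
  if (parentDataTagDeclared) {
    let node = element.parentElement;
    while (node) {
      if (node.id) {
        return node.id;
      }
      node = node.parentElement;
    }
  }
  // Example 1: nothing declared or found.
  return "not_specified";
}
```

In short: a direct `data-parentid` wins, then (with `parentDataTag` declared) the `id` of the closest ancestor, and otherwise `not_specified` is reported.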
azure-monitor | Opentelemetry Add Modify | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md | Telemetry emitted by these Azure SDKs is automatically collected by default: #### [Node.js](#tab/nodejs) -The following OpenTelemetry Instrumentation libraries are included as part of the Azure Monitor Application Insights Distro. See [this](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#officially-supported-instrumentations) for more details. +The following OpenTelemetry Instrumentation libraries are included as part of the Azure Monitor Application Insights Distro. For more information, see [OpenTelemetry officially supported instrumentations](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#officially-supported-instrumentations). Requests - [HTTP/HTTPS](https://github.com/open-telemetry/opentelemetry-js/tree/main/experimental/packages/opentelemetry-instrumentation-http) <sup>[2](#FOOTNOTETWO)</sup> Telemetry emitted by Azure SDKs is automatically [collected](https://github.com/ You can collect more data automatically when you include instrumentation libraries from the OpenTelemetry community. -> [!NOTE] -> We don't support and cannot guarantee the quality of community instrumentation libraries. If you would like to suggest a community instrumentation library for us to include in our distro, post or up-vote an idea in our [feedback community](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). --> [!CAUTION] -> Some instrumentation libraries are based on experimental OpenTelemetry semantic specifications. Adding them may leave you vulnerable to future breaking changes. ### [ASP.NET Core](#tab/aspnetcore) -To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTraceProvider` methods.
+To add a community library, use the `ConfigureOpenTelemetryMeterProvider` or `ConfigureOpenTelemetryTracerProvider` methods. The following example demonstrates how the [Runtime Instrumentation](https://www.nuget.org/packages/OpenTelemetry.Instrumentation.Runtime) can be added to collect extra metrics. app.MapGet("/", () => app.Run(); ``` -When calling `StartActivity`, it defaults to `ActivityKind.Internal` but you can provide any other `ActivityKind`. +`StartActivity` defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`. `ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`. `ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`. using (var activity = activitySource.StartActivity("CustomActivity")) } ``` -When calling `StartActivity`, it defaults to `ActivityKind.Internal` but you can provide any other `ActivityKind`. +`StartActivity` defaults to `ActivityKind.Internal`, but you can provide any other `ActivityKind`. `ActivityKind.Client`, `ActivityKind.Producer`, and `ActivityKind.Internal` are mapped to Application Insights `dependencies`. `ActivityKind.Server` and `ActivityKind.Consumer` are mapped to Application Insights `requests`. Attaching custom dimensions to logs can be accomplished using a [message templat Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). 
Attaching custom dimensions to your logs can be accomplished in these ways: -* [Log4j 2 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message) -* [Log4j 2 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) +* [Log4j 2.0 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message) +* [Log4j 2.0 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) * [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html) #### [Node.js](#tab/nodejs) Get the request trace ID and the span ID in your code: + ## Next steps ### [ASP.NET Core](#tab/aspnetcore) Get the request trace ID and the span ID in your code: ### [Node.js](#tab/nodejs) - To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.+- To install the npm package and check for updates, see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page. - To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md). |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | The sampler expects a sample rate of between 0 and 1 inclusive. A rate of 0.1 me ```csharp var builder = WebApplication.CreateBuilder(args); -builder.Services.AddOpenTelemetry().UseAzureMonitor(); -builder.Services.Configure<ApplicationInsightsSamplerOptions>(options => { options.SamplingRatio = 0.1F; }); +builder.Services.AddOpenTelemetry().UseAzureMonitor(o => +{ + o.SamplingRatio = 0.1F; +}); var app = builder.Build(); For more information about OpenTelemetry SDK configuration, see the [OpenTelemet For more information about OpenTelemetry SDK configuration, see the [OpenTelemetry documentation](https://opentelemetry.io/docs/concepts/sdk-configuration). For additional details, see [Azure Monitor Distro Usage](https://github.com/microsoft/ApplicationInsights-Python/tree/main/azure-monitor-opentelemetry#usage). + |
azure-monitor | Opentelemetry Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md | -This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro". The Distro will [automatically collect](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry). +This article describes how to enable and configure OpenTelemetry-based data collection to power the experiences within [Azure Monitor Application Insights](app-insights-overview.md#application-insights-overview). We walk through how to install the "Azure Monitor OpenTelemetry Distro." The Distro [automatically collects](opentelemetry-add-modify.md#automatic-data-collection) traces, metrics, logs, and exceptions across your application and its dependencies. To learn more about collecting data using OpenTelemetry, see [Data Collection Basics](opentelemetry-overview.md) or [OpenTelemetry FAQ](/azure/azure-monitor/faq#opentelemetry). ## OpenTelemetry Release Status pip install azure-monitor-opentelemetry --pre ### Enable Azure Monitor Application Insights-To enable Azure Monitor Application Insights, you make a minor modification to your application and set your "Connection String". The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you. +To enable Azure Monitor Application Insights, you make a minor modification to your application and set your "Connection String." 
The Connection String tells your application where to send the telemetry the Distro collects, and it's unique to you. #### Modify your Application To paste your Connection String, select from the following options: C. Set via Code - ASP.NET Core, Node.js, and Python Only (Not recommended) - See [Connection String Configuration](opentelemetry-configuration.md#connection-string) for example setting Connection String via code. + See [Connection String Configuration](opentelemetry-configuration.md#connection-string) for an example of setting Connection String via code. > [!NOTE] > If you set the connection string in more than one place, we adhere to the following precedence: Run your application and open your **Application Insights Resource** tab in the :::image type="content" source="media/opentelemetry/server-requests.png" alt-text="Screenshot of the Application Insights Overview tab with server requests and server response time highlighted."::: -That's it. Your application is now monitored by Application Insights. Everything else below is optional and available for further customization. +You've now enabled Application Insights for your application. All the following steps are optional and allow for further customization. Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-dotnet), [Java](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-java), [Node.js](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-nodejs), or [Python](/troubleshoot/azure/azure-monitor/app-insights/opentelemetry-troubleshooting-python). Not working? Check out the troubleshooting page for [ASP.NET Core](/troubleshoot As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights.
To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md). -## Support --### [ASP.NET Core](#tab/aspnetcore) --- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).--#### [.NET](#tab/net) --- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry .NET community](https://github.com/open-telemetry/opentelemetry-dotnet) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-net/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).--### [Java](#tab/java) --- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For help with troubleshooting, review the [troubleshooting steps](java-standalone-troubleshoot.md).-- For OpenTelemetry issues, contact the [OpenTelemetry community](https://opentelemetry.io/community/) directly.-- For a list of open issues related to Azure Monitor Java Autoinstrumentation, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Java/issues).--### [Node.js](#tab/nodejs) --- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues 
Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).--### [Python](#tab/python) --- For Azure support issues, open an [Azure support ticket](https://azure.microsoft.com/support/create-ticket/).-- For OpenTelemetry issues, contact the [OpenTelemetry Python community](https://github.com/open-telemetry/opentelemetry-python) directly.-- For a list of open issues related to Azure Monitor Distro, see the [GitHub Issues Page](https://github.com/microsoft/ApplicationInsights-Python/issues/new).----## OpenTelemetry feedback --To provide feedback: --- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).-- Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).-- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).-- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). ## Next steps To provide feedback: - For details on adding and modifying Azure Monitor OpenTelemetry, see [Add and modify Azure Monitor OpenTelemetry](opentelemetry-add-modify.md) - To review the source code, see the [Application Insights Beta GitHub repository](https://github.com/microsoft/ApplicationInsights-node.js/tree/beta).-- To install the npm package and check for updates see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page.+- To install the npm package and check for updates, see the [applicationinsights npm Package](https://www.npmjs.com/package/applicationinsights/v/beta) page. 
- To become more familiar with Azure Monitor Application Insights and OpenTelemetry, see the [Azure Monitor Example Application](https://github.com/Azure-Samples/azure-monitor-opentelemetry-node.js). - To learn more about OpenTelemetry and its community, see the [OpenTelemetry JavaScript GitHub repository](https://github.com/open-telemetry/opentelemetry-js). - To enable usage experiences, [enable web or browser user monitoring](javascript.md). |
azure-monitor | Opentelemetry Nodejs Exporter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-nodejs-exporter.md | You might want to enable the OpenTelemetry Protocol (OTLP) Exporter alongside yo provider.register(); ``` -## Support --- For OpenTelemetry issues, contact the [OpenTelemetry JavaScript community](https://github.com/open-telemetry/opentelemetry-js) directly.-- For a list of open issues related to Azure Monitor Exporter, see the [GitHub Issues Page](https://github.com/Azure/azure-sdk-for-js/issues?q=is%3Aopen+is%3Aissue+label%3A%22Monitor+-+Exporter%22).--## OpenTelemetry feedback --To provide feedback: --- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).-- Tell Microsoft about yourself by joining the [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).-- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).-- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/3887dc70-2025-ec11-b6e6-000d3a4f09d0). ## Next steps |
azure-monitor | Opentelemetry Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md | Traces | Logs Requests | Server Spans Dependencies | Other Span Types (Client, Internal, etc.) + ## Next steps Select your enablement approach: |
azure-monitor | Opentelemetry Python Opencensus Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-python-opencensus-migrate.md | Coming soon. ### Performance Counters -The OpenCensus Python Azure Monitor exporter automatically collected system and performance related metrics called [performance counters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure#performance-counters). These metrics appear in `performanceCounters` in your Application Insights instance. In OpenTelemetry, we no longer send these metrics explicitly to `performanceCounters`. Metrics related to incoming/outgoing requests can be found under [standard metrics](./standard-metrics.md). If you would like OpenTelemetry to autocollect system related metrics, you can use the experimental system metrics [instrumentation](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-system-metrics), contributed by the OpenTelemetry Python community. This package is experimental and not officially supported by Microsoft. +The OpenCensus Python Azure Monitor exporter automatically collected system and performance related metrics called [performance counters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure#performance-counters). These metrics appear in `performanceCounters` in your Application Insights instance. In OpenTelemetry, we no longer send these metrics explicitly to `performanceCounters`. Metrics related to incoming/outgoing requests can be found under [standard metrics](./standard-metrics.md). 
If you would like OpenTelemetry to autocollect system-related metrics, you can use the experimental system metrics [instrumentation](https://github.com/open-telemetry/opentelemetry-python-contrib/tree/main/instrumentation/opentelemetry-instrumentation-system-metrics), contributed by the OpenTelemetry Python community. This package is experimental and not officially supported by Microsoft. + |
azure-monitor | Autoscale Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-get-started.md | Title: Get started with autoscale in Azure description: "Learn how to scale your resource web app, cloud service, virtual machine, or Virtual Machine Scale Set in Azure."-++ Last updated 04/10/2023 |
azure-monitor | Azure Monitor Monitoring Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-monitoring-reference.md | The following schemas are relevant to action groups, which are part of the notif ## See Also -- See [Monitoring Azure Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself. -- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Monitoring Azure Monitor](monitor-azure-monitor.md) for a description of what Azure Monitor monitors in itself. +- See [Monitoring Azure resources with Azure Monitor](./essentials/monitor-azure-resource.md) for details on monitoring Azure resources. |
azure-monitor | Container Insights Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md | The following types of data collected from a Kubernetes cluster with Container i - Container environment variables from every monitored container in the cluster - Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics-- [Diagnostic log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.+- [Resource log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`. ## Estimating costs to monitor your AKS cluster |
azure-monitor | Container Insights Hybrid Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-hybrid-setup.md | + + Title: Configure hybrid Kubernetes clusters with Container insights | Microsoft Docs +description: This article describes how you can configure Container insights to monitor Kubernetes clusters hosted on Azure Stack or other environments. + Last updated : 08/21/2023++++# Configure hybrid Kubernetes clusters with Container insights ++Container insights provides a rich monitoring experience for the Azure Kubernetes Service (AKS) and [AKS Engine on Azure](https://github.com/Azure/aks-engine), which is a self-managed Kubernetes cluster hosted on Azure. This article describes how to enable monitoring of Kubernetes clusters hosted outside of Azure and achieve a similar monitoring experience. ++## Supported configurations ++The following configurations are officially supported with Container insights. If you have a different version of Kubernetes and operating system versions, please open a support ticket.. ++- Environments: + - Kubernetes on-premises. + - AKS Engine on Azure and Azure Stack. For more information, see [AKS Engine on Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview). + - [OpenShift](https://docs.openshift.com/container-platform/4.3/welcome/https://docsupdatetracker.net/index.html) version 4 and higher, on-premises or in other cloud environments. +- Versions of Kubernetes and support policy are the same as versions of [AKS supported](../../aks/supported-kubernetes-versions.md). +- The following container runtimes are supported: Moby and CRI compatible runtimes such CRI-O and ContainerD. +- The Linux OS release for main and worker nodes supported are Ubuntu (18.04 LTS and 16.04 LTS) and Red Hat Enterprise Linux CoreOS 43.81. +- Azure Access Control service supported: Kubernetes role-based access control (RBAC) and non-RBAC. 
++## Prerequisites ++Before you start, make sure that you meet the following prerequisites: ++- You have a [Log Analytics workspace](../logs/design-logs-deployment.md). Container insights supports a Log Analytics workspace in the regions listed in Azure [Products by region](https://azure.microsoft.com/global-infrastructure/services/?regions=all&products=monitor). You can create your own workspace through [Azure Resource Manager](../logs/resource-manager-workspace.md), [PowerShell](../logs/powershell-workspace-configuration.md?toc=%2fpowershell%2fmodule%2ftoc.json), or the [Azure portal](../logs/quick-create-workspace.md). ++ >[!NOTE] + >Enabling the monitoring of multiple clusters with the same cluster name to the same Log Analytics workspace isn't supported. Cluster names must be unique. + > ++- You're a member of the Log Analytics contributor role to enable container monitoring. For more information about how to control access to a Log Analytics workspace, see [Manage access to workspace and log data](../logs/manage-access.md). +- To view the monitoring data, you must have the [Log Analytics reader](../logs/manage-access.md#azure-rbac) role in the Log Analytics workspace, configured with Container insights. +- You have a [Helm client](https://helm.sh/docs/using_helm/) to onboard the Container insights chart for the specified Kubernetes cluster. +- The following proxy and firewall configuration information is required for the containerized version of the Log Analytics agent for Linux to communicate with Azure Monitor: ++ |Agent resource|Ports | + ||| + |*.ods.opinsights.azure.com |Port 443 | + |*.oms.opinsights.azure.com |Port 443 | + |*.dc.services.visualstudio.com |Port 443 | ++- The containerized agent requires the Kubelet `cAdvisor secure port: 10250` or `unsecure port :10255` to be opened on all nodes in the cluster to collect performance metrics. We recommend that you configure `secure port: 10250` on the Kubelet cAdvisor if it isn't configured already. 
+- The containerized agent requires the following environmental variables to be specified on the container to communicate with the Kubernetes API service within the cluster to collect inventory data: `KUBERNETES_SERVICE_HOST` and `KUBERNETES_PORT_443_TCP_PORT`. ++>[!IMPORTANT] +>The minimum agent version supported for monitoring hybrid Kubernetes clusters is *ciprod10182019* or later. ++## Enable monitoring ++To enable Container insights for the hybrid Kubernetes cluster: ++1. Configure your Log Analytics workspace with the Container insights solution. ++1. Enable the Container insights Helm chart with a Log Analytics workspace. ++For more information on monitoring solutions in Azure Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). ++### Add the Azure Monitor Containers solution ++You can deploy the solution with the provided Azure Resource Manager template by using the Azure PowerShell cmdlet `New-AzResourceGroupDeployment` or with the Azure CLI. ++If you're unfamiliar with the concept of deploying resources by using a template, see: ++- [Deploy resources with Resource Manager templates and Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md) +- [Deploy resources with Resource Manager templates and the Azure CLI](../../azure-resource-manager/templates/deploy-cli.md) ++If you choose to use the Azure CLI, you first need to install and use the CLI locally. You must be running the Azure CLI version 2.0.59 or later. To identify your version, run `az --version`. If you need to install or upgrade the Azure CLI, see [Install the Azure CLI](/cli/azure/install-azure-cli). ++This method includes two JSON templates. One template specifies the configuration to enable monitoring. The other template contains parameter values that you configure to specify: ++- `workspaceResourceId`: The full resource ID of your Log Analytics workspace. 
+- `workspaceRegion`: The region the workspace is created in, which is also referred to as **Location** in the workspace properties when you view them from the Azure portal. ++To first identify the full resource ID of your Log Analytics workspace that's required for the `workspaceResourceId` parameter value in the *containerSolutionParams.json* file, perform the following steps. Then run the PowerShell cmdlet or Azure CLI command to add the solution. ++1. List all the subscriptions to which you have access by using the following command: ++ ```azurecli + az account list --all -o table + ``` ++ The output will resemble the following example: ++ ```azurecli + Name CloudName SubscriptionId State IsDefault + --------------- ----------- ------------------------------------ ------- ----------- + Microsoft Azure AzureCloud 0fb60ef2-03cc-4290-b595-e71108e8f4ce Enabled True + ``` ++ Copy the value for **SubscriptionId**. ++1. Switch to the subscription hosting the Log Analytics workspace by using the following command: ++ ```azurecli + az account set -s <subscriptionId of the workspace> + ``` ++1. The following example displays the list of workspaces in your subscriptions in the default JSON format: ++ ```azurecli + az resource list --resource-type Microsoft.OperationalInsights/workspaces -o json + ``` ++ In the output, find the workspace name. Then copy the full resource ID of that Log Analytics workspace under the field **ID**. ++1.
Copy and paste the following JSON syntax into your file: ++ ```json + { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "workspaceResourceId": { + "type": "string", + "metadata": { + "description": "Azure Monitor Log Analytics Workspace Resource ID" + } + }, + "workspaceRegion": { + "type": "string", + "metadata": { + "description": "Azure Monitor Log Analytics Workspace region" + } + } + }, + "resources": [ + { + "type": "Microsoft.Resources/deployments", + "name": "[Concat('ContainerInsights', '-', uniqueString(parameters('workspaceResourceId')))]", + "apiVersion": "2017-05-10", + "subscriptionId": "[split(parameters('workspaceResourceId'),'/')[2]]", + "resourceGroup": "[split(parameters('workspaceResourceId'),'/')[4]]", + "properties": { + "mode": "Incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": {}, + "variables": {}, + "resources": [ + { + "apiVersion": "2015-11-01-preview", + "type": "Microsoft.OperationsManagement/solutions", + "location": "[parameters('workspaceRegion')]", + "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]", + "properties": { + "workspaceResourceId": "[parameters('workspaceResourceId')]" + }, + "plan": { + "name": "[Concat('ContainerInsights', '(', split(parameters('workspaceResourceId'),'/')[8], ')')]", + "product": "[Concat('OMSGallery/', 'ContainerInsights')]", + "promotionCode": "", + "publisher": "Microsoft" + } + } + ] + }, + "parameters": {} + } + } + ] + } + ``` ++1. Save this file as **containerSolution.json** to a local folder. ++1. 
Paste the following JSON syntax into your file: ++ ```json + { + "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "workspaceResourceId": { + "value": "<workspaceResourceId>" + }, + "workspaceRegion": { + "value": "<workspaceRegion>" + } + } + } + ``` ++1. Edit the values for **workspaceResourceId** by using the value you copied in step 3. For **workspaceRegion**, copy the **Region** value after running the Azure CLI command [az monitor log-analytics workspace show](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-list&preserve-view=true). ++1. Save this file as **containerSolutionParams.json** to a local folder. ++1. You're ready to deploy this template. ++ - To deploy with Azure PowerShell, use the following commands in the folder that contains the template: ++ ```powershell + # Configure and sign in to the cloud of the Log Analytics workspace. Specify the corresponding cloud environment of your workspace in the following command. + Connect-AzureRmAccount -Environment <AzureCloud | AzureChinaCloud | AzureUSGovernment> + ``` ++ ```powershell + # Set the context to the subscription of the Log Analytics workspace. + Set-AzureRmContext -SubscriptionId <subscription Id of Log Analytics workspace> + ``` ++ ```powershell + # Execute the deployment command to add the Container insights solution to the specified Log Analytics workspace. + New-AzureRmResourceGroupDeployment -Name OnboardCluster -ResourceGroupName <resource group of Log Analytics workspace> -TemplateFile .\containerSolution.json -TemplateParameterFile .\containerSolutionParams.json + ``` ++ The configuration change can take a few minutes to finish.
When it's finished, a message that's similar to the following example and includes the result is displayed: ++ ```powershell + provisioningState : Succeeded + ``` ++ - To deploy with the Azure CLI, run the following commands: ++ ```azurecli + az cloud set --name <AzureCloud | AzureChinaCloud | AzureUSGovernment> + az login + az account set --subscription "Subscription Name" + # Execute the deployment command to add the Container insights solution to the specified Log Analytics workspace. + az deployment group create --resource-group <resource group of log analytics workspace> --name <deployment name> --template-file ./containerSolution.json --parameters @./containerSolutionParams.json + ``` ++ The configuration change can take a few minutes to finish. When it's finished, a message that's similar to the following example and includes the result is displayed: ++ ```azurecli + provisioningState : Succeeded + ``` ++ After you've enabled monitoring, it might take about 15 minutes before you can view health metrics for the cluster. ++## Install the Helm chart ++In this section, you install the containerized agent for Container insights. Before you proceed, identify the workspace ID required for the `amalogsagent.secret.wsid` parameter and the primary key required for the `amalogsagent.secret.key` parameter. To identify this information, follow these steps and then run the commands to install the agent by using the Helm chart. ++1. Run the following command to identify the workspace ID: ++ `az monitor log-analytics workspace list --resource-group <resourceGroupName>` ++ In the output, find the workspace name under the field **name**. Then copy the workspace ID of that Log Analytics workspace under the field **customerID**. ++1.
Run the following command to identify the primary key for the workspace: ++ `az monitor log-analytics workspace get-shared-keys --resource-group <resourceGroupName> --workspace-name <logAnalyticsWorkspaceName>` ++ In the output, find the primary key under the field **primarySharedKey** and then copy the value. ++ >[!NOTE] + >The following commands are applicable only for Helm version 2. Use of the `--name` parameter isn't applicable with Helm version 3. + + If your Kubernetes cluster communicates through a proxy server, configure the parameter `amalogsagent.proxy` with the URL of the proxy server. If the cluster doesn't communicate through a proxy server, you don't need to specify this parameter. For more information, see the section [Configure the proxy endpoint](#configure-the-proxy-endpoint) later in this article. ++1. Add the Azure charts repository to your local list by running the following command: ++ ``` + helm repo add microsoft https://microsoft.github.io/charts/repo + ``` ++1.
Install the chart by running the following command: ++ ``` + $ helm install --name myrelease-1 \ + --set amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<my_prod_cluster> microsoft/azuremonitor-containers + ``` ++ If the Log Analytics workspace is in Azure China 21Vianet, run the following command: ++ ``` + $ helm install --name myrelease-1 \ + --set amalogsagent.domain=opinsights.azure.cn,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers + ``` ++ If the Log Analytics workspace is in Azure US Government, run the following command: ++ ``` + $ helm install --name myrelease-1 \ + --set amalogsagent.domain=opinsights.azure.us,amalogsagent.secret.wsid=<logAnalyticsWorkspaceId>,amalogsagent.secret.key=<logAnalyticsWorkspaceKey>,amalogsagent.env.clusterName=<your_cluster_name> microsoft/azuremonitor-containers + ``` ++### Enable the Helm chart by using the API model ++You can specify an add-on in the AKS Engine cluster specification JSON file, which is also referred to as the API model. In this add-on, provide the base64-encoded version of `WorkspaceGUID` and `WorkspaceKey` of the Log Analytics workspace where the collected monitoring data is stored. You can find `WorkspaceGUID` and `WorkspaceKey` by using steps 1 and 2 in the previous section. ++Supported API definitions for the Azure Stack Hub cluster can be found in the example [kubernetes-container-monitoring_existing_workspace_id_and_key.json](https://github.com/Azure/aks-engine/blob/master/examples/addons/container-monitoring/kubernetes-container-monitoring_existing_workspace_id_and_key.json).
Specifically, find the **addons** property in **kubernetesConfig**: ++```json +"orchestratorType": "Kubernetes", + "kubernetesConfig": { + "addons": [ + { + "name": "container-monitoring", + "enabled": true, + "config": { + "workspaceGuid": "<Azure Log Analytics Workspace Id in Base-64 encoded>", + "workspaceKey": "<Azure Log Analytics Workspace Key in Base-64 encoded>" + } + } + ] + } +``` ++## Configure agent data collection ++Starting with chart version 1.0.0, the agent data collection settings are controlled from the ConfigMap. For more information on agent data collection settings, see [Configure agent data collection for Container insights](container-insights-agent-config.md). ++After you've successfully deployed the chart, you can review the data for your hybrid Kubernetes cluster in Container insights from the Azure portal. ++>[!NOTE] +>Ingestion latency is around 5 to 10 minutes from the agent to commit in the Log Analytics workspace. The status of the cluster shows the value **No data** or **Unknown** until all the required monitoring data is available in Azure Monitor. ++## Configure the proxy endpoint ++Starting with chart version 2.7.1, the chart supports specifying the proxy endpoint with the `amalogsagent.proxy` chart parameter so that the agent can communicate through your proxy server. Communication between the Container insights agent and Azure Monitor can go through an HTTP or HTTPS proxy server. Both anonymous and basic authentication with a username and password are supported. ++The proxy configuration value has the syntax `[protocol://][user:password@]proxyhost[:port]`. ++> [!NOTE] +>If your proxy server doesn't require authentication, you still need to specify a pseudo username and password. It can be any username or password.
++|Property| Description | +|--|-| +|protocol | HTTP or HTTPS | +|user | Optional username for proxy authentication | +|password | Optional password for proxy authentication | +|proxyhost | Address or FQDN of the proxy server | +|port | Optional port number for the proxy server | ++An example is `amalogsagent.proxy=http://user01:password@proxy01.contoso.com:8080`. ++If you specify the protocol as **http**, the HTTP requests are created by using an SSL/TLS secure connection. Your proxy server must support SSL/TLS protocols. ++## Troubleshooting ++If you encounter an error while you attempt to enable monitoring for your hybrid Kubernetes cluster, copy the PowerShell script [TroubleshootError_nonAzureK8s.ps1](https://aka.ms/troubleshoot-non-azure-k8s) and save it to a folder on your computer. This script is provided to help you detect and fix the issues you encounter. It's designed to detect and attempt correction of the following issues: ++- The specified Log Analytics workspace is valid. +- The Log Analytics workspace is configured with the Container insights solution. If not, configure the workspace. +- The Azure Monitor Agent replicaset pods are running. +- The Azure Monitor Agent daemonset pods are running. +- The Azure Monitor Agent Health service is running. +- The Log Analytics workspace ID and key configured on the containerized agent match the workspace that Container insights is configured with. +- Validate that all the Linux worker nodes have the `kubernetes.io/role=agent` label so that agent pods can be scheduled. If the label doesn't exist, add it. +- Validate that `cAdvisor secure port: 10250` or `unsecure port: 10255` is opened on all nodes in the cluster.
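Several of these checks revolve around the full workspace resource ID, the same value the ARM template earlier slices with `split(parameters('workspaceResourceId'),'/')[2]`, `[4]`, and `[8]`. As a minimal sketch (the sample ID below is a placeholder, not a real workspace), the same segments map to slash-delimited fields in plain shell:

```shell
# Minimal sketch: pull the subscription ID, resource group, and workspace
# name out of a Log Analytics workspace resource ID. The ID below is a
# placeholder, not a real workspace.
resource_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace"

# cut fields are 1-based and the leading slash yields an empty field 1,
# so field N+1 here corresponds to split(...)[N] in the ARM template.
subscription_id=$(printf '%s' "$resource_id" | cut -d/ -f3)  # split(...)[2]
resource_group=$(printf '%s' "$resource_id" | cut -d/ -f5)   # split(...)[4]
workspace_name=$(printf '%s' "$resource_id" | cut -d/ -f9)   # split(...)[8]

echo "subscription: $subscription_id"
echo "resource group: $resource_group"
echo "workspace: $workspace_name"
```

The same splitting explains why the troubleshooting script can accept a single `-azureLogAnalyticsWorkspaceResourceId` argument and still locate the workspace.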
++To execute with Azure PowerShell, use the following commands in the folder that contains the script: ++```powershell +.\TroubleshootError_nonAzureK8s.ps1 -azureLogAnalyticsWorkspaceResourceId /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.OperationalInsights/workspaces/<workspaceName> -kubeConfig <kubeConfigFile> -clusterContextInKubeconfig <clusterContext> +``` ++## Next steps ++Now that monitoring is enabled to collect the health and resource utilization of your hybrid Kubernetes clusters and the workloads running on them, learn [how to use](container-insights-analyze.md) Container insights. |
azure-monitor | Container Insights Livedata Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md | -Container insights includes the Live Data feature. You can use this advanced diagnostic feature for direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderror), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get` events, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time. +The Live Data feature in Container insights gives you direct access to your Azure Kubernetes Service (AKS) container logs (stdout/stderr), events, and pod metrics. It exposes direct access to `kubectl logs -c`, `kubectl get events`, and `kubectl top pods`. A console pane shows the logs, events, and metrics generated by the container engine to help with troubleshooting issues in real time. > [!NOTE] > AKS uses [Kubernetes cluster-level logging architectures](https://kubernetes.io/docs/concepts/cluster-administration/logging/#cluster-level-logging-architectures). You can use tools such as Fluentd or Fluent Bit to collect logs. |
azure-monitor | Container Insights Log Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md | The required tables for this chart include KubeNodeInventory, KubePodInventory, | project ClusterName, NodeName, LastReceivedDateTime, Status, ContainerCount, UpTimeMs = UpTimeMs_long, Aggregation = Aggregation_real, LimitValue = LimitValue_real, list_TrendPoint, Labels, ClusterId ``` -## Resource logs --For details on resource logs for AKS clusters, see [Collect control plane logs](../../aks/monitor-aks.md#resource-logs). -- ## Prometheus metrics The following examples requires the configuration described in [Send Prometheus metrics to Log Analytics workspace with Container insights](container-insights-prometheus-logs.md). |
azure-monitor | Container Insights Manage Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-manage-agent.md | Container insights uses a containerized version of the Log Analytics agent for L Container insights uses a containerized version of the Log Analytics agent for Linux. When a new version of the agent is released, the agent is automatically upgraded on your managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS) and Azure Arc-enabled Kubernetes. -If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://github.com/microsoft/docker-provider/tree/ci_feature_prod). +If the agent upgrade fails for a cluster hosted on AKS, this article also describes the process to manually upgrade the agent. To follow the versions released, see [Agent release announcements](https://aka.ms/ci-logs-agent-release-notes). ### Upgrade the agent on an AKS cluster With the rise of Kubernetes and the OSS ecosystem, Container Insights migrate to ## Next steps If you experience issues when you upgrade the agent, review the [troubleshooting guide](container-insights-troubleshoot.md) for support.+ |
azure-monitor | Container Insights Metric Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-metric-alerts.md | The following metrics have unique behavior characteristics: - The `oomKilledContainerCount` metric is only sent when there are OOM killed containers. - The `cpuExceededPercentage`, `memoryRssExceededPercentage`, and `memoryWorkingSetExceededPercentage` metrics are sent when the CPU, memory RSS, and memory working set values exceed the configured threshold. The default threshold is 95%. The `cpuThresholdViolated`, `memoryRssThresholdViolated`, and `memoryWorkingSetThresholdViolated` metrics are equal to 0 if the usage percentage is below the threshold and are equal to 1 if the usage percentage is above the threshold. These thresholds are exclusive of the alert condition threshold specified for the corresponding alert rule. - The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule.-- The `pvUsageExceededPercentage` metric is sent when the persistent volume usage percentage exceeds the configured threshold. The default threshold is 60%. The `pvUsageThresholdViolated` metric is equal to 0 when the persistent volume usage percentage is below the threshold and is equal to 1 if the usage is above the threshold. This threshold is exclusive of the alert condition threshold specified for the corresponding alert rule. **Prometheus only** |
azure-monitor | Container Insights Optout Hybrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout-hybrid.md | + + Title: Disable Container insights on your hybrid Kubernetes cluster +description: This article describes how you can stop monitoring of your hybrid Kubernetes cluster with Container insights. + Last updated : 08/21/2023++++# Disable Container insights on your hybrid Kubernetes cluster ++This article shows how to disable Container insights for the following Kubernetes environments: ++- AKS Engine on Azure and Azure Stack +- OpenShift version 4 and higher +- Azure Arc-enabled Kubernetes (preview) ++## How to stop monitoring using Helm ++The following steps apply to the following environments: ++- AKS Engine on Azure and Azure Stack +- OpenShift version 4 and higher ++1. To first identify the Container insights helm chart release installed on your cluster, run the following helm command. ++ ``` + helm list + ``` ++ The output resembles the following: ++ ``` + NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION + azmon-containers-release-1 default 3 2020-04-21 15:27:24.1201959 -0700 PDT deployed azuremonitor-containers-2.7.0 7.0.0-1 + ``` ++ *azmon-containers-release-1* represents the helm chart release for Container insights. ++2. To delete the chart release, run the following helm command. ++ `helm delete <releaseName>` ++ Example: ++ `helm delete azmon-containers-release-1` ++ This removes the release from the cluster. You can verify by running the `helm list` command: ++ ``` + NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION + ``` ++The configuration change can take a few minutes to complete. Because Helm tracks your releases even after you've deleted them, you can audit a cluster's history, and even undelete a release with `helm rollback`. ++## How to stop monitoring on Azure Arc-enabled Kubernetes ++### Using PowerShell ++1.
Download the script that removes the monitoring add-on from your cluster and save it to a local folder by using the following commands: ++ ```powershell + wget https://aka.ms/disable-monitoring-powershell-script -OutFile disable-monitoring.ps1 + ``` ++2. Configure the `$azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource. ++ ```powershell + $azureArcClusterResourceId = "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>" + ``` ++3. Configure the `$kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. If you want to use the current context, set the value to `""`. ++ ```powershell + $kubeContext = "<kubeContext name of your k8s cluster>" + ``` ++4. Run the following command to stop monitoring the cluster. ++ ```powershell + .\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext + ``` ++#### Using service principal +The script *disable-monitoring.ps1* uses the interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the `$servicePrincipalClientId`, `$servicePrincipalClientSecret` and `$tenantId` parameters, with the values of the service principal you intend to use, to the *disable-monitoring.ps1* script.
++```powershell +$subscriptionId = "<subscription Id of the Azure Arc-connected cluster resource>" +$servicePrincipal = New-AzADServicePrincipal -Role Contributor -Scope "/subscriptions/$subscriptionId" ++$servicePrincipalClientId = $servicePrincipal.ApplicationId.ToString() +$servicePrincipalClientSecret = [System.Net.NetworkCredential]::new("", $servicePrincipal.Secret).Password +$tenantId = (Get-AzSubscription -SubscriptionId $subscriptionId).TenantId +``` ++For example: ++```powershell +.\disable-monitoring.ps1 -clusterResourceId $azureArcClusterResourceId -kubeContext $kubeContext -servicePrincipalClientId $servicePrincipalClientId -servicePrincipalClientSecret $servicePrincipalClientSecret -tenantId $tenantId +``` +++### Using bash ++1. Download the script that removes the monitoring add-on from your cluster and save it to a local folder by using the following commands: ++ ```bash + curl -o disable-monitoring.sh -L https://aka.ms/disable-monitoring-bash-script + ``` ++2. Configure the `azureArcClusterResourceId` variable by setting the corresponding values for `subscriptionId`, `resourceGroupName` and `clusterName` representing the resource ID of your Azure Arc-enabled Kubernetes cluster resource. ++ ```bash + export AZUREARCCLUSTERRESOURCEID="/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Kubernetes/connectedClusters/<clusterName>" + ``` ++3. Configure the `kubeContext` variable with the **kube-context** of your cluster by running the command `kubectl config get-contexts`. ++ ```bash + export KUBECONTEXT="<kubeContext name of your k8s cluster>" + ``` ++4. To stop monitoring your cluster, there are different commands provided based on your deployment scenario. ++ Run the following command to stop monitoring the cluster by using the current context.
++ ```bash + bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID + ``` ++ Run the following command to stop monitoring the cluster by specifying a context: ++ ```bash + bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT + ``` ++#### Using service principal +The bash script *disable-monitoring.sh* uses the interactive device login. If you prefer non-interactive login, you can use an existing service principal or create a new one that has the required permissions as described in [Prerequisites](container-insights-enable-arc-enabled-clusters.md#prerequisites). To use a service principal, pass the `--client-id`, `--client-secret` and `--tenant-id` values of the service principal you intend to use to the *disable-monitoring.sh* bash script. ++```bash +SUBSCRIPTIONID="<subscription Id of the Azure Arc-connected cluster resource>" +SERVICEPRINCIPAL=$(az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/${SUBSCRIPTIONID}") +SERVICEPRINCIPALCLIENTID=$(echo $SERVICEPRINCIPAL | jq -r '.appId') ++SERVICEPRINCIPALCLIENTSECRET=$(echo $SERVICEPRINCIPAL | jq -r '.password') +TENANTID=$(echo $SERVICEPRINCIPAL | jq -r '.tenant') +``` ++For example: ++```bash +bash disable-monitoring.sh --resource-id $AZUREARCCLUSTERRESOURCEID --kube-context $KUBECONTEXT --client-id $SERVICEPRINCIPALCLIENTID --client-secret $SERVICEPRINCIPALCLIENTSECRET --tenant-id $TENANTID +``` ++## Next steps ++If the Log Analytics workspace was created only to support monitoring the cluster and it's no longer needed, you have to manually delete it. If you are not familiar with how to delete a workspace, see [Delete an Azure Log Analytics workspace](../logs/delete-workspace.md). |
azure-monitor | Container Insights Optout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-optout.md | Title: Stop monitoring your Azure Kubernetes Service cluster | Microsoft Docs + Title: Disable Container insights on your Azure Kubernetes Service (AKS) cluster description: This article describes how you can discontinue monitoring of your Azure AKS cluster with Container insights. Previously updated : 05/24/2022 Last updated : 08/21/2023 ms.devlang: azurecli -# Stop monitoring your Azure Kubernetes Service cluster with Container insights +# Disable Container insights on your Azure Kubernetes Service (AKS) cluster After you enable monitoring of your Azure Kubernetes Service (AKS) cluster, you can stop monitoring the cluster if you decide you no longer want to monitor it. This article shows you how to do this task by using the Azure CLI or the provided Azure Resource Manager templates (ARM templates). |
azure-monitor | Container Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md | Title: Overview of Container insights in Azure Monitor description: This article describes Container insights, which monitors the AKS Container insights solution, and the value it delivers by monitoring the health of your AKS clusters and Container Instances in Azure. Previously updated : 09/28/2022 Last updated : 08/14/2023 # Container insights overview -Container insights is a feature designed to monitor the performance of container workloads deployed to the cloud. It gives you performance visibility by collecting memory and processor metrics from controllers, nodes, and containers that are available in Kubernetes through the Metrics API. After you enable monitoring from Kubernetes clusters, metrics and Container logs are automatically collected for you through a containerized version of the Log Analytics agent for Linux. Metrics are sent to the [metrics database in Azure Monitor](../essentials/data-platform-metrics.md). Log data is sent to your [Log Analytics workspace](../logs/log-analytics-workspace-overview.md). +Container insights is a feature of Azure Monitor that monitors the performance and health of container workloads deployed to [Azure](../../aks/intro-kubernetes.md) or that are managed by [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). It collects memory and processor metrics from controllers, nodes, and containers in addition to gathering container logs. You can analyze the collected data for the different components in your cluster with a collection of [views](container-insights-analyze.md) and pre-built [workbooks](container-insights-reports.md). +The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. 
The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*. +> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA] ## Features of Container insights -Container insights deliver a comprehensive monitoring experience to understand the performance and health of your Kubernetes cluster and container workloads. You can: +Container insights includes the following features to help you understand the performance and health of your Kubernetes cluster and container workloads: -- Identify resource bottlenecks by identifying AKS containers running on the node and their processor and memory utilization.-- Identify processor and memory utilization of container groups and their containers hosted in Azure Container Instances.+- Identify resource bottlenecks by identifying containers running on each node and their processor and memory utilization. +- Identify processor and memory utilization of container groups and their containers hosted in container instances. - View the controller's or pod's overall performance by identifying where the container resides in a controller or a pod. - Review the resource utilization of workloads running on the host that are unrelated to the standard processes that support the pod. - Identify capacity needs and determine the maximum load that the cluster can sustain by understanding the behavior of the cluster under average and heaviest loads.+- Access live container logs and metrics generated by the container engine to help with troubleshooting issues in real time.
- Configure alerts to proactively notify you or record when CPU and memory utilization on nodes or containers exceed your thresholds, or when a health state change occurs in the cluster at the infrastructure or nodes health rollup.-- Integrate with [Prometheus](https://aka.ms/azureprometheus-promio-docs) to view application and workload metrics it collects from nodes and Kubernetes by using [queries](container-insights-log-query.md) to create custom alerts and dashboards and perform detailed analysis. -The following video provides an intermediate-level deep dive to help you learn about monitoring your AKS cluster with Container insights. The video refers to *Azure Monitor for Containers*, which is the previous name for *Container insights*. +## Access Container insights -> [!VIDEO https://www.youtube.com/embed/XEdwGvS2AwA] +Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page. -## Access Container insights +## Data collected +Container insights sends data to [Logs](../logs/data-platform-logs.md) and [Metrics](../essentials/data-platform-metrics.md) where you can analyze it using different features of Azure Monitor. It works with other Azure services such as [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) and [Managed Grafana](../../managed-grafan#monitoring-data). -Access Container insights in the Azure portal from **Containers** in the **Monitor** menu or directly from the selected AKS cluster by selecting **Insights**. 
The Azure Monitor menu gives you the global perspective of all the containers that are deployed and monitored. This information allows you to search and filter across your subscriptions and resource groups. You can then drill into Container insights from the selected container. Access Container insights for a particular AKS container directly from the AKS page. ## Supported configurations+Container insights supports the following configurations: -- Managed Kubernetes clusters hosted on [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md).-- Self-managed Kubernetes clusters hosted on Azure using [AKS Engine](https://github.com/Azure/aks-engine).+- [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). - [Azure Container Instances](../../container-instances/container-instances-overview.md). - Self-managed Kubernetes clusters hosted on [Azure Stack](/azure-stack/user/azure-stack-kubernetes-aks-engine-overview) or on-premises. - [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/overview.md). Container insights supports clusters running the Linux and Windows Server 2019 o >[!NOTE] > Container insights support for Windows Server 2022 operating system is in public preview. ++ ## Next steps To begin monitoring your Kubernetes cluster, review [Enable Container insights](container-insights-onboard.md) to understand the requirements and available methods to enable monitoring. |
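The updated overview above notes that Container insights sends log data to a Log Analytics workspace where it can be analyzed. As an illustrative sketch (not part of the commit above), a KQL query over the `KubePodInventory` table that Container insights populates can surface pods that aren't in a `Running` state; adjust the time window to suit:

```kusto
// Illustrative only: list pods not currently reported as Running,
// using the KubePodInventory table populated by Container insights.
KubePodInventory
| where TimeGenerated > ago(15m)
| where PodStatus != "Running"
| summarize arg_max(TimeGenerated, PodStatus) by ClusterName, Namespace, Name
```

Run the query from **Logs** in the Azure portal with the scope set to the workspace that the cluster sends data to.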
azure-monitor | Monitor Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md | Title: Monitor Azure Kubernetes Service (AKS) with Azure Monitor -description: Describes how to use Azure Monitor monitor the health and performance of AKS clusters and their workloads. + Title: Monitor Kubernetes clusters using Azure services and cloud native tools +description: Describes how to monitor the health and performance of the different layers of your Kubernetes environment using Azure Monitor and cloud native services in Azure. - Previously updated : 03/08/2023 Last updated : 08/17/2023 -# Monitor Azure Kubernetes Service (AKS) with Azure Monitor +# Monitor Kubernetes clusters using Azure services and cloud native tools -This article describes how to use Azure Monitor to monitor the health and performance of [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md). It includes collection of telemetry critical for monitoring, analysis and visualization of collected data to identify trends, and how to configure alerting to be proactively notified of critical issues. +This article describes how to monitor the health and performance of your Kubernetes clusters and the workloads running on them using Azure Monitor and related Azure and cloud native services. This includes clusters running in Azure Kubernetes Service (AKS) or other clouds such as [AWS](https://aws.amazon.com/kubernetes/) and [GCP](https://cloud.google.com/kubernetes-engine). Different sets of guidance are provided for the different roles that typically manage unique components that make up a Kubernetes environment. -The [Cloud Monitoring Guide](/azure/cloud-adoption-framework/manage/monitor/) defines the [primary monitoring objectives](/azure/cloud-adoption-framework/strategy/monitoring-strategy#formulate-monitoring-requirements) you should focus on for your Azure resources. 
This scenario focuses on health and status monitoring using Azure Monitor. -## Scope of the scenario +> [!IMPORTANT] +> This article provides complete guidance on monitoring the different layers of your Kubernetes environment based on Azure Kubernetes Service (AKS) or Kubernetes clusters in other clouds. If you're just getting started with AKS or Azure Monitor, see [Monitoring AKS](../../aks/monitor-aks.md) for basic information on getting started with monitoring an AKS cluster. ++## Layers and roles of Kubernetes environment ++Following is an illustration of a common model of a typical Kubernetes environment, starting from the infrastructure layer up through applications. Each layer has distinct monitoring requirements that are addressed by different services and typically managed by different roles in the organization. +++Responsibility for the different layers of a Kubernetes environment and the applications that depend on it is typically divided among multiple roles. Depending on the size of your organization, these roles may be performed by different people or even different teams. The following table describes the different roles while the sections below provide the monitoring scenarios that each will typically encounter. ++| Roles | Description | +|:|:| +| [Developer](#developer) | Develops and maintains the application running on the cluster. Responsible for application-specific traffic including application performance and failures. Maintains reliability of the application according to SLAs. | +| [Platform engineer](#platform-engineer) | Responsible for the Kubernetes cluster. Provisions and maintains the platform used by developers. | +| [Network engineer](#network-engineer) | Responsible for traffic between workloads and any ingress/egress with the cluster. Analyzes network traffic and performs threat analysis. 
| ++## Selection of monitoring tools ++Azure provides a complete set of services based on [Azure Monitor](../overview.md) for monitoring the health and performance of different layers of your Kubernetes infrastructure and the applications that depend on it. These services work in conjunction with each other to provide a complete monitoring solution and are recommended both for [AKS](../../aks/intro-kubernetes.md) and your Kubernetes clusters running in other clouds. You may have an existing investment in cloud native technologies endorsed by the [Cloud Native Computing Foundation](https://www.cncf.io/), in which case you may choose to integrate Azure tools into your existing environment. ++Your choice of which tools to deploy and their configuration will depend on the requirements of your particular environment. For example, you may use the managed offerings in Azure for Prometheus and Grafana, or you may choose to use your existing installation of these tools with your Kubernetes clusters in Azure. Your organization may also use alternative tools to Container insights to collect and analyze Kubernetes logs, such as Splunk or Datadog. ++> [!IMPORTANT] +> Monitoring a complex environment such as Kubernetes involves collecting a significant amount of telemetry, much of which incurs a cost. You should collect just enough data to meet your requirements. This includes the amount of data collected, the frequency of collection, and the retention period. If you're very cost conscious, you may choose to implement a subset of the full functionality in order to reduce your monitoring spend. ++## Network engineer +The *Network Engineer* is responsible for traffic between workloads and any ingress/egress with the cluster. They analyze network traffic and perform threat analysis. 
+++### Azure services for the network engineer ++The following table lists the services that are commonly used by the network engineer to monitor the health and performance of the network supporting the Kubernetes cluster. +++| Service | Description | +|:|:| +| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Suite of tools in Azure to monitor the virtual networks used by your Kubernetes clusters and diagnose detected issues. | +| [Network insights](../../network-watcher/network-insights-overview.md) | Feature of Azure Monitor that includes a visual representation of the performance and health of different network components and provides access to the network monitoring tools that are part of Network Watcher. | ++[Network insights](../../network-watcher/network-insights-overview.md) is enabled by default and requires no configuration. Network Watcher is also typically [enabled by default in each Azure region](../../network-watcher/network-watcher-create.md). ++### Monitor level 1 - Network ++Following are common scenarios for monitoring the network. ++- Create [flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md) to log information about the IP traffic flowing through network security groups used by your cluster and then use [traffic analytics](../../network-watcher/traffic-analytics.md) to analyze and provide insights on this data. You'll most likely use the same Log Analytics workspace for traffic analytics that you use for Container insights and your control plane logs. +- Using [traffic analytics](../../network-watcher/traffic-analytics.md), you can determine if any traffic is flowing either to or from any unexpected ports used by the cluster and also if any traffic is flowing over public IPs that shouldn't be exposed. Use this information to determine whether your network rules need modification. 
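The traffic analytics scenario above can be explored with a KQL sketch (not part of the commit above). It assumes NSG flow logs with traffic analytics are enabled and writing to the `AzureNetworkAnalytics_CL` table that traffic analytics populates:

```kusto
// Illustrative only: summarize flows by destination IP and port to spot
// traffic on unexpected ports. Assumes traffic analytics data is present
// in the AzureNetworkAnalytics_CL table of the workspace.
AzureNetworkAnalytics_CL
| where SubType_s == "FlowLog"
| summarize Flows = count() by DestIP_s, DestPort_d
| order by Flows desc
| take 20
```

Compare the resulting ports and destinations against the ports your cluster is expected to use before modifying network rules.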
+++## Platform engineer ++The *platform engineer*, also known as the cluster administrator, is responsible for the Kubernetes cluster itself. They provision and maintain the platform used by developers. They need to understand the health of the cluster and its components, and be able to troubleshoot any detected issues. They also need to understand the cost to operate the cluster and potentially to be able to allocate costs to different teams. ++++Large organizations may also have a *fleet architect*, which is similar to the platform engineer but is responsible for multiple clusters. They need visibility across the entire environment and must perform administrative tasks at scale. At scale recommendations for the fleet architect are included in the guidance below. +++### Azure services for platform engineer ++The following table lists the Azure services for the platform engineer to monitor the health and performance of the Kubernetes cluster and its components. ++| Service | Description | +|:|:| +| [Container Insights](container-insights-overview.md) | Azure service for AKS and Azure Arc-enabled Kubernetes clusters that use a containerized version of the [Azure Monitor agent](../agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster. It also collects metrics from the Kubernetes control plane and stores them in the workspace. You can view the data in the Azure portal or query it using [Log Analytics](../logs/log-analytics-overview.md). | +| [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. 
Azure Monitor managed service for Prometheus is a fully-managed solution that's compatible with the Prometheus query language (PromQL) and Prometheus alerts and integrates with Azure Managed Grafana for visualization. This service supports your investment in open source tools without the complexity of managing your own Prometheus environment. | +| [Azure Arc-enabled Kubernetes](container-insights-enable-arc-enabled-clusters.md) | Allows you to attach to Kubernetes clusters running in other clouds so that you can manage and configure them in Azure. With the Arc agent installed, you can monitor AKS and hybrid clusters together using the same methods and tools, including Container insights and Prometheus. | +| [Azure Managed Grafana](../../managed-grafan) | Fully managed implementation of [Grafana](https://grafana.com/), which is an open-source data visualization platform commonly used to present Prometheus and other data. Multiple predefined Grafana dashboards are available for monitoring Kubernetes and full-stack troubleshooting. | ++### Configure monitoring for platform engineer ++The sections below identify the steps for complete monitoring of your Kubernetes environment using the Azure services in the above table. Functionality and integration options are provided for each to help you determine where you may need to modify this configuration to meet your particular requirements. -This article does *not* include information on the following scenarios: -- Monitoring of Kubernetes clusters outside of Azure except for referring to existing content for Azure Arc-enabled Kubernetes-- Monitoring of AKS with tools other than Azure Monitor except to fill gaps in Azure Monitor and Container Insights+#### Enable scraping of Prometheus metrics ++> [!IMPORTANT] +> To use Azure Monitor managed service for Prometheus, you need to have an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md). 
For information on design considerations for a workspace configuration, see [Azure Monitor workspace architecture](../essentials/azure-monitor-workspace-overview.md#azure-monitor-workspace-architecture). -> [!NOTE] -> Azure Monitor was designed to monitor the availability and performance of cloud resources. While the operational data stored in Azure Monitor may be useful for investigating security incidents, other services in Azure were designed to monitor security. Security monitoring for AKS is done with [Microsoft Sentinel](../../sentinel/overview.md) and [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md). See [Monitor virtual machines with Azure Monitor - Security monitoring](../vm/monitor-virtual-machine-security.md) for a description of the security monitoring tools in Azure and their relationship to Azure Monitor. -> -> For information on using the security services to monitor AKS, see [Microsoft Defender for Kubernetes - the benefits and features](../../defender-for-cloud/defender-for-kubernetes-introduction.md) and [Connect Azure Kubernetes Service (AKS) diagnostics logs to Microsoft Sentinel](../../sentinel/data-connectors/azure-kubernetes-service-aks.md). +Enable scraping of Prometheus metrics by Azure Monitor managed service for Prometheus from your cluster using one of the following methods: -## Container Insights +- Select the option **Enable Prometheus metrics** when you [create an AKS cluster](../../aks/learn/quick-kubernetes-deploy-portal.md). +- Select the option **Enable Prometheus metrics** when you enable Container insights on an existing [AKS cluster](container-insights-enable-aks.md) or [Azure Arc-enabled Kubernetes cluster](container-insights-enable-arc-enabled-clusters.md). +- Enable for an existing [AKS cluster](../essentials/prometheus-metrics-enable.md) or [Arc-enabled Kubernetes cluster (preview)](../essentials/prometheus-metrics-from-arc-enabled-cluster.md). 
-AKS generates [platform metrics and resource logs](../../aks/monitor-aks-reference.md) that you can use to monitor basic health and performance. Enable [Container Insights](container-insights-overview.md) to expand on this monitoring. Container Insights is a feature in Azure Monitor that monitors the health and performance of managed Kubernetes clusters hosted on AKS and provides interactive views and workbooks that analyze collected data for a variety of monitoring scenarios. -[Prometheus](https://aka.ms/azureprometheus-promio) and [Grafana](https://aka.ms/azureprometheus-promio-grafana) are popular CNCF-backed open-source tools for Kubernetes monitoring. AKS exposes many metrics in Prometheus format, which makes Prometheus a popular choice for monitoring. [Container Insights](container-insights-overview.md) has native integration with AKS, like collecting critical metrics and logs, alerting on identified issues, and providing visualization with workbooks. It also collects certain Prometheus metrics. Many native Azure Monitor insights are built on top of Prometheus metrics. Container Insights complements and completes E2E monitoring of AKS, including log collection, which Prometheus as a stand-alone tool doesn't provide. You can use Prometheus integration and Azure Monitor together for E2E monitoring. +If you already have a Prometheus environment that you want to use for your AKS clusters, then enable Azure Monitor managed service for Prometheus and then use remote-write to send data to your existing Prometheus environment. You can also [use remote-write to send data from your existing self-managed Prometheus environment to Azure Monitor managed service for Prometheus](../essentials/prometheus-remote-write.md). -To learn more about using Container Insights, see the [Container Insights overview](container-insights-overview.md). 
To learn more about features and monitoring scenarios of Container Insights, see [Monitor layers of AKS with Container Insights](#monitor-layers-of-aks-with-container-insights). +See [Default Prometheus metrics configuration in Azure Monitor](../essentials/prometheus-metrics-scrape-default.md) for details on the metrics that are collected by default and their frequency of collection. If you want to customize the configuration, see [Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-scrape-configuration.md). -## Configure monitoring +#### Enable Grafana for analysis of Prometheus data -The following sections describe the steps required to configure full monitoring of your AKS cluster using Azure Monitor. +[Create an instance of Managed Grafana](../../managed-grafan) -### Create Log Analytics workspace +If you have an existing Grafana environment, then you can continue to use it and add Azure Monitor managed service for [Prometheus as a data source](https://grafana.com/docs/grafana/latest/datasources/prometheus/). You can also [add the Azure Monitor data source to Grafana](https://grafana.com/docs/grafana/latest/datasources/azure-monitor/) to use data collected by Container insights in custom Grafana dashboards. Perform this configuration if you want to focus on Grafana dashboards rather than using the Container insights views and reports. -You need at least one Log Analytics workspace to support Container Insights and to collect and analyze other telemetry about your AKS cluster. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. See [Azure Monitor Logs pricing details](../logs/cost-logs.md) for details. +A variety of prebuilt dashboards are available for monitoring Kubernetes clusters including several that present similar information as Container insights views. 
[Search the available Grafana dashboards templates](https://grafana.com/grafan). -If you're just getting started with Azure Monitor, we recommend starting with a single workspace and creating additional workspaces as your requirements evolve. Many environments will use a single workspace for all the Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](../vm/monitor-virtual-machine-security.md), although it's common to segregate availability and performance telemetry from security data. -For information on design considerations for a workspace configuration, see [Designing your Azure Monitor Logs deployment](../logs/workspace-design.md). +#### Enable Container Insights for collection of logs -### Enable Container Insights +When you enable Container Insights for your Kubernetes cluster, it deploys a containerized version of the [Azure Monitor agent](../agents/..//agents/log-analytics-agent.md) that sends data to a Log Analytics workspace in Azure Monitor. Container insights collects container stdout/stderr, infrastructure logs, and performance data. All log data is stored in a Log Analytics workspace where it can be analyzed using [Kusto Query Language (KQL)](../logs/log-query-overview.md). -When you enable Container Insights for your AKS cluster, it deploys a containerized version of the [Log Analytics agent](../agents/../agents/log-analytics-agent.md) that sends data to Azure Monitor. For prerequisites and configuration options, see [Enable Container Insights](container-insights-onboard.md). +See [Enable Container insights](../containers/container-insights-onboard.md) for prerequisites and configuration options for onboarding your Kubernetes clusters. [Onboard using Azure Policy](container-insights-enable-aks-policy.md) to ensure that all clusters retain a consistent configuration. 
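Once Container insights is collecting stdout/stderr as described above, the data can be queried with KQL. A sketch (not part of the commit above) against the `ContainerLogV2` table, where `"my-namespace"` is a hypothetical placeholder:

```kusto
// Illustrative only: recent stderr output from containers in a placeholder
// namespace. ContainerLogV2 is the log table used when the ContainerLogV2
// schema is enabled for the cluster.
ContainerLogV2
| where TimeGenerated > ago(30m)
| where LogSource == "stderr" and PodNamespace == "my-namespace"
| project TimeGenerated, PodName, ContainerName, LogMessage
| take 100
```

Filtering on `LogSource` and namespace early keeps the query cheap on clusters with high log volume.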
-### Configure collection from Prometheus +Once Container insights is enabled for a cluster, perform the following actions to optimize your installation. -Container Insights allows you to send Prometheus metrics to [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md) or to your Log Analytics workspace without requiring a local Prometheus server. You can analyze this data using Azure Monitor features along with other data collected by Container Insights. For details on this configuration, see [Collect Prometheus metrics with Container Insights](container-insights-prometheus.md). +- To improve your query experience with data collected by Container insights and to reduce collection costs, [enable the ContainerLogV2 schema](container-insights-logging-v2.md) for each cluster. If you only use logs for occasional troubleshooting, then consider configuring this table as [basic logs](../logs/basic-logs-configure.md). +- Reduce your cost for Container insights data ingestion by reducing the amount of data that's collected. See [Enable cost optimization settings in Container insights (preview)](../containers/container-insights-cost-config.md) for details. -### Collect resource logs +If you have an existing solution for collection of logs, then follow the guidance for that tool or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to [Azure Event Hubs](../../event-hubs/event-hubs-about.md) to forward it to an alternate system. -The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](container-insights-log-query.md#resource-logs). -You need to create a diagnostic setting to collect resource logs. 
You can create multiple diagnostic settings to send different sets of logs to different locations. To create diagnostic settings for your AKS cluster, see [Create diagnostic settings to send platform logs and metrics to different destinations](../essentials/diagnostic-settings.md). +#### Collect control plane logs for AKS clusters -There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md). +The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](../../aks/monitor-aks.md#resource-logs). ++[Create a diagnostic setting](../../aks/monitor-aks.md#resource-logs) for each AKS cluster to send resource logs to a Log Analytics workspace. Use [Azure Policy](../essentials/diagnostic-settings-policy.md) to ensure consistent configuration across multiple clusters. ++There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). 
Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information for compliance reasons. For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md). ++If you're unsure which resource logs to initially enable, use the following recommendations, which are based on the most common customer requirements. You can enable other categories later if you need to. -If you're unsure which resource logs to initially enable, use the following recommendations: | Category | Enable? | Destination | |:|:|:|-| cluster-autoscaler | Enable if autoscale is enabled | Log Analytics workspace | -| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace | | kube-apiserver | Enable | Log Analytics workspace | | kube-audit | Enable | Azure storage. This keeps costs to a minimum yet retains the audit logs if they're required by an auditor. | | kube-audit-admin | Enable | Log Analytics workspace | | kube-controller-manager | Enable | Log Analytics workspace | | kube-scheduler | Disable | |-| AllMetrics | Enable | Log Analytics workspace | --The recommendations are based on the most common customer requirements. You can enable other categories later if you need to. --## Access Azure Monitor features --Access Azure Monitor features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal, or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu. 
The following image shows the **Monitoring** menu for your AKS cluster: -+| cluster-autoscaler | Enable if autoscale is enabled | Log Analytics workspace | +| guard | Enable if Azure Active Directory is enabled | Log Analytics workspace | +| AllMetrics | Disable since metrics are collected in Managed Prometheus | Log Analytics workspace | -| Menu option | Description | -|:|:| -| Insights | Opens Container Insights for the current cluster. Select **Containers** from the **Monitor** menu to open Container Insights for all clusters. | -| Alerts | View alerts for the current cluster. | -| Metrics | Open metrics explorer with the scope set to the current cluster. | -| Diagnostic settings | Create diagnostic settings for the cluster to collect resource logs. | -| Advisor | Recommendations for the current cluster from Azure Advisor. | -| Logs | Open Log Analytics with the scope set to the current cluster to analyze log data and access prebuilt queries. | -| Workbooks | Open workbook gallery for Kubernetes service. | -## Monitor layers of AKS with Container Insights +If you have an existing solution for collection of logs, either follow the guidance for that tool or enable Container insights and use the [data export feature of Log Analytics workspace](../logs/logs-data-export.md) to send data to Azure event hub to forward it to an alternate system. -Your monitoring approach should be based on your unique workload requirements, and factors such as scale, topology, organizational roles, and multi-cluster tenancy. This section presents a common bottom-up approach, starting from infrastructure up through applications. Each layer has distinct monitoring requirements. +#### Collect Activity log for AKS clusters +Configuration changes to your AKS clusters are stored in the [Activity log](../essentials/activity-log.md). 
[Create a diagnostic setting to send this data to your Log Analytics workspace](../essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with other monitoring data. There's no cost for this data collection, and you can analyze or alert on the data using Log Analytics. -### Level 1 - Cluster level components +### Monitor level 2 - Cluster level components -The cluster level includes the following component: +The cluster level includes the following components: | Component | Monitoring requirements | |:|:| | Node | Understand the readiness status and performance of CPU, memory, disk and IP usage for each node and proactively monitor their usage trends before deploying any workloads. | -Use existing views and reports in Container Insights to monitor cluster level components. +Following are common scenarios for monitoring the cluster level components. +**Container insights**<br> - Use the **Cluster** view to see the performance of the nodes in your cluster, including CPU and memory utilization. - Use the **Nodes** view to see the health of each node and the health and performance of the pods running on them. For more information on analyzing node health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md). - Under **Reports**, use the **Node Monitoring** workbooks to analyze disk capacity, disk IO, and GPU usage. For more information about these workbooks, see [Node Monitoring workbooks](container-insights-reports.md#node-monitoring-workbooks).+- Under **Monitoring**, select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range. - :::image type="content" source="media/monitor-kubernetes/container-insights-cluster-view.png" alt-text="Screenshot of Container Insights cluster view." 
lightbox="media/monitor-kubernetes/container-insights-cluster-view.png"::: +**Network observability (east-west traffic)** +- For AKS clusters, use the [Network Observability add-on for AKS (preview)](https://aka.ms/NetObsAddonDoc) to monitor and observe access between services in the cluster (east-west traffic). -- Under **Monitoring**, you can select **Workbooks**, then **Subnet IP Usage** to see the IP allocation and assignment on each node for a selected time-range.+**Grafana dashboards**<br> +- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus. +- Use Grafana dashboards with [Prometheus metric values](../essentials/prometheus-metrics-scrape-default.md) related to disk such as `node_disk_io_time_seconds_total` and `windows_logical_disk_free_bytes` to monitor attached storage. - :::image type="content" source="media/monitor-kubernetes/monitoring-workbooks-subnet-ip-usage.png" alt-text="Screenshot of Container Insights workbooks." lightbox="media/monitor-kubernetes/monitoring-workbooks-subnet-ip-usage.png"::: +**Log Analytics** +- Select the [Containers category](../logs/queries.md?tabs=groupby#find-and-filter-queries) in the [queries dialog](../logs/queries.md#queries-dialog) for your Log Analytics workspace to access prebuilt log queries for your cluster, including the **Image inventory** log query that retrieves data from the [ContainerImageInventory](/azure/azure-monitor/reference/tables/containerimageinventory) table populated by Container insights. -For troubleshooting scenarios, you may need to access the AKS nodes directly for maintenance or immediate log collection. For security purposes, the AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. 
For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](../../aks/ssh.md). +**Troubleshooting**<br> +- For troubleshooting scenarios, you may need to access nodes directly for maintenance or immediate log collection. For security purposes, AKS nodes aren't exposed to the internet but you can use the `kubectl debug` command to SSH to the AKS nodes. For more information on this process, see [Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting](../../aks/ssh.md). -### Level 2 - Managed AKS components +**Cost analysis**<br> +- Configure [OpenCost](https://www.opencost.io), which is an open-source, vendor-neutral CNCF sandbox project for understanding your Kubernetes costs, to support your analysis of your cluster costs. It exports detailed costing data to Azure storage. +- Use data from OpenCost to break down relative usage of the cluster by different teams in your organization so that you can allocate the cost among them. +- Use data from OpenCost to ensure that the cluster is using the full capacity of its nodes by densely packing workloads, using fewer large nodes as opposed to many smaller nodes. -The managed AKS level includes the following components: ++### Monitor level 3 - Managed Kubernetes components ++The managed Kubernetes level includes the following components:
-- Under **Monitoring**, you can select **Metrics** to view the **Inflight Requests** counter, but you should refer to metrics in Prometheus for a complete view of the API server performance. This includes such values as request latency and workqueue processing time.-- To see critical metrics for the API server, see [Grafana Labs](https://grafana.com/grafan).+**Container insights**<br> +- Under **Monitoring**, select **Metrics** to view the **Inflight Requests** counter. +- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks). - :::image type="content" source="media/monitor-kubernetes/grafana-api-server.png" alt-text="Screenshot of dashboard for Grafana API server." lightbox="media/monitor-kubernetes/grafana-api-server.png"::: +**Grafana**<br> +- Use a dashboard such as [Kubernetes apiserver](https://grafana.com/grafana/dashboards/12006) for a complete view of the API server performance. This includes such values as request latency and workqueue processing time. -- Under **Reports**, use the **Kubelet** workbook to see the health and performance of each kubelet. For more information about these workbooks, see [Resource Monitoring workbooks](container-insights-reports.md#resource-monitoring-workbooks). For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](../../aks/kubelet-logs.md).+**Log Analytics**<br> +- Use [log queries with resource logs](../../aks/monitor-aks.md#sample-log-queries) to analyze [control plane logs](#collect-control-plane-logs-for-aks-clusters) generated by AKS components. +- Any configuration activities for AKS are logged in the Activity log. 
When you [send the Activity log to a Log Analytics workspace](#collect-activity-log-for-aks-clusters) you can analyze it with Log Analytics. For example, the following sample query returns records identifying a successful upgrade across all your AKS clusters. -### Resource logs + ```kusto + AzureActivity + | where CategoryValue == "Administrative" + | where OperationNameValue == "MICROSOFT.CONTAINERSERVICE/MANAGEDCLUSTERS/WRITE" + | extend properties=parse_json(Properties_d) + | where properties.message == "Upgrade Succeeded" + | order by TimeGenerated desc + ``` -Use [log queries with resource logs](container-insights-log-query.md#resource-logs) to analyze control plane logs generated by AKS components. -### Level 3 - Kubernetes objects and workloads +**Troubleshooting**<br> +- For troubleshooting scenarios, you can access kubelet logs using the process described at [Get kubelet logs from Azure Kubernetes Service (AKS) cluster nodes](../../aks/kubelet-logs.md). ++### Monitor level 4 - Kubernetes objects and workloads The Kubernetes objects and workloads level includes the following components: | Pods | Monitor status and resource utilization, including CPU and memory, of the pods running on your AKS cluster. | | Containers | Monitor resource utilization, including CPU and memory, of the containers running on your AKS cluster. | -Use existing views and reports in Container Insights to monitor containers and pods. --- Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers.-- Use the **Containers** view to see the health and performance for the containers.
For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md#analyze-nodes-controllers-and-container-health).+Following are common scenarios for monitoring your Kubernetes objects and workloads. - :::image type="content" source="media/monitor-kubernetes/container-insights-containers-view.png" alt-text="Screenshot of Container Insights containers view." lightbox="media/monitor-kubernetes/container-insights-containers-view.png"::: -- Under **Reports**, use the **Deployments** workbook to see deployment metrics. For more information, ee [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md). - :::image type="content" source="media/monitor-kubernetes/container-insights-deployments-workbook.png" alt-text="Screenshot of Container Insights deployments workbook." lightbox="media/monitor-kubernetes/container-insights-deployments-workbook.png"::: -#### Live data -In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderror), events and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md). -+**Container insights**<br> +- Use the **Nodes** and **Controllers** views to see the health and performance of the pods running on them and drill down to the health and performance of their containers. +- Use the **Containers** view to see the health and performance for the containers. For more information on analyzing container health and performance, see [Monitor your Kubernetes cluster performance with Container Insights](container-insights-analyze.md#analyze-nodes-controllers-and-container-health). +- Under **Reports**, use the **Deployments** workbook to see deployment metrics. 
For more information, see [Deployment & HPA metrics with Container Insights](container-insights-deployment-hpa-metrics.md). -### Level 4 - Applications +**Grafana dashboards**<br> +- Multiple [Kubernetes dashboards](https://grafana.com/grafana/dashboards/?search=kubernetes) are available that visualize the performance and health of your nodes based on data stored in Prometheus. -The application level includes the following component: +**Live data** +- In troubleshooting scenarios, Container Insights provides access to live AKS container logs (stdout/stderr), events and pod metrics. For more information about this feature, see [How to view Kubernetes logs, events, and pod metrics in real-time](container-insights-livedata-overview.md). -| Component | Monitoring requirements | -|:|:| -| Applications | Monitor microservice application deployments to identify application failures and latency issues, including information like request rates, response times, and exceptions. | +### Alerts for the platform engineer -Application Insights provides complete monitoring of applications running on AKS and other environments. If you have a Java application, you can provide monitoring without instrumenting your code by following [Zero instrumentation application monitoring for Kubernetes - Azure Monitor Application Insights](../app/kubernetes-codeless.md). +[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. If you have an existing [ITSM solution](../alerts/itsmc-overview.md) for alerting, you can [integrate it with Azure Monitor](../alerts/itsmc-overview.md). You can also [export workspace data](../logs/logs-data-export.md) to send data from your Log Analytics workspace to another location that supports your current alerting solution.
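As an example of the kind of query that can back such an alert, the following Log Analytics sketch flags pods with frequent container restarts. It assumes the `KubePodInventory` table populated by Container insights; the one-hour window and restart threshold are illustrative, so tune them for your environment.

```kusto
// Pods whose containers restarted more than 5 times in the last hour (illustrative threshold).
KubePodInventory
| where TimeGenerated > ago(1h)
| summarize Restarts = max(ContainerRestartCount) by ClusterName, Namespace, Name
| where Restarts > 5
| order by Restarts desc
```

A log alert rule built on a query like this notifies the platform engineer only when the restart count crosses the threshold, rather than on every individual restart.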
-If you want complete monitoring, you should configure code-based monitoring depending on your application: +#### Alert types +The following table describes the different types of custom alert rules that you can create based on the data collected by the services described above. -- [ASP.NET applications](../app/asp-net.md)-- [ASP.NET Core applications](../app/asp-net-core.md)-- [.NET Console applications](../app/console.md)-- [Java](../app/opentelemetry-enable.md?tabs=java)-- [Node.js](../app/nodejs.md)-- [Python](../app/opencensus-python.md)-- [Other platforms](../app/app-insights-overview.md#supported-languages)+| Alert type | Description | +|:|:| +| Prometheus alerts | [Prometheus alerts](../alerts/prometheus-alerts.md) are written in Prometheus Query Language (PromQL) and applied on Prometheus metrics stored in [Azure Monitor managed service for Prometheus](../essentials/prometheus-metrics-overview.md). Recommended alerts already include the most common Prometheus alerts, and you can [create additional alert rules](../essentials/prometheus-rule-groups.md) as required. | +| Metric alert rules | Metric alert rules use the same metric values as the Metrics explorer. In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. Metric alert rules can be useful to alert on AKS performance using any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). | +| Log alert rules | Use log alert rules to generate an alert from the results of a log query. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). | -For more information, see [What is Application Insights?](../app/app-insights-overview.md).
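For example, a log alert rule that fires when a node stops reporting ready might use a query like the following sketch. It assumes the `KubeNodeInventory` table populated by Container insights; verify the schema in your workspace before creating the rule.

```kusto
// Latest reported status per node over the last 15 minutes; keep nodes not in Ready state.
KubeNodeInventory
| where TimeGenerated > ago(15m)
| summarize arg_max(TimeGenerated, Status) by ClusterName, Computer
| where Status !has "Ready"
```

Because `arg_max` keeps only each node's most recent record, the alert fires on the current state of a node rather than on stale records earlier in the window.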
+#### Recommended alerts +Start with a set of recommended Prometheus alerts from [Metric alert rules in Container insights (preview)](container-insights-metric-alerts.md#prometheus-alert-rules), which include the most common alerting conditions for a Kubernetes cluster. You can add more alert rules later as you identify additional alerting conditions. -### Level 5 - External components +## Developer -The components external to AKS include the following: +In addition to developing the application, the *developer* maintains the application running on the cluster. They're responsible for application-specific traffic, including application performance and failures, and maintain the reliability of the application according to company-defined SLAs. -| Component | Monitoring requirements | -|:|:| -| Service Mesh, Ingress, Egress | Metrics based on component. | -| Database and work queues | Metrics based on component. | -Monitor external components such as Service Mesh, Ingress, Egress with Prometheus and Grafana, or other proprietary tools. Monitor databases and other Azure resources using other features of Azure Monitor. +### Azure services for developer -## Analyze metric data with the Metrics explorer +The following table lists the services that are commonly used by the developer to monitor the health and performance of the application running on the cluster. -Use the **Metrics** explorer to perform custom analysis of metric data collected for your containers. It allows you plot charts, visually correlate trends, and investigate spikes and dips in your metrics values. You can create metrics alert to proactively notify you when a metric value crosses a threshold and pin charts to dashboards for use by different members of your organization. -For more information, see [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md).
For a list of the platform metrics collected for AKS, see [Monitoring AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics). When Container Insights is enabled for a cluster, [addition metric values](container-insights-update-metrics.md) are available. -## Analyze log data with Log Analytics -Select **Logs** to use the Log Analytics tool to analyze resource logs or dig deeper into data used to create the views in Container Insights. Log Analytics allows you to perform custom analysis of your log data. +| Service | Description | +|:|:| +| [Application insights](../app/app-insights-overview.md) | Feature of Azure Monitor that provides application performance monitoring (APM) to monitor applications running on your Kubernetes cluster from development, through test, and into production. Quickly identify and mitigate latency and reliability issues using distributed traces. Supports [OpenTelemetry](../app/opentelemetry-overview.md#opentelemetry) for vendor-neutral instrumentation. | -For more information on Log Analytics and to get started with it, see: -- [How to query logs from Container Insights](container-insights-log-query.md)-- [Using queries in Azure Monitor Log Analytics](../logs/queries.md)-- [Monitoring AKS data reference logs](../../aks/monitor-aks-reference.md#azure-monitor-logs-tables)-- [Log Analytics tutorial](../logs/log-analytics-tutorial.md) -You can also use log queries to analyze resource logs from AKS. For a list of the log categories available, see [AKS data reference resource logs](../../aks/monitor-aks-reference.md#resource-logs). You must create a diagnostic setting to collect each category as described in [Configure monitoring](#configure-monitoring) before the data can be collected. -## Alerts -[Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. 
There are no preconfigured alert rules for AKS clusters, but you can create your own based on data collected by Container Insights. +See [Data Collection Basics of Azure Monitor Application Insights](../app/opentelemetry-overview.md) for options on configuring data collection from the application running on your cluster and decision criteria on the best method for your particular requirements. -> [!IMPORTANT] -> Most alert rules have a cost dependent on the type of rule, how many dimensions it includes, and how frequently it runs. Refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before creating any alert rules. +### Monitor level 5 - Application -### Choose an alert type +Following are common scenarios for monitoring your application. -The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md). The type of alert rule that you create for a particular scenario will depend on where the data is located that you want to set an alert for. -You may have cases where data for a particular alerting scenario is available in both **Metrics** and **Logs**, and you need to determine which rule type to use. It's typically the best strategy to use metric alerts instead of log alerts when possible, because metric alerts are more responsive and stateful. You can create a metric alert on any values you can analyze in the Metrics explorer. If the logic for your alert rule requires data in **Logs**, or if it requires more complex logic, then you can use a log query alert rule. -For example, if you want an alert when an application workload is consuming excessive CPU, you can create a metric alert using the CPU metric. If you need an alert when a particular message is found in a control plane log, then you'll require a log alert. -### Metric alert rules -Metric alert rules use the same metric values as the Metrics explorer. 
In fact, you can create an alert rule directly from the metrics explorer with the data you're currently analyzing. You can use any of the values in [AKS data reference metrics](../../aks/monitor-aks-reference.md#metrics) for metric alert rules. +**Application performance**<br> +- Use the **Performance** view in Application insights to view the performance of different operations in your application. +- Use [Profiler](../profiler/profiler-overview.md) to capture and view performance traces for your application. +- Use [Application Map](../app/app-map.md) to view the dependencies between your application components and identify any bottlenecks. +- Enable [distributed tracing](../app/distributed-tracing-telemetry-correlation.md), which provides a performance profiler that works like call stacks for cloud and microservices architectures, to gain better observability into the interaction between services. -Container Insights includes a feature that creates a recommended set of metric alert rules for your AKS cluster. This feature creates new metric values used by the alert rules that you can also use in the Metrics explorer. For more information, see [Recommended metric alerts (preview) from Container Insights](container-insights-metric-alerts.md). +**Application failures**<br> +- Use the **Failures** tab of Application insights to view the number of failed requests and the most common exceptions. +- Ensure that alerts for [failure anomalies](../alerts/proactive-failure-diagnostics.md) identified with [smart detection](../alerts/proactive-diagnostics.md) are configured properly. -### Log alert rules +**Health monitoring**<br> +- Create an [Availability test](../app/availability-overview.md) in Application insights to create a recurring test to monitor the availability and responsiveness of your application. +- Use the [SLA report](../app/sla-report.md) to calculate and report SLA for web tests. 
+- Use [annotations](../app/annotations.md) to identify when a new build is deployed so that you can visually inspect any change in performance after the update. -Use log alert rules to generate an alert from the results of a log query. This may be data collected by Container Insights or from AKS resource logs. For more information, see [How to create log alerts from Container Insights](container-insights-log-alerts.md) and [How to query logs from Container Insights](container-insights-log-query.md). +**Application logs**<br> +- Container insights sends stdout/stderr logs to a Log Analytics workspace. See [Resource logs](../../aks/monitor-aks-reference.md#resource-logs) for a description of the different logs and [Kubernetes Services](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services) for a list of the tables each is sent to. -### Virtual machine alerts +**Service mesh** -AKS relies on a Virtual Machine Scale Set that must be healthy to run AKS workloads. You can alert on critical metrics such as CPU, memory, and storage for the virtual machines using the guidance at [Monitor virtual machines with Azure Monitor: Alerts](../vm/monitor-virtual-machine-alerts.md). +- For AKS clusters, deploy the [Istio-based service mesh add-on](../../aks/istio-about.md) which provides observability to your microservices architecture. [Istio](https://istio.io/) is an open-source service mesh that layers transparently onto existing distributed applications. The add-on assists in the deployment and management of Istio for AKS. -### Prometheus alerts +## See also -You can configure Prometheus alerts to cover scenarios where Azure Monitor either doesn't have the data required for an alerting condition or the alerting may not be responsive enough. For example, Azure Monitor doesn't collect critical information for the API server. 
You can create a log query alert using the data from the kube-apiserver resource log category, but it can take up to several minutes before you receive an alert, which may not be sufficient for your requirements. In this case, we recommend configuring Prometheus alerts. +- See [Monitoring AKS](../../aks/monitor-aks.md) for guidance on monitoring specific to Azure Kubernetes Service (AKS). -## Next steps -- For more information about AKS metrics, logs, and other important values, see [Monitoring AKS data reference](../../aks/monitor-aks-reference.md). |
azure-monitor | Prometheus Metrics Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-troubleshoot.md | Follow the steps in this article to determine the cause of Prometheus metrics no Replica pod scrapes metrics from `kube-state-metrics` and custom scrape targets in the `ama-metrics-prometheus-config` configmap. DaemonSet pods scrape metrics from the following targets on their respective node: `kubelet`, `cAdvisor`, `node-exporter`, and custom scrape targets in the `ama-metrics-prometheus-config-node` configmap. The pod whose logs and Prometheus UI you want to view depends on which scrape target you're investigating. +## Troubleshoot using PowerShell script ++If you encounter an error while you attempt to enable monitoring for your AKS cluster, follow the instructions [here](https://github.com/Azure/prometheus-collector/tree/main/internal/scripts/troubleshoot) to run the troubleshooting script. This script performs a basic diagnosis of any configuration issues on your cluster, and you can attach the generated files when creating a support request for faster resolution of your support case. + ## Metrics Throttling In the Azure portal, navigate to your Azure Monitor Workspace. Go to `Metrics` and verify that the metrics `Active Time Series % Utilization` and `Events Per Minute Ingested % Utilization` are below 100%. If you see metrics missed, you can first check if the ingestion limits are being exceeded. Refer to [service quotas and limits](../service-limits.md#prometheus-metrics) for default quotas and also to understand what can be increased based on your usage. You can request a quota increase for Azure Monitor workspaces using the `Support Request` menu for the Azure Monitor workspace.
Ensure you include the ID, internal ID, and Location/Region for the Azure Monitor workspace in the support request, which you can find in the `Properties` menu for the Azure Monitor workspace in the Azure portal. - ## Next steps - [Check considerations for collecting metrics at high scale](prometheus-metrics-scrape-scale.md). |
azure-monitor | Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md | The following table provides unique requirements for each destination including | Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.| | Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details. +> [!CAUTION] +> If you want to store diagnostic logs in a Log Analytics workspace, there are two points to consider to avoid seeing duplicate data in Application Insights: +> * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on. +> * The Application Insights user can't have access to both workspaces. Set the Log Analytics access control mode to **Requires workspace permissions**. Through Azure role-based access control, ensure the user only has access to the Log Analytics workspace the Application Insights resource is based on. +> +> These steps are necessary because Application Insights accesses telemetry across Application Insights resources, including Log Analytics workspaces, to provide complete end-to-end transaction operations and accurate application maps.
Because diagnostic logs use the same table names, duplicate telemetry can be displayed if the user has access to multiple resources that contain the same data. + ## Controlling costs There is a cost for collecting data in a Log Analytics workspace, so you should only collect the categories you require for each service. The data volume for resource logs varies significantly between services, |
azure-monitor | Metrics Charts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-charts.md | Title: Advanced features of Metrics Explorer -description: Metrics are a series of measured values and counts that Azure collects. Learn to use Metrics Explorer to investigate the health and usage of resources. + Title: Advanced features of Metrics Explorer in Azure Monitor +description: Learn how to use Metrics Explorer to investigate the health and usage of resources. -> [!NOTE] -> This article assumes you're familiar with basic features of the Metrics Explorer feature of Azure Monitor. If you're a new user and want to learn how to create your first metric chart, see [Getting started with the Metrics Explorer](./metrics-getting-started.md). +In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called *platform*) or custom. -In Azure Monitor, [metrics](data-platform-metrics.md) are a series of measured values and counts that are collected and stored over time. Metrics can be standard (also called "platform") or custom. +The Azure platform provides standard metrics. These metrics reflect the health and usage statistics of your Azure resources. -Standard metrics are provided by the Azure platform. They reflect the health and usage statistics of your Azure resources. +This article describes advanced features of Metrics Explorer in Azure Monitor. It assumes that you're familiar with basic features of Metrics Explorer. If you're a new user and want to learn how to create your first metric chart, see [Get started with Metrics Explorer](./metrics-getting-started.md). ## Resource scope picker -The resource scope picker allows you to view metrics across single resources and multiple resources. The following sections explain how to use the resource scope picker. 
+Use the resource scope picker to view metrics across single resources and multiple resources. ### Select a single resource -In the Azure portal, select **Metrics** from the **Monitor** menu or from the **Monitoring** section of a resource's menu. Then choose **Select a scope** to open the scope picker. +1. In the Azure portal, select **Metrics** from the **Monitor** menu or from the **Monitoring** section of a resource's menu. -Use the scope picker to select the resources whose metrics you want to see. If you opened the Azure Metrics Explorer from a resource's menu, the scope should be populated. +1. Choose **Select a scope**. -![Screenshot showing how to open the resource scope picker.](./media/metrics-charts/scope-picker.png) + :::image source="./media/metrics-charts/scope-picker.png" alt-text="Screenshot that shows the button that opens the resource scope picker." lightbox="./media/metrics-charts/scope-picker.png"::: -For some resources, you can view only one resource's metrics at a time. In the **Resource types** menu, these resources are in the **All resource types** section. +1. Use the scope picker to select the resources whose metrics you want to see. If you opened Metrics Explorer from a resource's menu, the scope should be populated. -![Screenshot showing a single resource.](./media/metrics-charts/single-resource-scope.png) + For some resources, you can view only one resource's metrics at a time. On the **Resource types** menu, these resources are in the **All resource types** section. -After selecting a resource, you see all subscriptions and resource groups that contain that resource. + :::image source="./media/metrics-charts/single-resource-scope.png" alt-text="Screenshot that shows available resources." lightbox="./media/metrics-charts/single-resource-scope.png"::: -![Screenshot showing available resources.](./media/metrics-charts/available-single-resource.png) +1. Select a resource. 
All subscriptions and resource groups that contain that resource appear. -> [!TIP] -> If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**. + :::image source="./media/metrics-charts/available-single-resource.png" alt-text="Screenshot that shows a single resource." lightbox="./media/metrics-charts/available-single-resource.png"::: -When you're satisfied with your selection, select **Apply**. + If you want the capability to view the metrics for multiple resources at the same time, or to view metrics across a subscription or resource group, select **Upvote**. -### Select multiple resources +1. When you're satisfied with your selection, select **Apply**. -Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu. +### Select multiple resources -For more information, see [Select multiple resources](./metrics-dynamic-scope.md#select-multiple-resources). +Some resource types can query for metrics over multiple resources. The resources must be within the same subscription and location. Find these resource types at the top of the **Resource types** menu. For more information, see [Select multiple resources](./metrics-dynamic-scope.md#select-multiple-resources). -![Screenshot showing cross-resource types.](./media/metrics-charts/multi-resource-scope.png) For types that are compatible with multiple resources, you can query for metrics across a subscription or multiple resource groups. For more information, see [Select a resource group or subscription](./metrics-dynamic-scope.md#select-a-resource-group-or-subscription). ## Multiple metric lines and charts -In the Azure Metrics Explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. 
This functionality allows you to: +In Metrics Explorer, you can create charts that plot multiple metric lines or show multiple metric charts at the same time. This functionality allows you to: - Correlate related metrics on the same graph to see how one value relates to another. - Display metrics that use different units of measure in close proximity. - Visually aggregate and compare metrics from multiple resources. -For example, imagine you have five storage accounts, and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time. +For example, imagine that you have five storage accounts, and you want to know how much space they consume together. You can create a stacked area chart that shows the individual values and the sum of all the values at points in time. ### Multiple metrics on the same chart To view multiple metrics on the same chart, first [create a new chart](./metrics-getting-started.md#create-your-first-metric-chart). Then select **Add metric**. Repeat this step to add another metric on the same chart. -> [!NOTE] -> Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. -> -> In these cases, consider using multiple charts instead. In Metrics Explorer, select **New chart** to create a new chart. -![Screenshot showing multiple metrics.](./media/metrics-charts/multiple-metrics-chart.png) +Typically, your charts shouldn't mix metrics that use different units of measure. For example, avoid mixing one metric that uses milliseconds with another that uses kilobytes. Also avoid mixing metrics whose scales differ significantly. In these cases, consider using multiple charts instead. 
### Multiple charts To create another chart that uses a different metric, select **New chart**. -To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then choose **Move up**, **Move down**, or **Delete**. +To reorder or delete multiple charts, select the ellipsis (**...**) button to open the chart menu. Then select **Move up**, **Move down**, or **Delete**. -![Screenshot showing multiple charts.](./media/metrics-charts/multiple-charts.png) ## Time range controls -In addition to changing the time range using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can also pan and zoom using the controls in the chart area. +In addition to changing the time range by using the [time picker panel](metrics-getting-started.md#select-a-time-range), you can pan and zoom by using the controls in the chart area. ### Pan -To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half the chart's time span. For example, if you're viewing the past 24 hours, clicking on the left arrow causes the time range to shift to span a day and a half to 12 hours ago. +To pan, select the left and right arrows at the edge of the chart. The arrow control moves the selected time range back and forward by one half of the chart's time span. For example, if you're viewing the past 24 hours, selecting the left arrow causes the time range to shift to span a day and a half to 12 hours ago. -Most metrics support 93 days of retention but only let you view 30 days at a time. Using the pan controls, you look at the past 30 days and then easily walk back 15 days at a time to view the rest of the retention period. +Most metrics support 93 days of retention but let you view only 30 days at a time. By using the pan controls, you look at the past 30 days and then easily go back 15 days at a time to view the rest of the retention period. 
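The pan arithmetic above — each arrow press shifts the selected window by one half of its span — can be sketched in a few lines (the helper name is illustrative, not part of any Azure SDK):

```python
from datetime import datetime, timedelta

def pan_left(start: datetime, end: datetime) -> tuple[datetime, datetime]:
    """Shift the selected time range back by half of its span,
    mirroring the Metrics Explorer left-arrow pan control."""
    half = (end - start) / 2
    return start - half, end - half

# Viewing the past 24 hours and pressing the left arrow once:
end = datetime(2023, 6, 1, 12, 0)
start = end - timedelta(hours=24)
new_start, new_end = pan_left(start, end)
# The window now runs from 36 hours ago to 12 hours ago,
# i.e. "a day and a half to 12 hours ago".
```

The window keeps its 24-hour width; only its position moves.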
-![Animated gif showing the left and right pan controls.](./media/metrics-charts/metrics-pan-controls.gif) ### Zoom -You can select and drag on the chart to zoom into a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to Automatic, zooming selects a smaller time grain. The new time range applies to all charts in Metrics. +You can select and drag on the chart to zoom in to a section of a chart. Zooming updates the chart's time range to span your selection. If the time grain is set to **Automatic**, zooming selects a smaller time grain. The new time range applies to all charts in Metrics Explorer. -![Animated gif showing the metrics zoom feature.](./media/metrics-charts/metrics-zoom-control.gif) ## Aggregation When you add a metric to a chart, Metrics Explorer applies a default aggregation Before you use different aggregations on a chart, you should understand how Metrics Explorer handles them. Metrics are a series of measurements (or "metric values") that are captured over a time period. When you plot a chart, the values of the selected metric are separately aggregated over the *time granularity*. -You select the size of the time grain by using Metrics Explorer's time picker panel. If you don't explicitly select the time grain, the currently selected time range is used by default. After the time grain is determined, the metric values that were captured during each time grain are aggregated on the chart, one data point per time grain. +You select the size of the time grain by using the time picker panel in Metrics Explorer. If you don't explicitly select the time grain, Metrics Explorer uses the currently selected time range by default. After Metrics Explorer determines the time grain, the metric values that it captures during each time grain are aggregated on the chart, one data point per time grain. ++For example, suppose a chart shows the *Server response time* metric. 
It uses the average aggregation over the time span of the last 24 hours. -For example, suppose a chart shows the *Server response time* metric. It uses the *average* aggregation over time span of the *last 24 hours*. In this example: +In this example: -- If the time granularity is set to 30 minutes, the chart is drawn from 48 aggregated data points. That is, 2 data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the *average* of all captured response times for server requests that occurred during each of the relevant 30-minute time periods.-- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, 4 data points per hour for 24 hours.+- If you set the time granularity to 30 minutes, Metrics Explorer draws the chart from 48 aggregated data points. That is, it uses two data points per hour for 24 hours. The line chart connects 48 dots in the chart plot area. Each data point represents the average of all captured response times for server requests that occurred during each of the relevant 30-minute time periods. +- If you switch the time granularity to 15 minutes, you get 96 aggregated data points. That is, you get four data points per hour for 24 hours. Metrics Explorer has five aggregation types:-- **Sum**: The sum of all values captured during the aggregation interval. The *sum* aggregation is sometimes called the *total* aggregation.++- **Sum**: The sum of all values captured during the aggregation interval. The sum aggregation is sometimes called the *total* aggregation. - **Count**: The number of measurements captured during the aggregation interval.+ When the metric is always captured with the value of 1, the count aggregation is equal to the sum aggregation. This scenario is common when the metric tracks the count of distinct events and each measurement represents one event. The code emits a metric record every time a new request arrives. 
- **Average**: The average of the metric values captured during the aggregation interval. - **Min**: The smallest value captured during the aggregation interval. - **Max**: The largest value captured during the aggregation interval. - :::image type="content" source="media/metrics-charts/aggregations.png" alt-text="A screenshot showing the aggregation dropdown." lightbox="media/metrics-charts/aggregations.png"::: -Metrics Explorer hides the aggregations that are irrelevant and can't be used. +Metrics Explorer hides the aggregations that are irrelevant and can't be used. For a deeper discussion of how metric aggregation works, see [Azure Monitor metrics aggregation and display explained](metrics-aggregation-explained.md). ## Filters -You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, a chart line is displayed for only successful or only failed transactions. +You can apply filters to charts whose metrics have dimensions. For example, imagine a *Transaction count* metric that has a *Response type* dimension. This dimension indicates whether the response from transactions succeeded or failed. If you filter on this dimension, Metrics Explorer displays a chart line for only successful or only failed transactions. ### Add a filter 1. Above the chart, select **Add filter**. -1. Select a dimension from the **Property** dropdown to filter. +1. Select a dimension from the **Property** dropdown list. ++ :::image type="content" source="./media/metrics-charts/filter-property.png" alt-text="Screenshot that shows the dropdown list for filter properties." lightbox="./media/metrics-charts/filter-property.png"::: - :::image type="content" source="./media/metrics-charts/filter-property.png" alt-text="Screenshot that shows the filter properties dropdown." 
lightbox="./media/metrics-charts/filter-property.png"::: +1. Select the operator that you want to apply against the dimension (property). The default operator is equals (**=**). + + :::image type="content" source="./media/metrics-charts/filter-operator.png" alt-text="Screenshot that shows the operator that you can use with the filter." lightbox="./media/metrics-charts/filter-operator.png"::: -1. Select the operator you want to apply against the dimension (property). The default operator is = (equals) - :::image type="content" source="./media/metrics-charts/filter-operator.png" alt-text="Screenshot that shows the operator you can use with the filter." lightbox="./media/metrics-charts/filter-operator.png"::: +1. Select which dimension values you want to apply to the filter when you're plotting the chart. This example shows filtering out the successful storage transactions. -1. Select which dimension values you want to apply to the filter when plotting the chart. This example shows filtering out the successful storage transactions. - :::image type="content" source="./media/metrics-charts/filter-values.png" alt-text="Screenshot that shows the filter values dropdown." lightbox="./media/metrics-charts/filter-values.png"::: + :::image type="content" source="./media/metrics-charts/filter-values.png" alt-text="Screenshot that shows the dropdown list for filter values." lightbox="./media/metrics-charts/filter-values.png"::: -1. After selecting the filter values, click away from the filter selector to close it. The chart shows how many storage transactions have failed: - :::image type="content" source="./media/metrics-charts/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions." lightbox="./media/metrics-charts/filtered-chart.png"::: +1. After you select the filter values, click away from the filter selector to close it. The chart shows how many storage transactions have failed. 
++ :::image type="content" source="./media/metrics-charts/filtered-chart.png" alt-text="Screenshot that shows the successful filtered storage transactions." lightbox="./media/metrics-charts/filtered-chart.png"::: 1. Repeat these steps to apply multiple filters to the same charts. You can split a metric by dimension to visualize how different segments of the m ### Apply splitting 1. Above the chart, select **Apply splitting**.- -1. Choose dimensions on which to segment your chart: - :::image type="content" source="./media/metrics-charts/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart." lightbox="./media/metrics-charts/apply-splitting.png"::: - The chart shows multiple lines, one for each dimension segment: - :::image type="content" source="./media/metrics-charts/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/metrics-charts/segment-dimension.png"::: +1. Choose dimensions on which to segment your chart. ++ :::image type="content" source="./media/metrics-charts/apply-splitting.png" alt-text="Screenshot that shows the selected dimension on which to segment the chart." lightbox="./media/metrics-charts/apply-splitting.png"::: ++ The chart shows multiple lines, one for each dimension segment. ++ :::image type="content" source="./media/metrics-charts/segment-dimension.png" alt-text="Screenshot that shows multiple lines, one for each segment of dimension." lightbox="./media/metrics-charts/segment-dimension.png"::: ++1. Choose a limit on the number of values to be displayed after you split by the selected dimension. The default limit is 10, as shown in the preceding chart. The range of the limit is 1 to 50. ++ :::image type="content" source="./media/metrics-charts/segment-dimension-limit.png" alt-text="Screenshot that shows the split limit, which restricts the number of values after splitting." 
lightbox="./media/metrics-charts/segment-dimension-limit.png"::: +1. Choose the sort order on segments: **Descending** (default) or **Ascending**. -1. Choose a limit on the number of values to be displayed after splitting by selected dimension. The default limit is 10 as shown in the above chart. The range of limit is 1 - 50. - :::image type="content" source="./media/metrics-charts/segment-dimension-limit.png" alt-text="Screenshot that shows split limit, which restricts the number of values after splitting." lightbox="./media/metrics-charts/segment-dimension-limit.png"::: + :::image type="content" source="./media/metrics-charts/segment-dimension-sort.png" alt-text="Screenshot that shows the sort order on split values." lightbox="./media/metrics-charts/segment-dimension-sort.png"::: -1. Choose the sort order on segments: **Ascending** or **Descending**. The default selection is **Descending**. +1. Segment by multiple segments by selecting multiple dimensions from the **Values** dropdown list. The legend shows a comma-separated list of dimension values for each segment. - - :::image type="content" source="./media/metrics-charts/segment-dimension-sort.png" alt-text="Screenshot that shows sort order on split values." lightbox="./media/metrics-charts/segment-dimension-sort.png"::: + :::image type="content" source="./media/metrics-charts/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/metrics-charts/segment-dimension-multiple.png"::: -1. Segment by multiple segments by selecting multiple dimensions from the values dropdown. The legends shows a comma-separated list of dimension values for each segment - :::image type="content" source="./media/metrics-charts/segment-dimension-multiple.png" alt-text="Screenshot that shows multiple segments selected, and the corresponding chart." lightbox="./media/metrics-charts/segment-dimension-multiple.png"::: - 1. 
Click away from the grouping selector to close it. - > [!TIP] - > To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension. +> [!TIP] +> To hide segments that are irrelevant for your scenario and to make your charts easier to read, use both filtering and splitting on the same dimension. ## Locking the range of the y-axis Locking the range of the value (y) axis becomes important in charts that show small fluctuations of large values. -For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small numeric value fluctuation would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent. +For example, a drop in the volume of successful requests from 99.99 percent to 99.5 percent might represent a significant reduction in the quality of service. Noticing a small fluctuation in a numeric value would be difficult or even impossible if you're using the default chart settings. In this case, you could lock the lowest boundary of the chart to 99 percent to make a small drop more apparent. ++Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot. ++To control the y-axis range: -Another example is a fluctuation in the available memory. In this scenario, the value technically never reaches 0. Fixing the range to a higher value might make drops in available memory easier to spot. +1. Open the chart menu by selecting the ellipsis (**...**). Then select **Chart settings** to access advanced chart settings. -1. To control the y-axis range, open the chart menu **...**. 
Then select **Chart settings** to access advanced chart settings. - :::image source="./media/metrics-charts/select-chart-settings.png" alt-text="Screenshot that highlights the chart settings selection." lightbox="./media/metrics-charts/select-chart-settings.png"::: + :::image source="./media/metrics-charts/select-chart-settings.png" alt-text="Screenshot that shows the menu option for chart settings." lightbox="./media/metrics-charts/select-chart-settings.png"::: 1. Modify the values in the **Y-axis range** section, or select **Auto** to revert to the default values.- :::image type="content" source="./media/metrics-charts/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/metrics-charts/chart-settings.png"::: - -> [!NOTE] -> If you lock the boundaries of the y-axis for charts that tracks count, sum, min, or max aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults. -> -> A fixed time granularity is chosen because chart values change when the time granularity is automatically modified when a user resizes a browser window or changes screen resolution. The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range. + :::image type="content" source="./media/metrics-charts/chart-settings.png" alt-text="Screenshot that shows the Y-axis range section." lightbox="./media/metrics-charts/chart-settings.png"::: ++If you lock the boundaries of the y-axis for a chart that tracks count, sum, minimum, or maximum aggregations over a period of time, specify a fixed time granularity. Don't rely on the automatic defaults. ++You choose a fixed time granularity because chart values change when the time granularity is automatically modified after a user resizes a browser window or changes screen resolution. 
The resulting change in time granularity affects the appearance of the chart, invalidating the selection of the y-axis range. ## Line colors -Chart lines are automatically assigned a color from a default palette. +Chart lines are automatically assigned a color from a default palette. To change the color of a chart line, select the colored bar in the legend that corresponds to the line on the chart. Use the color picker to select the line color. Customized colors are preserved when you pin the chart to a dashboard. The following section shows how to pin a chart. ## Saving to dashboards or workbooks -After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information. +After you configure a chart, you can add it to a dashboard or workbook. By adding a chart to a dashboard or workbook, you can make it accessible to your team. You can also gain insights by viewing it in the context of other monitoring information. -- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** and then **Pin to dashboard**.-- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** and then **Save to workbook**.+- To pin a configured chart to a dashboard, in the upper-right corner of the chart, select **Save to dashboard** > **Pin to dashboard**. +- To save a configured chart to a workbook, in the upper-right corner of the chart, select **Save to dashboard** > **Save to workbook**. ## Alert rules -You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the alert rule creation pane. 
+You can use your visualization criteria to create a metric-based alert rule. The new alert rule includes your chart's target resource, metric, splitting, and filter dimensions. You can modify these settings by using the **Create an alert rule** pane. -To create an alert rule, -1. Select **New alert rule** in the upper-right corner of the chart +To create an alert rule: -1. On the **Condition** tab, the **Signal name** is defaulted to the metric from your chart. You can choose a different metric. +1. Select **New alert rule** in the upper-right corner of the chart. -1. Enter a **Threshold value**. The threshold value is the value that triggers the alert. The Preview chart shows the threshold value as a horizontal line over the metric values. + :::image source="./media/metrics-charts/new-alert.png" alt-text="Screenshot that shows the button for creating a new alert rule." lightbox="./media/metrics-charts/new-alert.png"::: -1. Select the **Details** tab. +1. Select the **Condition** tab. The **Signal name** entry defaults to the metric from your chart. You can choose a different metric. -1. On the **Details** tab, enter a **Name** and **Description** for the alert rule. +1. Enter a number for **Threshold value**. The threshold value is the value that triggers the alert. The **Preview** chart shows the threshold value as a horizontal line over the metric values. When you're ready, select the **Details** tab. -1. Select a **Severity** level for the alert rule. Severities include Critical, Error Warning, Informational, and Verbose. + :::image source="./media/metrics-charts/alert-rule-condition.png" alt-text="Screenshot that shows the Condition tab on the pane for creating an alert rule." lightbox="./media/metrics-charts/alert-rule-condition.png"::: -1. Select **Review + create** to review the alert rule, then select **Create** to create the alert rule. +1. Enter **Name** and **Description** values for the alert rule. 
-For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md). +1. Select a **Severity** level for the alert rule. Severities include **Critical**, **Error**, **Warning**, **Informational**, and **Verbose**. -## Correlate metrics to logs +1. Select **Review + create** to review the alert rule. -**Drill into Logs** helps you diagnose the root cause of anomalies in your metrics chart. Drilling into logs allows you to correlate spikes in your metrics chart to logs and queries. + :::image source="./media/metrics-charts/alert-rule-details.png" alt-text="Screenshot that shows the Details tab on the pane for creating an alert rule." lightbox="./media/metrics-charts/alert-rule-details.png"::: -The following table summarizes the types of logs and queries provided: +1. Select **Create** to create the alert rule. -| Term | Definition | +For more information, see [Create, view, and manage metric alerts](../alerts/alerts-metric.md). ++## Correlating metrics to logs ++In Metrics Explorer, **Drill into Logs** helps you diagnose the root cause of anomalies in your metric chart. Drilling into logs allows you to correlate spikes in your metric chart to the following types of logs and queries: ++| Term | Definition | ||-|-| Activity logs | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane) in addition to updates on Service Health events. Use the Activity log to determine the what, who, and when for any write operations (PUT, POST, DELETE) taken on the resources in your subscription. There's a single Activity log for each Azure subscription. | -| Diagnostic log | Provides insight into operations that were performed within an Azure resource (the data plane). For example, getting a secret from a Key Vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource.
| -| Recommended log | Scenario-based queries that you can use to investigate anomalies in Metrics Explorer. | +| Activity log | Provides insight into the operations on each Azure resource in the subscription from the outside (the management plane), in addition to updates on Azure Service Health events. Use the activity log to determine the what, who, and when for any write operations (`PUT`, `POST`, or `DELETE`) taken on the resources in your subscription. There's a single activity log for each Azure subscription. | +| Diagnostic log | Provides insight into operations that you performed within an Azure resource (the data plane). Examples include getting a secret from a key vault or making a request to a database. The content of resource logs varies by the Azure service and resource type. You must enable logs for the resource. | +| Recommended log | Provides scenario-based queries that you can use to investigate anomalies in Metrics Explorer. | -Currently, Drill into Logs is available for select resource providers. The following resource providers offer the complete Drill into Logs experience: +Currently, **Drill into Logs** is available for select resource providers. The following resource providers offer the complete **Drill into Logs** experience: - Application Insights - Autoscale-- App Services-- Storage+- Azure App Service +- Azure Storage ++To diagnose a spike in failed requests: ++1. Select **Drill into Logs**. - :::image source="./media/metrics-charts/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures in app insights metrics pane." lightbox="./media/metrics-charts/drill-into-log-ai.png"::: + :::image source="./media/metrics-charts/drill-into-log-ai.png" alt-text="Screenshot that shows a spike in failures on an Application Insights metrics pane." lightbox="./media/metrics-charts/drill-into-log-ai.png"::: -1. To diagnose the spike in failed requests, select **Drill into Logs**. +1. In the dropdown list, select **Failures**. 
- ![Screenshot shows the Drill into Logs dropdown menu.](./media/metrics-charts/drill-into-logs-dropdown.png) + :::image source="./media/metrics-charts/drill-into-logs-dropdown.png" alt-text="Screenshot that shows the dropdown menu for drilling into logs." lightbox="./media/metrics-charts/drill-into-logs-dropdown.png"::: -1. Select **Failures** to open a custom failure pane that provides you with the failed operations, top exceptions types, and dependencies. +1. On the custom failure pane, check for failed operations, top exception types, and failed dependencies. - ![Screenshot of app insights failure pane.](./media/metrics-charts/ai-failure-blade.png) + :::image source="./media/metrics-charts/ai-failure-blade.png" alt-text="Screenshot of the Application Insights failure pane." lightbox="./media/metrics-charts/ai-failure-blade.png"::: ## Next steps -To create actionable dashboards by using metrics, see [Creating custom KPI dashboards](../app/tutorial-app-dashboards.md). +To create actionable dashboards by using metrics, see [Create custom KPI dashboards](../app/tutorial-app-dashboards.md). |
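The aggregation behavior covered in this article — one data point per time grain, reduced with the selected aggregation type — can be sketched as follows (the sample measurements are invented; this is not Azure Monitor API code):

```python
from collections import defaultdict

def aggregate(measurements, grain_minutes, how):
    """Group (minute_offset, value) measurements into time-grain buckets,
    then reduce each bucket with the selected aggregation type."""
    buckets = defaultdict(list)
    for minute, value in measurements:
        buckets[minute // grain_minutes].append(value)
    funcs = {
        "sum": sum,
        "count": len,
        "avg": lambda vs: sum(vs) / len(vs),
        "min": min,
        "max": max,
    }
    return {bucket: funcs[how](values) for bucket, values in sorted(buckets.items())}

# Four server response times (ms) captured during the first hour:
samples = [(5, 120), (12, 80), (40, 200), (55, 100)]
print(aggregate(samples, 30, "avg"))    # → {0: 100.0, 1: 150.0}
print(aggregate(samples, 15, "count"))  # number of measurements per 15-minute grain
```

Halving the grain (30 to 15 minutes) doubles the number of possible data points, which is why zooming to a smaller grain redraws the chart from more points.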
azure-monitor | Migrate To Azure Storage Lifecycle Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md | This guide walks you through migrating from using Azure diagnostic settings stor > [!IMPORTANT] > **Deprecation Timeline.**-> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. If you have configured retention settings, you'll still be able to see and change them. -> - September 30, 2023 – You will no longer be able to use the API or Azure portal to configure retention setting unless you're changing them to *0*. Existing retention rules will still be respected. +> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. This includes using the portal, CLI, PowerShell, and ARM and Bicep templates. If you have configured retention settings, you'll still be able to see and change them in the portal. +> - September 30, 2023 – You will no longer be able to use the API (CLI, PowerShell, or templates) or the Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected. > - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments. To migrate your diagnostics settings retention rules, follow the steps below: 1. Set your retention time, then select **Next** :::image type="content" source="./media/retention-migration/lifecycle-management-add-rule-base-blobs.png" alt-text="A screenshot showing the Base blobs tab for adding a lifecycle rule."::: -1. On the **Filters** tab, under **Blob prefix** set path or prefix to the container or logs you want the retention rule to apply to.
-For example, for all Function App logs, you could use the container *insights-logs-functionapplogs* to set the retention for all Function App logs. -To set the rule for a specific subscription, resource group, and function app name, use *insights-logs-functionapplogs/resourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your function app name\>*. +1. On the **Filters** tab, under **Blob prefix**, set the path or prefix to the container or logs you want the retention rule to apply to. The path or prefix can be at any level within the container and will apply to all blobs under that path or prefix. +For example, for *all* insights activity logs, use the container *insights-activity-logs* to set the retention for all of the logs in that container. +To set the rule for a specific web app, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/\<your subscription Id\>/RESOURCEGROUPS/\<your resource group\>/PROVIDERS/MICROSOFT.WEB/SITES/\<your webapp name\>*. ++ Use the Storage browser to help you find the path or prefix. + The example below shows the prefix for a specific web app: *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001/PROVIDERS/MICROSOFT.WEB/SITES/appfromdocker1*. + To set the rule for all resources in the resource group, use *insights-activity-logs/ResourceId=/SUBSCRIPTIONS/d05145d-4a5d-4a5d-4a5d-5267eae1bbc7/RESOURCEGROUPS/rg-001*. + :::image type="content" source="./media/retention-migration/blob-prefix.png" alt-text="A screenshot showing the Storage browser and resource path." lightbox="./media/retention-migration/blob-prefix.png"::: 1. Select **Add** to save the rule. ## Next steps |
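A lifecycle-management rule that replaces a diagnostic-settings retention rule can be sketched as below. The rule name, prefix, and retention period are example values; the JSON shape follows the Azure Storage lifecycle policy schema, and the append-blob type is assumed because diagnostic logs are written as append blobs:

```python
import json

def retention_rule(name, prefix, days):
    """Build one lifecycle-management rule that deletes blobs under
    `prefix` once they are older than `days` days (illustrative values)."""
    return {
        "enabled": True,
        "name": name,
        "type": "Lifecycle",
        "definition": {
            "actions": {
                "baseBlob": {"delete": {"daysAfterModificationGreaterThan": days}}
            },
            "filters": {
                # Diagnostic-settings logs are assumed to be append blobs.
                "blobTypes": ["appendBlob"],
                "prefixMatch": [prefix],
            },
        },
    }

policy = {"rules": [retention_rule(
    "activity-logs-30d", "insights-activity-logs/ResourceId=/SUBSCRIPTIONS", 30)]}
print(json.dumps(policy, indent=2))
```

You could then apply the generated policy document to the storage account (for example, through the portal's **Lifecycle management** code view).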
azure-monitor | Prometheus Rule Groups | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-rule-groups.md | Last updated 09/28/2022 # Azure Monitor managed service for Prometheus rule groups Rules in Prometheus act on data as it's collected. They're configured as part of a Prometheus rule group, which is stored in [Azure Monitor workspace](azure-monitor-workspace-overview.md). Rules are run sequentially in the order they're defined in the group. - ## Rule types There are two types of Prometheus rules as described in the following table. | Recording |[Recording rules](https://aka.ms/azureprometheus-promio-recrules) allow you to precompute frequently needed or computationally extensive expressions and store their result as a new set of time series. Time series created by recording rules are ingested back to your Azure Monitor workspace as new Prometheus metrics. | ## Create Prometheus rules-Azure Managed Prometheus rule groups, recording rules and alert rules can be created and configured using The Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties.Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md). Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, API, Azure CLI, or PowerShell. +Azure Managed Prometheus rule groups, recording rules, and alert rules can be created and configured using the Azure resource type **Microsoft.AlertsManagement/prometheusRuleGroups**, where the alert rules and recording rules are defined as part of the rule group properties. Prometheus rule groups are defined with a scope of a specific [Azure Monitor workspace](azure-monitor-workspace-overview.md).
Prometheus rule groups can be created using Azure Resource Manager (ARM) templates, API, Azure CLI, or PowerShell. ++Azure managed Prometheus rule groups follow the structure and terminology of the open source Prometheus rule groups. Rule names, expressions, the `for` clause, labels, and annotations are all supported in the Azure version. The following key differences between OSS rule groups and Azure managed Prometheus should be noted: +* Azure managed Prometheus rule groups are managed as Azure resources, and include necessary information for resource management, such as the subscription and resource group where the Azure rule group should reside. +* Azure managed Prometheus alert rules include dedicated properties that allow alerts to be processed like other Azure Monitor alerts. For example, alert severity, action group association, and alert auto-resolve configuration are supported as part of Azure managed Prometheus alert rules. > [!NOTE] > For your AKS or ARC Kubernetes clusters, you can use some of the recommended alert rules. See pre-defined alert rules [here](../containers/container-insights-metric-alerts.md#enable-prometheus-alert-rules). ### Limiting rules to a specific cluster -You can optionally limit the rules in a rule group to query data originating from a specific cluster, using the rule group `clusterName` property. -You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the `clusterName` property, you can create multiple rule groups, each configured with the same rules, and therefore limit each group to cover a different cluster. +You can optionally limit the rules in a rule group to query data originating from a single cluster, by adding a cluster scope to your rule group and/or by using the rule group `clusterName` property. 
+You should limit rules to a single cluster if your Azure Monitor workspace contains a large amount of data from multiple clusters. In such a case, there's a concern that running a single set of rules on all the data may cause performance or throttling issues. By using the cluster scope, you can create multiple rule groups, each configured with the same rules, with each group covering a different cluster. ++To limit your rule group to a cluster scope, you should add the Azure Resource ID of your cluster to the rule group **scopes[]** list. **The scopes list must still include the Azure Monitor workspace resource ID**. The following cluster resource types are supported as a cluster scope: +* Azure Kubernetes Service clusters (AKS) (Microsoft.ContainerService/managedClusters) +* Azure Arc-enabled Kubernetes clusters (Microsoft.kubernetes/connectedClusters) +* Azure connected appliances (Microsoft.ResourceConnector/appliances) ++In addition to the cluster ID, you can configure the **clusterName** property of your rule group. The `clusterName` property must match the `cluster` label that is added to your metrics when scraped from a specific cluster. By default, this label is set to the last part (resource name) of your cluster ID. If you've changed this label using the [`cluster_alias`](../essentials/prometheus-metrics-scrape-configuration.md#cluster-alias) setting in your cluster scraping configmap, you must include the updated value in the rule group `clusterName` property. If your scraping uses the default `cluster` label value, the `clusterName` property is optional. 
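The default derivation described above — the `cluster` label, and therefore `clusterName`, falling back to the last segment (resource name) of the cluster's Azure resource ID — can be sketched in shell. The AKS resource ID here is hypothetical:

```shell
# Hypothetical AKS cluster resource ID -- substitute your own.
CLUSTER_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/rg-001/providers/microsoft.containerservice/managedclusters/contoso-aks"

# The default label value is the resource name: everything after the last '/'.
CLUSTER_NAME="${CLUSTER_ID##*/}"
echo "$CLUSTER_NAME"
```

If a `cluster_alias` has been configured in the scraping configmap, that alias — not this derived name — is what the rule group's `clusterName` must match.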
++Here's an example of how a rule group is configured to limit queries to a specific cluster: -- The `clusterName` value must be identical to the `cluster` label that is added to the metrics from a specific cluster during data collection.-- If `clusterName` isn't specified for a specific rule group, the rules in the group query all the data in the workspace from all clusters.+``` json +{ + "name": "sampleRuleGroup", + "type": "Microsoft.AlertsManagement/prometheusRuleGroups", + "apiVersion": "2023-03-01", + "location": "northcentralus", + "properties": { + "description": "Sample Prometheus Rule Group limited to a specific cluster", + "scopes": [ + "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>", + "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.containerservice/managedclusters/<myClusterName>" + ], + "clusterName": "<myClusterName>", + "rules": [ + { + ... + } + ] + } +} +``` +If both cluster ID scope and `clusterName` aren't specified for a rule group, the rules in the group query data from all the clusters in the workspace. ### Creating Prometheus rule group using Resource Manager template The basic steps are as follows: 2. Deploy the template using any deployment method, such as [Azure portal](../../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../../azure-resource-manager/templates/deploy-cli.md), [Azure PowerShell](../../azure-resource-manager/templates/deploy-powershell.md), or [Rest API](../../azure-resource-manager/templates/deploy-rest.md). ### Template example for a Prometheus rule group-Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The rules are executed in the order they appear within a group. 
+Following is a sample template that creates a Prometheus rule group, including one recording rule and one alert rule. This template creates a resource of type `Microsoft.AlertsManagement/prometheusRuleGroups`. The scope of this group is limited to a single AKS cluster. The rules are executed in the order they appear within a group. ``` json { Following is a sample template that creates a Prometheus rule group, including o "properties": { "description": "Sample Prometheus Rule Group", "scopes": [- "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>" + "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.monitor/accounts/<azure-monitor-workspace-name>", + "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.containerservice/managedclusters/<myClusterName>" ], "enabled": true, "clusterName": "<myClusterName>", Following is a sample template that creates a Prometheus rule group, including o }, "actions": [ {- "actionGroupId": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>" + "actionGroupID": "/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>" } ] } The rule group contains the following properties. | `name` | True | string | Prometheus rule group name | | `type` | True | string | `Microsoft.AlertsManagement/prometheusRuleGroups` | | `apiVersion` | True | string | `2023-03-01` |-| `location` | True | string | Resource location from regions supported in the preview | -| `properties.description` | False | string | Rule group description | -| `properties.scopes` | True | string[] | Target Azure Monitor workspace. Only one scope currently supported | +| `location` | True | string | Resource location from regions supported in the preview. 
| +| `properties.description` | False | string | Rule group description. | +| `properties.scopes` | True | string[] | Must include the target Azure Monitor workspace ID. Can optionally include one cluster ID as well. | | `properties.enabled` | False | boolean | Enable/disable group. Default is true. |-| `properties.clusterName` | False | string | Apply rule to data from a specific cluster. Default is apply to all data in workspace. | +| `properties.clusterName` | False | string | Must match the `cluster` label that is added to metrics scraped from your target cluster. By default, set to the last part (resource name) of cluster ID that appears in scopes[]. | | `properties.interval` | False | string | Group evaluation interval. Default = PT1M | ### Recording rules The `rules` section contains the following properties for alerting rules. |:|:|:|:|:| | `alert` | False | string | Alert rule name | | `expression` | True | string | PromQL expression to evaluate. |-| `for` | False | string | Alert firing timeout. Values - 'PT1M', 'PT5M' etc. | +| `for` | False | string | Alert firing timeout. Values - PT1M, PT5M etc. | | `labels` | False | object | Labels key-value pairs. These Prometheus alert rule labels are added to alerts fired by this rule. | | `rules.annotations` | False | object | Annotations key-value pairs to add to the alert. | | `enabled` | False | boolean | Enable/disable group. Default is true. | The `rules` section contains the following properties for alerting rules. If you have a [Prometheus rules configuration file](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#configuring-rules) (in YAML format), you can now convert it to an Azure Prometheus rule group ARM template, using the [az-prom-rules-converter utility](https://github.com/Azure/prometheus-collector/tree/main/tools/az-prom-rules-converter#az-prom-rules-converter). The rules file can contain definitions of one or more rule groups. 
-In addition to the rules file, you can provide the utility with additional properties that are needed to create the Azure Prometheus rule groups, including: subscription, resource group, location, target Azure Monitor workspace, target cluster name, and action groups (used for alert rules). The utility creates a template file that can be deployed directly or within a deployment pipe providing some of these properties as parameters. Note that properties provided to the utility are used for all the rule groups in the template, e.g., all rule groups in the file will be created in the same subscription/resource group/location, using the same Azure Monitor workspace, etc. If an action group is provided as a parameter to the utility, the same action group will be used in all the alert rules in the template. +In addition to the rules file, you must provide the utility with other properties that are needed to create the Azure Prometheus rule groups, including: subscription, resource group, location, target Azure Monitor workspace, target cluster ID and name, and action groups (used for alert rules). The utility creates a template file that can be deployed directly or within a deployment pipeline, providing some of these properties as parameters. Properties that you provide to the utility are used for all the rule groups in the template. For example, all rule groups in the file are created in the same subscription, resource group, and location, using the same Azure Monitor workspace. If an action group is provided as a parameter to the utility, the same action group is used in all the alert rules in the template. 
If you want to change this default configuration (for example, use different action groups in different rules), you can edit the resulting template according to your needs, before deploying it. > [!NOTE] -> !The az-prom-convert-utility is provided as a courtesy tool. We recommend that you review the resulting template and verify it matches your intended configuration. +> The az-prom-convert-utility is provided as a courtesy tool. We recommend that you review the resulting template and verify it matches your intended configuration. ### Creating Prometheus rule group using Azure CLI To enable or disable a rule, select the rule in the Azure portal. Select either > After you disable or re-enable a rule or a rule group, it may take a few minutes for the rule group list to reflect the updated status of the rule or the group. + ## Next steps - [Learn more about Azure alerts](../alerts/alerts-types.md). |
azure-monitor | Cost Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/cost-logs.md | Subscriptions that contained a Log Analytics workspace or Application Insights r Access to the legacy Free Trial pricing tier was limited on July 1, 2022. Pricing information for the Standalone and Per Node pricing tiers is available [here](https://aka.ms/OMSpricing). +> [!IMPORTANT] +> The legacy pricing tiers do not support access to some of the newest features in Log Analytics such as ingesting data as cost-effective Basic Logs. + ### Free Trial pricing tier -Workspaces in the Free Trial pricing tier will have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). The data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes. No SLA is provided for the Free Trial tier. +Workspaces in the Free Trial pricing tier have daily data ingestion limited to 500 MB (except for security data types collected by [Microsoft Defender for Cloud](../../security-center/index.yml)). Data retention is limited to seven days. The Free Trial pricing tier is intended only for evaluation purposes, not production workloads. No SLA is provided for the Free Trial tier. > [!NOTE] > Creating new workspaces in, or moving existing workspaces into, the legacy Free Trial pricing tier was possible only until July 1, 2022. |
azure-monitor | Custom Fields | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md | The following sections provide the procedure for creating a custom field. At th > The custom field is populated as records matching the specified criteria are added to the Log Analytics workspace, so it will only appear on records collected after the custom field is created. The custom field will not be added to records that are already in the data store when it's created. > -### Step 1 – Identify records that will have the custom field +### Step 1: Identify records that will have the custom field The first step is to identify the records that will get the custom field. You start with a [standard log query](./log-query-overview.md) and then select a record to act as the model that Azure Monitor will learn from. When you specify that you are going to extract data into a custom field, the **Field Extraction Wizard** is opened where you validate and refine the criteria. 1. Go to **Logs** and use a [query to retrieve the records](./log-query-overview.md) that will have the custom field. The first step is to identify the records that will get the custom field. You s 4. The **Field Extraction Wizard** is opened, and the record you selected is displayed in the **Main Example** column. The custom field will be defined for those records with the same values in the properties that are selected. 5. If the selection is not exactly what you want, select additional fields to narrow the criteria. In order to change the field values for the criteria, you must cancel and select a different record matching the criteria you want. -### Step 2 - Perform initial extract. +### Step 2: Perform initial extract. Once you've identified the records that will have the custom field, you identify the data that you want to extract. Log Analytics will use this information to identify similar patterns in similar records. 
In the step after this you will be able to validate the results and provide further details for Log Analytics to use in its analysis. 1. Highlight the text in the sample record that you want to populate the custom field. You will then be presented with a dialog box to provide a name and data type for the field and to perform the initial extract. The characters **\_CF** will automatically be appended. 2. Click **Extract** to perform an analysis of collected records. 3. The **Summary** and **Search Results** sections display the results of the extract so you can inspect its accuracy. **Summary** displays the criteria used to identify records and a count for each of the data values identified. **Search Results** provides a detailed list of records matching the criteria. -### Step 3 – Verify accuracy of the extract and create custom field +### Step 3: Verify accuracy of the extract and create custom field Once you have performed the initial extract, Log Analytics will display its results based on data that has already been collected. If the results look accurate then you can create the custom field with no further work. If not, then you can refine the results so that Log Analytics can improve its logic. 1. If any values in the initial extract aren't correct, then click the **Edit** icon next to an inaccurate record and select **Modify this highlight** in order to modify the selection. |
azure-monitor | Daily Cap | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/daily-cap.md | Until September 18, 2023, the following is true. If a workspace enabled the [Mic To set or change the daily cap for a Log Analytics workspace in the Azure portal: 1. From the **Log Analytics workspaces** menu, select your workspace, and then **Usage and estimated costs**.-2. Select **Data Cap** at the top of the page. +2. Select **Daily Cap** at the top of the page. 3. Select **ON** and then set the data volume limit in GB/day. :::image type="content" source="media/manage-cost-storage/set-daily-volume-cap-01.png" lightbox="media/manage-cost-storage/set-daily-volume-cap-01.png" alt-text="Log Analytics configure data limit"::: |
azure-monitor | Log Powerbi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-powerbi.md | This article explains how to feed data from Log Analytics into Power BI to produ > [!NOTE] > You can use free Power BI features to integrate and create reports and dashboards. More advanced features, such as sharing your work, scheduled refreshes, dataflows, and incremental refresh might require purchasing a Power BI Pro or Premium account. For more information, see [Learn more about Power BI pricing and features](https://powerbi.microsoft.com/pricing/). +## Prerequisites ++- To export the query to a .txt file that you can use in Power BI Desktop, you need [Power BI Desktop](https://powerbi.microsoft.com/desktop/). +- To create a new dataset based on your query directly in the Power BI service: + - You need a Power BI account. + - You must give permission in Azure for the Power BI service to write logs. For more information, see [Prerequisites to configure Azure Log Analytics for Power BI](/power-bi/transform-model/log-analytics/desktop-log-analytics-configure#prerequisites). ++## Permissions required ++- To export the query to a .txt file that you can use in Power BI Desktop, you need `Microsoft.OperationalInsights/workspaces/query/*/read` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Reader built-in role](./manage-access.md#log-analytics-reader), for example. +- To create a new dataset based on your query directly in the Power BI service, you need `Microsoft.OperationalInsights/workspaces/write` permissions to the Log Analytics workspaces you query, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example. + ## Create Power BI datasets and reports from Log Analytics queries From the **Export** menu in Log Analytics, select one of the two options for creating Power BI datasets and reports from your Log Analytics queries: |
azure-monitor | Logs Dedicated Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md | Capabilities that require dedicated clusters: - **Cost optimization** - Link your workspaces in the same region to a cluster to apply the commitment tier discount to all workspaces, even ones with ingestion too low to be eligible for a commitment tier discount on their own. - **[Availability zones](../../availability-zones/az-overview.md)** - Protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking. The physical separation in zones and independent infrastructure makes an incident far less likely since the workspace can rely on the resources from any of the zones. [Azure Monitor availability zones](./availability-zones.md#service-resiliencesupported-regions) covers broader parts of the service and when available in your region, extends your Azure Monitor resilience automatically. Azure Monitor creates dedicated clusters as availability-zone-enabled (`isAvailabilityZonesEnabled`: 'true') by default in supported regions. [Dedicated clusters Availability zones](./availability-zones.md#data-resiliencesupported-regions) aren't supported in all regions currently.-- **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an Event Bubs into a Log Analytics workspace. Dedicated cluster lets you use capability when ingestion from all linked workspaces combined meet commitment tier. +- **[Ingest from Azure Event Hubs](../logs/ingest-logs-event-hub.md)** - Lets you ingest data directly from an event hub into a Log Analytics workspace. A dedicated cluster lets you use this capability when the combined ingestion from all linked workspaces meets the commitment tier. ## Cluster pricing model Log Analytics Dedicated Clusters use a commitment tier pricing model of at least 500 GB/day. 
Any usage above the tier level incurs charges based on the per-GB rate of that commitment tier. See [Azure Monitor Logs pricing details](cost-logs.md#dedicated-clusters) for pricing details for dedicated clusters. The commitment tiers have a 31-day commitment period from the time a commitment tier is selected. |
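To make the overage behavior above concrete, here's a small illustrative calculation. The daily volumes are hypothetical, and actual per-GB rates come from the pricing page referenced above:

```shell
# Hypothetical figures: a 500 GB/day commitment tier and 620 GB ingested in one day.
TIER_GB=500
INGESTED_GB=620

# Ingestion above the tier level is billed at that tier's per-GB rate.
OVERAGE_GB=$(( INGESTED_GB > TIER_GB ? INGESTED_GB - TIER_GB : 0 ))
echo "Billed at tier price: ${TIER_GB} GB; overage billed per GB: ${OVERAGE_GB} GB"
```

If the combined ingestion of all linked workspaces regularly exceeds the tier, moving to the next commitment tier is usually cheaper than paying the overage rate.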
azure-monitor | Tutorial Logs Ingestion Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tutorial-logs-ingestion-api.md | Last updated 03/20/2023 The [Logs Ingestion API](logs-ingestion-api-overview.md) in Azure Monitor allows you to send custom data to a Log Analytics workspace. This tutorial uses Azure Resource Manager templates (ARM templates) to walk through configuration of the components required to support the API and then provides a sample application using both the REST API and client libraries for [.NET](/dotnet/api/overview/azure/Monitor.Ingestion-readme), [Java](/java/api/overview/azure/monitor-ingestion-readme), [JavaScript](/javascript/api/overview/azure/monitor-ingestion-readme), and [Python](/python/api/overview/azure/monitor-ingestion-readme). > [!NOTE]-> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-api.md) for a similar tutorial that uses Azure Resource Manager templates to configure these components. +> This tutorial uses ARM templates to configure the components required to support the Logs ingestion API. See [Tutorial: Send data to Azure Monitor Logs with Logs ingestion API (Azure portal)](tutorial-logs-ingestion-portal.md) for a similar tutorial that uses the Azure portal UI to configure these components. The steps required to configure the Logs ingestion API are as follows: |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-monitor | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
azure-netapp-files | Azacsnap Cmd Ref Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-configure.md | The process described in the Azure Backup documentation has been implemented wit 1. re-enable the backint-based backup. By default this option is disabled, but it can be enabled by running `azacsnap -c configure --configuration edit` and answering 'y' (yes) to the question -"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". Editing the configuration as described will set the +"Do you need AzAcSnap to automatically disable/enable backint during snapshot? (y/n) [n]". Editing the configuration as described sets the autoDisableEnableBackint value to true in the JSON configuration file (for example, `azacsnap.json`). It's also possible to change this value by editing the configuration file directly. When you add an *Oracle database* to the configuration, the following values are - **SID** = The database System ID. - **Oracle Connect String** = The Connect String used by `sqlplus` to connect to Oracle and enable/disable backup mode. +# [IBM Db2](#tab/db2) ++When adding a *Db2 database* to the configuration, the following values are required: ++- **Db2 Server's Address** = The database server hostname or IP address. + - If Db2 Server Address (serverAddress) matches '127.0.0.1' or 'localhost' then azacsnap executes all `db2` commands locally (see "Local connectivity"). Otherwise AzAcSnap uses the serverAddress as the host to connect to via SSH using the "Instance User" as the SSH login name. Remote access via SSH can be validated with `ssh <instanceUser>@<serverAddress>`, replacing instanceUser and serverAddress with the respective values (see "Remote connectivity"). +- **Instance User** = The database System Instance User. +- **SID** = The database System ID. 
++> [!IMPORTANT] +> Setting the Db2 Server Address (serverAddress) aligns directly with the method used to communicate with Db2; ensure it's set correctly as described. + # [Azure Large Instance (Bare Metal)](#tab/azure-large-instance) When you add *HLI Storage* to a database section, the following values are requi When you add *ANF Storage* to a database section, the following values are required: -- **Service Principal Authentication filename** = the `authfile.json` file generated in the Cloud Shell when configuring+- **Service Principal Authentication filename** (JSON field: authFile) + - To use a System Managed Identity, leave empty with no value and press [Enter] to go to the next field. + - An example to set up an Azure System Managed Identity can be found in the [AzAcSnap Installation](azacsnap-installation.md). + - To use a Service Principal, use the name of the authentication file (for example, `authfile.json`) generated in the Cloud Shell when configuring communication with Azure NetApp Files storage.-- **Full ANF Storage Volume Resource ID** = the full Resource ID of the Volume being snapshot. This string can be retrieved from:+ - An example to set up a Service Principal can be found in the [AzAcSnap Installation](azacsnap-installation.md). +- **Full ANF Storage Volume Resource ID** (JSON field: resourceId) = the full Resource ID of the Volume being snapshotted. This string can be retrieved from: Azure portal -> ANF -> Volume -> Settings/Properties -> Resource ID For **Azure Large Instance** system, this information is provided by Microsoft S is made available in an Excel file that is provided during handover. Open a service request if you need to be provided this information again. -The following output is an example configuration file only and is the content of the file as generated by the configuration session above, update all the values accordingly. 
+The following output is an example configuration file only and is the content of the file as generated by the configuration example; update all the values accordingly. ```bash cat azacsnap.json |
azure-netapp-files | Azacsnap Installation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-installation.md | This article provides a guide for installation of the Azure Application Consiste ## Introduction -The downloadable self-installer is designed to make the snapshot tools easy to set up and run with non-root user privileges (for example, azacsnap). The installer will set up the user and put the snapshot tools into the users `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`). -The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the pre-requisite steps (enable communication with storage and SAP HANA) were run as root, then the installation will copy the private key and `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end and SAP HANA can be manually done by a knowledgeable administrator after the installation. +The downloadable self-installer is designed to make the snapshot tools easy to set up and run with non-root user privileges (for example, azacsnap). The installer sets up the user and puts the snapshot tools into the user's `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`). +The self-installer tries to determine the correct settings and paths for all the files based on the configuration of the user performing the installation (for example, root). If the prerequisite steps (enable communication with storage and SAP HANA) were run as root, then the installation copies the private key and `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end and SAP HANA can be manually done by a knowledgeable administrator after the installation. 
## Prerequisites for installation Follow the guidelines to set up and execute the snapshots and disaster recovery is recommended the following steps are completed as root before installing and using the snapshot tools. -1. **OS is patched**: See patching and SMT setup in [How to install and configure SAP HANA (Large Instances) on Azure](../virtual-machines/workloads/sap/hana-installation.md#operating-system). -1. **Time Synchronization is set up**. The customer will need to provide an NTP compatible time - server, and configure the OS accordingly. -1. **HANA is installed** : See HANA installation instructions in [SAP NetWeaver Installation on HANA database](/archive/blogs/saponsqlserver/sap-netweaver-installation-on-hana-database). +1. **OS is patched**: See patching and SMT setup in [How to install and configure SAP HANA (Large Instances) on Azure](../virtual-machines/workloads/sap/hana-installation.md#operating-system). +1. **Time Synchronization is set up**. The customer needs to provide an NTP-compatible time server, and configure the OS accordingly. +1. **Database is installed**: Refer to separate instructions for each supported database. 1. **[Enable communication with storage](#enable-communication-with-storage)** (for more information, see separate section): Select the storage back-end you're using for your deployment. # [Azure NetApp Files](#tab/azure-netapp-files) - 1. **For Azure NetApp Files (for more information, see separate section)**: Customer must generate the service principal authentication file. + 1. **For Azure NetApp Files (for more information, see separate section)**: Customer must either set up a System Managed Identity or generate the Service Principal authentication file. > [!IMPORTANT] > When validating communication with Azure NetApp Files, communication might fail or time-out. Check to ensure firewall rules are not blocking outbound traffic from the system running AzAcSnap to the following addresses and TCP/IP ports: tools. 
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance) - 1. **For Azure Large Instance (for more information, see separate section)**: Set up SSH with a - private/public key pair. Provide the public key for each node, where the snapshot tools are - planned to be executed, to Microsoft Operations for setup on the storage back-end. + 1. **For Azure Large Instance (for more information, see separate section)**: Generate an SSH private/public key pair. For each node where the snapshot tools will be run, provide the generated public key to Microsoft Operations so they can install it on the storage back-end. - Test this by using SSH to connect to one of the nodes (for example, `ssh -l <Storage UserName> <Storage IP Address>`). + Test connectivity by using SSH to connect to one of the nodes (for example, `ssh -l <Storage UserName> <Storage IP Address>`). Type `exit` to logout of the storage prompt. - Microsoft operations will provide the storage user and storage IP at the time of provisioning. + Microsoft Operations provides the storage user and storage IP at the time of provisioning. tools. > [!NOTE] > These examples are for non-SSL communication to SAP HANA. - # [Oracle](#tab/oracle) + # [Oracle](#tab/oracle) Set up an appropriate Oracle database and Oracle Wallet following the instructions in the [Enable communication with database](#enable-communication-with-database) section. tools. 1. `sqlplus /@<ORACLE_USER> as SYSBACKUP` + # [IBM Db2](#tab/db2) ++ Set up an appropriate IBM Db2 connection method following the instructions in the [Enable communication with database](#enable-communication-with-database) section. ++ 1. After set up, the connection can be tested from the command line as follows using these examples: ++ 1. Installed onto the database server, then complete the set up with "[Db2 local connectivity](#db2-local-connectivity)". ++ `db2 "QUIT"` + + 1. 
Installed onto a centralized back-up system, then complete the set up with "[Db2 remote connectivity](#db2-remote-connectivity)". ++ `ssh <InstanceUser>@<ServerAddress> 'db2 "QUIT"'` ++ 1. Both of the commands run in step 1 should produce the output: ++ ```output + DB20000I The QUIT command completed successfully. + ``` + This section explains how to enable communication with storage. Ensure the stora # [Azure NetApp Files (with Virtual Machine)](#tab/azure-netapp-files) -Create RBAC Service Principal +### Azure System Managed Identity ++From AzAcSnap 9, it's possible to use a System Managed Identity instead of a Service Principal for operation. Using this feature avoids the need to store Service Principal credentials on a VM. The steps to follow to set up an Azure Managed Identity using the Azure Portal Cloud Shell are as follows. ++1. Within an Azure Cloud Shell session with Bash, use the following example to set the shell variables appropriately and apply to the subscription where you want to create the Azure Managed Identity: ++ ```azurecli-interactive + export SUBSCRIPTION="99z999zz-99z9-99zz-99zz-9z9zz999zz99" + export VM_NAME="MyVM" + export RESOURCE_GROUP="MyResourceGroup" + export ROLE="Contributor" + export SCOPE="/subscriptions/${SUBSCRIPTION}/resourceGroups/${RESOURCE_GROUP}" + ``` ++ > [!NOTE] + > Set the `SUBSCRIPTION`, `VM_NAME`, and `RESOURCE_GROUP` to your site specific values. ++1. Set the Cloud Shell to the correct subscription: ++ ```azurecli-interactive + az account set -s "${SUBSCRIPTION}" + ``` ++1. Create the managed identity for the virtual machine. The following command sets, or shows if already set, the AzAcSnap virtual machine Managed Identity. ++ ```azurecli-interactive + az vm identity assign --name "${VM_NAME}" --resource-group "${RESOURCE_GROUP}" + ``` ++1. Get the Principal ID for use to assign a role: ++ ```azurecli-interactive + PRINCIPAL_ID=$(az resource list -n ${VM_NAME} --query [*].identity.principalId --out tsv) + ``` ++1. 
Assign the 'Contributor' role to the Principal ID: ++ ```azurecli-interactive + az role assignment create --assignee "${PRINCIPAL_ID}" --role "${ROLE}" --scope "${SCOPE}" + ``` ++#### Optional RBAC ++It's possible to limit the permissions for the Managed Identity using a custom role definition. Create a suitable role definition for the virtual machine to be able to manage snapshots (example permissions settings can be found in [Tips and tricks for using Azure Application Consistent Snapshot tool](azacsnap-tips.md)). ++Then assign the role to the Azure Virtual Machine Principal ID (also displayed as `SystemAssignedIdentity`): ++```azurecli-interactive +az role assignment create --assignee ${PRINCIPAL_ID} --role "AzAcSnap on ANF" --scope "${SCOPE}" +``` ++### Generate Service Principal file 1. Within an Azure Cloud Shell session, make sure you're logged on at the subscription where you want to be associated with the service principal by default: Create RBAC Service Principal az account show ``` -1. If the subscription isn't correct, use the following command: +1. If the subscription isn't correct, use the `az account set` command: ```azurecli-interactive az account set -s <subscription name or id> ``` -1. Create a service principal using Azure CLI per the following example: +1. Create a service principal using Azure CLI per this example: ```azurecli-interactive az ad sp create-for-rbac --name "AzAcSnap" --role Contributor --scopes /subscriptions/{subscription-id} --sdk-auth ``` - 1. This should generate an output like the following example: + 1. This command should generate an output like this example: ```output { Create RBAC Service Principal command and secure the file with appropriate system permissions. > [!WARNING]- 
+ > Make sure the format of the JSON file is exactly as described in the step to "Create a service principal using Azure CLI". Ensure the URLs are enclosed in double quotes ("). # [Azure Large Instance (Bare Metal)](#tab/azure-large-instance) Communication with the storage back-end executes over an encrypted SSH channel. The following-example steps are to provide guidance on setup of SSH for this communication. +example steps are to provide guidance on setup of SSH for this communication. 1. Modify the `/etc/ssh/ssh_config` file example steps are to provide guidance on setup of SSH for this communication. 1. Create a private/public key pair - Using the following example command to generate the key pair, do not enter a password when generating a key. + Using the following example command to generate the key pair, don't enter a password when generating a key. ```bash ssh-keygen -t rsa -b 5120 -C "" This section explains how to enable communication with the database. Ensure the > If deploying to a centralized virtual machine, then it will need to have the SAP HANA client installed and set up so the AzAcSnap user can run `hdbsql` and `hdbuserstore` commands. The SAP HANA Client can be downloaded from https://tools.hana.ondemand.com/#hanatools. The snapshot tools communicate with SAP HANA and need a user with appropriate permissions to-initiate and release the database save-point. The following example shows the setup of the SAP +initiate and release the database save-point. The following example shows the setup of the SAP HANA v2 user and the `hdbuserstore` for communication to the SAP HANA database. The following example commands set up a user (AZACSNAP) in the SYSTEMDB on SAP HANA 2. database, change the IP address, usernames, and passwords as appropriate: > [!NOTE] > Check with corporate policy before making this change. 
- This example disables the password expiration for the AZACSNAP user, without this change the user's password will expire preventing snapshots to be taken correctly. + This example disables the password expiration for the AZACSNAP user; without this change the user's password can expire, preventing snapshots from being taken correctly. ```sql hdbsql SYSTEMDB=> ALTER USER AZACSNAP DISABLE PASSWORD LIFETIME; database, change the IP address, usernames, and passwords as appropriate: ### Using SSL for communication with SAP HANA -The `azacsnap` tool utilizes SAP HANA's `hdbsql` command to communicate with SAP HANA. This -includes the use of SSL options when encrypting communication with SAP HANA. `azacsnap` uses +The `azacsnap` tool utilizes SAP HANA's `hdbsql` command to communicate with SAP HANA. Using `hdbsql` allows +the use of SSL options to encrypt communication with SAP HANA. `azacsnap` uses the `hdbsql` command's SSL options as follows. The following are always used when using the `azacsnap --ssl` option: as specified in the `azacsnap` configuration file. - For commoncrypto: - `mv sapcli.pse <securityPath>/<SID>_keystore` -When `azacsnap` calls `hdbsql`, it will add `-sslkeystore=<securityPath>/<SID>_keystore` -to the command line. +When `azacsnap` calls `hdbsql`, it adds `-sslkeystore=<securityPath>/<SID>_keystore` +to the `hdbsql` command line. #### Trust Store files multiple parameters passed on the command line. # [Oracle](#tab/oracle) -The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable/disable backup mode. After putting the database in backup mode, `azacsnap` will query the Oracle database to get a list of files, which have backup-mode as active. This file list is output into an external file, which is in the same location and basename as the log file, but with a ".protected-tables" extension (output filename detailed in the AzAcSnap log file). 
+The snapshot tools communicate with the Oracle database and need a user with appropriate permissions to enable/disable back-up mode. After `azacsnap` puts the database in back-up mode, `azacsnap` will query the Oracle database to get a list of files, which have back-up mode as active. This file list is output into an external file, which is in the same location and basename as the log file, but with a `.protected-tables` filename extension (output filename detailed in the AzAcSnap log file). The following examples show the setup of the Oracle database user, the use of `mkstore` to create an Oracle Wallet, and the `sqlplus` configuration files required for communication to the Oracle database. The following example commands set up a user (AZACSNAP) in the Oracle database, User created. ``` -1. Grant the user permissions - This example sets the permission for the AZACSNAP user to allow for putting the database in backup mode. +1. Grant the user permissions - This example sets the permission for the AZACSNAP user to allow for putting the database in back-up mode. ```sql SQL> GRANT CREATE SESSION TO azacsnap; The following example commands set up a user (AZACSNAP) in the Oracle database, SQL> ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME unlimited; ``` - After making this change, there should be no password expiry date for user's with the DEFAULT profile. + After this change is made to the database setting, there should be no password expiry date for users with the DEFAULT profile. ```sql SQL> SELECT username, account_status,expiry_date,profile FROM dba_users WHERE username='AZACSNAP'; The following example commands set up a user (AZACSNAP) in the Oracle database, 1. The Oracle Wallet provides a method to manage database credentials across multiple domains. This capability is accomplished by using a database - connection string in the datasource definition, which is resolved by an entry in the wallet. 
When used correctly, the Oracle Wallet makes having + connection string in the datasource definition, which is resolved with an entry in the wallet. When used correctly, the Oracle Wallet makes having passwords in the datasource configuration unnecessary. - This makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, thus hiding + This setup makes it possible to use the Oracle Transparent Network Substrate (TNS) administrative file with a connection string alias, thus hiding details of the database connection string. If the connection information changes, it's a matter of changing the `tnsnames.ora` file instead of potentially many datasource definitions. The following example commands set up a user (AZACSNAP) in the Oracle database, 1. Create the Linux user to generate the Oracle Wallet and associated `*.ora` files using the output from the previous step. > [!NOTE]- > In these examples we are using the `bash` shell. If you're using a different shell (for example, csh), then ensure environment + > In these examples we're using the `bash` shell. If you're using a different shell (for example, csh), then ensure environment > variables have been set correctly. ```bash The following example commands set up a user (AZACSNAP) in the Oracle database, > If deploying to a centralized virtual machine, then it will need to have the Oracle instant client installed and set up so > the AzAcSnap user can run `sqlplus` commands. > The Oracle Instant Client can be downloaded from https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html.- > In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package. + > In order for SQL\*Plus to run correctly, download both the required package (for example, Basic Light Package) and the optional SQL\*Plus tools package. 1. 
Complete the following steps on the system running AzAcSnap. The following example commands set up a user (AZACSNAP) in the Oracle database, 1. Test the set up with AzAcSnap - After configuring AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connect string (for example, `/@AZACSNAP`), + After you configure AzAcSnap (for example, `azacsnap -c configure --configuration new`) with the Oracle connect string (for example, `/@AZACSNAP`), it should be possible to connect to the Oracle database. Check the `$TNS_ADMIN` variable is set for the correct Oracle target system The following example commands set up a user (AZACSNAP) in the Oracle database, > or by exporting it before each run (for example, `export TNS_ADMIN="/home/orasnap/ORACLE19c" ; cd /home/orasnap/bin ; > ./azacsnap --configfile ORACLE19c.json -c backup --volume data --prefix hourly-ora19c --retention 12`) ++# [IBM Db2](#tab/db2) ++The snapshot tools issue commands to the IBM Db2 database using the command line processor `db2` to enable and disable back-up mode. ++After putting the database in back-up mode, `azacsnap` will query the IBM Db2 database to get a list of "protected paths", which are part of the database where back-up mode is active. This list is output into an external file, which is in the same location and basename as the log file, but with a `.\<DBName>-protected-paths` extension (output filename detailed in the AzAcSnap log file). ++AzAcSnap uses the IBM Db2 command line processor `db2` to issue SQL commands, such as `SET WRITE SUSPEND` or `SET WRITE RESUME`. Therefore AzAcSnap should be installed in one of the following two ways: ++ 1. Installed onto the database server, then complete the set up with "[Db2 local connectivity](#db2-local-connectivity)". + 1. Installed onto a centralized back-up system, then complete the set up with "[Db2 remote connectivity](#db2-remote-connectivity)". 
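As described above, AzAcSnap writes the protected-paths list next to its log file, using the log file's location and basename with a `.<DBName>-protected-paths` extension. A minimal sketch of that naming convention (the log filename and database name below are hypothetical example values, not AzAcSnap defaults):

```shell
# Sketch of the documented naming convention: same location and basename as
# the log file, swapping the extension for .<DBName>-protected-paths.
# LOG_FILE and DB_NAME are hypothetical placeholder values.
LOG_FILE="/home/azacsnap/bin/logs/azacsnap-backup-Db2.log"
DB_NAME="TSTDB"
# Strip the final extension, then append the protected-paths suffix.
PROTECTED_PATHS_FILE="${LOG_FILE%.*}.${DB_NAME}-protected-paths"
echo "${PROTECTED_PATHS_FILE}"
```

The exact output filename for a given run is detailed in the AzAcSnap log file itself.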
++#### Db2 local connectivity ++If AzAcSnap has been installed onto the database server, then be sure to add the `azacsnap` user to the correct Linux group and import the Db2 instance user's profile per the following example set up. ++##### `azacsnap` user permissions ++The `azacsnap` user should belong to the same Db2 group as the database instance user. Here we're getting the group membership of the IBM Db2 installation's database instance user `db2tst`. ++```bash +id db2tst +``` ++```output +uid=1101(db2tst) gid=1001(db2iadm1) groups=1001(db2iadm1) +``` ++From the output, we can confirm the `db2tst` user has been added to the `db2iadm1` group, therefore add the `azacsnap` user to the group. ++```bash +usermod -a -G db2iadm1 azacsnap +``` ++##### `azacsnap` user profile ++The `azacsnap` user needs to be able to execute the `db2` command. By default the `db2` command won't be in the `azacsnap` user's $PATH, therefore add the following to the user's `.bashrc` file using your own IBM Db2 installation value for `INSTHOME`. ++```output +# The following four lines have been added to allow this user to run the DB2 command line processor. +INSTHOME="/db2inst/db2tst" +if [ -f ${INSTHOME}/sqllib/db2profile ]; then + . ${INSTHOME}/sqllib/db2profile +fi +``` ++Test the user can run the `db2` command line processor. ++```bash +su - azacsnap +db2 +``` ++```output +(c) Copyright IBM Corporation 1993,2007 +Command Line Processor for DB2 Client 11.5.7.0 ++You can issue database manager commands and SQL statements from the command +prompt. For example: + db2 => connect to sample + db2 => bind sample.bnd ++For general help, type: ?. +For command help, type: ? command, where command can be +the first few keywords of a database manager command. For example: + ? CATALOG DATABASE for help on the CATALOG DATABASE command + ? CATALOG for help on all of the CATALOG commands. ++To exit db2 interactive mode, type QUIT at the command prompt. 
Outside +interactive mode, all commands must be prefixed with 'db2'. +To list the current command option settings, type LIST COMMAND OPTIONS. ++For more detailed help, refer to the Online Reference Manual. +``` ++```sql +db2 => quit +DB20000I The QUIT command completed successfully. +``` ++Now configure azacsnap to use localhost. Once this preliminary test as the `azacsnap` user is working correctly, go on to configure (`azacsnap -c configure`) with the `serverAddress=localhost` and test (`azacsnap -c test --test db2`) azacsnap database connectivity. +++#### Db2 remote connectivity ++If AzAcSnap has been installed following option 2, then be sure to allow SSH access to the Db2 database instance per the following example setup. ++Log in to the AzAcSnap system as the `azacsnap` user and generate a public/private SSH key pair. ++```bash +ssh-keygen +``` ++```output +Generating public/private rsa key pair. +Enter file in which to save the key (/home/azacsnap/.ssh/id_rsa): +Enter passphrase (empty for no passphrase): +Enter same passphrase again: +Your identification has been saved in /home/azacsnap/.ssh/id_rsa. +Your public key has been saved in /home/azacsnap/.ssh/id_rsa.pub. +The key fingerprint is: +SHA256:4cr+0yN8/dawBeHtdmlfPnlm1wRMTO/mNYxarwyEFLU azacsnap@db2-02 +The key's randomart image is: ++[RSA 2048]-++| ... o. | +| . . +. | +| .. E + o.| +| .... B..| +| S. . o *=| +| . . . o o=X| +| o. . + .XB| +| . + + + +oX| +| ...+ . =.o+| ++-[SHA256]--++``` ++Get the contents of the public key. ++```bash +cat .ssh/id_rsa.pub +``` ++```output +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02 +``` ++Log in to the IBM Db2 system as the Db2 Instance User. ++Add the contents of the AzAcSnap user's public key to the Db2 Instance User's `authorized_keys` file. 
++```bash +echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02" >> ~/.ssh/authorized_keys +``` ++Log in to the AzAcSnap system as the `azacsnap` user and test SSH access. ++```bash +ssh <InstanceUser>@<ServerAddress> +``` ++```output +[InstanceUser@ServerName ~]$ +``` ++Test the user can run the `db2` command line processor. ++```bash +db2 +``` ++```output +(c) Copyright IBM Corporation 1993,2007 +Command Line Processor for DB2 Client 11.5.7.0 ++You can issue database manager commands and SQL statements from the command +prompt. For example: + db2 => connect to sample + db2 => bind sample.bnd ++For general help, type: ?. +For command help, type: ? command, where command can be +the first few keywords of a database manager command. For example: + ? CATALOG DATABASE for help on the CATALOG DATABASE command + ? CATALOG for help on all of the CATALOG commands. ++To exit db2 interactive mode, type QUIT at the command prompt. Outside +interactive mode, all commands must be prefixed with 'db2'. +To list the current command option settings, type LIST COMMAND OPTIONS. ++For more detailed help, refer to the Online Reference Manual. +``` ++```sql +db2 => quit +DB20000I The QUIT command completed successfully. +``` ++```bash +[prj@db2-02 ~]$ exit +``` ++```output +logout +Connection to <serverAddress> closed. +``` ++Once this is working correctly, go on to configure (`azacsnap -c configure`) with the Db2 server's external IP address and test (`azacsnap -c test --test db2`) azacsnap database connectivity. 
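The remote pattern above wraps each `db2` invocation in an SSH call to the database server. A minimal sketch of how such a remote command string is assembled (the instance user and server address below are placeholder values):

```shell
# Sketch: assemble the remote db2 invocation used for Db2 remote connectivity.
# INSTANCE_USER and SERVER_ADDRESS are placeholder values for illustration.
INSTANCE_USER="db2tst"
SERVER_ADDRESS="10.1.2.3"
# The db2 statement is single-quoted on the remote side so the double quotes
# around QUIT survive the SSH hop intact.
DB2_QUIT='db2 "QUIT"'
REMOTE_CMD="ssh ${INSTANCE_USER}@${SERVER_ADDRESS} '${DB2_QUIT}'"
echo "${REMOTE_CMD}"
```

Running the assembled command against a reachable Db2 server should return `DB20000I The QUIT command completed successfully.`, matching the connectivity test shown earlier.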
++Run the `azacsnap` test command ++```bash +cd ~/bin +azacsnap -c test --test db2 --configfile Db2.json +``` ++```output +BEGIN : Test process started for 'db2' +BEGIN : Db2 DB tests +PASSED: Successful connectivity to Db2 DB version v11.5.7.0 +END : Test process complete for 'db2' +``` + ## Installing the snapshot tools The downloadable self-installer is designed to make the snapshot tools easy to set up and run with-non-root user privileges (for example, azacsnap). The installer will set up the user and put the snapshot tools +non-root user privileges (for example, azacsnap). The installer sets up the user and puts the snapshot tools into the users `$HOME/bin` subdirectory (default = `/home/azacsnap/bin`). The self-installer tries to determine the correct settings and paths for all the files based on the-configuration of the user performing the installation (for example, root). If the previous setup steps (Enable -communication with storage and SAP HANA) were run as root, then the installation will copy the -private key and the `hdbuserstore` to the backup user's location. The steps to enable communication with the storage back-end -and SAP HANA can be manually done by a knowledgeable administrator after the installation. +configuration of the user performing the installation (for example, root). If the previous set up steps (Enable +communication with storage and SAP HANA) were run as root, then the installation copies the +private key and the `hdbuserstore` to the back-up user's location. The steps to enable communication with the storage back-end +and database can be manually done by a knowledgeable administrator after the installation. > [!NOTE] > For earlier SAP HANA on Azure Large Instance installations, the directory of pre-installed snapshot tools was `/hana/shared/<SID>/exe/linuxx86_64/hdb`. 
-With the [pre-requisite steps](#prerequisites-for-installation) completed, it's now possible to install the snapshot tools using the self-installer as follows: +With the [prerequisite steps](#prerequisites-for-installation) completed, it's now possible to install the snapshot tools using the self-installer as follows: 1. Copy the downloaded self-installer to the target system. 1. Execute the self-installer as the `root` user, see the following example. If necessary, make the file executable using the `chmod +x *.run` command. -Running the self-installer command without any arguments will display help on using the installer to -install the snapshot tools as follows: +Running the self-installer command without any arguments displays help on using the installer as follows: ```bash chmod +x azacsnap_installer_v5.0.run Examples of a target directory are ./tmp or /usr/local/bin > [!NOTE] > The self-installer has an option to extract (-X) the snapshot tools from the bundle without-performing any user creation and setup. This allows an experienced administrator to -complete the setup steps manually, or to copy the commands to upgrade an existing +performing any user creation and setup. This allows an experienced administrator to +complete the setup steps manually, or to copy the commands to upgrade an existing installation. ### Easy installation of snapshot tools (default) The installer has been designed to quickly install the snapshot tools for SAP HANA on Azure. By default, if the-installer is run with only the -I option, it will do the following steps: +installer is run with only the -I option, it does the following steps: -1. Create Snapshot user 'azacsnap', home directory, and set group membership. +1. Create Snapshot user `azacsnap`, home directory, and set group membership. 1. Configure the azacsnap user's login `~/.profile`.-1. 
Search filesystem for directories to add to azacsnap's `$PATH`, these are typically the paths to - the SAP HANA tools, such as `hdbsql` and `hdbuserstore`. +1. Search filesystem for directories to add to azacsnap's `$PATH`. This task allows the user who runs azacsnap to use SAP HANA commands, such as `hdbsql` and `hdbuserstore`. 1. Search filesystem for directories to add to azacsnap's `$LD_LIBRARY_PATH`. Many commands- require a library path to be set in order to execute correctly, this configures it for the + require a library path to be set in order to execute correctly, this task configures it for the installed user.-1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running the install). This assumes the "root" user has +1. Copy the SSH keys for back-end storage for azacsnap from the "root" user (the user running the install). This task assumes the "root" user has already configured connectivity to the storage (for more information, see section [Enable communication with storage](#enable-communication-with-storage)).-3. Copy the SAP HANA connection secure user store for the target user, azacsnap. This +3. Copy the SAP HANA connection secure user store for the target user, azacsnap. This task assumes the "root" user has already configured the secure user store (for more information, see section "Enable communication with SAP HANA"). 1. The snapshot tools are extracted into `/home/azacsnap/bin/`. 1. The commands in `/home/azacsnap/bin/` have their permissions set (ownership and executable bit, etc.). The following output shows the steps to complete after running the installer wit 1. Run your first snapshot backup 1. 
`azacsnap -c backup --volume data --prefix=hana_test --retention=1` -Step 2 will be necessary if "[Enable communication with database](#enable-communication-with-database)" wasn't done before the +Step 2 is necessary if "[Enable communication with database](#enable-communication-with-database)" wasn't done before the installation. > [!NOTE] This section explains how to configure the database. ### SAP HANA Configuration -There are some recommended changes to be applied to SAP HANA to ensure protection of the log backups and catalog. By default, the `basepath_logbackup` and `basepath_catalogbackup` will output their files to the `$(DIR_INSTANCE)/backup/log` directory, and it's unlikely this path is on a volume which `azacsnap` is configured to snapshot these files won't be protected with storage snapshots. +There are some recommended changes to be applied to SAP HANA to ensure protection of the log back-ups and catalog. By default, the `basepath_logbackup` and `basepath_catalogbackup` are set so SAP HANA will put related files into the `$(DIR_INSTANCE)/backup/log` directory. It's unlikely this location is on a volume which `azacsnap` is configured to snapshot, therefore these files won't be protected with storage snapshots. The following `hdbsql` command examples demonstrate setting the log and catalog paths to locations, which are on storage volumes that can be snapshot by `azacsnap`. Be sure to check the values on the command line match the local SAP HANA configuration. drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog ``` If the path needs to be created, the following example creates the path and sets the correct-ownership and permissions. These commands will need to be run as root. +ownership and permissions. These commands need to be run as root. 
```bash mkdir /hana/logbackups/H80/catalog ls -ld /hana/logbackups/H80/catalog drwxr-x 4 h80adm sapsys 4096 Jan 17 06:55 /hana/logbackups/H80/catalog ``` -The following example will change the SAP HANA setting. +The following example changes the SAP HANA setting. ```bash hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD> "ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM') SET ('persistence', 'basepath_catalogbackup') = '/hana/logbackups/H80/catalog' WITH RECONFIGURE" hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD ### Check log and catalog backup locations -After making the changes to the log and catalog backup locations, confirm the settings are correct with the following command. -In this example, the settings that have been set following the example will display as SYSTEM settings. +After making the changes to the log and catalog back-up locations, confirm the settings are correct with the following command. +In this example, the settings that have been set following the example are displayed as SYSTEM settings. > This query also returns the DEFAULT settings for comparison. global.ini,SYSTEM,,,persistence,basepath_logvolumes,/hana/log/H80 ### Configure log backup timeout -The default setting for SAP HANA to perform a log backup is 900 seconds (15 minutes). It's +The default setting for SAP HANA to perform a log back-up is 900 seconds (15 minutes). It's recommended to reduce this value to 300 seconds (for example, 5 minutes). Then it's possible to run regular-backups of these files (for example, every 10 minutes). This is done by adding the log_backups volumes to the OTHER volume section of the +back-ups of these files (for example, every 10 minutes). These back-ups can be taken by adding the log_backups volumes to the OTHER volume section of the configuration file. 
```bash hdbsql -jaxC -n <HANA_ip_address>:30013 -i 00 -u SYSTEM -p <SYSTEM_USER_PASSWORD #### Check log backup timeout -After making the change to the log backup timeout, check to ensure this has been set as follows. -In this example, the settings that have been set will display as the SYSTEM settings, but this +After making the change to the log back-up timeout, check to ensure the timeout is set as follows. +In this example, the settings that have been set are displayed as SYSTEM settings, but this query also returns the DEFAULT settings for comparison. ```bash The following changes must be applied to the Oracle Database to allow for monito QUIT ``` +# [IBM Db2](#tab/db2) ++No special database configuration is required for Db2 as we're using the Instance User's local operating system environment. + ## Next steps |
azure-netapp-files | Azacsnap Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-introduction.md | Azure Application Consistent Snapshot tool (AzAcSnap) is a command-line tool tha - **Databases** - SAP HANA (refer to [support matrix](azacsnap-get-started.md#snapshot-support-matrix-from-sap) for details) - Oracle Database release 12 or later (refer to [Oracle VM images and their deployment on Microsoft Azure](../virtual-machines/workloads/oracle/oracle-vm-solutions.md) for details)+ - IBM Db2 for LUW on Linux-only version 10.5 or later (refer to [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](../virtual-machines/workloads/sap/dbms_guide_ibm.md) for details) - **Operating Systems** - SUSE Linux Enterprise Server 12+ |
azure-netapp-files | Azacsnap Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md | -The preview features provided with **AzAcSnap 7** are: +The preview features provided with **AzAcSnap 9** are: - Azure NetApp Files Backup.-- IBM Db2 Database. - Azure Managed Disk.-- Azure Key Vault support for storing Service Principal. ## Providing feedback This can be enabled in AzAcSnap by setting `"anfBackup": "renameOnly"` in the co This can also be done using the `azacsnap -c configure --configuration edit --configfile <configfilename>` and when asked to `Enter new value for 'ANF Backup (none, renameOnly)' (current = 'none'):` enter `renameOnly`. -## IBM Db2 Database --### Supported platforms and operating systems --> [!NOTE] -> Support for IBM Db2 is Preview feature. -> This section's content supplements [What is Azure Application Consistent Snapshot tool](azacsnap-introduction.md) page. --New database platforms and operating systems supported with this preview release. --- **Databases**- - IBM Db2 for LUW on Linux-only is in preview as of Db2 version 10.5 (refer to [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](../virtual-machines/workloads/sap/dbms_guide_ibm.md) for details) ---### Enable communication with database --> [!NOTE] -> Support for IBM Db2 is Preview feature. -> This section's content supplements [Install Azure Application Consistent Snapshot tool](azacsnap-installation.md) page. --This section explains how to enable communication with the database. Ensure the database you're using is correctly selected from the tabs. --# [IBM Db2](#tab/db2) --The snapshot tools issue commands to the IBM Db2 database using the command line processor `db2` to enable and disable backup mode. --After putting the database in backup mode, `azacsnap` will query the IBM Db2 database to get a list of "protected paths", which are part of the database where backup-mode is active. 
This list is output into an external file, which is in the same location and basename as the log file, but with a ".\<DBName>-protected-paths" extension (output filename detailed in the AzAcSnap log file). --AzAcSnap uses the IBM Db2 command line processor `db2` to issue SQL commands, such as `SET WRITE SUSPEND` or `SET WRITE RESUME`. Therefore AzAcSnap should be installed in one of the following two ways: -- 1. Installed onto the database server, then complete the setup with "[Local connectivity](#local-connectivity)". - 1. Installed onto a centralized backup system, then complete the setup with "[Remote connectivity](#remote-connectivity)". --#### Local connectivity --If AzAcSnap has been installed onto the database server, then be sure to add the `azacsnap` user to the correct Linux group and import the Db2 instance user's profile per the following example setup. --##### `azacsnap` user permissions --The `azacsnap` user should belong to the same Db2 group as the database instance user. Here we are getting the group membership of the IBM Db2 installation's database instance user `db2tst`. --```bash -id db2tst -``` --```output -uid=1101(db2tst) gid=1001(db2iadm1) groups=1001(db2iadm1) -``` --From the output we can confirm the `db2tst` user has been added to the `db2iadm1` group; therefore, add the `azacsnap` user to the group. --```bash -usermod -a -G db2iadm1 azacsnap -``` --##### `azacsnap` user profile --The `azacsnap` user will need to be able to execute the `db2` command. By default the `db2` command won't be in the `azacsnap` user's $PATH; therefore, add the following to the user's `.bashrc` file using your own IBM Db2 installation value for `INSTHOME`. --```output -# The following four lines have been added to allow this user to run the DB2 command line processor. -INSTHOME="/db2inst/db2tst" -if [ -f ${INSTHOME}/sqllib/db2profile ]; then - . ${INSTHOME}/sqllib/db2profile -fi -``` --Test the user can run the `db2` command line processor.
--```bash -su - azacsnap -db2 -``` --```output -(c) Copyright IBM Corporation 1993,2007 -Command Line Processor for DB2 Client 11.5.7.0 --You can issue database manager commands and SQL statements from the command -prompt. For example: - db2 => connect to sample - db2 => bind sample.bnd --For general help, type: ?. -For command help, type: ? command, where command can be -the first few keywords of a database manager command. For example: - ? CATALOG DATABASE for help on the CATALOG DATABASE command - ? CATALOG for help on all of the CATALOG commands. --To exit db2 interactive mode, type QUIT at the command prompt. Outside -interactive mode, all commands must be prefixed with 'db2'. -To list the current command option settings, type LIST COMMAND OPTIONS. --For more detailed help, refer to the Online Reference Manual. -``` --```sql -db2 => quit -DB20000I The QUIT command completed successfully. -``` --Now configure azacsnap to use localhost. -Once this is working correctly, go on to configure (`azacsnap -c configure`) with the `serverAddress=localhost` and test (`azacsnap -c test --test db2`) azacsnap database connectivity. ---#### Remote connectivity --If AzAcSnap has been installed following option 2, then be sure to allow SSH access to the Db2 database instance per the following example setup. ---Log in to the AzAcSnap system as the `azacsnap` user and generate a public/private SSH key pair. --```bash -ssh-keygen -``` --```output -Generating public/private rsa key pair. -Enter file in which to save the key (/home/azacsnap/.ssh/id_rsa): -Enter passphrase (empty for no passphrase): -Enter same passphrase again: -Your identification has been saved in /home/azacsnap/.ssh/id_rsa. -Your public key has been saved in /home/azacsnap/.ssh/id_rsa.pub. -The key fingerprint is: -SHA256:4cr+0yN8/dawBeHtdmlfPnlm1wRMTO/mNYxarwyEFLU azacsnap@db2-02 -The key's randomart image is: -+[RSA 2048]-+ -| ... o. | -| . . +. | -| .. E + o.| -| .... B..| -| S. . o *=| -| . . . o o=X| -| o. .
+ .XB| -| . + + + +oX| -| ...+ . =.o+| -+-[SHA256]--+ -``` --Get the contents of the public key. --```bash -cat .ssh/id_rsa.pub -``` --```output -ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02 -``` --Log in to the IBM Db2 system as the Db2 Instance User. --Add the contents of the AzAcSnap user's public key to the Db2 Instance Users `authorized_keys` file. --```bash -echo "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCb4HedCPdIeft4DUp7jwSDUNef52zH8xVfu5sSErWUw3hhRQ7KV5sLqtxom7an2a0COeO13gjCiTpwfO7UXH47dUgbz+KfwDaBdQoZdsp8ed1WI6vgCRuY4sb+rY7eiqbJrLnJrmgdwZkV+HSOvZGnKEV4Y837UHn0BYcAckX8DiRl7gkrbZUPcpkQYHGy9bMmXO+tUuxLM0wBrzvGcPPZ azacsnap@db2-02" >> ~/.ssh/authorized_keys -``` --Log in to the AzAcSnap system as the `azacsnap` user and test SSH access. --```bash -ssh <InstanceUser>@<ServerAddress> -``` --```output -[InstanceUser@ServerName ~]$ -``` --Test the user can run the `db2` command line processor. --```bash -db2 -``` --```output -(c) Copyright IBM Corporation 1993,2007 -Command Line Processor for DB2 Client 11.5.7.0 --You can issue database manager commands and SQL statements from the command -prompt. For example: - db2 => connect to sample - db2 => bind sample.bnd --For general help, type: ?. -For command help, type: ? command, where command can be -the first few keywords of a database manager command. For example: - ? CATALOG DATABASE for help on the CATALOG DATABASE command - ? CATALOG for help on all of the CATALOG commands. --To exit db2 interactive mode, type QUIT at the command prompt. Outside -interactive mode, all commands must be prefixed with 'db2'. -To list the current command option settings, type LIST COMMAND OPTIONS. --For more detailed help, refer to the Online Reference Manual. 
-``` --```sql -db2 => quit -DB20000I The QUIT command completed successfully. -``` --```bash -[prj@db2-02 ~]$ exit -``` --```output -logout -Connection to <serverAddress> closed. -``` --Once this is working correctly, go on to configure (`azacsnap -c configure`) with the Db2 server's external IP address and test (`azacsnap -c test --test db2`) azacsnap database connectivity. --Run the `azacsnap` test command: --```bash -cd ~/bin -azacsnap -c test --test db2 --configfile Db2.json -``` --```output -BEGIN : Test process started for 'db2' -BEGIN : Db2 DB tests -PASSED: Successful connectivity to Db2 DB version v11.5.7.0 -END : Test process complete for 'db2' -``` ----### Configuring the database --This section explains how to configure the database. --# [IBM Db2](#tab/db2) --No special database configuration is required for Db2 as we are using the Instance User's local operating system environment. ----### Configuring AzAcSnap --This section explains how to configure AzAcSnap for the specified database. --> [!NOTE] -> Support for Db2 is a preview feature. -> This section's content supplements the [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) page. --### Details of required values --The following sections provide detailed guidance on the various values required for the configuration file. --# [IBM Db2](#tab/db2) --#### Db2 Database values for configuration --When adding a Db2 database to the configuration, the following values are required: --- **Db2 Server's Address** = The database server hostname or IP address.- - If the Db2 Server Address (serverAddress) matches '127.0.0.1' or 'localhost', then azacsnap will execute all `db2` commands locally (refer to "Local connectivity").
Otherwise, AzAcSnap will use the serverAddress as the host to connect to via SSH, using the "Instance User" as the SSH login name. This can be validated with `ssh <instanceUser>@<serverAddress>`, replacing instanceUser and serverAddress with the respective values (refer to "Remote connectivity"). -- **Instance User** = The database System Instance User.-- **SID** = The database System ID.--- ## Azure Managed Disk > [!NOTE] Although `azacsnap` is currently missing the `-c restore` option for Azure Manag -## Azure Key Vault --From AzAcSnap v5.1, it's possible to store the Service Principal securely as a Secret in Azure Key Vault. Using this feature allows for centralization of Service Principal credentials -where an alternate administrator can set up the Secret for AzAcSnap to use. --Follow these steps to set up Azure Key Vault and store the Service Principal in a Secret: --1. Within an Azure Cloud Shell session, make sure you're logged on to the subscription where you want to create the Azure Key Vault: -- ```azurecli-interactive - az account show - ``` --1. If the subscription isn't correct, use the following command to set the Cloud Shell to the correct subscription: -- ```azurecli-interactive - az account set -s <subscription name or id> - ``` --1. Create the Azure Key Vault: -- ```azurecli-interactive - az keyvault create --name "<AzureKeyVaultName>" -g <ResourceGroupName> - ``` --1. Create the trust relationship and assign the policy for the virtual machine to get the Secret: -- 1. Show the AzAcSnap virtual machine Identity - - If the virtual machine already has an identity created, retrieve it as follows: - - ```azurecli-interactive - az vm identity show --name "<VMName>" --resource-group "<ResourceGroup>" - ``` - - The `"principalId"` in the output is used as the `--object-id` value when setting the Policy with `az keyvault set-policy`.
- - ```output - { - "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "tenantId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "type": "SystemAssigned, UserAssigned", - "userAssignedIdentities": { - "/subscriptions/99z999zz-99z9-99zz-99zz-9z9zz999zz99/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-eastus2": { - "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99" - } - } - } - ``` -- 1. Set the AzAcSnap virtual machine Identity (if necessary) - - If the VM doesn't have an identity, create it as follows: - - ```azurecli-interactive - az vm identity assign --name "<VMName>" --resource-group "<ResourceGroup>" - ``` - - The `"systemAssignedIdentity"` in the output is used as the `--object-id` value when setting the Policy with `az keyvault set-policy`. - - ```output - { - "systemAssignedIdentity": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "userAssignedIdentities": { - "/subscriptions/99z999zz-99z9-99zz-99zz-9z9zz999zz99/resourceGroups/AzSecPackAutoConfigRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/AzSecPackAutoConfigUA-eastus2": { - "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "principalId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99" - } - } - } - ``` -- 1. Assign a suitable policy for the virtual machine to be able to retrieve the Secret from the Key Vault. -- ```azurecli-interactive - az keyvault set-policy --name "<AzureKeyVaultName>" --object-id "<VMIdentity>" --secret-permissions get - ``` --1. Create the Azure Key Vault Secret -- Create the secret, which will store the Service Principal credential information. - - It's possible to paste the contents of the Service Principal. In the **Bash** Cloud Shell below, a single apostrophe character is put after `--value`, then - press the `[Enter]` key, paste the contents of the Service Principal, close the content by adding another single apostrophe, and press the `[Enter]` key again.
- This command should create the Secret and store it in Azure Key Vault. - - > [!TIP] - > If you have a separate Service Principal per installation, the `"<NameOfSecret>"` could be the SID, or some other suitable unique identifier. - - The following example is for using the **Bash** Cloud Shell: -- ```azurecli-interactive - az keyvault secret set --name "<NameOfSecret>" --vault-name "<AzureKeyVaultName>" --value ' - { - "clientId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "clientSecret": "<ClientSecret>", - "subscriptionId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "tenantId": "99z999zz-99z9-99zz-99zz-9z9zz999zz99", - "activeDirectoryEndpointUrl": "https://login.microsoftonline.com", - "resourceManagerEndpointUrl": "https://management.azure.com/", - "activeDirectoryGraphResourceId": "https://graph.windows.net/", - "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/", - "galleryEndpointUrl": "https://gallery.azure.com/", - "managementEndpointUrl": "https://management.core.windows.net/" - }' - ``` -- The following example is for using the **PowerShell** Cloud Shell: -- > [!WARNING] - > In PowerShell, the double quotes have to be escaped with an additional double quote, so one double quote (") becomes two double quotes ("").
-- ```azurecli-interactive - az keyvault secret set --name "<NameOfSecret>" --vault-name "<AzureKeyVaultName>" --value ' - { - ""clientId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"", - ""clientSecret"": ""<ClientSecret>"", - ""subscriptionId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"", - ""tenantId"": ""99z999zz-99z9-99zz-99zz-9z9zz999zz99"", - ""activeDirectoryEndpointUrl"": ""https://login.microsoftonline.com"", - ""resourceManagerEndpointUrl"": ""https://management.azure.com/"", - ""activeDirectoryGraphResourceId"": ""https://graph.windows.net/"", - ""sqlManagementEndpointUrl"": ""https://management.core.windows.net:8443/"", - ""galleryEndpointUrl"": ""https://gallery.azure.com/"", - ""managementEndpointUrl"": ""https://management.core.windows.net/"" - }' - ``` -- The output of the command `az keyvault secret set` will have the URI value to use as `"authFile"` entry in the AzAcSnap JSON configuration file. The URI is - the value of the `"id"` below (for example, `"https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999"`). 
-- ```output - { - "attributes": { - "created": "2022-02-23T20:21:01+00:00", - "enabled": true, - "expires": null, - "notBefore": null, - "recoveryLevel": "Recoverable+Purgeable", - "updated": "2022-02-23T20:21:01+00:00" - }, - "contentType": null, - "id": "https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999", - "kid": null, - "managed": null, - "name": "AzureAuth", - "tags": { - "file-encoding": "utf-8" - }, - "value": "\n{\n \"clientId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"clientSecret\": \"<ClientSecret>\",\n \"subscriptionId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"tenantId\": \"99z999zz-99z9-99zz-99zz-9z9zz999zz99\",\n \"activeDirectoryEndpointUrl\": \"https://login.microsoftonline.com\",\n \"resourceManagerEndpointUrl\": \"https://management.azure.com/\",\n \"activeDirectoryGraphResourceId\": \"https://graph.windows.net/\",\n \"sqlManagementEndpointUrl\": \"https://management.core.windows.net:8443/\",\n \"galleryEndpointUrl\": \"https://gallery.azure.com/\",\n \"managementEndpointUrl\": \"https://management.core.windows.net/\"\n}" - } - ``` --1. Update AzAcSnap JSON configuration file -- Replace the value for the authFile entry with the Secret's ID value. Making this change can be done by editing the file using a tool like `vi`, or by using the - `azacsnap -c configure --configuration edit` option. -- 1. Old Value - - ```output - "authFile": "azureauth.json" - ``` - - 1. New Value - - ```output - "authFile": "https://<AzureKeyVaultName>.vault.azure.net/secrets/<NameOfSecret>/z9999999z9999999z9999999" - ``` -- ## Next steps - [Get started](azacsnap-get-started.md) |
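The local-versus-remote dispatch rule in the Db2 connectivity section above — a `serverAddress` of `127.0.0.1` or `localhost` runs the `db2` command line processor directly, anything else goes over SSH as the Instance User — can be sketched as follows. This is illustrative only: the `db2_cmdline` helper and its default values are hypothetical, not AzAcSnap internals.

```bash
# Sketch only: print the command line a snapshot tool would use to reach
# the Db2 CLP, based on the configured serverAddress. Names are hypothetical.
SERVER_ADDRESS="${SERVER_ADDRESS:-localhost}"
INSTANCE_USER="${INSTANCE_USER:-db2tst}"

db2_cmdline() {
  case "$SERVER_ADDRESS" in
    127.0.0.1|localhost)
      # Local connectivity: run the CLP directly.
      echo "db2 $*" ;;
    *)
      # Remote connectivity: hop to the database server over SSH.
      echo "ssh $INSTANCE_USER@$SERVER_ADDRESS db2 $*" ;;
  esac
}

db2_cmdline "set write suspend for database"
db2_cmdline "set write resume for database"
```

The same dispatch would apply to the backup-mode commands the article mentions, such as `SET WRITE SUSPEND` and `SET WRITE RESUME`.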
azure-netapp-files | Azacsnap Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-release-notes.md | Download the [latest release](https://aka.ms/azacsnapinstaller) of the installer For specific information on Preview features, refer to the [AzAcSnap Preview](azacsnap-preview.md) page. +## Aug-2023 ++### AzAcSnap 9 (Build: 1AE5640) ++AzAcSnap 9 is being released with the following fixes and improvements: ++- Features moved to GA (generally available): + - IBM Db2 Database support. + - [System Managed Identity](azacsnap-installation.md#azure-system-managed-identity) support for easier setup while improving security posture. +- Fixes and Improvements: + - Configure (`-c configure`) changes: + - Allows for a blank value for `authFile` in the configuration file when using System Managed Identity. +- Features added to [Preview](azacsnap-preview.md): + - None. +- Features removed: + - Azure Key Vault support has been removed from preview; it isn't needed now that AzAcSnap supports a System Managed Identity directly. ++Download the [AzAcSnap 9](https://aka.ms/azacsnap-9) installer. + ## Jun-2023 ### AzAcSnap 8b (Build: 1AD3679) AzAcSnap 8b is being released with the following fixes and improvements: - Fixes and Improvements: - General improvement to `azacsnap` command exit codes.- - `azacsnap` should return an exit code of 0 (zero) when it has run as expected, otherwise it should return an exit code of non-zero. For example, running `azacsnap` will return non-zero as it has not done anything and will show usage information whereas `azacsnap -h` will return exit-code of zero as it's expected to return usage information. + - `azacsnap` should return an exit code of 0 (zero) when it has run as expected; otherwise, it should return a non-zero exit code.
For example, running `azacsnap` returns non-zero as it hasn't done anything and shows usage information, whereas `azacsnap -h` returns an exit code of zero as it's performing as expected by returning usage information. - Any failure in `--runbefore` exits before any backup activity and returns the `--runbefore` exit code. - Any failure in `--runafter` returns the `--runafter` exit code. - Backup (`-c backup`) changes: AzAcSnap 8 is being released with the following fixes and improvements: - Backup (`-c backup`) changes: - Fix for incorrect error output when using `-c backup` and the database has 'backint' configured. - Remove lower-case conversion for anfBackup rename-only option using `-c backup` so the snapshot name maintains the case of the Volume name.- - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA cannot be put into backup-mode, AzAcSnap immediately exits with an error. + - Fix for when a snapshot is created even though SAP HANA wasn't put into backup-mode. Now if SAP HANA can't be put into backup-mode, AzAcSnap immediately exits with an error. - Details (`-c details`) changes: - Fix for listing snapshot details with `-c details` when using Azure Large Instance storage. - Logging enhancements: AzAcSnap v5.1 Preview (Build: 20220125.85030) has been released with the followi AzAcSnap v5.0.2 (Build: 20210827.19086) is provided as a patch update to the v5.0 branch with the following fixes and improvements: -- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer `man ssh` "EXIT STATUS") - then AzAcSnap would trap this exit code as a failure and abort.
With this update extra verification is done to validate correct execution, this includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional exit code checks.-- Fix the installer's check for the location of the hdbuserstore. The installer would search the filesystem for an incorrect source directory for the hdbuserstore location for the user running the install - the installer now searches for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was pre-configured for the `root` user before installing `azacsnap`.+- Ignore `ssh` 255 exit codes. In some cases the `ssh` command, which is used to communicate with storage on Azure Large Instance, would emit an exit code of 255 when there were no errors or execution failures (refer `man ssh` "EXIT STATUS") - then AzAcSnap would trap this exit code as a failure and abort. With this update extra verification is done to validate correct execution, this validation includes parsing `ssh` STDOUT and STDERR for errors in addition to traditional exit code checks. +- Fix the installer's check for the location of the hdbuserstore. The installer would search the filesystem for an incorrect source directory for the hdbuserstore location for the user running the install - the installer now searches for `~/.hdb`. This fix is applicable to systems (for example, Azure Large Instance) where the hdbuserstore was preconfigured for the `root` user before installing `azacsnap`. - Installer now shows the version it will install/extract (if the installer is run without any arguments). ## May-2021 |
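The `--runbefore`/`--runafter` exit-code contract described in the AzAcSnap 8b notes above (a failing `--runbefore` command aborts before any backup activity and its code is surfaced; a failing `--runafter` surfaces its own code) can be sketched as a small wrapper. `run_with_hooks` is a hypothetical illustration of the documented behavior, not AzAcSnap source:

```bash
# Sketch of the documented hook exit-code behavior; names are illustrative.
run_with_hooks() {
  runbefore="$1"; main="$2"; runafter="$3"
  sh -c "$runbefore" || return $?   # abort before any backup activity
  sh -c "$main"; rc=$?              # the backup itself
  sh -c "$runafter" || return $?    # post-hook failures surface their code
  return $rc
}

run_with_hooks "true" "echo snapshot taken" "true"
echo "exit code: $?"
```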
azure-netapp-files | Azure Government | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md | ms.assetid: na-+ Last updated 03/08/2023 |
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | You can modify SMB share permissions using Microsoft Management Console (MMC). * [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md) * [Mount a volume for Windows or Linux virtual machines](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md) * [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md)-* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) * [Enable Continuous Availability on existing SMB volumes](enable-continuous-availability-existing-SMB.md) * [SMB encryption](azure-netapp-files-smb-performance.md#smb-encryption) * [Troubleshoot volume errors for Azure NetApp Files](troubleshoot-volumes.md) |
azure-netapp-files | Azure Netapp Files Network Topologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md | Azure NetApp Files volumes are designed to be contained in a special purpose sub * Australia Southeast * Brazil South * Canada Central+* Canada East * Central India * East Asia * East US Azure NetApp Files volumes are designed to be contained in a special purpose sub * Switzerland West * UAE Central * UAE North +* UK South * West Europe * West US * West US 2 |
azure-netapp-files | Azure Netapp Files Solution Architectures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md | This section provides references to SAP on Azure solutions. * [SAP S/4HANA in Linux on Azure - Azure Architecture Center](/azure/architecture/reference-architectures/sap/sap-s4hana) * [Run SAP BW/4HANA with Linux VMs - Azure Architecture Center](/azure/architecture/reference-architectures/sap/run-sap-bw4hana-with-linux-virtual-machines) * [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)+* [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300) * [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417) * [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747) * [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md) This section provides references to SAP on Azure solutions. 
* [Protecting HANA databases configured with HSR on Azure NetApp Files with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/protecting-hana-databases-configured-with-hsr-on-azure-netapp/ba-p/3654620) * [Manual Recovery Guide for SAP HANA on Azure VMs from Azure NetApp Files snapshot with AzAcSnap](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/manual-recovery-guide-for-sap-hana-on-azure-vms-from-azure/ba-p/3290161) * [SAP HANA on Azure NetApp Files - Data protection with BlueXP backup and recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-hana-on-azure-netapp-files-data-protection-with-bluexp/ba-p/3840116)+* [SAP HANA on Azure NetApp Files ΓÇô System refresh & cloning operations with BlueXP backup and recovery](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-hana-on-azure-netapp-files-system-refresh-amp-cloning/ba-p/3908660) * [Azure NetApp Files Backup for SAP Solutions](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/anf-backup-for-sap-solutions/ba-p/3717977) * [SAP HANA Disaster Recovery with Azure NetApp Files](https://docs.netapp.com/us-en/netapp-solutions-sap/pdfs/sidebar/SAP_HANA_Disaster_Recovery_with_Azure_NetApp_Files.pdf) |
azure-netapp-files | Backup Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md | Azure NetApp Files backup is supported for the following regions: * Australia East * Australia Southeast * Brazil South+* Canada Central * Canada East * East Asia * East US Azure NetApp Files backup is supported for the following regions: * Germany West Central * Japan East * Japan West+* Korea Central +* North Central US * North Europe+* Norway East * Qatar Central * South Africa North * South Central US |
azure-netapp-files | Backup Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-requirements-considerations.md | Azure NetApp Files backup in a region can only protect an Azure NetApp Files vol * Policy-based (scheduled) Azure NetApp Files backup is independent from [snapshot policy configuration](azure-netapp-files-manage-snapshots.md). -* In a cross-region replication setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a cross-region replication *destination* volume. +* In a [cross-region replication](cross-region-replication-introduction.md) (CRR) or [cross-zone replication](cross-zone-replication-introduction.md) (CZR) setting, Azure NetApp Files backup can be configured on a source volume only. Azure NetApp Files backup isn't supported on a CRR or CZR *destination* volume. * See [Restore a backup to a new volume](backup-restore-new-volume.md) for additional considerations related to restoring backups. |
azure-netapp-files | Cross Region Replication Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-delete.md | If you want to delete the source or destination volume, you must perform the fol * [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md) * [Display health status of replication relationship](cross-region-replication-display-health-status.md) * [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md)+* [Re-establish deleted volume relationship](reestablish-deleted-volume-relationships.md) * [Manage default and individual user and group quotas for a volume](manage-default-individual-user-group-quotas.md) |
azure-netapp-files | Cross Zone Replication Requirements Considerations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-zone-replication-requirements-considerations.md | This article describes requirements and considerations about [using the volume c * After you set up cross-zone replication, the replication process creates *SnapMirror snapshots* to provide references between the source volume and the destination volume. SnapMirror snapshots are cycled automatically when a new one is created for every incremental transfer. You cannot delete SnapMirror snapshots until you delete the replication relationship and volume. * You cannot mount a dual-protocol volume until you [authorize replication from the source volume](cross-region-replication-create-peering.md#authorize-replication-from-the-source-volume) and the initial [transfer](cross-region-replication-display-health-status.md#display-replication-status) happens. * You can delete manual snapshots on the source volume of a replication relationship when the replication relationship is active or broken, and also after you've deleted replication relationship. You cannot delete manual snapshots for the destination volume until you break the replication relationship.-* You can't revert a source or destination volume of cross-zone replication to a snapshot. The snapshot revert functionality is unavailable out for volumes in a replication relationship. +* When reverting a source volume with an active volume replication relationship, only snapshots that are more recent than the SnapMirror snapshot can be used in the revert operation. For more information, see [Revert a volume using snapshot revert with Azure NetApp Files](snapshots-revert-volume.md). * Data replication volumes support [customer-managed keys](configure-customer-managed-keys.md). 
* You can't currently use cross-zone replication with [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes) (larger than 100 TiB). |
azure-netapp-files | Double Encryption At Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/double-encryption-at-rest.md | Azure NetApp Files double encryption at rest is supported for the following regi * Norway East * Qatar Central * South Africa North -* South Central US -* Sweden Central +* South Central US * Switzerland North * UAE North * UK South |
azure-netapp-files | Faq Application Resilience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-application-resilience.md | The scale-out architecture would be comprised of multiple IBM MQ multi-instance ## I'm running Apache ActiveMQ with LevelDB or KahaDB on Azure NetApp Files. What precautions can I take to avoid disruptions due to storage service maintenance events despite using the *NFS* protocol? ->[!NOTE] -> This section contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. - If you're running the Apache ActiveMQ, it's recommended to deploy [ActiveMQ High Availability with Pluggable Storage Lockers](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq). -ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network. The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock." There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the "slave" isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover. +ActiveMQ high availability (HA) models ensure that a broker instance is always online and able to process message traffic. The two most common ActiveMQ HA models involve sharing a filesystem over a network.
The purpose is to provide either LevelDB or KahaDB to the active and passive broker instances. These HA models require that an OS-level lock be obtained and maintained on a file in the LevelDB or KahaDB directories, called "lock." There are some problems with this ActiveMQ HA model. They can lead to a "no-master" situation, where the replica isn't aware that it can lock the file. They can also lead to a "master-master" configuration that results in index or journal corruption and ultimately message loss. Most of these problems stem from factors outside of ActiveMQ's control. For instance, a poorly optimized NFS client can cause locking data to become stale under load, leading to "no-master" downtime during failover. Because most problems with this HA solution stem from inaccurate OS-level file locking, the ActiveMQ community [introduced the concept of a pluggable storage locker](https://www.openlogic.com/blog/pluggable-storage-lockers-activemq) in version 5.7 of the broker. This approach allows a user to take advantage of a different means of the shared lock, using a row-level JDBC database lock as opposed to an OS-level filesystem lock. For support or consultancy on ActiveMQ HA architectures and deployments, you should [contact OpenLogic by Perforce](https://www.openlogic.com/contact-us). |
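As an illustrative sketch of the pluggable storage locker approach described above (not taken from the article; the `#jdbc-ds` data source bean id is an assumed placeholder), a KahaDB persistence adapter in `activemq.xml` can delegate locking to a JDBC lease locker instead of the OS-level file lock:

```xml
<!-- Illustrative activemq.xml fragment; the #jdbc-ds data source id is a placeholder. -->
<persistenceAdapter>
  <kahaDB directory="${activemq.data}/kahadb">
    <locker>
      <!-- Row-level database lease lock replaces the NFS file lock -->
      <lease-database-locker dataSource="#jdbc-ds"
                             lockAcquireSleepInterval="10000"/>
    </locker>
  </kahaDB>
</persistenceAdapter>
```

With this configuration, failover contention is arbitrated by the database rather than by NFS file locking, which sidesteps the stale-lock problem the article describes.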
azure-netapp-files | Reestablish Deleted Volume Relationships | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/reestablish-deleted-volume-relationships.md | + + Title: Re-establish deleted volume replication relationships in Azure NetApp Files +description: You can re-establish the replication relationship between volumes. ++++++ Last updated : 02/21/2023++# Re-establish deleted volume replication relationships in Azure NetApp Files (preview) ++Azure NetApp Files allows you to re-establish a replication relationship between two volumes in case you had previously deleted it. You can only re-establish the relationship from the destination volume. ++If the destination volume remains operational and no snapshots were deleted, the replication re-establish operation uses the last common snapshot. The operation incrementally synchronizes the destination volume based on the last known good snapshot. A baseline snapshot isn't required. ++## Considerations ++* You can only re-establish relationships when there's an existing snapshot generated either [manually](azure-netapp-files-manage-snapshots.md) or by a [snapshot policy](snapshots-manage-policy.md). ++## Register the feature ++The re-establish deleted volume replication relationships capability is currently in preview. If you're using this feature for the first time, you need to register the feature first. ++1. Register the feature by running the following commands: ++ ```azurepowershell-interactive + Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFReestablishReplication + ``` ++2. Check the status of the feature registration: ++ > [!NOTE] + > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is `Registered` before continuing. 
++ ```azurepowershell-interactive + Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFReestablishReplication + ``` +You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. ++## Re-establish the relationship ++1. From the **Volumes** menu under **Storage service**, select the volume that was formerly the _destination_ volume in the replication relationship you want to restore. Then select the **Replication** tab. +1. In the **Replication** tab, select the **Re-establish** button. + :::image type="content" source="./media/reestablish-deleted-volume-relationships/reestablish-button.png" alt-text="Screenshot of volume menu that depicts no existing volume relationships. A red box surrounds the re-establish button." lightbox="./media/reestablish-deleted-volume-relationships/reestablish-button.png"::: +1. A dropdown list appears with a selection of all volumes that formerly had either a source or destination replication relationship with the selected volume. From the dropdown menu, select the volume you want to reestablish a relationship with. Select **OK** to reestablish the relationship. + :::image type="content" source="./media/reestablish-deleted-volume-relationships/reestablish-confirm.png" alt-text="Screenshot of a dropdown menu with available volume relationships to restore." lightbox="./media/reestablish-deleted-volume-relationships/reestablish-confirm.png"::: ++## Next steps ++* [Cross-region replication](cross-region-replication-introduction.md) +* [Requirements and considerations for using cross-region replication](cross-region-replication-requirements-considerations.md) +* [Display health status of replication relationship](cross-region-replication-display-health-status.md) +* [Troubleshoot cross-region-replication](troubleshoot-cross-region-replication.md) |
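The Azure CLI registration mentioned above can be sketched as follows (the namespace and feature name are taken from the article; the commands assume you're signed in to the correct subscription):

```azurecli-interactive
az feature register --namespace Microsoft.NetApp --name ANFReestablishReplication
az feature show --namespace Microsoft.NetApp --name ANFReestablishReplication --query properties.state
```

As with the PowerShell commands, wait until the state reported by `az feature show` is `Registered` before continuing.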
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## August 2023 +* [Cross-region replication enhancement: re-establish deleted volume replication](reestablish-deleted-volume-relationships.md) (Preview) ++ Azure NetApp Files now allows you to re-establish a replication relationship between two volumes in case you had previously deleted it. If the destination volume remained operational and no snapshots were deleted, the replication re-establish operation will use the last common snapshot and incrementally synchronize the destination volume based on the last known good snapshot. In that case, no baseline replication will be required. + * [Backup vault](backup-vault-manage.md) (Preview) Azure NetApp Files backups are now organized under a backup vault. You must migrate all existing backups to a backup vault. For more information, see [Migrate backups to a backup vault](backup-vault-manage.md#migrate-backups-to-a-backup-vault). Azure NetApp Files is updated regularly. This article provides a summary about t * [Dynamic change of service level](dynamic-change-volume-service-level.md) * [Administrators privilege users](create-active-directory-connections.md#administrators-privilege-users) + ## March 2022 * Features that are now generally available (GA) |
azure-portal | Azure Portal Safelist Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md | Title: Allow the Azure portal URLs on your firewall or proxy server description: To optimize connectivity between your network and the Azure portal and its services, we recommend you add these URLs to your allowlist. Previously updated : 05/18/2023 Last updated : 08/22/2023 You can use [service tags](../virtual-network/service-tags-overview.md) to defin The URL endpoints to allow for the Azure portal are specific to the Azure cloud where your organization is deployed. To allow network traffic to these endpoints to bypass restrictions, select your cloud, then add the list of URLs to your proxy server or firewall. We do not recommend adding any additional portal-related URLs aside from those listed here, although you may want to add URLs related to other Microsoft products and services. Depending on which services you use, you may not need to include all of these URLs in your allowlist. -> [!NOTE] -> Including the wildcard symbol (\*) at the start of an endpoint will allow all subdomains. Avoid adding a wildcard symbol to endpoints listed here that don't already include one. Instead, if you identify additional subdomains of an endpoint that are needed for your particular scenario, we recommend that you allow only that particular subdomain. +> [!IMPORTANT] +> Including the wildcard symbol (\*) at the start of an endpoint will allow all subdomains. For endpoints with wildcards, we also advise you to add the URL without the wildcard. For example, you should add both `*.portal.azure.com` and `portal.azure.com` to ensure that access to the domain is allowed with or without a subdomain. +> +> Avoid adding a wildcard symbol to endpoints listed here that don't already include one. 
Instead, if you identify additional subdomains of an endpoint that are needed for your particular scenario, we recommend that you allow only that particular subdomain. ### [Public Cloud](#tab/public-cloud) login.live.com #### Azure portal framework ```-portal.azure.com *.portal.azure.com *.hosting.portal.azure.net-reactblade.portal.azure.net *.reactblade.portal.azure.net management.azure.com *.ext.azure.com-graph.windows.net *.graph.windows.net-graph.microsoft.com *.graph.microsoft.com ``` graph.microsoft.com *.account.microsoft.com *.bmx.azure.com *.subscriptionrp.trafficmanager.net-signup.azure.com *.signup.azure.com ``` |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-relay | Ip Firewall Virtual Networks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/ip-firewall-virtual-networks.md | This section shows you how to use the Azure portal to create IP firewall rules f 1. To restrict access to specific networks and IP addresses, select the **Selected networks** option. In the **Firewall** section, follow these steps: 1. Select **Add your client IP address** option to give your current client IP the access to the namespace. 2. For **address range**, enter a specific IPv4 address or a range of IPv4 address in CIDR notation. - 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow [trusted Microsoft services](#trusted-services) to bypass this firewall?**. + 3. If you want to allow Microsoft services trusted by the Azure Relay service to bypass this firewall, select **Yes** for **Allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall?**. :::image type="content" source="./media/ip-firewall/selected-networks-trusted-access-disabled.png" alt-text="Screenshot showing the Public access tab of the Networking page with the Firewall enabled."::: 1. Select **Save** on the toolbar to save the settings. Wait for a few minutes for the confirmation to show up on the portal notifications. |
azure-relay | Private Link Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/private-link-service.md | The following procedure provides step-by-step instructions for disabling public 3. Select the **namespace** from the list to which you want to add a private endpoint. 4. On the left menu, select the **Networking** tab under **Settings**. 1. On the **Networking** page, for **Public network access**, select **Disabled** if you want the namespace to be accessed only via private endpoints. -1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-services) to bypass this firewall. +1. For **Allow trusted Microsoft services to bypass this firewall**, select **Yes** if you want to allow [trusted Microsoft services](#trusted-microsoft-services) to bypass this firewall. :::image type="content" source="./media/private-link-service/public-access-disabled.png" alt-text="Screenshot of the Networking page with public network access as Disabled."::: 1. Select the **Private endpoint connections** tab at the top of the page |
azure-resource-manager | Add Template To Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md | You can use Azure Resource Group Deployment task or Azure CLI task to deploy a B ### Use Azure Resource Manager Template Deployment task +> [!NOTE] +> *AzureResourceManagerTemplateDeployment@3* task won't work if you have a *bicepparam* file. + 1. Replace your starter pipeline with the following YAML. It creates a resource group and deploys a Bicep file by using an [Azure Resource Manager Template Deployment task](/azure/devops/pipelines/tasks/reference/azure-resource-manager-template-deployment-v3). ```yml |
azure-resource-manager | Deployment Stacks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md | az stack mg create \ --name '<deployment-stack-name>' \ --location '<location>' \ --template-file '<bicep-file-name>' \- --deployment-subscription-id '<subscription-id>' \ + --deployment-subscription '<subscription-id>' \ --deny-settings-mode 'none' ``` az stack mg create \ --name '<deployment-stack-name>' \ --location '<location>' \ --template-file '<bicep-file-name>' \- --deployment-subscription-id '<subscription-id>' \ + --deployment-subscription '<subscription-id>' \ --deny-settings-mode 'none' ``` |
azure-resource-manager | User Defined Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md | Title: User-defined types in Bicep description: Describes how to define and use user-defined data types in Bicep. Previously updated : 01/09/2023 Last updated : 08/29/2023 # User-defined data types in Bicep (Preview) To enable this preview, modify your project's [bicepconfig.json](./bicep-config. You can use the `type` statement to define user-defined data types. In addition, you can also use type expressions in some places to define custom types. ```bicep-type <userDefinedDataTypeName> = <typeExpression> +type <user-defined-data-type-name> = <type-expression> ``` The valid type expressions include: |
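As a hedged illustration of the `type` statement (the type names and members here are invented for this sketch, not taken from the article), a user-defined type can be built from a literal union and an object type, then used as a parameter type:

```bicep
// Illustrative only: type names and members are made up for this sketch.
type skuName = 'Standard_LRS' | 'Premium_LRS'

type storageConfig = {
  name: string
  sku: skuName
}

param storage storageConfig
```

This requires the user-defined types preview to be enabled in `bicepconfig.json`, as described in the article.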
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-resource-manager | Approve Just In Time Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/approve-just-in-time-access.md | -As a consumer of a managed application, you might not be comfortable giving the publisher permanent access to the managed resource group. To give you greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access, which is currently in preview. It enables you to approve when and for how long the publisher has access to the resource group. The publisher can make required updates during that time, but when that time is over, the publisher's access expires. +As a consumer of a managed application, you might not be comfortable giving the publisher permanent access to the managed resource group. To give you greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. It enables you to approve when and for how long the publisher has access to the resource group. The publisher can make required updates during that time, but when that time is over, the publisher's access expires. The work flow for granting access is: To approve requests through the managed application: 1. Select **JIT Access** for the managed application, and select **Approve Requests**. ![Approve requests](./media/approve-just-in-time-access/approve-requests.png)- + 1. Select the request to approve. ![Select request](./media/approve-just-in-time-access/select-request.png) |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-resource-manager | Request Just In Time Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/request-just-in-time-access.md | -Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. +Consumers of your managed application may be reluctant to grant you permanent access to the managed resource group. As a publisher of a managed application, you might prefer that consumers know exactly when you need to access the managed resources. To give consumers greater control over granting access to managed resources, Azure Managed Applications provides a feature called just-in-time (JIT) access. JIT access enables you to request elevated access to a managed application's resources for troubleshooting or maintenance. You always have read-only access to the resources, but for a specific time period you can have greater access. The work flow for granting access is: In "outputs": "jitAccessPolicy": "[steps('jitConfiguration').jitConfigurationControl]" ``` -> [!NOTE] -> JIT access is in preview. The schema for JIT configuration could change in future iterations. - ## Enable JIT access When creating your offer in Partner Center, make sure you enable JIT access. To send a JIT access request: 1. On the **Activate Role** form, select a start time and duration for your role to be active. Select **Activate** to send the request. - ![Activate access](./media/request-just-in-time-access/activate-access.png) + ![Activate access](./media/request-just-in-time-access/activate-access.png) 1. 
View the notifications to see that the new JIT request is successfully sent to the consumer. |
azure-resource-manager | Azure Subscription Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md | Title: Azure subscription limits and quotas description: Provides a list of common Azure subscription and service limits, quotas, and constraints. This article includes information on how to increase limits along with maximum values. Previously updated : 12/13/2022 Last updated : 08/24/2023 # Azure subscription and service limits, quotas, and constraints To learn more about Azure pricing, see [Azure pricing overview](https://azure.mi > [!NOTE] > Some services have adjustable limits. >-> When a service doesn't have adjustable limits, the following tables use the header **Limit**. In those cases, the default and the maximum limits are the same. +> When the limit can be adjusted, the tables include **Default limit** and **Maximum limit** headers. The limit can be raised above the default limit but not above the maximum limit. Some services with adjustable limits use different headers with information about adjusting the limit. >-> When the limit can be adjusted, the tables include **Default limit** and **Maximum limit** headers. The limit can be raised above the default limit but not above the maximum limit. +> When a service doesn't have adjustable limits, the following tables use the header **Limit** without any additional information about adjusting the limit. In those cases, the default and the maximum limits are the same. > > If you want to raise the limit or quota above the default limit, [open an online customer support request at no charge](../templates/error-resource-quota.md). > The following table details the features and limits of the Basic, Standard, and ## Digital Twins limits > [!NOTE]-> Some areas of this service have adjustable limits, and others do not. This is represented in the tables below with the *Adjustable?* column. 
When the limit can be adjusted, the *Adjustable?* value is *Yes*. +> Some areas of this service have adjustable limits, and others do not. This is represented in the following tables with the *Adjustable?* column. When the limit can be adjusted, the *Adjustable?* value is *Yes*. [!INCLUDE [digital-twins-limits](../../../includes/digital-twins-limits.md)] The latest values for Microsoft Purview quotas can be found in the [Microsoft Pu ## Microsoft Sentinel limits --### Incident limits ---### Machine learning-based limits ---### Multi workspace limits ---### Notebook limits ---### Repositories limits ---### Threat intelligence limits ---## TI upload indicators API limits ---### User and Entity Behavior Analytics (UEBA) limits ---### Watchlist limits ---### Workbook limits -+For Microsoft Sentinel limits, see [Service limits for Microsoft Sentinel](../../sentinel/sentinel-service-limits.md) ## Service Bus limits For more information, see [Virtual machine sizes](../../virtual-machines/sizes.m [!INCLUDE [azure-storage-limits-vm-apps](../../../includes/azure-storage-limits-vm-apps.md)] -For more information see [VM Applications](../../virtual-machines/vm-applications.md). +For more information, see [VM Applications](../../virtual-machines/vm-applications.md). #### Disk encryption sets |
azure-resource-manager | Lock Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md | Title: Protect your Azure resources with a lock description: You can safeguard Azure resources from updates or deletions by locking all users and roles. Previously updated : 04/06/2023 Last updated : 08/24/2023 content_well_notification: - AI-contribution Applying locks can lead to unexpected results. Some operations, which don't seem For example, if a request uses [File Shares - Delete](/rest/api/storagerp/file-shares/delete), which is a control plane operation, the deletion fails. If the request uses [Delete Share](/rest/api/storageservices/delete-share), which is a data plane operation, the deletion succeeds. We recommend that you use a control plane operation. -- A read-only lock or cannot-delete lock on a **network security group (NSG)** prevents the creation of a traffic flow log for the NSG.+- A read-only lock on a **network security group (NSG)** prevents the creation of the corresponding NSG flow log. A cannot-delete lock on a **network security group (NSG)** doesn't prevent the creation or modification of the corresponding NSG flow log. - A read-only lock on an **App Service** resource prevents Visual Studio Server Explorer from displaying files for the resource because that interaction requires write access. |
azure-resource-manager | Manage Resource Groups Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resource-groups-portal.md | Title: Manage resource groups - Azure portal description: Use Azure portal to manage your resource groups through Azure Resource Manager. Shows how to create, list, and delete resource groups.- Previously updated : 03/26/2019- Last updated : 08/16/2023 # Manage Azure resource groups by using the Azure portal The resource group stores metadata about the resources. Therefore, when you spec ## Create resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).-2. Select **Resource groups** +1. Select **Resource groups**. +1. Select **Create**. - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted."::: -3. Select **Add**. -4. Enter the following values: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png" alt-text="Screenshot of the Azure portal with 'Resource groups' and 'Add' highlighted." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group.png"::: - - **Subscription**: Select your Azure subscription. - - **Resource group**: Enter a new resource group name. +1. Enter the following values: ++ - **Subscription**: Select your Azure subscription. + - **Resource group**: Enter a new resource group name. - **Region**: Select an Azure location, such as **Central US**. - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region."::: -5. Select **Review + Create** -6. Select **Create**. It takes a few seconds to create a resource group. -7. 
Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification** (the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group. + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png" alt-text="Screenshot of the Create Resource Group form in the Azure portal with fields for Subscription, Resource group, and Region." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-create-group.png"::: +1. Select **Review + Create**. +1. Select **Create**. It takes a few seconds to create a resource group. +1. Select **Refresh** from the top menu to refresh the resource group list, and then select the newly created resource group to open it. Or select **Notification** (the bell icon) from the top, and then select **Go to resource group** to open the newly created resource group. - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel."::: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png" alt-text="Screenshot of the Azure portal with the 'Go to resource group' button in the Notifications panel." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-group-go-to-resource-group.png"::: ## List resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).-2. To list the resource groups, select **Resource groups** -- :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups."::: +1. To list the resource groups, select **Resource groups**. +1. 
To customize the information displayed for the resource groups, configure the filters. The following screenshot shows the additional columns you could add to the display: -3. To customize the information displayed for the resource groups, select **Edit columns**. The following screenshot shows the additional columns you could add to the display: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png" alt-text="Screenshot of the Azure portal displaying a list of resource groups." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-list-groups.png"::: ## Open resource groups 1. Sign in to the [Azure portal](https://portal.azure.com).-2. Select **Resource groups**. -3. Select the resource group you want to open. +1. Select **Resource groups**. +1. Select the resource group you want to open. ## Delete resource groups 1. Open the resource group you want to delete. See [Open resource groups](#open-resource-groups).-2. Select **Delete resource group**. +1. Select **Delete resource group**. - :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group."::: + :::image type="content" source="./media/manage-resource-groups-portal/delete-group.png" alt-text="Screenshot of the Azure portal with the Delete resource group button highlighted in a specific resource group." lightbox="./media/manage-resource-groups-portal/delete-group.png"::: For more information about how Azure Resource Manager orders the deletion of resources, see [Azure Resource Manager resource group deletion](delete-resource-group.md). You can move the resources in the group to another resource group. 
For more info ## Lock resource groups -Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource. +Locking prevents other users in your organization from accidentally deleting or modifying critical resources, such as Azure subscription, resource group, or resource. 1. Open the resource group you want to lock. See [Open resource groups](#open-resource-groups).-2. In the left pane, select **Locks**. -3. To add a lock to the resource group, select **Add**. -4. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**. +1. In the left pane, select **Locks**. +1. To add a lock to the resource group, select **Add**. +1. Enter **Lock name**, **Lock type**, and **Notes**. The lock types include **Read-only**, and **Delete**. - :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes."::: + :::image type="content" source="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png" alt-text="Screenshot of the Add Lock form in the Azure portal with fields for Lock name, Lock type, and Notes." lightbox="./media/manage-resource-groups-portal/manage-resource-groups-add-lock.png"::: For more information, see [Lock resources to prevent unexpected changes](lock-resources.md). |
azure-resource-manager | App Service Move Limitations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-limitations/app-service-move-limitations.md | Title: Move Azure App Service resources across resource groups or subscriptions description: Use Azure Resource Manager to move App Service resources to a new resource group or subscription. Previously updated : 03/31/2022 Last updated : 08/17/2023 # Move App Service resources to a new resource group or subscription When you move a Web App across subscriptions, the following guidance applies: - Uploaded or imported TLS/SSL certificates - App Service Environments - All App Service resources in the resource group must be moved together.-- App Service Environments can't be moved to a new resource group or subscription. However, you can move a web app and app service plan to a new subscription without moving the App Service Environment.+- App Service Environments can't be moved to a new resource group or subscription. + - You can move a Web App and App Service plan hosted on an App Service Environment to a new subscription without moving the App Service Environment. The Web App and App Service plan that you move will always be associated with your initial App Service Environment. You can't move a Web App/App Service plan to a different App Service Environment. + - If you need to move a Web App and App Service plan to a new App Service Environment, you'll need to recreate these resources in your new App Service Environment. Consider using the [backup and restore feature](../../../app-service/manage-backup.md) as a way of recreating your resources in a different App Service Environment. - You can move a certificate bound to a web app without deleting the TLS bindings, as long as the certificate is moved with all other resources in the resource group. However, you can't move a free App Service managed certificate. 
For that scenario, see [Move with free managed certificates](#move-with-free-managed-certificates). - App Service apps with private endpoints cannot be moved. Delete the private endpoint(s) and recreate them after the move. - App Service resources can only be moved from the resource group in which they were originally created. If an App Service resource is no longer in its original resource group, move it back to its original resource group. Then, move the resource across subscriptions. For help with finding the original resource group, see the next section. |
azure-resource-manager | Move Resource Group And Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md | Title: Move resources to a new subscription or resource group description: Use Azure Resource Manager to move resources to a new resource group or subscription.++ Last updated 04/24/2023 |
azure-resource-manager | Move Support Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md | Before starting your move operation, review the [checklist](./move-resource-grou > [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |-> | loadtests | No | No | No | +> | loadtests | Yes | Yes | No | ## Microsoft.LocationBasedServices |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-resource-manager | Resource Name Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md | In the following tables, the term alphanumeric refers to: > | Entity | Scope | Length | Valid Characters | > | | | | | > | certificates | resource group | 1-260 | Can't use:<br>`/` <br><br>Can't end with space or period. |-> | serverfarms | resource group | 1-40 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode | +> | serverfarms | resource group | 1-60 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode | > | sites | global or per domain. See note below. | 2-60 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode<br><br>Can't start or end with hyphen. | > | sites / slots | site | 2-59 | Alphanumeric, hyphens and Unicode characters that can be mapped to Punycode | |
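The `sites` naming rule above (2-60 characters, alphanumeric and hyphens, can't start or end with a hyphen) can be pre-checked before a deployment. A minimal sketch in Python, assuming ASCII names only (the rule also permits Unicode characters mappable to Punycode, which this illustration ignores):

```python
import re

# Illustrative check for the App Service `sites` naming rule described above:
# 2-60 characters, alphanumeric or hyphen, must not start or end with a hyphen.
# Unicode/Punycode handling is deliberately omitted for brevity.
_SITE_NAME = re.compile(r"[A-Za-z0-9][A-Za-z0-9-]{0,58}[A-Za-z0-9]")

def is_valid_site_name(name: str) -> bool:
    return _SITE_NAME.fullmatch(name) is not None
```

A pre-flight check like this only catches format errors; global uniqueness of the `sites` name is still validated by the service at deployment time.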
azure-resource-manager | Resources Without Resource Group Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md | Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 02/02/2023 Last updated : 08/15/2023 # Resources not limited to 800 instances per resource group Some resources have a limit on the number instances per region. This limit is di * automationAccounts +## Microsoft.AzureArcData ++* SqlServerInstances + ## Microsoft.AzureStack * generateDeploymentLicense Some resources have a limit on the number instances per region. This limit is di * botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit +## Microsoft.Cdn ++* profiles - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit +* profiles/networkpolicies - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit + ## Microsoft.Compute +* diskEncryptionSets * disks * galleries * galleries/images Some resources have a limit on the number instances per region. This limit is di ## Microsoft.DBforPostgreSQL * flexibleServers-* serverGroups * serverGroupsv2 * servers-* serversv2 ## Microsoft.DevTestLab Some resources have a limit on the number instances per region. This limit is di ## Microsoft.EdgeOrder +* bootstrapConfigurations * orderItems * orders Some resources have a limit on the number instances per region. This limit is di * clusters * namespaces +## Microsoft.Fabric ++* capacities - By default, limited to 800 instances. 
That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Fabric/UnlimitedResourceGroupQuota + ## Microsoft.GuestConfiguration * guestConfigurationAssignments Some resources have a limit on the number instances per region. This limit is di * machines * machines/extensions+* machines/runcommands ## Microsoft.Logic Some resources have a limit on the number instances per region. This limit is di ## Microsoft.Network -* applicationGatewayWebApplicationFirewallPolicies * applicationSecurityGroups-* bastionHosts * customIpPrefixes * ddosProtectionPlans-* dnsForwardingRulesets -* dnsForwardingRulesets/forwardingRules -* dnsForwardingRulesets/virtualNetworkLinks -* dnsResolvers -* dnsResolvers/inboundEndpoints -* dnsResolvers/outboundEndpoints -* dnszones -* dnszones/A -* dnszones/AAAA -* dnszones/all -* dnszones/CAA -* dnszones/CNAME -* dnszones/MX -* dnszones/NS -* dnszones/PTR -* dnszones/recordsets -* dnszones/SOA -* dnszones/SRV -* dnszones/TXT -* expressRouteCrossConnections * loadBalancers - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit * networkIntentPolicies * networkInterfaces * networkSecurityGroups-* privateDnsZones -* privateDnsZones/A -* privateDnsZones/AAAA -* privateDnsZones/all -* privateDnsZones/CNAME -* privateDnsZones/MX -* privateDnsZones/PTR -* privateDnsZones/SOA -* privateDnsZones/SRV -* privateDnsZones/TXT -* privateDnsZones/virtualNetworkLinks * privateEndpointRedirectMaps * privateEndpoints * privateLinkServices * publicIPAddresses * serviceEndpointPolicies-* trafficmanagerprofiles -* virtualNetworks/privateDnsZoneLinks * virtualNetworkTaps +## Microsoft.NetworkCloud ++* volumes ++## Microsoft.NetworkFunction ++* vpnBranches - By default, limited to 800 instances. 
That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NetworkFunction/AllowNaasVpnAccess + ## Microsoft.NotificationHubs * namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit Some resources have a limit on the number instances per region. This limit is di * assignments * securityConnectors+* securityConnectors/devops ## Microsoft.ServiceBus Some resources have a limit on the number instances per region. This limit is di * accounts/jobs * accounts/models * accounts/networks+* accounts/secrets * accounts/storageContainers ## Microsoft.Sql Some resources have a limit on the number instances per region. This limit is di * storageAccounts -## Microsoft.StoragePool --* diskPools -* diskPools/iscsiTargets - ## Microsoft.StreamAnalytics * streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit Some resources have a limit on the number instances per region. This limit is di ## Microsoft.Web * apiManagementAccounts/apis+* certificates - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Web/DisableResourcesPerRGLimitForAPIMinWebApp * sites ## Next steps |
azure-resource-manager | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
azure-resource-manager | Tag Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md | To get the same data as a file of comma-separated values, download [tag-support. > [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |-> | DataControllers | Yes | Yes | +> | DataControllers | Yes | No | > | DataControllers / ActiveDirectoryConnectors | No | No |-> | PostgresInstances | Yes | Yes | -> | SqlManagedInstances | Yes | Yes | -> | SqlServerInstances | Yes | Yes | -> | SqlServerInstances / Databases | Yes | Yes | +> | PostgresInstances | Yes | No | +> | SqlManagedInstances | Yes | No | +> | SqlServerInstances | Yes | No | +> | SqlServerInstances / Databases | Yes | No | +> | SqlServerInstances / AvailabilityGroups | Yes | No | ## Microsoft.AzureCIS To get the same data as a file of comma-separated values, download [tag-support. > | dstsServiceAccounts | Yes | Yes | > | dstsServiceClientIdentities | Yes | Yes | -## Microsoft.AzureData --> [!div class="mx-tableFixed"] -> | Resource type | Supports tags | Tag in cost report | -> | - | -- | -- | -> | sqlServerRegistrations | Yes | Yes | -> | sqlServerRegistrations / sqlServers | No | No | - ## Microsoft.AzureScan > [!div class="mx-tableFixed"] To get the same data as a file of comma-separated values, download [tag-support. ## Next steps To learn how to apply tags to resources, see [Use tags to organize your Azure resources](tag-resources.md).+ |
azure-resource-manager | Tls Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tls-support.md | Title: TLS version supported by Azure Resource Manager description: Describes the deprecation of TLS versions prior to 1.2 in Azure Resource Manager Previously updated : 09/26/2022 Last updated : 08/24/2023 # Migrating to TLS 1.2 for Azure Resource Manager Transport Layer Security (TLS) is a security protocol that establishes encryption channels over computer networks. TLS 1.2 is the current industry standard and is supported by Azure Resource Manager. For backwards compatibility, Azure Resource Manager also supports earlier versions, such as TLS 1.0 and 1.1, but that support is ending. -To ensure that Azure is compliant with regulatory requirements, and provide improved security for our customers, **Azure Resource Manager will stop supporting protocols older than TLS 1.2 by Fall 2023.** +To ensure that Azure is compliant with regulatory requirements, and provide improved security for our customers, **Azure Resource Manager will stop supporting protocols older than TLS 1.2 on November 30, 2023.** This article provides guidance for removing dependencies on older security protocols. ## Why migrate to TLS 1.2 -TLS encrypts data sent over the internet to prevent malicious users from accessing private, sensitive information. The client and server perform a TLS handshake to verify each other's identity and determine how they'll communicate. During the handshake, each party identifies which TLS versions they use. The client and server can communicate if they both support a common version. +TLS encrypts data sent over the internet to prevent malicious users from accessing private, sensitive information. The client and server perform a TLS handshake to verify each other's identity and determine how they'll communicate. During the handshake, each party identifies which TLS versions they use. 
The client and server can communicate if they both support a common version. TLS 1.2 is more secure and faster than its predecessors. |
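To verify a client is ready for the TLS 1.2 cutoff described above, the minimum protocol version can be pinned explicitly rather than left to negotiation. A minimal sketch using Python's standard `ssl` module, shown only as one way to enforce the floor on the client side:

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.2,
# matching the minimum that Azure Resource Manager will require.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Sanity check: TLS 1.0 and 1.1 are now below the allowed floor.
assert context.minimum_version > ssl.TLSVersion.TLSv1_1
```

Any HTTPS client built on this context (for example via `http.client.HTTPSConnection(..., context=context)`) will fail the handshake against endpoints that only offer TLS 1.0/1.1, which makes such dependencies visible before the deadline.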
azure-signalr | Concept Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/concept-connection-string.md | The connection string contains: The following table lists all the valid names for key/value pairs in the connection string. -| Key | Description | Required | Default value| Example value -| | | | | | -| Endpoint | The URL of your ASRS instance. | Y | N/A |`https://foo.service.signalr.net` | -| Port | The port that your ASRS instance is listening on. on. | N| 80/443, depends on the endpoint URI schema | 8080| -| Version| The version of given connection. string. | N| 1.0 | 1.0 | -| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API. Management | N| null | `https://foo.bar` | -| AuthType | The auth type. By default the service uses the AccessKey authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app | +| Key | Description | Required | Default value | Example value | +| -- | - | -- | | | +| Endpoint | The URL of your ASRS instance. | Y | N/A | `https://foo.service.signalr.net` | +| Port | The port that your ASRS instance is listening on. | N | 80/443, depends on the endpoint URI schema | 8080 | +| Version | The version of the given connection string. | N | 1.0 | 1.0 | +| ClientEndpoint | The URI of your reverse proxy, such as the App Gateway or API Management | N | null | `https://foo.bar` | +| AuthType | The auth type. By default the service uses the AccessKey to authorize requests. **Case insensitive** | N | null | Azure, azure.msi, azure.app | ### Use AccessKey The local auth method is used when `AuthType` is set to null. -| Key | Description| Required | Default value | Example value| -| | | | | | -| AccessKey | The key string in base64 format for building access token. 
| Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ | +| Key | Description | Required | Default value | Example value | +| | - | -- | - | - | +| AccessKey | The key string in base64 format for building access token. | Y | null | ABCDEFGHIJKLMNOPQRSTUVWEXYZ0123456789+=/ | -### Use Azure Active Directory +### Use Microsoft Entra ID -The Azure AD auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`. +The Microsoft Entra ID auth method is used when `AuthType` is set to `azure`, `azure.app` or `azure.msi`. -| Key| Description| Required | Default value | Example value| -| -- | | -- | - | | -| ClientId | A GUID of an Azure application or an Azure identity. | N| null| `00000000-0000-0000-0000-000000000000` | -| TenantId | A GUID of an organization in Azure Active Directory. | N| null| `00000000-0000-0000-0000-000000000000` | -| ClientSecret | The password of an Azure application instance. | N| null| `***********************.****************` | -| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N| null| `/usr/local/cert/app.cert` | +| Key | Description | Required | Default value | Example value | +| -- | | -- | - | | +| ClientId | A GUID of an Azure application or an Azure identity. | N | null | `00000000-0000-0000-0000-000000000000` | +| TenantId | A GUID of an organization in Microsoft Entra ID. | N | null | `00000000-0000-0000-0000-000000000000` | +| ClientSecret | The password of an Azure application instance. | N | null | `***********************.****************` | +| ClientCertPath | The absolute path of a client certificate (cert) file to an Azure application instance. | N | null | `/usr/local/cert/app.cert` | -A different `TokenCredential` is used to generate Azure AD tokens depending on the parameters you have given. +A different `TokenCredential` is used to generate Microsoft Entra tokens depending on the parameters you have given. 
- `type=azure` A different `TokenCredential` is used to generate Azure AD tokens depending on t 1. A user-assigned managed identity is used if `clientId` has been given in connection string. - ``` + ```text Endpoint=xxx;AuthType=azure.msi;ClientId=<client_id> ```- + - [ManagedIdentityCredential(clientId)](/dotnet/api/azure.identity.managedidentitycredential) is used. 1. A system-assigned managed identity is used. A different `TokenCredential` is used to generate Azure AD tokens depending on t - `type=azure.app` - `clientId` and `tenantId` are required to use [Azure AD application with service principal](../active-directory/develop/howto-create-service-principal-portal.md). + `clientId` and `tenantId` are required to use [Microsoft Entra application with service principal](../active-directory/develop/howto-create-service-principal-portal.md). 1. [ClientSecretCredential(clientId, tenantId, clientSecret)](/dotnet/api/azure.identity.clientsecretcredential) is used if `clientSecret` is given. You can also use Azure CLI to get the connection string: az signalr key list -g <resource_group> -n <resource_name> ``` -## Connect with an Azure AD application +## Connect with a Microsoft Entra application -You can use an [Azure AD application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed. +You can use a [Microsoft Entra application](../active-directory/develop/app-objects-and-service-principals.md) to connect to your SignalR service. As long as the application has the right permission to access SignalR service, no access key is needed. -To use Azure AD authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Azure AD application, including client ID, client secret and tenant ID. 
The connection string looks as follows: +To use Microsoft Entra authentication, you need to remove `AccessKey` from connection string and add `AuthType=azure.app`. You also need to specify the credentials of your Microsoft Entra application, including client ID, client secret and tenant ID. The connection string looks as follows: ```text Endpoint=https://<resource_name>.service.signalr.net;AuthType=azure.app;ClientId=<client_id>;ClientSecret=<client_secret>;TenantId=<tenant_id>;Version=1.0; ``` -For more information about how to authenticate using Azure AD application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md). +For more information about how to authenticate using Microsoft Entra application, see [Authorize from Azure Applications](signalr-howto-authorize-application.md). ## Authenticate with Managed identity -You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service. +You can also use a system assigned or user assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate with SignalR service. To use a system assigned identity, add `AuthType=azure.msi` to the connection string: For more information about how to configure managed identity, see [Authorize fro ### Use the connection string generator -It may be cumbersome and error-prone to build connection strings manually. To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Azure AD identities like `clientId`, `tenantId`, etc. To use the tool open your SignalR instance in Azure portal, select **Connection strings** from the left side menu. +It may be cumbersome and error-prone to build connection strings manually. 
To avoid making mistakes, SignalR provides a connection string generator to help you generate a connection string that includes Microsoft Entra identities like `clientId`, `tenantId`, etc. To use the tool, open your SignalR instance in the Azure portal, select **Connection strings** from the left side menu. :::image type="content" source="media/concept-connection-string/generator.png" alt-text="Screenshot showing connection string generator of SignalR service in Azure portal."::: -In this page you can choose different authentication types (access key, managed identity or Azure AD application) and input information like client endpoint, client ID, client secret, etc. Then connection string is automatically generated. You can copy and use it in your application. +On this page you can choose different authentication types (access key, managed identity or Microsoft Entra application) and input information like client endpoint, client ID, client secret, etc. Then the connection string is automatically generated. You can copy and use it in your application. > [!NOTE] > Information you enter won't be saved after you leave the page. You will need to copy and save your connection string to use in your application. -For more information about how access tokens are generated and validated, see [Authenticate via Azure Active Directory Token](signalr-reference-data-plane-rest-api.md#authenticate-via-azure-active-directory-token-azure-ad-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md) . +For more information about how access tokens are generated and validated, see [Authenticate via Microsoft Entra token](signalr-reference-data-plane-rest-api.md#authenticate-via-microsoft-entra-token) in [Azure SignalR service data plane REST API reference](signalr-reference-data-plane-rest-api.md). ## Client and server endpoints A connection string contains the HTTP endpoint for the app server to connect to SignalR service. 
The server returns the HTTP endpoint to the clients in a negotiate response, so the client can connect to the service. -In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security. +In some applications, there may be an extra component in front of SignalR service. All client connections need to go through that component first. For example, [Azure Application Gateway](../application-gateway/overview.md) is a common service that provides additional network security. In such case, the client needs to connect to an endpoint different than SignalR service. Instead of manually replacing the endpoint at the client side, you can add `ClientEndpoint` to connection string: services.AddSignalR().AddAzureSignalR("<connection_string>"); Or you can call `AddAzureSignalR()` without any arguments. The service SDK returns the connection string from a config named `Azure:SignalR:ConnectionString` in your [configuration provider](/dotnet/core/extensions/configuration-providers). -In a local development environment, the configuration is stored in a file (*appsettings.json* or *secrets.json*) or environment variables. You can use one of the following ways to configure connection string: +In a local development environment, the configuration is stored in a file (_appsettings.json_ or _secrets.json_) or environment variables. You can use one of the following ways to configure connection string: - Use .NET secret manager (`dotnet user-secrets set Azure:SignalR:ConnectionString "<connection_string>"`)-- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. 
The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider).+- Set an environment variable named `Azure__SignalR__ConnectionString` to the connection string. The colons need to be replaced with double underscore in the [environment variable configuration provider](/dotnet/core/extensions/configuration-providers#environment-variable-configuration-provider). In a production environment, you can use other Azure services to manage config/secrets like Azure [Key Vault](../key-vault/general/overview.md) and [App Configuration](../azure-app-configuration/overview.md). See their documentation to learn how to set up a configuration provider for those services. > [!NOTE]-> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`. +> Even when you're directly setting a connection string using code, it's not recommended to hardcode the connection string in source code. You should read the connection string from a secret store like key vault and pass it to `AddAzureSignalR()`. ### Configure multiple connection strings There are also two ways to configure multiple instances: ```text Azure:SignalR:ConnectionString:<name>:<type>- ``` + ``` |
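The connection-string format described above (semicolon-separated `Key=Value` pairs with case-insensitive keys) and the colon-to-double-underscore mapping for environment variable names can be sketched in a few lines. This is an illustrative parser only, not the service SDK's actual implementation:

```python
def parse_connection_string(conn: str) -> dict:
    """Split 'Key=Value;Key=Value;...' into a dict with lowercase keys,
    since the connection-string keys are case insensitive."""
    pairs = (p for p in conn.split(";") if p)
    return {k.strip().lower(): v.strip()
            for k, _, v in (p.partition("=") for p in pairs)}

def to_env_var(config_key: str) -> str:
    """Map a .NET config key like 'Azure:SignalR:ConnectionString' to its
    environment-variable form: colons become double underscores."""
    return config_key.replace(":", "__")

cs = parse_connection_string(
    "Endpoint=https://foo.service.signalr.net;AuthType=azure.app;"
    "ClientId=00000000-0000-0000-0000-000000000000;Version=1.0;")
# cs["endpoint"] -> "https://foo.service.signalr.net"
# to_env_var("Azure:SignalR:ConnectionString") -> "Azure__SignalR__ConnectionString"
```

Note that `partition("=")` splits only on the first `=`, so values that themselves contain `=` (such as base64 access keys with padding) survive intact.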
azure-signalr | Howto Disable Local Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-disable-local-auth.md | Title: Disable local (access key) authentication with Azure SignalR Service -description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure SignalR Service. +description: This article provides information about how to disable access key authentication and use only Microsoft Entra authorization with Azure SignalR Service. -There are two ways to authenticate to Azure SignalR Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure SignalR Service resources when possible. +There are two ways to authenticate to Azure SignalR Service resources: Microsoft Entra ID and Access Key. Microsoft Entra ID offers superior security and ease of use compared to the access key method. +With Microsoft Entra ID, you do not need to store tokens in your code, reducing the risk of potential security vulnerabilities. +We highly recommend using Microsoft Entra ID for your Azure SignalR Service resources whenever possible. > [!IMPORTANT]-> Disabling local authentication can have following influences. -> - The current set of access keys will be permanently deleted. -> - Tokens signed with current set of access keys will become unavailable. +> Disabling local authentication can have the following consequences. +> +> - The current set of access keys will be permanently deleted. +> - Tokens signed with the current set of access keys will become unavailable. 
## Use Azure portal You can disable local authentication by setting `disableLocalAuth` property to t ```json {- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "resource_name": { - "defaultValue": "test-for-disable-aad", - "type": "String" - } - }, - "variables": {}, - "resources": [ - { - "type": "Microsoft.SignalRService/SignalR", - "apiVersion": "2022-08-01-preview", - "name": "[parameters('resource_name')]", - "location": "eastus", - "sku": { - "name": "Premium_P1", - "tier": "Premium", - "size": "P1", - "capacity": 1 - }, - "kind": "SignalR", - "properties": { - "tls": { - "clientCertEnabled": false - }, - "features": [ - { - "flag": "ServiceMode", - "value": "Default", - "properties": {} - }, - { - "flag": "EnableConnectivityLogs", - "value": "True", - "properties": {} - } - ], - "cors": { - "allowedOrigins": [ - "*" - ] - }, - "serverless": { - "connectionTimeoutInSeconds": 30 - }, - "upstream": {}, - "networkACLs": { - "defaultAction": "Deny", - "publicNetwork": { - "allow": [ - "ServerConnection", - "ClientConnection", - "RESTAPI", - "Trace" - ] - }, - "privateEndpoints": [] - }, - "publicNetworkAccess": "Enabled", - "disableLocalAuth": true, - "disableAadAuth": false - } - } - ] + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "resource_name": { + "defaultValue": "test-for-disable-aad", + "type": "String" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.SignalRService/SignalR", + "apiVersion": "2022-08-01-preview", + "name": "[parameters('resource_name')]", + "location": "eastus", + "sku": { + "name": "Premium_P1", + "tier": "Premium", + "size": "P1", + "capacity": 1 + }, + "kind": "SignalR", + "properties": { + "tls": { + "clientCertEnabled": false + }, + "features": [ + { + "flag": "ServiceMode", + "value": "Default", + "properties": {} + }, 
+ { + "flag": "EnableConnectivityLogs", + "value": "True", + "properties": {} + } + ], + "cors": { + "allowedOrigins": ["*"] + }, + "serverless": { + "connectionTimeoutInSeconds": 30 + }, + "upstream": {}, + "networkACLs": { + "defaultAction": "Deny", + "publicNetwork": { + "allow": [ + "ServerConnection", + "ClientConnection", + "RESTAPI", + "Trace" + ] + }, + "privateEndpoints": [] + }, + "publicNetworkAccess": "Enabled", + "disableLocalAuth": true, + "disableAadAuth": false + } + } + ] } ``` You can assign the [Azure SignalR Service should have local authentication metho See the following docs to learn about authentication methods. -- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)+- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md) - [Authenticate with Azure applications](./signalr-howto-authorize-application.md) - [Authenticate with managed identities](./signalr-howto-authorize-managed-identity.md) |
azure-signalr | Howto Enable Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-enable-geo-replication.md | Companies seeking local presence or requiring a robust failover system often cho ## Example use case Contoso is a social media company with its customer base spread across the US and Canada. To serve those customers and let them communicate with each other, Contoso runs its services in Central US. Azure SignalR Service is used to handle user connections and facilitate communication among users. Contoso's end users are mostly phone users. Due to the long geographical distances, end-users in Canada might experience high latency and poor network quality. -![Screenshot of using one Azure SignalR instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/signalr-single.png "Single SignalR Example") +![Diagram of using one Azure SignalR instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/signalr-single.png "Single SignalR Example") Before the advent of the geo-replication feature, Contoso could set up another Azure SignalR Service in Canada Central to serve its Canadian users. By setting up a geographically closer Azure SignalR Service, end users now have better network quality and lower latency. However, managing multiple Azure SignalR Services brings some challenges: 2. The development team would need to manage two separate Azure SignalR Services, each with distinct domain and connection string. 3. If a regional outage happens, the traffic needs to be switched to another region. -![Screenshot of using two Azure SignalR instances to handle traffic from two countries. ](./media/howto-enable-geo-replication/signalr-multiple.png "Mutiple SignalR Example") +![Diagram of using two Azure SignalR instances to handle traffic from two countries. 
](./media/howto-enable-geo-replication/signalr-multiple.png "Multiple SignalR Example") ## Harnessing geo-replication With the new geo-replication feature, Contoso can now establish a replica in Canada Central, effectively overcoming the above-mentioned hurdles. -![Screenshot of using one Azure SignalR instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/signalr-replica.png "Replica Example") +![Diagram of using one Azure SignalR instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/signalr-replica.png "Replica Example") ## Create a SignalR replica To create a replica, navigate to the SignalR **Replicas** blade on the Azure por > [!NOTE] > * Geo-replication is a feature available in premium tier.-> * A replica is considered a separate resource when it comes to billing. See [Pricing](#pricing) for more details. +> * A replica is considered a separate resource when it comes to billing. See [Pricing and resource unit](#pricing-and-resource-unit) for more details. After creation, you would be able to view/edit your replica on the portal by clicking the replica name. ![Screenshot of overview blade of Azure SignalR replica resource. ](./media/howto-enable-geo-replication/signalr-replica-overview.png "Replica Overview") -## Pricing -Replica is a feature of [Premium tier](https://azure.microsoft.com/pricing/details/signalr-service/) of Azure SignalR Service. Each replica is billed **separately** according to its own units and outbound traffic. Free message quota is also calculated separately. +## Pricing and resource unit +Each replica has its **own** `unit` and `autoscale settings`. ++Replica is a feature of [Premium tier](https://azure.microsoft.com/pricing/details/signalr-service/) of Azure SignalR Service. Each replica is billed **separately** according to its own unit and outbound traffic. Free message quota is also calculated separately. 
In the preceding example, Contoso added one replica in Canada Central. Contoso would pay for the replica in Canada Central according to its units and messages at the Premium price. To delete a replica in the Azure portal: 1. Navigate to your Azure SignalR Service, and select the **Replicas** blade. Click the replica you want to delete. 2. Click the Delete button on the replica overview blade. -## Understanding how the SignalR replica works +## Understand how the SignalR replica works The diagram below provides a brief illustration of the SignalR Replicas' functionality: -![Screenshot of the arch of Azure SignalR replica. ](./media/howto-enable-geo-replication/signalr-replica-arch.png "Replica Arch") +![Diagram of the architecture of an Azure SignalR replica. ](./media/howto-enable-geo-replication/signalr-replica-arch.png "Replica Arch") -1. The client resolves the Fully Qualified Domain Name (FQDN) `contoso.service.signalr.net` of the SignalR service. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional SignalR instance. -2. With this CNAME, the client establishes a connection to the regional instance. +1. The client negotiates with the app server and receives a redirection to the Azure SignalR service. It then resolves the SignalR service's Fully Qualified Domain Name (FQDN), `contoso.service.signalr.net`. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional SignalR instance. +2. With this CNAME, the client establishes a connection to the regional instance (Replica). +3. The two replicas will synchronize data with each other. Messages sent to one replica would be transferred to other replicas if necessary.-4. 
In case a replica fails the health check conducted by the Traffic Manager (TM), the TM will exclude the failed instance's endpoint from its domain resolution process. For details, refer to [Resiliency and disaster recovery](#resiliency-and-disaster-recovery) below. > [!NOTE] > * In the data plane, a primary Azure SignalR resource functions identically to its replicas. +## Resiliency and disaster recovery ++Azure SignalR Service utilizes a traffic manager for health checks and DNS resolution towards its replicas. Under normal circumstances, when all replicas are functioning properly, clients will be directed to the closest replica. For instance: ++- Clients close to `eastus` will be directed to the replica located in `eastus`. +- Similarly, clients close to `westus` will be directed to the replica in `westus`. ++In the event of a **regional outage** in `eastus` (illustrated below), the traffic manager will detect the health check failure for that region. Then, this faulty replica's DNS will be excluded from the traffic manager's DNS resolution results. After a DNS Time-to-Live (TTL) duration, which is set to 90 seconds, clients in `eastus` will be redirected to connect with the replica in `westus`. ++![Diagram of Azure SignalR replica failover. ](./media/howto-enable-geo-replication/signalr-replica-failover.png "Replica Failover") ++Once the issue in `eastus` is resolved and the region is back online, the health check will succeed. Clients in `eastus` will then, once again, be directed to the replica in their region. This transition is smooth because connected clients will not be impacted until those existing connections are closed. ++![Diagram of Azure SignalR replica failover recovery. ](./media/howto-enable-geo-replication/signalr-replica-failover-recovery.png "Replica Failover Recovery") +++This failover and recovery process is **automatic** and requires no manual intervention. 
++For **server connections**, the failover and recovery work the same way as they do for client connections. +> [!NOTE] +> * This failover mechanism is for the Azure SignalR service. Regional outages of the app server are beyond the scope of this document. + ## Impact on performance after adding replicas -Post replica addition, your clients will be distributed across different locations based on their geographical locations. SignalR must synchronize data across these replicas. The cost for synchronization is negligible if your use case primarily involves sending to large groups (size >100) or broadcasting. However, the cost becomes more apparent when sending to smaller groups (size < 10) or a single user. +After replicas are enabled, clients will naturally distribute based on their geographical locations. While SignalR takes on the responsibility to synchronize data across these replicas, the associated overhead on [Server Load](signalr-concept-performance.md#quick-evaluation-using-metrics) is minimal for most common use cases. ++Specifically, if your application typically broadcasts to larger groups (size >10) or a single connection, the performance impact of synchronization is barely noticeable. If you're messaging small groups (size < 10) or individual users, you might notice a bit more synchronization overhead. To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](signalr-howto-scale-autoscale.md) to manage this. |
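The DNS-based failover that the geo-replication article describes (Traffic Manager resolves clients to the nearest replica, and drops a replica from DNS results when its health check fails) can be sketched conceptually. This is a minimal, hypothetical JavaScript illustration of the resolution logic, not Azure's implementation; the replica names and latency figures are made up for the example:

```javascript
// Conceptual sketch of Traffic Manager-style resolution: return the CNAME
// of the nearest replica that currently passes its health check.
// All data below is hypothetical illustration data.
function resolveReplica(clientRegion, replicas) {
  const healthy = replicas.filter((r) => r.healthy);
  if (healthy.length === 0) return null; // no replica available at all
  // Pick the replica with the lowest latency from the client's region.
  healthy.sort((a, b) => a.latencyMs[clientRegion] - b.latencyMs[clientRegion]);
  return healthy[0].cname;
}

const replicas = [
  { cname: "contoso-eastus.service.signalr.net", healthy: true, latencyMs: { eastus: 5, westus: 70 } },
  { cname: "contoso-westus.service.signalr.net", healthy: true, latencyMs: { eastus: 70, westus: 5 } },
];

// Normal operation: eastus clients resolve to the eastus replica.
console.log(resolveReplica("eastus", replicas));

// Regional outage: eastus fails its health check, so after the DNS TTL
// (90 seconds per the article) eastus clients resolve to westus instead.
replicas[0].healthy = false;
console.log(resolveReplica("eastus", replicas));
```

The recovery path in the article is the same logic in reverse: once the health check passes again, the replica re-enters the candidate list and new resolutions return it.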
azure-signalr | Howto Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-use-managed-identity.md | -In Azure SignalR Service, you can use a managed identity from Azure Active Directory to: +In Azure SignalR Service, you can use a managed identity from Microsoft Entra ID to: - Obtain access tokens - Access secrets in Azure Key Vault This article shows you how to create a managed identity for Azure SignalR Servic To use a managed identity, you must have the following items: - An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- An Azure SignalR resource. +- An Azure SignalR resource. - Upstream resources that you want to access. For example, an Azure Key Vault resource. - An Azure Function app. - ## Add a managed identity to Azure SignalR Service -You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal. +You can add a managed identity to Azure SignalR Service in the Azure portal or the Azure CLI. This article shows you how to add a managed identity to Azure SignalR Service in the Azure portal. ### Add a system-assigned identity To add a system-managed identity to your SignalR instance: 1. Browse to your SignalR instance in the Azure portal. 1. Select **Identity**.-1. On the **System assigned** tab, switch **Status** to **On**. +1. On the **System assigned** tab, switch **Status** to **On**. 1. Select **Save**. - :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal"::: + :::image type="content" source="media/signalr-howto-use-managed-identity/system-identity-portal.png" alt-text="Screenshot showing Add a system-assigned identity in the portal."::: 1. 
Select **Yes** to confirm the change. To add a user-assigned identity to your SignalR instance, you need to create the 1. On the **User assigned** tab, select **Add**. 1. Select the identity from the **User assigned managed identities** drop down menu. 1. Select **Add**.- :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal"::: + :::image type="content" source="media/signalr-howto-use-managed-identity/user-identity-portal.png" alt-text="Screenshot showing Add a user-assigned identity in the portal."::: ## Use a managed identity in serverless scenarios -Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request. +Azure SignalR Service is a fully managed service. It uses a managed identity to obtain an access token. In serverless scenarios, the service adds the access token into the `Authorization` header in an upstream request. ### Enable managed identity authentication in upstream settings Once you've added a [system-assigned identity](#add-a-system-assigned-identity) 1. Browse to your SignalR instance. 1. Select **Settings** from the menu. 1. Select the **Serverless** service mode.-1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings) +1. Enter the upstream endpoint URL pattern in the **Add an upstream URL pattern** text box. See [URL template settings](concept-upstream.md#url-template-settings) 1. 
Select **Add one Upstream Setting**, select any asterisk, and go to **Upstream Settings**.- :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings."::: + :::image type="content" source="media/signalr-howto-use-managed-identity/pre-msi-settings.png" alt-text="Screenshot of Azure SignalR service Settings."::: -1. Configure your upstream endpoint settings. +1. Configure your upstream endpoint settings. - :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings."::: + :::image type="content" source="media/signalr-howto-use-managed-identity/msi-settings.png" alt-text="Screenshot of Azure SignalR service Upstream settings."::: 1. In the managed identity authentication settings, for **Resource**, you can specify the target resource. The resource will become an `aud` claim in the obtained access token, which can be used as a part of validation in your upstream endpoints. The resource can be one of the following formats:- - Empty - - Application (client) ID of the service principal - - Application ID URI of the service principal - - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).) - > [!NOTE] - > If you manually validate an access token your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests. 
+ - Empty + - Application (client) ID of the service principal + - Application ID URI of the service principal + - Resource ID of an Azure service (For a list of Azure services that support managed identities, see [Azure services that support managed identities](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication).) ++ > [!NOTE] + > If you manually validate an access token in your service, you can choose any one of the resource formats. Make sure that the **Resource** value in **Auth** settings and the validation are consistent. When you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource format that the service provider requests. ### Validate access tokens The token in the `Authorization` header is a [Microsoft identity platform access To validate access tokens, your app should also validate the audience and the signing tokens. These tokens need to be validated against the values in the OpenID discovery document. For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration). -The Azure Active Directory (Azure AD) middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice. +The Microsoft Entra ID middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice. -Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. 
For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). +Libraries and code samples that show how to handle token validation are available. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). #### Authentication in Function App You can easily set access validation for a Function App without code changes usi 1. Select **Authentication** from the menu. 1. Select **Add identity provider**. 1. In the **Basics** tab, select **Microsoft** from the **Identity provider** dropdown.-1. Select **Log in with Azure Active Directory** in **Action to take when request is not authenticated**. -1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. For more information on enabling Azure AD provider, see [Configure your App Service or Azure Functions app to use Azure AD login](../app-service/configure-authentication-provider-aad.md) - :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Aad"::: +1. Select **Log in with Microsoft Entra ID** in **Action to take when request is not authenticated**. +1. Select **Microsoft** in the identity provider dropdown. The option to create a new registration is selected by default. You can change the name of the registration. 
For more information on enabling the Microsoft Entra ID provider, see [Configure your App Service or Azure Functions app to login with Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) + :::image type="content" source="media/signalr-howto-use-managed-identity/function-aad.png" alt-text="Function Microsoft Entra ID"::: 1. Navigate to SignalR Service and follow the [steps](howto-use-managed-identity.md#add-a-system-assigned-identity) to add a system-assigned identity or user-assigned identity. 1. Go to **Upstream settings** in SignalR Service and choose **Use Managed Identity** and **Select from existing Applications**. Select the application you created previously. After you configure these settings, the Function App will reject requests without an access token in the header. > [!IMPORTANT]-> To pass the authentication, the *Issuer Url* must match the *iss* claim in token. Currently, we only support v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)). +> To pass the authentication, the _Issuer Url_ must match the _iss_ claim in the token. Currently, we only support the v1 endpoint (see [v1.0 and v2.0](../active-directory/develop/access-tokens.md)). -To verify the *Issuer Url* format in your Function app: +To verify the _Issuer Url_ format in your Function app: 1. Go to the Function app in the portal. 1. Select **Authentication**. 1. Select **Identity provider**. 1. Select **Edit**. 1. Select **Issuer Url**.-1. Verify that the *Issuer Url* has the format `https://sts.windows.net/<tenant-id>/`. +1. Verify that the _Issuer Url_ has the format `https://sts.windows.net/<tenant-id>/`. ## Use a managed identity for Key Vault reference |
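The managed-identity article above describes two claim checks on the upstream token: the `aud` claim must match the **Resource** configured in the Auth settings, and the `iss` claim must use the v1 format `https://sts.windows.net/<tenant-id>/`. The JavaScript sketch below illustrates just those two checks on a decoded payload; it is an assumption-laden illustration (the `checkClaims` helper, the audience value, and the unsigned sample token are all made up), and a real service must also verify the token signature against the keys in the OpenID discovery document using a maintained JWT library:

```javascript
// Illustrative claim checks only: audience must match the configured
// Resource, and the issuer must be the v1 https://sts.windows.net/<tenant-id>/
// format. Signature verification is deliberately omitted here.
function checkClaims(token, expectedAudience, tenantId) {
  const payloadB64 = token.split(".")[1];
  const payload = JSON.parse(
    Buffer.from(payloadB64, "base64url").toString("utf8")
  );
  const issuerOk = payload.iss === `https://sts.windows.net/${tenantId}/`;
  const audienceOk = payload.aud === expectedAudience;
  return issuerOk && audienceOk;
}

// Hypothetical unsigned token built only to exercise the checks.
const header = Buffer.from(JSON.stringify({ alg: "none" })).toString("base64url");
const payload = Buffer.from(
  JSON.stringify({
    iss: "https://sts.windows.net/00000000-0000-0000-0000-000000000000/",
    aud: "api://my-upstream-app",
  })
).toString("base64url");
const token = `${header}.${payload}.`;

console.log(
  checkClaims(token, "api://my-upstream-app", "00000000-0000-0000-0000-000000000000")
); // prints true
```

If either the audience or the tenant in the issuer differs from what the upstream expects, the check fails, which mirrors the article's requirement that the **Resource** value and the validation stay consistent.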
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
azure-signalr | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
azure-signalr | Signalr Concept Authenticate Oauth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md | -This tutorial builds on the chat room application introduced in the quickstart. If you have not completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first. +This tutorial builds on the chat room application introduced in the quickstart. If you haven't completed [Create a chat room with SignalR Service](signalr-quickstart-dotnet-core.md), complete that exercise first. -In this tutorial, you'll learn how to implement your own authentication and integrate it with the Microsoft Azure SignalR Service. +In this tutorial, you can discover the process of creating your own authentication method and integrate it with the Microsoft Azure SignalR Service. -The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach is not very useful in real-world applications where a rogue user would impersonate others to access sensitive data. +The authentication initially used in the quickstart's chat room application is too simple for real-world scenarios. The application allows each client to claim who they are, and the server simply accepts that. This approach lacks effectiveness in real-world, as it fails to prevent malicious users who might assume false identities from gaining access to sensitive data. -[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you will use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. 
After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate. +[GitHub](https://github.com/) provides authentication APIs based on a popular industry-standard protocol called [OAuth](https://oauth.net/). These APIs allow third-party applications to authenticate GitHub accounts. In this tutorial, you can use these APIs to implement authentication through a GitHub account before allowing client logins to the chat room application. After authenticating a GitHub account, the account information will be added as a cookie to be used by the web client to authenticate. For more information on the OAuth authentication APIs provided through GitHub, see [Basics of Authentication](https://developer.github.com/v3/guides/basics-of-authentication/). The code for this tutorial is available for download in the [AzureSignalR-sample In this tutorial, you learn how to: > [!div class="checklist"]-> * Register a new OAuth app with your GitHub account -> * Add an authentication controller to support GitHub authentication -> * Deploy your ASP.NET Core web app to Azure +> +> - Register a new OAuth app with your GitHub account +> - Add an authentication controller to support GitHub authentication +> - Deploy your ASP.NET Core web app to Azure [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] To complete this tutorial, you must have the following prerequisites: 1. Open a web browser and navigate to `https://github.com` and sign into your account. -2. For your account, navigate to **Settings** > **Developer settings** and click **Register a new application**, or **New OAuth App** under *OAuth Apps*. +2. For your account, navigate to **Settings** > **Developer settings** and select **Register a new application**, or **New OAuth App** under _OAuth Apps_. -3. Use the following settings for the new OAuth App, then click **Register application**: +3. 
Use the following settings for the new OAuth App, then select **Register application**: - | Setting Name | Suggested Value | Description | - | | | -- | - | Application name | *Azure SignalR Chat* | The GitHub user should be able to recognize and trust the app they are authenticating with. | - | Homepage URL | `http://localhost:5000` | | - | Application description | *A chat room sample using the Azure SignalR Service with GitHub authentication* | A useful description of the application that will help your application users understand the context of the authentication being used. | - | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the *AspNet.Security.OAuth.GitHub* package, */signin-github*. | + | Setting Name | Suggested Value | Description | + | -- | - | | + | Application name | _Azure SignalR Chat_ | The GitHub user should be able to recognize and trust the app they're authenticating with. | + | Homepage URL | `http://localhost:5000` | | + | Application description | _A chat room sample using the Azure SignalR Service with GitHub authentication_ | A useful description of the application that will help your application users understand the context of the authentication being used. | + | Authorization callback URL | `http://localhost:5000/signin-github` | This setting is the most important setting for your OAuth application. It's the callback URL that GitHub returns the user to after successful authentication. In this tutorial, you must use the default callback URL for the _AspNet.Security.OAuth.GitHub_ package, _/signin-github_. | -4. Once the new OAuth app registration is complete, add the *Client ID* and *Client Secret* to Secret Manager using the following commands. 
Replace *Your_GitHub_Client_Id* and *Your_GitHub_Client_Secret* with the values for your OAuth app. +4. Once the new OAuth app registration is complete, add the _Client ID_ and _Client Secret_ to Secret Manager using the following commands. Replace _Your_GitHub_Client_Id_ and _Your_GitHub_Client_Secret_ with the values for your OAuth app. - ```dotnetcli - dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id - dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret - ``` + ```dotnetcli + dotnet user-secrets set GitHubClientId Your_GitHub_Client_Id + dotnet user-secrets set GitHubClientSecret Your_GitHub_Client_Secret + ``` ## Implement the OAuth flow ### Update the Startup class to support GitHub authentication -1. Add a reference to the latest *Microsoft.AspNetCore.Authentication.Cookies* and *AspNet.Security.OAuth.GitHub* packages and restore all packages. -- ```dotnetcli - dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656 - dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final - dotnet restore - ``` --1. Open *Startup.cs*, and add `using` statements for the following namespaces: -- ```csharp - using System.Net.Http; - using System.Net.Http.Headers; - using System.Security.Claims; - using Microsoft.AspNetCore.Authentication.Cookies; - using Microsoft.AspNetCore.Authentication.OAuth; - using Newtonsoft.Json.Linq; - ``` --2. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets. -- ```csharp - private const string GitHubClientId = "GitHubClientId"; - private const string GitHubClientSecret = "GitHubClientSecret"; - ``` --3. 
Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app: -- ```csharp - services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) - .AddCookie() - .AddGitHub(options => - { - options.ClientId = Configuration[GitHubClientId]; - options.ClientSecret = Configuration[GitHubClientSecret]; - options.Scope.Add("user:email"); - options.Events = new OAuthEvents - { - OnCreatingTicket = GetUserCompanyInfoAsync - }; - }); - ``` --4. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class. -- ```csharp - private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context) - { - var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint); - request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); - request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken); -- var response = await context.Backchannel.SendAsync(request, - HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted); -- var user = JObject.Parse(await response.Content.ReadAsStringAsync()); - if (user.ContainsKey("company")) - { - var company = user["company"].ToString(); - var companyIdentity = new ClaimsIdentity(new[] - { - new Claim("Company", company) - }); - context.Principal.AddIdentity(companyIdentity); - } - } - ``` --5. Update the `Configure` method of the Startup class with the following line of code, and save the file. -- ```csharp - app.UseAuthentication(); - ``` +1. Add a reference to the latest _Microsoft.AspNetCore.Authentication.Cookies_ and _AspNet.Security.OAuth.GitHub_ packages and restore all packages. ++ ```dotnetcli + dotnet add package Microsoft.AspNetCore.Authentication.Cookies -v 2.1.0-rc1-30656 + dotnet add package AspNet.Security.OAuth.GitHub -v 2.0.0-rc2-final + dotnet restore + ``` ++1. 
Open _Startup.cs_, and add `using` statements for the following namespaces: ++ ```csharp + using System.Net.Http; + using System.Net.Http.Headers; + using System.Security.Claims; + using Microsoft.AspNetCore.Authentication.Cookies; + using Microsoft.AspNetCore.Authentication.OAuth; + using Newtonsoft.Json.Linq; + ``` ++1. At the top of the `Startup` class, add constants for the Secret Manager keys that hold the GitHub OAuth app secrets. ++ ```csharp + private const string GitHubClientId = "GitHubClientId"; + private const string GitHubClientSecret = "GitHubClientSecret"; + ``` ++1. Add the following code to the `ConfigureServices` method to support authentication with the GitHub OAuth app: ++ ```csharp + services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme) + .AddCookie() + .AddGitHub(options => + { + options.ClientId = Configuration[GitHubClientId]; + options.ClientSecret = Configuration[GitHubClientSecret]; + options.Scope.Add("user:email"); + options.Events = new OAuthEvents + { + OnCreatingTicket = GetUserCompanyInfoAsync + }; + }); + ``` ++1. Add the `GetUserCompanyInfoAsync` helper method to the `Startup` class. 
++ ```csharp + private static async Task GetUserCompanyInfoAsync(OAuthCreatingTicketContext context) + { + var request = new HttpRequestMessage(HttpMethod.Get, context.Options.UserInformationEndpoint); + request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json")); + request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", context.AccessToken); ++ var response = await context.Backchannel.SendAsync(request, + HttpCompletionOption.ResponseHeadersRead, context.HttpContext.RequestAborted); ++ var user = JObject.Parse(await response.Content.ReadAsStringAsync()); + if (user.ContainsKey("company")) + { + var company = user["company"].ToString(); + var companyIdentity = new ClaimsIdentity(new[] + { + new Claim("Company", company) + }); + context.Principal.AddIdentity(companyIdentity); + } + } + ``` ++1. Update the `Configure` method of the Startup class with the following line of code, and save the file. ++ ```csharp + app.UseAuthentication(); + ``` ### Add an authentication controller In this section, you will implement a `Login` API that authenticates clients using the GitHub OAuth app. Once authenticated, the API will add a cookie to the web client response before redirecting the client back to the chat app. That cookie will then be used to identify the client. -1. Add a new controller code file to the *chattest\Controllers* directory. Name the file *AuthController.cs*. --2. Add the following code for the authentication controller. 
Make sure to update the namespace, if your project directory was not *chattest*: -- ```csharp - using AspNet.Security.OAuth.GitHub; - using Microsoft.AspNetCore.Authentication; - using Microsoft.AspNetCore.Mvc; -- namespace chattest.Controllers - { - [Route("/")] - public class AuthController : Controller - { - [HttpGet("login")] - public IActionResult Login() - { - if (!User.Identity.IsAuthenticated) - { - return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme); - } -- HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name); - HttpContext.SignInAsync(User); - return Redirect("/"); - } - } - } - ``` +1. Add a new controller code file to the _chattest\Controllers_ directory. Name the file _AuthController.cs_. ++2. Add the following code for the authentication controller. Make sure to update the namespace, if your project directory wasn't _chattest_: ++ ```csharp + using AspNet.Security.OAuth.GitHub; + using Microsoft.AspNetCore.Authentication; + using Microsoft.AspNetCore.Mvc; ++ namespace chattest.Controllers + { + [Route("/")] + public class AuthController : Controller + { + [HttpGet("login")] + public IActionResult Login() + { + if (!User.Identity.IsAuthenticated) + { + return Challenge(GitHubAuthenticationDefaults.AuthenticationScheme); + } ++ HttpContext.Response.Cookies.Append("githubchat_username", User.Identity.Name); + HttpContext.SignInAsync(User); + return Redirect("/"); + } + } + } + ``` 3. Save your changes. ### Update the Hub class -By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token is not associated with an authenticated identity. This access is actually anonymous access. +By default when a web client attempts to connect to SignalR Service, the connection is granted based on an access token that is provided internally. This access token isn't associated with an authenticated identity. 
+Basically, it's anonymous access. In this section, you will turn on real authentication by adding the `Authorize` attribute to the hub class, and updating the hub methods to read the username from the authenticated user's claim. -1. Open *Hub\Chat.cs* and add references to these namespaces: +1. Open _Hub\Chat.cs_ and add references to these namespaces: - ```csharp - using System.Threading.Tasks; - using Microsoft.AspNetCore.Authorization; - ``` + ```csharp + using System.Threading.Tasks; + using Microsoft.AspNetCore.Authorization; + ``` 2. Update the hub code as shown below. This code adds the `Authorize` attribute to the `Chat` class, and uses the user's authenticated identity in the hub methods. Also, the `OnConnectedAsync` method is added, which will log a system message to the chat room each time a new client connects. - ```csharp - [Authorize] - public class Chat : Hub - { - public override Task OnConnectedAsync() - { - return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED"); - } -- // Uncomment this line to only allow user in Microsoft to send message - //[Authorize(Policy = "Microsoft_Only")] - public void BroadcastMessage(string message) - { - Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message); - } -- public void Echo(string message) - { - var echoMessage = $"{message} (echo from server)"; - Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage); - } - } - ``` + ```csharp + [Authorize] + public class Chat : Hub + { + public override Task OnConnectedAsync() + { + return Clients.All.SendAsync("broadcastMessage", "_SYSTEM_", $"{Context.User.Identity.Name} JOINED"); + } ++ // Uncomment this line to only allow user in Microsoft to send message + //[Authorize(Policy = "Microsoft_Only")] + public void BroadcastMessage(string message) + { + Clients.All.SendAsync("broadcastMessage", Context.User.Identity.Name, message); + } ++ public void Echo(string 
message) + { + var echoMessage = $"{message} (echo from server)"; + Clients.Client(Context.ConnectionId).SendAsync("echo", Context.User.Identity.Name, echoMessage); + } + } + ``` 3. Save your changes. ### Update the web client code -1. Open *wwwroot\index.html* and replace the code that prompts for the username with code to use the cookie returned by the authentication controller. -- Remove the following code from *index.html*: -- ```javascript - // Get the user name and store it to prepend to messages. - var username = generateRandomName(); - var promptMessage = 'Enter your name:'; - do { - username = prompt(promptMessage, username); - if (!username || username.startsWith('_') || username.indexOf('<') > -1 || username.indexOf('>') > -1) { - username = ''; - promptMessage = 'Invalid input. Enter your name:'; - } - } while(!username) - ``` -- Add the following code in place of the code above to use the cookie: -- ```javascript - // Get the user name cookie. - function getCookie(key) { - var cookies = document.cookie.split(';').map(c => c.trim()); - for (var i = 0; i < cookies.length; i++) { - if (cookies[i].startsWith(key + '=')) return unescape(cookies[i].slice(key.length + 1)); - } - return ''; - } - var username = getCookie('githubchat_username'); - ``` +1. Open _wwwroot\index.html_ and replace the code that prompts for the username with code to use the cookie returned by the authentication controller. ++ Remove the following code from _index.html_: ++ ```javascript + // Get the user name and store it to prepend to messages. + var username = generateRandomName(); + var promptMessage = "Enter your name:"; + do { + username = prompt(promptMessage, username); + if ( + !username || + username.startsWith("_") || + username.indexOf("<") > -1 || + username.indexOf(">") > -1 + ) { + username = ""; + promptMessage = "Invalid input. 
Enter your name:"; + } + } while (!username); + ``` ++ Add the following code in place of the code above to use the cookie: ++ ```javascript + // Get the user name cookie. + function getCookie(key) { + var cookies = document.cookie.split(";").map((c) => c.trim()); + for (var i = 0; i < cookies.length; i++) { + if (cookies[i].startsWith(key + "=")) + return unescape(cookies[i].slice(key.length + 1)); + } + return ""; + } + var username = getCookie("githubchat_username"); + ``` 2. Just beneath the line of code you added to use the cookie, add the following definition for the `appendMessage` function: - ```javascript - function appendMessage(encodedName, encodedMsg) { - var messageEntry = createMessageEntry(encodedName, encodedMsg); - var messageBox = document.getElementById('messages'); - messageBox.appendChild(messageEntry); - messageBox.scrollTop = messageBox.scrollHeight; - } - ``` + ```javascript + function appendMessage(encodedName, encodedMsg) { + var messageEntry = createMessageEntry(encodedName, encodedMsg); + var messageBox = document.getElementById("messages"); + messageBox.appendChild(messageEntry); + messageBox.scrollTop = messageBox.scrollHeight; + } + ``` 3. Update the `bindConnectionMessage` and `onConnected` functions with the following code to use `appendMessage`. - ```javascript - function bindConnectionMessage(connection) { - var messageCallback = function(name, message) { - if (!message) return; - // Html encode display name and message. - var encodedName = name; - var encodedMsg = message.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;"); - appendMessage(encodedName, encodedMsg); - }; - // Create a function that the hub can call to broadcast messages. 
- connection.on('broadcastMessage', messageCallback); - connection.on('echo', messageCallback); - connection.onclose(onConnectionError); - } -- function onConnected(connection) { - console.log('connection started'); - document.getElementById('sendmessage').addEventListener('click', function (event) { - // Call the broadcastMessage method on the hub. - if (messageInput.value) { - connection - .invoke('broadcastMessage', messageInput.value) - .catch(e => appendMessage('_BROADCAST_', e.message)); - } -- // Clear text box and reset focus for next comment. - messageInput.value = ''; - messageInput.focus(); - event.preventDefault(); - }); - document.getElementById('message').addEventListener('keypress', function (event) { - if (event.keyCode === 13) { - event.preventDefault(); - document.getElementById('sendmessage').click(); - return false; - } - }); - document.getElementById('echo').addEventListener('click', function (event) { - // Call the echo method on the hub. - connection.send('echo', messageInput.value); -- // Clear text box and reset focus for next comment. - messageInput.value = ''; - messageInput.focus(); - event.preventDefault(); - }); - } - ``` --4. At the bottom of *index.html*, update the error handler for `connection.start()` as shown below to prompt the user to log in. -- ```javascript - connection.start() - .then(function () { - onConnected(connection); - }) - .catch(function (error) { - if (error) { - if (error.message) { - console.error(error.message); - } - if (error.statusCode && error.statusCode === 401) { - appendMessage('_BROADCAST_', 'You\'re not logged in. Click <a href="/login">here</a> to login with GitHub.'); - } - } - }); - ``` + ```javascript + function bindConnectionMessage(connection) { + var messageCallback = function (name, message) { + if (!message) return; + // Html encode display name and message. 
+ var encodedName = name; + var encodedMsg = message + .replace(/&/g, "&amp;") + .replace(/</g, "&lt;") + .replace(/>/g, "&gt;"); + appendMessage(encodedName, encodedMsg); + }; + // Create a function that the hub can call to broadcast messages. + connection.on("broadcastMessage", messageCallback); + connection.on("echo", messageCallback); + connection.onclose(onConnectionError); + } ++ function onConnected(connection) { + console.log("connection started"); + document + .getElementById("sendmessage") + .addEventListener("click", function (event) { + // Call the broadcastMessage method on the hub. + if (messageInput.value) { + connection + .invoke("broadcastMessage", messageInput.value) + .catch((e) => appendMessage("_BROADCAST_", e.message)); + } ++ // Clear text box and reset focus for next comment. + messageInput.value = ""; + messageInput.focus(); + event.preventDefault(); + }); + document + .getElementById("message") + .addEventListener("keypress", function (event) { + if (event.keyCode === 13) { + event.preventDefault(); + document.getElementById("sendmessage").click(); + return false; + } + }); + document + .getElementById("echo") + .addEventListener("click", function (event) { + // Call the echo method on the hub. + connection.send("echo", messageInput.value); ++ // Clear text box and reset focus for next comment. + messageInput.value = ""; + messageInput.focus(); + event.preventDefault(); + }); + } + ``` ++4. At the bottom of _index.html_, update the error handler for `connection.start()` as shown below to prompt the user to sign in. ++ ```javascript + connection + .start() + .then(function () { + onConnected(connection); + }) + .catch(function (error) { + if (error) { + if (error.message) { + console.error(error.message); + } + if (error.statusCode && error.statusCode === 401) { + appendMessage( + "_BROADCAST_", + 'You\'re not logged in. Click <a href="/login">here</a> to login with GitHub.' + ); + } + } + }); + ``` 5. Save your changes. 
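The HTML-encoding step inside `messageCallback` matters because incoming chat messages are inserted into the page as markup; left unescaped, a message containing `<script>` would be injected into every connected client's DOM. A minimal standalone sketch of that escaping logic (the `escapeHtml` helper name is illustrative; the tutorial inlines the replace chain):

```javascript
// Sketch of the HTML-escaping used by messageCallback in the tutorial.
// Ampersands must be replaced first so the entities produced by the
// later replacements are not themselves double-escaped.
function escapeHtml(text) {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;");
}

// A hostile message becomes inert text once escaped.
console.log(escapeHtml('<script>alert("hi") & more</script>'));
```

The escaped string can then be handed to `appendMessage` safely, since the browser renders `&lt;script&gt;` as literal text rather than executing it.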
In this section, you will turn on real authentication by adding the `Authorize` 2. Build the app using the .NET Core CLI, execute the following command in the command shell: - ```dotnetcli - dotnet build - ``` + ```dotnetcli + dotnet build + ``` 3. Once the build successfully completes, execute the following command to run the web app locally: - ```dotnetcli - dotnet run - ``` + ```dotnetcli + dotnet run + ``` - By default, the app will be hosted locally on port 5000: + The app is hosted locally on port 5000 by default: - ```output - E:\Testing\chattest>dotnet run - Hosting environment: Production - Content root path: E:\Testing\chattest - Now listening on: http://localhost:5000 - Application started. Press Ctrl+C to shut down. - ``` + ```output + E:\Testing\chattest>dotnet run + Hosting environment: Production + Content root path: E:\Testing\chattest + Now listening on: http://localhost:5000 + Application started. Press Ctrl+C to shut down. + ``` -4. Launch a browser window and navigate to `http://localhost:5000`. Click the **here** link at the top to log in with GitHub. +4. Launch a browser window and navigate to `http://localhost:5000`. Select the **here** link at the top to sign in with GitHub. - ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png) + ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png) - You will be prompted to authorize the chat app's access to your GitHub account. Click the **Authorize** button. + You will be prompted to authorize the chat app's access to your GitHub account. Select the **Authorize** button. - ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png) + ![Authorize OAuth App](media/signalr-concept-authenticate-oauth/signalr-authorize-oauth-app.png) - You will be redirected back to the chat application and logged in with your GitHub account name. 
The web application determined you account name by authenticating you using the new authentication you added. + You will be redirected back to the chat application and logged in with your GitHub account name. The web application determined your account name by authenticating you using the new authentication you added. - ![Account identified](media/signalr-concept-authenticate-oauth/signalr-oauth-account-identified.png) + ![Account identified](media/signalr-concept-authenticate-oauth/signalr-oauth-account-identified.png) - Now that the chat app performs authentication with GitHub and stores the authentication information as cookies, you should deploy it to Azure so other users can authenticate with their accounts and communicate from other workstations. + Now that the chat app performs authentication with GitHub and stores the authentication information as cookies, the next step is to deploy it to Azure. + This approach enables other users to authenticate with their own accounts and communicate from other workstations. ## Deploy the app to Azure Prepare your environment for the Azure CLI: In this section, you will use the Azure CLI to create a new web app in [Azure App Service](../app-service/index.yml) to host your ASP.NET application in Azure. The web app will be configured to use local Git deployment. The web app will also be configured with your SignalR connection string, GitHub OAuth app secrets, and a deployment user. -When creating the following resources, make sure to use the same resource group that your SignalR Service resource resides in. 
This approach will make clean up a lot easier later when you want to remove all the resources. The examples given assume you used the group name recommended in previous tutorials, _SignalRTestResources_. ### Create the web app and plan az webapp create --name $WebAppName --resource-group $ResourceGroupName \ --plan $WebAppPlan ``` -| Parameter | Description | -| -- | | -| ResourceGroupName | This resource group name was suggested in previous tutorials. It is a good idea to keep all tutorial resources grouped together. Use the same resource group you used in the previous tutorials. | -| WebAppPlan | Enter a new, unique, App Service Plan name. | -| WebAppName | This will be the name of the new web app and part of the URL. Use a unique name. For example, signalrtestwebapp22665120. | +| Parameter | Description | +| -- | -- | +| ResourceGroupName | This resource group name was suggested in previous tutorials. It's a good idea to keep all tutorial resources grouped together. Use the same resource group you used in the previous tutorials. | +| WebAppPlan | Enter a new, unique, App Service Plan name. | +| WebAppName | This parameter is the name of the new web app and part of the URL. Make it unique. For example, signalrtestwebapp22665120. | ### Add app settings to the web app In this section, you will add app settings for the following components: -* SignalR Service resource connection string -* GitHub OAuth app client ID -* GitHub OAuth app client secret +- SignalR Service resource connection string +- GitHub OAuth app client ID +- GitHub OAuth app client secret Copy the text for the commands below and update the parameters. 
Paste the updated script into the Azure Cloud Shell, and press **Enter** to add the app settings: ResourceGroupName=SignalRTestResources SignalRServiceResource=mySignalRresourcename WebAppName=myWebAppName -# Get the SignalR primary connection string +# Get the SignalR primary connection string primaryConnectionString=$(az signalr key list --name $SignalRServiceResource \ --resource-group $ResourceGroupName --query primaryConnectionString -o tsv) az webapp config appsettings set --name $WebAppName \ --settings "GitHubClientSecret=$GitHubClientSecret" ``` -| Parameter | Description | -| -- | | -| GitHubClientId | Assign this variable the secret Client Id for your GitHub OAuth App. | -| GitHubClientSecret | Assign this variable the secret password for your GitHub OAuth App. | -| ResourceGroupName | Update this variable to be the same resource group name you used in the previous section. | +| Parameter | Description | +| - | -- | +| GitHubClientId | Assign this variable the secret Client ID for your GitHub OAuth App. | +| GitHubClientSecret | Assign this variable the secret password for your GitHub OAuth App. | +| ResourceGroupName | Update this variable to be the same resource group name you used in the previous section. | | SignalRServiceResource | Update this variable with the name of the SignalR Service resource you created in the quickstart. For example, signalrtestsvc48778624. |-| WebAppName | Update this variable with the name of the new web app you created in the previous section. | +| WebAppName | Update this variable with the name of the new web app you created in the previous section. | ### Configure the web app for local Git deployment az webapp deployment source config-local-git --name $WebAppName \ --query [url] -o tsv ``` -| Parameter | Description | -| -- | | -| DeploymentUserName | Choose a new deployment user name. | -| DeploymentUserPassword | Choose a password for the new deployment user. 
| -| ResourceGroupName | Use the same resource group name you used in the previous section. | -| WebAppName | This will be the name of the new web app you created previously. | +| Parameter | Description | +| - | -- | +| DeploymentUserName | Choose a new deployment user name. | +| DeploymentUserPassword | Choose a password for the new deployment user. | +| ResourceGroupName | Use the same resource group name you used in the previous section. | +| WebAppName | This parameter will be the name of the new web app you created previously. | Make a note of the Git deployment URL returned from this command. You will use this URL later. To deploy your code, execute the following commands in a Git shell. 1. Navigate to the root of your project directory. If you don't have the project initialized with a Git repository, execute the following command: - ```bash - git init - ``` + ```bash + git init + ``` 2. Add a remote for the Git deployment URL you noted earlier: - ```bash - git remote add Azure <your git deployment url> - ``` + ```bash + git remote add Azure <your git deployment url> + ``` 3. Stage all files in the initialized repository and add a commit. - ```bash - git add -A - git commit -m "init commit" - ``` + ```bash + git add -A + git commit -m "init commit" + ``` 4. Deploy your code to the web app in Azure. - ```bash - git push Azure main - ``` + ```bash + git push Azure main + ``` - You will be prompted to authenticate in order to deploy the code to Azure. Enter the user name and password of the deployment user you created above. + You will be prompted to authenticate in order to deploy the code to Azure. Enter the user name and password of the deployment user you created above. ### Update the GitHub OAuth app The last thing you need to do is update the **Homepage URL** and **Authorization 1. Open [https://github.com](https://github.com) in a browser and navigate to your account's **Settings** > **Developer settings** > **Oauth Apps**. -2. 
Click on your authentication app and update the **Homepage URL** and **Authorization callback URL** as shown below: +2. Select your authentication app and update the **Homepage URL** and **Authorization callback URL** as shown below: - | Setting | Example | - | - | - | - | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` | - | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` | + | Setting | Example | + | -- | - | + | Homepage URL | `https://signalrtestwebapp22665120.azurewebsites.net` | + | Authorization callback URL | `https://signalrtestwebapp22665120.azurewebsites.net/signin-github` | 3. Navigate to your web app URL and test the application. - ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png) + ![OAuth Complete hosted in Azure](media/signalr-concept-authenticate-oauth/signalr-oauth-complete-azure.png) ## Clean up resources Otherwise, if you are finished with the quickstart sample application, you can d > [!IMPORTANT] > Deleting a resource group is irreversible; the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually from their respective blades instead of deleting the resource group. -Sign in to the [Azure portal](https://portal.azure.com) and click **Resource groups**. +Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**. -In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named *SignalRTestResources*. On your resource group in the result list, click **...** then **Delete resource group**. 
+In the **Filter by name...** textbox, type the name of your resource group. The instructions for this article used a resource group named _SignalRTestResources_. On your resource group in the result list, click **...** then **Delete resource group**. ![Delete](./media/signalr-concept-authenticate-oauth/signalr-delete-resource-group.png) -You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and click **Delete**. +You will be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**. After a few moments, the resource group and all of its contained resources are deleted. After a few moments, the resource group and all of its contained resources are d In this tutorial, you added authentication with OAuth to provide a better approach to authentication with Azure SignalR Service. To learn more about using Azure SignalR Server, continue to the Azure CLI samples for SignalR Service. -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Azure SignalR CLI Samples](./signalr-reference-cli.md) |
azure-signalr | Signalr Concept Authorize Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authorize-azure-active-directory.md | Title: Authorize access with Azure Active Directory for Azure SignalR Service -description: This article provides information on authorizing access to Azure SignalR Service resources using Azure Active Directory. + Title: Authorize access with Microsoft Entra ID for Azure SignalR Service +description: This article provides information on authorizing access to Azure SignalR Service resources using Microsoft Entra ID. Last updated 09/06/2021-# Authorize access with Azure Active Directory for Azure SignalR Service +# Authorize access with Microsoft Entra ID for Azure SignalR Service -Azure SignalR Service supports Azure Active Directory (Azure AD) to authorize requests to SignalR resources. With Azure AD, you can use role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Azure AD, which returns an OAuth 2.0 token. The token is used to authorize a request against the SignalR resource. +Azure SignalR Service supports Microsoft Entra ID for authorizing requests to SignalR resources. With Microsoft Entra ID, you can utilize role-based access control (RBAC) to grant permissions to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. The security principal is authenticated by Microsoft Entra ID, which returns an OAuth 2.0 token. The token is then used to authorize a request against the SignalR resource. -Authorizing requests against SignalR with Azure AD provides superior security and ease of use over Access Key authorization. It's recommended using Azure AD authorization with your SignalR resources when possible to assure access with minimum required privileges. 
+Authorizing requests against SignalR with Microsoft Entra ID provides superior security and ease of use compared to Access Key authorization. It's highly recommended to use Microsoft Entra ID authorization whenever possible, as it ensures access with the minimum required privileges. <a id="security-principal"></a>-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.* +_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._ > [!IMPORTANT] > Disabling local authentication can have the following effects.-> - The current set of access keys will be permanently deleted. -> - Tokens signed with access keys will no longer be available. +> +> - The current set of access keys will be permanently deleted. +> - Tokens signed with access keys will no longer be available. -## Overview of Azure AD for SignalR +## Overview of Microsoft Entra ID -When a security principal attempts to access a SignalR resource, the request must be authorized. With Azure AD, access to a resource requires 2 steps. +When a security principal attempts to access a SignalR resource, the request must be authorized. Getting access to a resource requires two steps when using Microsoft Entra ID. -1. The security principal has to be authenticated by Azure, who will return an OAuth 2.0 token. -1. The token is passed as part of a request to the SignalR resource to authorize access to the resource. +1. The security principal has to be authenticated by Microsoft Entra ID, which will then return an OAuth 2.0 token. +1. The token is passed as part of a request to the SignalR resource for authorizing the request. -### Client-side authentication while using Azure AD +### Client-side authentication with Microsoft Entra ID -When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. 
The SignalR service authenticates the client connection request with the shared key. +When using Access Key, the key is shared between your app server (or Function App) and the SignalR resource. The SignalR service authenticates the client connection request with the shared key. -Using Azure AD there is no shared key. Instead SignalR uses a **temporary access key** to sign tokens for client connections. The workflow contains four steps. +When using Microsoft Entra ID, there is no shared key. Instead, SignalR uses a **temporary access key** for signing tokens used in client connections. The workflow contains four steps. -1. The security principal requires an OAuth 2.0 token from Azure to authenticate itself. +1. The security principal requires an OAuth 2.0 token from Microsoft Entra ID to authenticate itself. 2. The security principal calls SignalR Auth API to get a **temporary access key**. 3. The security principal signs a client token with the **temporary access key** for client connections during negotiation. 4. The client uses the client token to connect to Azure SignalR resources. -The **temporary access key** expires in 90 minutes. It's recommend getting a new one and rotate the old one once an hour. +The **temporary access key** expires in 90 minutes. It's recommended to get a new one and rotate out the old one once an hour. The workflow is built in the [Azure SignalR SDK for app server](https://github.com/Azure/azure-signalr). ## Assign Azure roles for access rights -Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources. 
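The 90-minute key lifetime combined with an hourly refresh leaves roughly a 30-minute overlap, so client tokens signed just before a rotation are still backed by a valid key. The SDK handles this internally, but the schedule can be sketched as a small cache (here `fetchTemporaryKey` is a hypothetical stand-in for the SDK's call to the SignalR Auth API, not a real API name):

```javascript
// Sketch: hourly rotation of a temporary access key that the service
// keeps valid for 90 minutes. fetchTemporaryKey is a placeholder for
// the SDK's exchange with the SignalR Auth API.
const KEY_TTL_MS = 90 * 60 * 1000;      // service-side key lifetime
const ROTATE_EVERY_MS = 60 * 60 * 1000; // recommended refresh interval

function makeKeyCache(fetchTemporaryKey, now = Date.now) {
  let current = null;
  return function getKey() {
    // Refresh once the cached key is older than the rotation interval;
    // because ROTATE_EVERY_MS < KEY_TTL_MS, the old key stays usable
    // while tokens signed with it age out.
    if (!current || now() - current.fetchedAt >= ROTATE_EVERY_MS) {
      current = { value: fetchTemporaryKey(), fetchedAt: now() };
    }
    return current.value;
  };
}
```

A token signed at minute 59 with the outgoing key still has about 30 minutes of key validity left, which is why an hourly rotation against a 90-minute expiry avoids rejecting in-flight client connections.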
+Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure SignalR defines a set of Azure built-in roles that encompass common sets of permissions used to access SignalR resources. You can also define custom roles for access to SignalR resources. ### Resource scope You may have to determine the scope of access that the security principal should You can scope access to Azure SignalR resources at the following levels, beginning with the narrowest scope: -| Scope | Description | -|-|-| -|**An individual resource.**| Applies to only the target resource.| -| **A resource group.** |Applies to all of the resources in a resource group.| -| **A subscription.** | Applies to all of the resources in a subscription.| -| **A management group.** |Applies to all of the resources in the subscriptions included in a management group.| -+| Scope | Description | +| | | +| **An individual resource.** | Applies to only the target resource. | +| **A resource group.** | Applies to all of the resources in a resource group. | +| **A subscription.** | Applies to all of the resources in a subscription. | +| **A management group.** | Applies to all of the resources in the subscriptions included in a management group. 
| ## Azure built-in roles for SignalR resources -|Role|Description|Use case| -|-|-|-| -|[SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server)|Access to Websocket connection creation API and Auth APIs.|Most commonly for an App Server.| -|[SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner)|Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs.|Use for **Serverless mode** for Authorization with Azure AD since it requires both REST APIs permissions and Auth API permissions.| -|[SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner)|Full access to data-plane REST APIs.|Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs.| -|[SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader)|Read-only access to data-plane REST APIs.| Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs.| +| Role | Description | Use case | +| - | | -- | +| [SignalR App Server](../role-based-access-control/built-in-roles.md#signalr-app-server) | Access to Websocket connection creation API and Auth APIs. | Most commonly for an App Server. | +| [SignalR Service Owner](../role-based-access-control/built-in-roles.md#signalr-service-owner) | Full access to all data-plane APIs, including REST APIs, WebSocket connection creation API and Auth APIs. | Use for **Serverless mode** for Authorization with Microsoft Entra ID since it requires both REST APIs permissions and Auth API permissions. | +| [SignalR REST API Owner](../role-based-access-control/built-in-roles.md#signalr-rest-api-owner) | Full access to data-plane REST APIs. | Often used to write a tool that manages connections and groups but does **NOT** make connections or call Auth APIs. 
| +| [SignalR REST API Reader](../role-based-access-control/built-in-roles.md#signalr-rest-api-reader) | Read-only access to data-plane REST APIs. | Commonly used to write a monitoring tool that calls **ONLY** SignalR data-plane **READONLY** REST APIs. | ## Next steps -To learn how to create an Azure application and use Azure AD Auth, see: +To learn how to create an Azure application and use Microsoft Entra authorization, see: -- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)+- [Authorize request to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md) -To learn how to configure a managed identity and use Azure AD Auth, see: +To learn how to configure a managed identity and use Microsoft Entra authorization, see: -- [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md)+- [Authorize request to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md) To learn more about roles and role assignments, see: To learn how to create custom roles, see: - [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role) -To learn how to use only Azure AD authentication, see -- [Disable local authentication](./howto-disable-local-auth.md)+To learn how to use only Microsoft Entra authentication, see: ++- [Disable local authentication](./howto-disable-local-auth.md) |
azure-signalr | Signalr Concept Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md | Title: Real-time apps with Azure SignalR Service and Azure Functions + Title: Real-time apps with Azure SignalR Service and Azure Functions description: Learn about how Azure SignalR Service and Azure Functions together allow you to create real-time serverless web applications. +Azure SignalR Service combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together. -Azure SignalR Services combined with Azure Functions allows you to run real-time messaging web apps in a serverless environment. This article provides an overview of how the services work together. --Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment. --+Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that allow you to focus on building applications instead of managing infrastructure. It's common to use the two services together to provide real-time communications in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment. ## Integrate real-time communications with Azure services The Azure Functions service allows you to write code in [several languages](../azure-functions/supported-languages.md), including JavaScript, Python, C#, and Java that triggers whenever events occur in the cloud. 
Examples of these events include: -* HTTP and webhook requests -* Periodic timers -* Events from Azure services, such as: - - Event Grid - - Event Hubs - - Service Bus - - Azure Cosmos DB change feed - - Storage blobs and queues - - Logic Apps connectors such as Salesforce and SQL Server +- HTTP and webhook requests +- Periodic timers +- Events from Azure services, such as: + - Event Grid + - Event Hubs + - Service Bus + - Azure Cosmos DB change feed + - Storage blobs and queues + - Logic Apps connectors such as Salesforce and SQL Server By using Azure Functions to integrate these events with Azure SignalR Service, you have the ability to notify thousands of clients whenever events occur. Some common scenarios for real-time serverless messaging that you can implement with Azure Functions and SignalR Service include: -* Visualize IoT device telemetry on a real-time dashboard or map. -* Update data in an application when documents update in Azure Cosmos DB. -* Send in-app notifications when new orders are created in Salesforce. +- Visualize IoT device telemetry on a real-time dashboard or map. +- Update data in an application when documents update in Azure Cosmos DB. +- Send in-app notifications when new orders are created in Salesforce. ## SignalR Service bindings for Azure Functions The SignalR Service bindings for Azure Functions allow an Azure Function app to publish messages to clients connected to SignalR Service. Clients can connect to the service using a SignalR client SDK that is available in .NET, JavaScript, and Java, with more languages coming soon.+ <!-- Are there more lanaguages now? --> ### An example scenario An example of how to use the SignalR Service bindings is using Azure Functions t ### Authentication and users -SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. 
You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Azure Active Directory, Facebook, and Twitter. You can then send messages directly to these authenticated users. +SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Microsoft Entra ID, Facebook, and Twitter. You can then send messages directly to these authenticated users. ## Next steps For full details on how to use Azure Functions and SignalR Service together visit the following resources: -* [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md) -* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr) +- [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md) +- [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr) To try out the SignalR Service bindings for Azure Functions, see: -* [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md) -* [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md) -* [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr). 
+- [Azure SignalR Service Serverless Quickstart - C#](signalr-quickstart-azure-functions-csharp.md) +- [Azure SignalR Service Serverless Quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md) +- [Enable automatic updates in a web application using Azure Functions and SignalR Service](/training/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr). |
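The change above describes the SignalR Service bindings publishing messages from an Azure Function to connected clients. As a rough sketch — `buildBroadcast` is a hypothetical helper, not part of any SDK, though the `{ target, arguments }` shape is what a JavaScript function hands to the SignalR output binding:

```javascript
// Sketch of the payload an Azure Function passes to the SignalR output
// binding: "target" names the client-side handler to invoke, "arguments"
// carries the positional values delivered to it.
function buildBroadcast(target, ...args) {
  if (typeof target !== "string" || target.length === 0) {
    throw new Error("target must be a non-empty string");
  }
  return { target, arguments: args };
}

// Notify dashboard clients about a new telemetry reading:
const message = buildBroadcast("newTelemetry", { deviceId: "d-01", temperature: 72 });
```

Every client that registered a `newTelemetry` handler would receive the object in `arguments` when the binding delivers this message.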
azure-signalr | Signalr Concept Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-disaster-recovery.md | -Instead, our service SDK provides a functionality to support multiple SignalR service instances and automatically switch to other instances when some of them aren't available. -With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology by yourself. You learn how to do so in this document. +For regional disaster recovery, we recommend the following two approaches: ++- **Enable Geo-Replication** (easy way). This feature handles regional failover for you automatically. When enabled, you keep a single Azure SignalR instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details. +- **Utilize Multiple Endpoints in Service SDK**. Our service SDK provides functionality to support multiple SignalR service instances and automatically switch to other instances when some of them aren't available. With this feature, you're able to recover when a disaster takes place, but you need to set up the right system topology yourself. You learn how to do so **in this document**. + ## High available architecture for SignalR service |
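The multiple-endpoints approach above relies on the service SDK switching traffic away from unavailable instances. The routing idea can be sketched roughly like this (illustrative only — the endpoint objects and `pickEndpoint` helper are assumptions, not the SDK's actual implementation):

```javascript
// Each endpoint is tagged primary or secondary; when a primary region is
// down, new traffic should land on an online secondary in another region.
function pickEndpoint(endpoints) {
  const online = endpoints.filter((e) => e.online);
  return (
    online.find((e) => e.type === "primary") ||
    online.find((e) => e.type === "secondary") ||
    null // total outage: nothing to connect to
  );
}

const endpoints = [
  { url: "https://east.service.signalr.net", type: "primary", online: false },
  { url: "https://west.service.signalr.net", type: "secondary", online: true },
];
// With the primary region offline, the secondary instance is chosen.
```

This is the topology decision the article asks you to set up yourself: which instance is primary in each region, and which cross-region instances act as secondaries.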
azure-signalr | Signalr Concept Serverless Development Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md | In the Azure portal, locate the **Settings** page of your SignalR Service resour A serverless real-time application built with Azure Functions and Azure SignalR Service requires at least two Azure Functions: -* A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL. -* One or more functions that handle messages sent from SignalR Service to clients. +- A `negotiate` function that the client calls to obtain a valid SignalR Service access token and endpoint URL. +- One or more functions that handle messages sent from SignalR Service to clients. ### negotiate function For more information, see the [`SignalR` output binding reference](../azure-func ### SignalR Hubs -SignalR has a concept of *hubs*. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces. +SignalR has a concept of _hubs_. Each client connection and each message sent from Azure Functions is scoped to a specific hub. You can use hubs as a way to separate your connections and messages into logical namespaces. ## Class-based model The class-based model is dedicated for C#. The class-based model provides a consistent SignalR server-side programming experience, with the following features: -* Less configuration work: The class name is used as `HubName`, the method name is used as `Event` and the `Category` is decided automatically according to method name. -* Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order. -* Convenient output and negotiate experience. 
+- Less configuration work: The class name is used as `HubName`, the method name is used as `Event` and the `Category` is decided automatically according to method name. +- Auto parameter binding: `ParameterNames` and attribute `[SignalRParameter]` aren't needed. Parameters are automatically bound to arguments of Azure Function methods in order. +- Convenient output and negotiate experience. The following code demonstrates these features: All the hub methods **must** have an argument of `InvocationContext` decorated b By default, `category=messages` unless the method name is one of the following names: -* `OnConnected`: Treated as `category=connections, event=connected` -* `OnDisconnected`: Treated as `category=connections, event=disconnected` +- `OnConnected`: Treated as `category=connections, event=connected` +- `OnDisconnected`: Treated as `category=connections, event=disconnected` ### Parameter binding experience In the class-based model, `[SignalRParameter]` is unnecessary because all the arguments are marked as `[SignalRParameter]` by default except in one of the following situations: -* The argument is decorated by a binding attribute -* The argument's type is `ILogger` or `CancellationToken` -* The argument is decorated by attribute `[SignalRIgnore]` +- The argument is decorated by a binding attribute +- The argument's type is `ILogger` or `CancellationToken` +- The argument is decorated by attribute `[SignalRIgnore]` ### Negotiate experience in class-based model SignalR client SDKs already contain the logic required to perform the negotiatio ```javascript const connection = new signalR.HubConnectionBuilder()- .withUrl('https://my-signalr-function-app.azurewebsites.net/api') - .build() + .withUrl("https://my-signalr-function-app.azurewebsites.net/api") + .build(); ``` By convention, the SDK automatically appends `/negotiate` to the URL and uses it to begin the negotiation.
By convention, the SDK automatically appends `/negotiate` to the URL and uses it For more information on how to use the SignalR client SDK, see the documentation for your language: -* [.NET Standard](/aspnet/core/signalr/dotnet-client) -* [JavaScript](/aspnet/core/signalr/javascript-client) -* [Java](/aspnet/core/signalr/java-client) +- [.NET Standard](/aspnet/core/signalr/dotnet-client) +- [JavaScript](/aspnet/core/signalr/javascript-client) +- [Java](/aspnet/core/signalr/java-client) ### Sending messages from a client to the service If you've configured an [upstream](concept-upstream.md) for your SignalR resource, you can send messages from a client to your Azure Functions using any SignalR client. Here's an example in JavaScript: ```javascript-connection.send('method1', 'arg1', 'arg2'); +connection.send("method1", "arg1", "arg2"); ``` ## Azure Functions configuration The JavaScript/TypeScript client makes HTTP request to the negotiate function to #### Localhost -When running the Function app on your local computer, you can add a `Host` section to *local.settings.json* to enable CORS. In the `Host` section, add two properties: +When running the Function app on your local computer, you can add a `Host` section to _local.settings.json_ to enable CORS. In the `Host` section, add two properties: -* `CORS` - enter the base URL that is the origin the client application -* `CORSCredentials` - set it to `true` to allow "withCredentials" requests +- `CORS` - enter the base URL that is the origin of the client application +- `CORSCredentials` - set it to `true` to allow "withCredentials" requests Example: Configure your SignalR clients to use the API Management URL. ### Using App Service Authentication -Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Azure Active Directory.
This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID. +Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that have been authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID. -In the Azure portal, in your Function app's *Platform features* tab, open the *Authentication/authorization* settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice. +In the Azure portal, in your Function app's _Platform features_ tab, open the _Authentication/authorization_ settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice. Once configured, authenticated HTTP requests will include `x-ms-client-principal-name` and `x-ms-client-principal-id` headers containing the authenticated identity's username and user ID, respectively. |
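The class-based model described in the row above derives `Category` and `Event` from the hub method name. That documented default can be sketched as:

```javascript
// Sketch of the class-based model defaults: the method name becomes the
// event, and the category is "messages" unless the method is one of the
// two reserved connection lifecycle names.
function resolveRoute(methodName) {
  if (methodName === "OnConnected") {
    return { category: "connections", event: "connected" };
  }
  if (methodName === "OnDisconnected") {
    return { category: "connections", event: "disconnected" };
  }
  return { category: "messages", event: methodName };
}
```

For example, a hub method named `Broadcast` is invoked for `category=messages, event=Broadcast` with no extra configuration.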
azure-signalr | Signalr Howto Authorize Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-application.md | Title: Authorize request to SignalR resources with Azure AD from Azure applications -description: This article provides information about authorizing request to SignalR resources with Azure AD from Azure applications + Title: Authorize requests to SignalR resources with Microsoft Entra applications +description: This article provides information about authorizing requests to SignalR resources with Microsoft Entra applications Last updated 02/03/2023 ms.devlang: csharp -# Authorize request to SignalR resources with Azure AD from Azure applications +# Authorize requests to SignalR resources with Microsoft Entra applications -Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md). +Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra applications](../active-directory/develop/app-objects-and-service-principals.md). -This article shows how to configure your SignalR resource and codes to authorize the request to a SignalR resource from an Azure application. +This article shows how to configure your SignalR resource and code to authorize requests to a SignalR resource from a Microsoft Entra application. ## Register an application -The first step is to register an Azure application. +The first step is to register a Microsoft Entra application. -1. On the [Azure portal](https://portal.azure.com/), search for and select **Azure Active Directory** +1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID** 2. Under **Manage** section, select **App registrations**. 3.
Select **New registration**.-- ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png) -+ ![Screenshot of registering an application.](./media/signalr-howto-authorize-application/register-an-application.png) 4. Enter a display **Name** for your application. 5. Select **Register** to confirm the registration. Once you have your application registered, you can find the **Application (clien ![Screenshot of an application.](./media/signalr-howto-authorize-application/application-overview.png) To learn more about registering an application, see-- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). +- [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). ## Add credentials The application requires a client secret to prove its identity when requesting a 1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, select **New client secret**.-![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png) + ![Screenshot of creating a client secret.](./media/signalr-howto-authorize-application/new-client-secret.png) 1. Enter a **description** for the client secret, and choose an **expire time**.-1. Copy the value of the **client secret** and then paste it to a secure location. - > [!NOTE] - > The secret will display only once. +1. Copy the value of the **client secret** and then paste it to a secure location. + > [!NOTE] + > The secret will display only once. ### Certificate To learn more about adding credentials, see The following steps describe how to assign a `SignalR App Server` role to a service principal (application) over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). 
-> [!Note] +> [!NOTE] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md) 1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource. The following steps describe how to assign a `SignalR App Server` role to a serv > Azure role assignments may take up to 30 minutes to propagate. To learn more about how to assign and manage Azure role assignments, see these articles:+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md) To learn more about how to assign and manage Azure role assignments, see these a The best practice is to configure identity and credentials in your environment variables: -| Variable | Description | -|| -| `AZURE_TENANT_ID` | The Azure Active Directory tenant(directory) ID. | -| `AZURE_CLIENT_ID` | The client(application) ID of an App Registration in the tenant. | -| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. | +| Variable | Description | +| - | | +| `AZURE_TENANT_ID` | The Microsoft Entra tenant ID. | +| `AZURE_CLIENT_ID` | The client(application) ID of an App Registration in the tenant. | +| `AZURE_CLIENT_SECRET` | A client secret that was generated for the App Registration. | | `AZURE_CLIENT_CERTIFICATE_PATH` | A path to a certificate and private key pair in PEM or PFX format, which can authenticate the App Registration. |-| `AZURE_USERNAME` | The username, also known as upn, of an Azure Active Directory user account. | -| `AZURE_PASSWORD` | The password for the Azure Active Directory user account. 
Password isn't supported for accounts with MFA enabled. | +| `AZURE_USERNAME` | The username, also known as upn, of a Microsoft Entra user account. | +| `AZURE_PASSWORD` | The password of the Microsoft Entra user account. Password isn't supported for accounts with MFA enabled. | You can use either [DefaultAzureCredential](/dotnet/api/azure.identity.defaultazurecredential) or [EnvironmentCredential](/dotnet/api/azure.identity.environmentcredential) to configure your SignalR endpoints. services.AddSignalR().AddAzureSignalR(option => To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential Class](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). -#### Use different credentials while using multiple endpoints. +#### Use different credentials while using multiple endpoints For some reason, you may want to use different credentials for different endpoints. services.AddSignalR().AddAzureSignalR(option => ### Azure Functions SignalR bindings -Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Azure application identities to access your SignalR resources. +Azure Functions SignalR bindings use [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) on portal or [`local.settings.json`](../azure-functions/functions-develop-local.md#local-settings-file) at local to configure Microsoft Entra application identities to access your SignalR resources. -Firstly, you need to specify the service URI of the SignalR Service, whose key is `serviceUri` starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). 
The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample. +Firstly, you need to specify the service URI of the SignalR Service, whose key is `serviceUri` starting with a **connection name prefix** (defaults to `AzureSignalRConnectionString`) and a separator (`__` on Azure portal and `:` in the local.settings.json file). The connection name can be customized with the binding property [`ConnectionStringSetting`](../azure-functions/functions-bindings-signalr-service.md). Continue reading to find the sample. -Then you choose to configure your Azure application identity in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables). +Then you choose to configure your Microsoft Entra application identity in [pre-defined environment variables](#configure-identity-in-pre-defined-environment-variables) or [in SignalR specified variables](#configure-identity-in-signalr-specified-variables). #### Configure identity in pre-defined environment variables See [Environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables) for the list of pre-defined environment variables. When you have multiple services, we recommend that you use the same application identity, so that you don't need to configure the identity for each service. These environment variables might also be used by other services according to the settings of other services. -For example, to use client secret credentials, configure as follows in the `local.settings.json` file. +For example, to use client secret credentials, configure as follows in the `local.settings.json` file. 
+ ```json { "Values": { "<CONNECTION_NAME_PREFIX>:serviceUri": "https://<SIGNALR_RESOURCE_NAME>.service.signalr.net",- "AZURE_CLIENT_ID":"...", - "AZURE_CLIENT_SECRET":"...", - "AZURE_TENANT_ID":"..." + "AZURE_CLIENT_ID": "...", + "AZURE_CLIENT_SECRET": "...", + "AZURE_TENANT_ID": "..." } } ```+ On Azure portal, add settings as follows:-``` ++```bash <CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net AZURE_CLIENT_ID = ... AZURE_TENANT_ID = ... AZURE_CLIENT_SECRET = ...- ``` +``` #### Configure identity in SignalR specified variables The SignalR specified variables share the same key prefix with `serviceUri` key. Here's the list of variables you might use:-* clientId -* clientSecret -* tenantId ++- clientId +- clientSecret +- tenantId Here are the samples to use client secret credentials: In the `local.settings.json` file:+ ```json { "Values": { In the `local.settings.json` file: ``` On Azure portal, add settings as follows:-``` ++```bash <CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net <CONNECTION_NAME_PREFIX>__clientId = ... <CONNECTION_NAME_PREFIX>__clientSecret = ... <CONNECTION_NAME_PREFIX>__tenantId = ... ```+ ## Next steps See the following related articles:-- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)-- [Authorize request to SignalR resources with Azure AD from managed identities](signalr-howto-authorize-managed-identity.md)-- [Disable local authentication](./howto-disable-local-auth.md)++- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md) +- [Authorize request to SignalR resources with Microsoft Entra managed identities](signalr-howto-authorize-managed-identity.md) +- [Disable local authentication](./howto-disable-local-auth.md) |
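The settings in the row above compose each key from a connection name prefix and a separator that differs by location: `__` in Azure portal application settings, `:` in `local.settings.json`. A small sketch of that composition — `settingKey` is a hypothetical helper for illustration, not part of the bindings:

```javascript
// Compose a SignalR binding setting key from the connection name prefix
// (default "AzureSignalRConnectionString"), the location-dependent
// separator, and the property name (serviceUri, clientId, ...).
function settingKey(property, { prefix = "AzureSignalRConnectionString", local = false } = {}) {
  const separator = local ? ":" : "__"; // ":" in local.settings.json, "__" on portal
  return `${prefix}${separator}${property}`;
}
```

For example, `settingKey("serviceUri")` yields the portal key `AzureSignalRConnectionString__serviceUri`, while `settingKey("serviceUri", { local: true })` yields `AzureSignalRConnectionString:serviceUri` for `local.settings.json`.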
azure-signalr | Signalr Howto Authorize Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-authorize-managed-identity.md | Title: Authorize managed identity requests to a SignalR resource -description: This article provides information about authorizing request to SignalR resources with Azure AD from managed identities + Title: Authorize requests to SignalR resources with Microsoft Entra managed identities +description: This article provides information about authorizing requests to SignalR resources with Microsoft Entra managed identities Last updated 03/28/2023 ms.devlang: csharp -# Authorize managed identity requests to a SignalR resource +# Authorize requests to SignalR resources with Microsoft Entra managed identities -Azure SignalR Service supports Azure Active Directory (Azure AD) authorizing requests from Azure resources using [managed identities for Azure resources +Azure SignalR Service supports Microsoft Entra ID for authorizing requests from [Microsoft Entra managed identities ](../active-directory/managed-identities-azure-resources/overview.md). -This article shows how to configure your SignalR resource and code to authorize a managed identity request to a SignalR resource. +This article shows how to configure your SignalR resource and code to authorize requests to a SignalR resource from a managed identity. ## Configure managed identities This example shows you how to configure `System-assigned managed identity` on a ![Screenshot of an application.](./media/signalr-howto-authorize-managed-identity/identity-virtual-machine.png) 1. Select the **Save** button to confirm the change. 
- To learn how to create user-assigned managed identities, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) To learn more about configuring managed identities, see one of these articles: See [How to use managed identities for App Service and Azure Functions](../app-s The following steps describe how to assign a `SignalR App Server` role to a system-assigned identity over a SignalR resource. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). -> [!Note] +> [!NOTE] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md) 1. From the [Azure portal](https://portal.azure.com/), navigate to your SignalR resource. The following steps describe how to assign a `SignalR App Server` role to a syst > Azure role assignments may take up to 30 minutes to propagate. To learn more about how to assign and manage Azure role assignments, see these articles:+ - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md) services.AddSignalR().AddAzureSignalR(option => #### Using user-assigned identity -Provide `ClientId` while creating the `ManagedIdentityCredential` object. +Provide `ClientId` while creating the `ManagedIdentityCredential` object. > [!IMPORTANT] > Use **Client Id**, not the Object (principal) ID even if they are both GUID! You might need a group of key-value pairs to configure an identity. 
The keys of If you only configure the service URI, then the `DefaultAzureCredential` is used. This class is useful when you want to share the same configuration on Azure and local development environments. To learn how `DefaultAzureCredential` works, see [DefaultAzureCredential](/dotnet/api/overview/azure/identity-readme#defaultazurecredential). In the Azure portal, use the following example to configure a `DefaultAzureCredential`. If you don't configure any [environment variables listed here](/dotnet/api/overview/azure/identity-readme#environment-variables), then the system-assigned identity is used to authenticate.-``` ++```bash <CONNECTION_NAME_PREFIX>__serviceUri=https://<SIGNALR_RESOURCE_NAME>.service.signalr.net ``` Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` file. At the local scope there's no managed identity, and the authentication via Visual Studio, Azure CLI, and Azure PowerShell accounts are attempted in order.+ ```json { "Values": { Here's a config sample of `DefaultAzureCredential` in the `local.settings.json` If you want to use system-assigned identity independently and without the influence of [other environment variables](/dotnet/api/overview/azure/identity-readme#environment-variables), you should set the `credential` key with the connection name prefix to `managedidentity`. Here's an application settings sample: -``` +```bash <CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net <CONNECTION_NAME_PREFIX>__credential = managedidentity ``` If you want to use system-assigned identity independently and without the influe If you want to use user-assigned identity, you need to assign `clientId`in addition to the `serviceUri` and `credential` keys with the connection name prefix. 
Here's the application settings sample: -``` +```bash <CONNECTION_NAME_PREFIX>__serviceUri = https://<SIGNALR_RESOURCE_NAME>.service.signalr.net <CONNECTION_NAME_PREFIX>__credential = managedidentity <CONNECTION_NAME_PREFIX>__clientId = <CLIENT_ID> If you want to use user-assigned identity, you need to assign `clientId`in addit ## Next steps See the following related articles:-- [Overview of Azure AD for SignalR](signalr-concept-authorize-azure-active-directory.md)-- [Authorize request to SignalR resources with Azure AD from Azure applications](signalr-howto-authorize-application.md)-- [Disable local authentication](./howto-disable-local-auth.md)++- [Overview of Microsoft Entra ID for SignalR](signalr-concept-authorize-azure-active-directory.md) +- [Authorize request to SignalR resources with Microsoft Entra applications](signalr-howto-authorize-application.md) +- [Disable local authentication](./howto-disable-local-auth.md) |
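The samples above distinguish `DefaultAzureCredential`, system-assigned identity, and user-assigned identity purely by which prefixed keys are present. A sketch of that decision, as inferred from the described settings — this is an assumption for illustration, not the binding extension's actual source:

```javascript
// Decide which credential the described settings would select.
// Uses the portal-style "__" separator; chooseCredential is hypothetical.
function chooseCredential(settings, prefix = "AzureSignalRConnectionString") {
  const get = (key) => settings[`${prefix}__${key}`];
  if (!get("serviceUri")) throw new Error("serviceUri is required");
  if (get("credential") === "managedidentity") {
    const clientId = get("clientId");
    // clientId present => user-assigned identity; otherwise system-assigned.
    return clientId
      ? { kind: "user-assigned managed identity", clientId }
      : { kind: "system-assigned managed identity" };
  }
  // No explicit credential key: fall back to the DefaultAzureCredential chain.
  return { kind: "DefaultAzureCredential" };
}
```

Under this reading, adding only `serviceUri` keeps `DefaultAzureCredential`, adding `credential = managedidentity` switches to the system-assigned identity, and adding `clientId` on top selects a user-assigned identity.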
azure-signalr | Signalr Howto Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-azure-policy.md | The following built-in policy definitions are specific to Azure SignalR Service: ## Assign policy definitions -* Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs. -* Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). SignalR policy assignments apply to existing and new SignalR resources within the scope. -* Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time. +- Assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs. +- Scope a policy assignment to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). SignalR policy assignments apply to existing and new SignalR resources within the scope. +- Enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time. > [!NOTE] > After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). When a resource is non-compliant, there are many possible reasons. To determine 1. Select **All services**, and search for **Policy**. 1. Select **Compliance**. 1. 
Use the filters to limit compliance states or to search for policies- - [ ![Policy compliance in portal](./media/signalr-howto-azure-policy/azure-policy-compliance.png) ](./media/signalr-howto-azure-policy/azure-policy-compliance.png#lightbox) -2. Select a policy to review aggregate compliance details and events. If desired, then select a specific SignalR for resource compliance. ++ [ ![Screenshot showing policy compliance in portal.](./media/signalr-howto-azure-policy/azure-policy-compliance.png) ](./media/signalr-howto-azure-policy/azure-policy-compliance.png#lightbox) ++1. Select a policy to review aggregate compliance details and events. If desired, select a specific SignalR resource to view its compliance. ### Policy compliance in the Azure CLI az policy state list \ ## Next steps -* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md) +- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md) -* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md) --* Learn more about [governance capabilities](../governance/index.yml) in Azure +- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md) +- Learn more about [governance capabilities](../governance/index.yml) in Azure <!-- LINKS - External -->-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ ++[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ |
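The policy-assignment options described above (portal, CLI, ARM template) have a simple CLI shape. A hedged sketch of assigning a built-in definition with Azure CLI; the assignment name, definition reference, and scope are placeholders, not values from this article:

```shell
# Illustrative only: assign a built-in policy definition at resource-group scope.
# The definition reference and scope segments are placeholders.
az policy assignment create \
  --name "signalr-policy-assignment" \
  --policy "<policy-definition-name-or-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```

As noted above, it takes some time after assignment before compliance results appear for resources in the scope.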
azure-signalr | Signalr Howto Diagnostic Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-diagnostic-logs.md | Platform metrics and the Activity log are collected and stored automatically, bu Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations. -See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. +Resource Logs are grouped into Category groups. Category groups are a collection of different logs to help you achieve different monitoring goals. These groups are defined dynamically and may change over time as new resource logs become available and are added to the category group. Note that this may incur additional charges. The audit resource log category group allows you to select the resource logs that are necessary for auditing your resource. For more information, see [Diagnostic settings in Azure Monitor: Resource logs](../azure-monitor/essentials/diagnostic-settings.md?tabs=portal#resource-logs). ++For the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell, see [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md). When you create a diagnostic setting, you specify which categories of logs to collect. The metrics and logs you can collect are discussed in the following sections. |
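As a rough CLI illustration of creating such a diagnostic setting (the resource IDs are placeholders, and `categoryGroup` support depends on your Azure CLI version, so treat this as a sketch rather than a definitive command):

```shell
# Illustrative only: route a SignalR resource's logs and metrics to a
# Log Analytics workspace via a diagnostic setting.
az monitor diagnostic-settings create \
  --name "signalr-diagnostics" \
  --resource "<signalr-resource-id>" \
  --workspace "<log-analytics-workspace-id>" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```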
azure-signalr | Signalr Howto Troubleshoot Live Trace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-troubleshoot-live-trace.md | Title: How to use live trace tool for Azure SignalR service description: Learn how to use live trace tool for Azure SignalR service--++ Last updated 07/14/2022 Live trace tool is a single web application for capturing and displaying live tr > [!NOTE] > Note that the live traces will be counted as outbound messages.-> Azure Active Directory access to live trace tool is not supported. You will need to enable **Access Key** in **Keys** settings. +> Using Microsoft Entra ID to access the live trace tool is not supported. You have to enable **Access Key** in **Keys** settings. ## Launch the live trace tool +> [!NOTE] +> When access key is enabled, you'll use an access token to authenticate the live trace tool. +> Otherwise, you'll use Microsoft Entra ID to authenticate the live trace tool. +> You can check whether access key is enabled on your SignalR Service's Keys page in the Azure portal. ++### Steps for access key enabled ++1. Go to the Azure portal and your SignalR Service page. +1. From the menu on the left, under **Monitoring** select **Live trace settings**. +1. Select **Enable Live Trace**. +1. Select the **Save** button. It will take a moment for the changes to take effect. +1. When updating is complete, select **Open Live Trace Tool**. ++ :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool."::: ++### Steps for access key disabled ++#### Assign live trace tool API permission to yourself +1. Go to the Azure portal and your SignalR Service page. +1. Select **Access control (IAM)**. +1. On the new page, click **+Add**, then click **Role assignment**. +1. On the new page, on the **Job function roles** tab, select the **SignalR Service Owner** role, and then click **Next**. +1. 
On the **Members** page, click **+Select members**. +1. In the new panel, search and select members, and then click **Select**. +1. Click **Review + assign**, and wait for the completion notification. ++#### Visit live trace tool 1. Go to the Azure portal and your SignalR Service page. 1. From the menu on the left, under **Monitoring** select **Live trace settings**. 1. Select **Enable Live Trace**. Live trace tool is a single web application for capturing and displaying live tr :::image type="content" source="media/signalr-howto-troubleshoot-live-trace/signalr-enable-live-trace.png" alt-text="Screenshot of launching the live trace tool."::: +#### Sign in with your Microsoft account ++1. The live trace tool will pop up a Microsoft sign-in window. If no window pops up, check and allow pop-up windows in your browser. +1. Wait for **Ready** to show in the status bar. + ## Capture live traces The live trace tool provides functionality to help you capture the live traces for troubleshooting. The real time live traces captured by live trace tool contain detailed informati In this guide, you learned about how to use live trace tool. Next, learn how to handle the common issues: * Troubleshooting guides: How to troubleshoot typical issues based on live traces, see [troubleshooting guide](./signalr-howto-troubleshoot-guide.md).-* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md). +* Troubleshooting methods: For self-diagnosis to find the root cause directly or narrow down the issue, see [troubleshooting methods introduction](./signalr-howto-troubleshoot-method.md). |
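The portal role-assignment steps above can also be sketched with Azure CLI. The assignee and resource ID below are placeholders; `SignalR Service Owner` is the role named in the portal steps:

```shell
# Illustrative only: grant a user the SignalR Service Owner role on a
# SignalR resource, as an alternative to the portal role-assignment steps.
az role assignment create \
  --assignee "<user-object-id-or-upn>" \
  --role "SignalR Service Owner" \
  --scope "<signalr-resource-id>"
```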
azure-signalr | Signalr Howto Work With Apim | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-apim.md | Azure API Management service provides a hybrid, multicloud management platform f :::image type="content" source="./media/signalr-howto-work-with-apim/architecture.png" alt-text="Diagram that shows the architecture of using SignalR Service with API Management."::: - ## Create resources -* Follow [Quickstart: Use an ARM template to deploy Azure SignalR](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_** +- Follow [Quickstart: Use an ARM template to deploy Azure SignalR](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_** -* Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance **_APIM1_** +- Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance **_APIM1_** ## Configure APIs Azure API Management service provides a hybrid, multicloud management platform f There are two types of requests for a SignalR client: -* **negotiate request**: HTTP `POST` request to `<APIM-URL>/client/negotiate/` -* **connect request**: request to `<APIM-URL>/client/`, it could be `WebSocket` or `ServerSentEvent` or `LongPolling` depends on the transport type of your SignalR client +- **negotiate request**: HTTP `POST` request to `<APIM-URL>/client/negotiate/` +- **connect request**: request to `<APIM-URL>/client/`; it could be `WebSocket`, `ServerSentEvent`, or `LongPolling`, depending on the transport type of your SignalR client The type of **connect request** varies depending on the transport type of the SignalR clients. As for now, API Management doesn't yet support different types of APIs for the same suffix. 
With this limitation, when using API Management, your SignalR client doesn't support fallback from `WebSocket` transport type to other transport types. Fallback from `ServerSentEvent` to `LongPolling` could be supported. Below sections describe the detailed configurations for different transport types. ### Configure APIs when client connects with `WebSocket` transport This section describes the steps to configure API Management when the SignalR clients connect with `WebSocket` transport. When SignalR clients connect with `WebSocket` transport, three types of requests are involved:+ 1. **OPTIONS** preflight HTTP request for negotiate 1. **POST** HTTP request for negotiate 1. WebSocket request for connect Let's configure API Management from the portal.+ 1. Go to **APIs** tab in the portal for API Management instance **_APIM1_**, select **Add API** and choose **HTTP**, **Create** with the following parameters:- * Display name: `SignalR negotiate` - * Web service URL: `https://<your-signalr-service-url>/client/negotiate/` - * API URL suffix: `client/negotiate/` + - Display name: `SignalR negotiate` + - Web service URL: `https://<your-signalr-service-url>/client/negotiate/` + - API URL suffix: `client/negotiate/` 1. Select the created `SignalR negotiate` API, **Save** with below settings:- 1. In **Design** tab - 1. Select **Add operation**, and **Save** with the following parameters: - * Display name: `negotiate preflight` - * URL: `OPTIONS` `/` - 1. Select **Add operation**, and **Save** with the following parameters: - * Display name: `negotiate` - * URL: `POST` `/` - 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose + 1. In **Design** tab + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `negotiate preflight` + - URL: `OPTIONS` `/` + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `negotiate` + - URL: `POST` `/` + 1. 
Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose 1. Select **Add API** and choose **WebSocket**, **Create** with the following parameters:- * Display name: `SignalR connect` - * WebSocket URL: `wss://<your-signalr-service-url>/client/` - * API URL suffix: `client/` + - Display name: `SignalR connect` + - WebSocket URL: `wss://<your-signalr-service-url>/client/` + - API URL suffix: `client/` 1. Select the created `SignalR connect` API, **Save** with below settings:- 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose + 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose Now API Management is successfully configured to support SignalR client with `WebSocket` transport. ### Configure APIs when client connects with `ServerSentEvents` or `LongPolling` transport This section describes the steps to configure API Management when the SignalR clients connect with `ServerSentEvents` or `LongPolling` transport type. When SignalR clients connect with `ServerSentEvents` or `LongPolling` transport, five types of requests are involved:+ 1. **OPTIONS** preflight HTTP request for negotiate 1. **POST** HTTP request for negotiate 1. **OPTIONS** preflight HTTP request for connect This section describes the steps to configure API Management when the SignalR cl Now let's configure API Management from the portal. 1. Go to **APIs** tab in the portal for API Management instance **_APIM1_**, select **Add API** and choose **HTTP**, **Create** with the following parameters:- * Display name: `SignalR` - * Web service URL: `https://<your-signalr-service-url>/client` - * API URL suffix: `client` + - Display name: `SignalR` + - Web service URL: `https://<your-signalr-service-url>/client` + - API URL suffix: `client` 1. Select the created `SignalR` API, **Save** with below settings:- 1. In **Design** tab - 1. 
Select **Add operation**, and **Save** with the following parameters: - * Display name: `negotiate preflight` - * URL: `OPTIONS` `/negotiate` - 1. Select **Add operation**, and **Save** with the following parameters: - * Display name: `negotiate` - * URL: `POST` `/negotiate` - 1. Select **Add operation**, and **Save** with the following parameters: - * Display name: `connect preflight` - * URL: `OPTIONS` `/` - 1. Select **Add operation**, and **Save** with the following parameters: - * Display name: `connect` - * URL: `POST` `/` - 1. Select **Add operation**, and **Save** with the following parameters: - * Display name: `connect get` - * URL: `GET` `/` - 1. Select the newly added **connect get** operation, and edit the Backend policy to disable buffering for `ServerSentEvents`, [check here](../api-management/how-to-server-sent-events.md) for more details. - ```xml - <backend> - <forward-request buffer-response="false" /> - </backend> - ``` - 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose + 1. In **Design** tab + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `negotiate preflight` + - URL: `OPTIONS` `/negotiate` + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `negotiate` + - URL: `POST` `/negotiate` + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `connect preflight` + - URL: `OPTIONS` `/` + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `connect` + - URL: `POST` `/` + 1. Select **Add operation**, and **Save** with the following parameters: + - Display name: `connect get` + - URL: `GET` `/` + 1. Select the newly added **connect get** operation, and edit the Backend policy to disable buffering for `ServerSentEvents`, [check here](../api-management/how-to-server-sent-events.md) for more details. 
+ ```xml + <backend> + <forward-request buffer-response="false" /> + </backend> + ``` + 1. Switch to **Settings** tab and uncheck **Subscription required** for quick demo purpose Now API Management is successfully configured to support SignalR client with `ServerSentEvents` or `LongPolling` transport. ### Run chat+ Now, the traffic can reach SignalR Service through API Management. Let’s use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally. -* First let's get the connection string of **_ASRS1_** - * On the **Connection strings** tab of **_ASRS1_** - * **Client endpoint**: Enter the URL using **Gateway URL** of **_APIM1_**, for example `https://apim1.azure-api.net`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section. - * Copy the Connection string --* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples -* Go to samples/Chatroom folder: -* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString. -- ```bash - cd samples/Chatroom - dotnet restore - dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-onnection-string-with-client-endpoint>" - dotnet run - ``` -* Configure transport type for the client -- Open `https://docsupdatetracker.net/index.html` under folder `wwwroot` and find the code when `connection` is created, update it to specify the transport type. 
-- For example, to specify the connection to use server-sent-events or long polling, update the code to: -- ```javascript - const connection = new signalR.HubConnectionBuilder() - .withUrl('/chat', signalR.HttpTransportType.ServerSentEvents | signalR.HttpTransportType.LongPolling) - .build(); - ``` - To specify the connection to use WebSockets, update the code to: - - ```javascript - const connection = new signalR.HubConnectionBuilder() - .withUrl('/chat', signalR.HttpTransportType.WebSockets) - .build(); - ``` --* Open http://localhost:5000 from the browser and use F12 to view the network traces, you can see that the connection is established through **_APIM1_** +- First let's get the connection string of **_ASRS1_** ++ - On the **Connection strings** tab of **_ASRS1_** + - **Client endpoint**: Enter the URL using **Gateway URL** of **_APIM1_**, for example `https://apim1.azure-api.net`. It's a connection string generator when using reverse proxies, and the value isn't preserved the next time you come back to this tab. When a value is entered, the connection string appends a `ClientEndpoint` section. + - Copy the Connection string ++- Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples +- Go to samples/Chatroom folder: +- Set the copied connection string and run the application locally; you can see that there's a `ClientEndpoint` section in the ConnectionString. ++ ```bash + cd samples/Chatroom + dotnet restore + dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>" + dotnet run + ``` ++- Configure transport type for the client ++ Open `index.html` under folder `wwwroot`, find the code where `connection` is created, and update it to specify the transport type. 
++ For example, to specify the connection to use server-sent-events or long polling, update the code to: ++ ```javascript + const connection = new signalR.HubConnectionBuilder() + .withUrl( + "/chat", + signalR.HttpTransportType.ServerSentEvents | + signalR.HttpTransportType.LongPolling + ) + .build(); + ``` ++ To specify the connection to use WebSockets, update the code to: ++ ```javascript + const connection = new signalR.HubConnectionBuilder() + .withUrl("/chat", signalR.HttpTransportType.WebSockets) + .build(); + ``` ++- Open http://localhost:5000 from the browser and use F12 to view the network traces; you can see that the connection is established through **_APIM1_** ## Next steps |
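To make the `ClientEndpoint` section described above concrete, here is a minimal shell sketch of the connection string shape after a reverse-proxy client endpoint is entered. The `asrs1` and `apim1` names are hypothetical and the key is a dummy value:

```shell
# Illustrative only: compose a connection string whose ClientEndpoint section
# points at a reverse proxy. asrs1/apim1 are hypothetical; the key is a dummy.
ENDPOINT="https://asrs1.service.signalr.net"
ACCESS_KEY="dummy-access-key"
CLIENT_ENDPOINT="https://apim1.azure-api.net"
CONNECTION_STRING="Endpoint=${ENDPOINT};AccessKey=${ACCESS_KEY};ClientEndpoint=${CLIENT_ENDPOINT};Version=1.0;"
echo "${CONNECTION_STRING}"
```

The app server returns the `ClientEndpoint` URL to clients in the negotiate response, which is why client traffic then flows through the proxy rather than directly to the service.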
azure-signalr | Signalr Howto Work With App Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-work-with-app-gateway.md | -Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following: +Application Gateway is a web traffic load balancer that enables you to manage traffic to your web applications. Using Application Gateway with SignalR Service enables you to do the following: -* Protect your applications from common web vulnerabilities. -* Get application-level load-balancing for your scalable and highly available applications. -* Set up end-to-end security. -* Customize the domain name. +- Protect your applications from common web vulnerabilities. +- Get application-level load-balancing for your scalable and highly available applications. +- Set up end-to-end security. +- Customize the domain name. -This article contains two parts, -* [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway. -* [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and only allow traffic from Application Gateway. +This article contains two parts: ++- [The first part](#set-up-and-configure-application-gateway) shows how to configure Application Gateway so that the clients can access SignalR through Application Gateway. +- [The second part](#secure-signalr-service) shows how to secure SignalR Service by adding access control to SignalR Service and allowing only traffic from Application Gateway. 
:::image type="content" source="./media/signalr-howto-work-with-app-gateway/architecture.png" alt-text="Diagram that shows the architecture of using SignalR Service with Application Gateway."::: ## Set up and configure Application Gateway ### Create a SignalR Service instance-* Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_** ++- Follow [the article](./signalr-quickstart-azure-signalr-service-arm-template.md) and create a SignalR Service instance **_ASRS1_** ### Create an Application Gateway instance+ Create from the portal an Application Gateway instance **_AG1_**:-* On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**. -* On the **Basics** tab, use these values for the following application gateway settings: - - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service - - **Application gateway name**: **_AG1_** - - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers. - - **Name**: Enter **_VN1_** for the name of the virtual network. - - **Subnets**: Update the **Subnets** grid with below 2 subnets -- | Subnet name | Address range| Note| - |--|--|--| - | *myAGSubnet* | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed. - | *myBackendSubnet* | (another address range) | Subnet for the Azure SignalR instance. 
-- - Accept the default values for the other settings and then select **Next: Frontends** -- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab."::: --* On the **Frontends** tab: - - **Frontend IP address type**: **Public**. - - Select **Add new** for the **Public IP address** and enter *myAGPublicIPAddress* for the public IP address name, and then select **OK**. - - Select **Next: Backends** - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab."::: --* On the **Backends** tab, select **Add a backend pool**: - - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool. - - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net` - - Select **Next: Configuration** -- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service."::: --* On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column: - - **Rule name**: **_myRoutingRule_** - - **Priority**: 1 - - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener: - - **Listener name**: Enter *myListener* for the name of the listener. - - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend. - - **Protocol**: HTTP - * We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started easier. But in reality, you may need to enable HTTPs and Customer Domain on it with production scenario. 
- - Accept the default values for the other settings on the **Listener** tab - - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service."::: - - On the **Backend targets** tab, use the following values: - * **Target type**: Backend pool - * **Backend target**: select **signalr** we previously created - * **Backend settings**: select **Add new** to add a new setting. - * **Backend settings name**: *mySetting* - * **Backend protocol**: **HTTPS** - * **Use well known CA certificate**: **Yes** - * **Override with new host name**: **Yes** - * **Host name override**: **Pick host name from backend target** - * Others keep the default values -- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service."::: -- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway."::: --* Review and create the **_AG1_** - - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance."::: ++- On the [Azure portal](https://portal.azure.com/), search for **Application Gateway** and **Create**. 
+- On the **Basics** tab, use these values for the following application gateway settings: ++ - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service + - **Application gateway name**: **_AG1_** + - **Virtual network**, select **Create new**, and in the **Create virtual network** window that opens, enter the following values to create the virtual network and two subnets, one for the application gateway, and another for the backend servers. ++ - **Name**: Enter **_VN1_** for the name of the virtual network. + - **Subnets**: Update the **Subnets** grid with below 2 subnets ++ | Subnet name | Address range | Note | + | -- | -- | -- | + | _myAGSubnet_ | (address range) | Subnet for the application gateway. The application gateway subnet can contain only application gateways. No other resources are allowed. | + | _myBackendSubnet_ | (another address range) | Subnet for the Azure SignalR instance. | ++ - Accept the default values for the other settings and then select **Next: Frontends** ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/basics.png" alt-text="Screenshot of creating Application Gateway instance with Basics tab."::: ++- On the **Frontends** tab: ++ - **Frontend IP address type**: **Public**. + - Select **Add new** for the **Public IP address** and enter _myAGPublicIPAddress_ for the public IP address name, and then select **OK**. + - Select **Next: Backends** + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-frontends.png" alt-text="Screenshot of creating Application Gateway instance with Frontends tab."::: ++- On the **Backends** tab, select **Add a backend pool**: ++ - **Name**: Enter **_signalr_** for the SignalR Service resource backend pool. 
+ - Backend targets **Target**: the **host name** of your SignalR Service instance **_ASRS1_**, for example `asrs1.service.signalr.net` + - Select **Next: Configuration** ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-backends.png" alt-text="Screenshot of setting up the application gateway backend pool for the SignalR Service."::: ++- On the **Configuration** tab, select **Add a routing rule** in the **Routing rules** column: ++ - **Rule name**: **_myRoutingRule_** + - **Priority**: 1 + - On the **Listener** tab within the **Add a routing rule** window, enter the following values for the listener: + - **Listener name**: Enter _myListener_ for the name of the listener. + - **Frontend IP**: Select **Public** to choose the public IP you created for the frontend. + - **Protocol**: HTTP + - We use the HTTP frontend protocol on Application Gateway in this article to simplify the demo and help you get started more easily. In production scenarios, you may need to enable HTTPS and a custom domain on it. + - Accept the default values for the other settings on the **Listener** tab + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-listener.png" alt-text="Screenshot of setting up the application gateway routing rule listener tab for the SignalR Service."::: + - On the **Backend targets** tab, use the following values: ++ - **Target type**: Backend pool + - **Backend target**: select **signalr** we previously created + - **Backend settings**: select **Add new** to add a new setting. 
++ - **Backend settings name**: _mySetting_ + - **Backend protocol**: **HTTPS** + - **Use well known CA certificate**: **Yes** + - **Override with new host name**: **Yes** + - **Host name override**: **Pick host name from backend target** + - Others keep the default values ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-backend.png" alt-text="Screenshot of setting up the application gateway backend setting for the SignalR Service."::: ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-create-rule-backends.png" alt-text="Screenshot of creating backend targets for application gateway."::: ++- Review and create the **_AG1_** + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-review.png" alt-text="Screenshot of reviewing and creating the application gateway instance."::: ### Configure Application Gateway health probe When **_AG1_** is created, go to **Health probes** tab under **Settings** sectio ### Quick test -* Try with an invalid client request `https://asrs1.service.signalr.net/client` and it returns *400* with error message *'hub' query parameter is required.* It means the request arrived at the SignalR Service and did the request validation. - ```bash - curl -v https://asrs1.service.signalr.net/client - ``` - returns - ``` - < HTTP/1.1 400 Bad Request - < ... - < - 'hub' query parameter is required. 
- ``` -* Go to the Overview tab of **_AG1_**, and find out the Frontend public IP address -- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway."::: --* Visit the health endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it also returns *400* with error message *'hub' query parameter is required.* It means the request successfully went through Application Gateway to SignalR Service and did the request validation. -- ```bash - curl -I http://<frontend-public-IP-address>/client - ``` - returns - ``` - < HTTP/1.1 400 Bad Request - < ... - < - 'hub' query parameter is required. - ``` +- Try with an invalid client request `https://asrs1.service.signalr.net/client` and it returns _400_ with error message _'hub' query parameter is required._ It means the request arrived at the SignalR Service and did the request validation. + ```bash + curl -v https://asrs1.service.signalr.net/client + ``` + returns + ``` + < HTTP/1.1 400 Bad Request + < ... + < + 'hub' query parameter is required. + ``` +- Go to the Overview tab of **_AG1_**, and find out the Frontend public IP address ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/quick-test.png" alt-text="Screenshot of quick testing SignalR Service health endpoint through Application Gateway."::: ++- Visit the health endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it also returns _400_ with error message _'hub' query parameter is required._ It means the request successfully went through Application Gateway to SignalR Service and did the request validation. ++ ```bash + curl -I http://<frontend-public-IP-address>/client + ``` ++ returns ++ ``` + < HTTP/1.1 400 Bad Request + < ... + < + 'hub' query parameter is required. 
+ ``` ### Run chat through Application Gateway Now, the traffic can reach SignalR Service through the Application Gateway. The customer could use the Application Gateway public IP address or custom domain name to access the resource. Let’s use [this chat application](https://github.com/aspnet/AzureSignalR-samples/tree/main/samples/ChatRoom) as an example. Let's start with running it locally. -* First let's get the connection string of **_ASRS1_** - * On the **Connection strings** tab of **_ASRS1_** - * **Client endpoint**: Enter the URL using frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section. - * Copy the Connection string - - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint."::: +- First let's get the connection string of **_ASRS1_** ++ - On the **Connection strings** tab of **_ASRS1_** + - **Client endpoint**: Enter the URL using frontend public IP address of **_AG1_**, for example `http://20.88.8.8`. It's a connection string generator when using reverse proxies, and the value isn't preserved when next time you come back to this tab. When value entered, the connection string appends a `ClientEndpoint` section. + - Copy the Connection string + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/connection-string.png" alt-text="Screenshot of getting the connection string for SignalR Service with client endpoint."::: ++- Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples +- Go to samples/Chatroom folder: +- Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString. 
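A connection string that carries a `ClientEndpoint` section, as described above, is a plain list of `Key=Value;` pairs. A minimal sketch of splitting one apart (the values here are placeholders, and `Version` is shown only as a typical extra property, not taken from this article):

```python
# Sketch: parsing an Azure SignalR connection string that carries a
# ClientEndpoint section (placeholder values, not real credentials).
conn_str = (
    "Endpoint=https://asrs1.service.signalr.net;"
    "AccessKey=<access-key>;"
    "ClientEndpoint=http://20.88.8.8;"
    "Version=1.0;"
)

# Split "Key=Value" segments on the first '=' only, since values
# (URLs, base64 keys) may themselves contain '='.
parts = dict(seg.split("=", 1) for seg in conn_str.split(";") if seg)

# The app server still talks to Endpoint; clients are redirected to
# ClientEndpoint (the Application Gateway frontend) instead.
print(parts["ClientEndpoint"])  # prints: http://20.88.8.8
```

The server SDK reads the `ClientEndpoint` property and returns that URL to clients during negotiation, which is why the chat client ends up connecting through **_AG1_**.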
-* Clone the GitHub repo https://github.com/aspnet/AzureSignalR-samples -* Go to samples/Chatroom folder: -* Set the copied connection string and run the application locally, you can see that there's a `ClientEndpoint` section in the ConnectionString. + ```bash + cd samples/Chatroom + dotnet restore + dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-connection-string-with-client-endpoint>" + dotnet run + ``` - ```bash - cd samples/Chatroom - dotnet restore - dotnet user-secrets set Azure:SignalR:ConnectionString "<copied-onnection-string-with-client-endpoint>" - dotnet run - ``` -* Open http://localhost:5000 from the browser and use F12 to view the network traces, you can see that the WebSocket connection is established through **_AG1_**  +- Open http://localhost:5000 from the browser and use F12 to view the network traces; you can see that the WebSocket connection is established through **_AG1_** - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service."::: + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/chat-local-run.png" alt-text="Screenshot of running chat application locally with App Gateway and SignalR Service."::: ## Secure SignalR Service In this section, let's configure SignalR Service to deny all the traffic from pu Let's configure SignalR Service to only allow private access. You can find more details in [use private endpoint for SignalR Service](howto-private-endpoints.md). -* Go to the SignalR Service instance **_ASRS1_** in the portal. 
-* Go the **Networking** tab: - * On **Public access** tab: **Public network access** change to **Disabled** and **Save**, now you're no longer able to access SignalR Service from public network - - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service."::: -- * On **Private access** tab, select **+ Private endpoint**: - * On **Basics** tab: - * **Name**: **_PE1_** - * **Network Interface Name**: **_PE1-nic_** - * **Region**: make sure to choose the same region as your Application Gateway - * Select **Next: Resources** - * On **Resources** tab - * Keep default values - * Select **Next: Virtual Network** - * On **Virtual Network** tab - * **Virtual network**: Select previously created **_VN1_** - * **Subnet**: Select previously created **_VN1/myBackendSubnet_** - * Others keep the default settings - * Select **Next: DNS** - * On **DNS** tab - * **Integration with private DNS zone**: **Yes** - * Review and create the private endpoint -- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service."::: - +- Go to the SignalR Service instance **_ASRS1_** in the portal. 
+- Go to the **Networking** tab: ++ - On **Public access** tab: **Public network access** change to **Disabled** and **Save**; now you're no longer able to access SignalR Service from the public network ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/disable-public-access.png" alt-text="Screenshot of disabling public access for SignalR Service."::: ++ - On **Private access** tab, select **+ Private endpoint**: + - On **Basics** tab: + - **Name**: **_PE1_** + - **Network Interface Name**: **_PE1-nic_** + - **Region**: make sure to choose the same region as your Application Gateway + - Select **Next: Resources** + - On **Resources** tab + - Keep default values + - Select **Next: Virtual Network** + - On **Virtual Network** tab + - **Virtual network**: Select previously created **_VN1_** + - **Subnet**: Select previously created **_VN1/myBackendSubnet_** + - Keep the default settings for the others + - Select **Next: DNS** + - On **DNS** tab + - **Integration with private DNS zone**: **Yes** + - Review and create the private endpoint ++ :::image type="content" source="./media/signalr-howto-work-with-app-gateway/application-gateway-setup-private-endpoint.png" alt-text="Screenshot of setting up the private endpoint resource for the SignalR Service."::: ++ ### Refresh Application Gateway backend pool+ Since Application Gateway was set up before there was a private endpoint for it to use, we need to **refresh** the backend pool for it to look at the Private DNS Zone and figure out that it should route the traffic to the private endpoint instead of the public address. We do the **refresh** by setting the backend FQDN to some other value and then changing it back. 
Go to the **Backend pools** tab for **_AG1_**, and select **signalr**:-* Step1: change Target `asrs1.service.signalr.net` to some other value, for example, `x.service.signalr.net`, and select **Save** -* Step2: change Target back to `asrs1.service.signalr.net` ++- Step 1: change Target `asrs1.service.signalr.net` to some other value, for example, `x.service.signalr.net`, and select **Save** +- Step 2: change Target back to `asrs1.service.signalr.net` ### Quick test -* Now let's visit `https://asrs1.service.signalr.net/client` again. With public access disabled, it returns *403* instead. - ```bash - curl -v https://asrs1.service.signalr.net/client - ``` - returns - ``` - < HTTP/1.1 403 Forbidden -* Visit the endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`, and it returns *400* with error message *'hub' query parameter is required*. It means the request successfully went through the Application Gateway to SignalR Service. -- ```bash - curl -I http://<frontend-public-IP-address>/client - ``` - returns - ``` - < HTTP/1.1 400 Bad Request - < ... - < - 'hub' query parameter is required. - ``` +- Now let's visit `https://asrs1.service.signalr.net/client` again. With public access disabled, it returns _403_ instead. + ```bash + curl -v https://asrs1.service.signalr.net/client + ``` + returns + ``` + < HTTP/1.1 403 Forbidden + ``` +- Visit the endpoint through **_AG1_** `http://<frontend-public-IP-address>/client`; it returns _400_ with the error message _'hub' query parameter is required_. This means the request successfully went through the Application Gateway to SignalR Service. ++ ```bash + curl -I http://<frontend-public-IP-address>/client + ``` ++ returns ++ ``` + < HTTP/1.1 400 Bad Request + < ... + < + 'hub' query parameter is required. + ``` Now if you run the Chat application locally again, you'll see error messages `Failed to connect to .... 
The server returned status code '403' when status code '101' was expected.`. This is because public access is disabled, so localhost server connections are no longer able to connect to the SignalR service. Let's deploy the Chat application into the same VNet with **_ASRS1_** so that the chat can talk with **_ASRS1_**. -### Deploy the chat application to Azure -* On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**. --* On the **Basics** tab, use these values for the following application gateway settings: - - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service - - **Name**: **_WA1_** - * **Publish**: **Code** - * **Runtime stack**: **.NET 6 (LTS)** - * **Operating System**: **Linux** - * **Region**: Make sure it's the same as what you choose for SignalR Service - * Select **Next: Docker** -* On the **Networking** tab - * **Enable network injection**: select **On** - * **Virtual Network**: select **_VN1_** we previously created - * **Enable VNet integration**: **On** - * **Outbound subnet**: create a new subnet - * Select **Review + create** +### Deploy the chat application to Azure ++- On the [Azure portal](https://portal.azure.com/), search for **App services** and **Create**. 
++- On the **Basics** tab, use these values for the following application gateway settings: + - **Subscription** and **Resource group** and **Region**: the same as what you choose for SignalR Service + - **Name**: **_WA1_** + * **Publish**: **Code** + * **Runtime stack**: **.NET 6 (LTS)** + * **Operating System**: **Linux** + * **Region**: Make sure it's the same as what you choose for SignalR Service + * Select **Next: Docker** +- On the **Networking** tab + - **Enable network injection**: select **On** + - **Virtual Network**: select **_VN1_** we previously created + - **Enable VNet integration**: **On** + - **Outbound subnet**: create a new subnet + - Select **Review + create** Now let's deploy our chat application to Azure. Below we use Azure CLI to deploy the web app, you can also choose other deployment environments following [publish your web app section](/azure/app-service/quickstart-dotnetcore#publish-your-web-app). cd publish zip -r app.zip . # use az CLI to deploy app.zip to our webapp az login-az account set -s <your-subscription-name-used-to-create-WA1> -az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip +az account set -s <your-subscription-name-used-to-create-WA1> +az webapp deployment source config-zip -n WA1 -g <resource-group-of-WA1> --src app.zip ``` Now the web app is deployed, let's go to the portal for **_WA1_** and make the following updates:-* On the **Configuration** tab: - * New application settings: - | Name | Value | - | --| | - |**WEBSITE_DNS_SERVER**| **168.63.129.16** | - |**WEBSITE_VNET_ROUTE_ALL**| **1**| +- On the **Configuration** tab: ++ - New application settings: - * New connection string: + | Name | Value | + | -- | -- | + | **WEBSITE_DNS_SERVER** | **168.63.129.16** | + | **WEBSITE_VNET_ROUTE_ALL** | **1** | - | Name | Value | Type| - | --| || - |**Azure__SignalR__ConnectionString**| The copied connection string with ClientEndpoint value| select **Custom**| + - New connection string: + | 
Name | Value | Type | + | | | -- | + | **Azure__SignalR__ConnectionString** | The copied connection string with ClientEndpoint value | select **Custom** | + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string."::: - :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-settings.png" alt-text="Screenshot of configuring web app connection string."::: +- On the **TLS/SSL settings** tab: -* On the **TLS/SSL settings** tab: - * **HTTPS Only**: **Off**. To Simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPs automatically. + - **HTTPS Only**: **Off**. To simplify the demo, we used the HTTP frontend protocol on Application Gateway. Therefore, we need to turn off this option to avoid changing the HTTP URL to HTTPS automatically. -* Go to the **Overview** tab and get the URL of **_WA1_**. -* Get the URL, and replace scheme https with http, for example, `http://wa1.azurewebsites.net`, open the URL in the browser, now you can start chatting! Use F12 to open network traces, and you can see the SignalR connection is established through **_AG1_**. - > [!NOTE] - > - > Sometimes you need to disable browser's auto https redirection and browser cache to prevent the URL from redirecting to HTTPS automatically. +- Go to the **Overview** tab and get the URL of **_WA1_**. +- Get the URL and replace the scheme https with http, for example, `http://wa1.azurewebsites.net`; open the URL in the browser, and you can start chatting! Use F12 to open network traces, and you can see the SignalR connection is established through **_AG1_**. + > [!NOTE] + > + > Sometimes you need to disable the browser's automatic HTTPS redirection and clear the browser cache to prevent the URL from redirecting to HTTPS. 
- :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service."::: + :::image type="content" source="./media/signalr-howto-work-with-app-gateway/web-app-run.png" alt-text="Screenshot of running chat application in Azure with App Gateway and SignalR Service."::: ## Next steps |
azure-signalr | Signalr Reference Data Plane Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-reference-data-plane-rest-api.md | You can find a complete sample of using SignalR Service with Azure Functions at The following table shows all supported versions of REST API. You can also find the swagger file for each version of REST API. -API Version | Status | Port | Doc | Spec -|||| -`20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) -`1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) -`1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) +| API Version | Status | Port | Doc | Spec | +| - | -- | -- | | | +| `20220601` | Latest | Standard | [Doc](./swagger/signalr-data-plane-rest-v20220601.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/V20220601.json) | +| `1.0` | Stable | Standard | [Doc](./swagger/signalr-data-plane-rest-v1.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1.json) | +| `1.0-preview` | Obsolete | Standard | [Doc](./swagger/signalr-data-plane-rest-v1-preview.md) | [swagger](https://github.com/Azure/azure-signalr/blob/dev/docs/swagger/v1-preview.json) | The available APIs are listed as following. 
-| API | Path | -| - | - | -| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` | -| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` | -| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` | -| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` | -| [Check if there are any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` | -| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` | -| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | -| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE 
/api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | -| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` | -| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` | +| API | Path | +| -- | | +| [Broadcast a message to all clients connected to target hub.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-connected-to-target-hub) | `POST /api/v1/hubs/{hub}` | +| [Broadcast a message to all clients belong to the target user.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-belong-to-the-target-user) | `POST /api/v1/hubs/{hub}/users/{id}` | +| [Send message to the specific connection.](./swagger/signalr-data-plane-rest-v1.md#send-message-to-the-specific-connection) | `POST /api/v1/hubs/{hub}/connections/{connectionId}` | +| [Check if the connection with the given connectionId exists.](./swagger/signalr-data-plane-rest-v1.md#check-if-the-connection-with-the-given-connectionid-exists) | `GET /api/v1/hubs/{hub}/connections/{connectionId}` | +| [Close the client connection.](./swagger/signalr-data-plane-rest-v1.md#close-the-client-connection) | `DELETE /api/v1/hubs/{hub}/connections/{connectionId}` | +| [Broadcast a message to all clients within the target group.](./swagger/signalr-data-plane-rest-v1.md#broadcast-a-message-to-all-clients-within-the-target-group) | `POST /api/v1/hubs/{hub}/groups/{group}` | +| [Check if there are 
any client connections inside the given group.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-inside-the-given-group) | `GET /api/v1/hubs/{hub}/groups/{group}` | +| [Check if there are any client connections connected for the given user.](./swagger/signalr-data-plane-rest-v1.md#check-if-there-are-any-client-connections-connected-for-the-given-user) | `GET /api/v1/hubs/{hub}/users/{user}` | +| [Add a connection to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-connection-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Remove a connection from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-connection-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/connections/{connectionId}` | +| [Check whether a user exists in the target group.](./swagger/signalr-data-plane-rest-v1.md#check-whether-a-user-exists-in-the-target-group) | `GET /api/v1/hubs/{hub}/groups/{group}/users/{user}` | +| [Add a user to the target group.](./swagger/signalr-data-plane-rest-v1.md#add-a-user-to-the-target-group) | `PUT /api/v1/hubs/{hub}/groups/{group}/users/{user}` | +| [Remove a user from the target group.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-the-target-group) | `DELETE /api/v1/hubs/{hub}/groups/{group}/users/{user}` | +| [Remove a user from all groups.](./swagger/signalr-data-plane-rest-v1.md#remove-a-user-from-all-groups) | `DELETE /api/v1/hubs/{hub}/users/{user}/groups` | ## Using REST API Use the `AccessKey` in Azure SignalR Service instance's connection string to sig The following claims are required to be included in the JWT token. -Claim Type | Is Required | Description -|| -`aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`. 
-`exp` | true | Epoch time when this token expires. +| Claim Type | Is Required | Description | +| - | -- | -- | +| `aud` | true | Needs to be the same as your HTTP request URL, trailing slash and query parameters not included. For example, a broadcast request's audience should look like: `https://example.service.signalr.net/api/v1/hubs/myhub`. | +| `exp` | true | Epoch time when this token expires. | -### Authenticate via Azure Active Directory Token (Azure AD Token) +### Authenticate via Microsoft Entra token -Similar to authenticating using `AccessKey`, when authenticating using Azure AD Token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request. +Similar to authenticating using `AccessKey`, when authenticating using Microsoft Entra token, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request. -The difference is, in this scenario, the JWT Token is generated by Azure Active Directory. For more information, see [Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md) +The difference is, in this scenario, the JWT Token is generated by Microsoft Entra ID. For more information, see [Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md) -You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service. For more information, see [Authorize access with Azure Active Directory for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md) +You could also use **Role Based Access Control (RBAC)** to authorize the request from your client/server to SignalR Service. 
For more information, see [Authorize access with Microsoft Entra ID for Azure SignalR Service](./signalr-concept-authorize-azure-active-directory.md) ### Implement Negotiate Endpoint A typical negotiation response looks as follows: ```json {- "url":"https://<service_name>.service.signalr.net/client/?hub=<hub_name>", - "accessToken":"<a typical JWT token>" + "url": "https://<service_name>.service.signalr.net/client/?hub=<hub_name>", + "accessToken": "<a typical JWT token>" } ``` Then SignalR Service uses the value of `nameid` claim as the user ID of each cli You can find a complete console app to demonstrate how to manually build a REST API HTTP request in SignalR Service [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Serverless). -You can also use [Microsoft.Azure.SignalR.Management](<https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management>) to publish messages to SignalR Service using the similar interfaces of `IHubContext`. Samples can be found [here](<https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management>). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). -+You can also use [Microsoft.Azure.SignalR.Management](https://www.nuget.org/packages/Microsoft.Azure.SignalR.Management) to publish messages to SignalR Service using the similar interfaces of `IHubContext`. Samples can be found [here](https://github.com/aspnet/AzureSignalR-samples/tree/master/samples/Management). For more information, see [How to use Management SDK](https://github.com/Azure/azure-signalr/blob/dev/docs/management-sdk-guide.md). ## Limitation Currently, we have the following limitation for REST API requests: -* Header size is a maximum of 16 KB. -* Body size is a maximum of 1 MB. +- Header size is a maximum of 16 KB. +- Body size is a maximum of 1 MB. If you want to send messages larger than 1 MB, use the Management SDK with `persistent` mode. |
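The `AccessKey` authentication described above needs only the `aud` and `exp` claims from the claims table. As a stdlib-only illustration of building such an HS256 token (a real application would normally use a JWT library; the `<access-key>` placeholder stands in for the `AccessKey` from the connection string):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(audience: str, access_key: str, ttl: int = 3600) -> str:
    # 'aud' must equal the request URL, with no trailing slash or query string;
    # 'exp' is the epoch time at which the token expires.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(
        {"aud": audience, "exp": int(time.time()) + ttl}
    ).encode())
    signing_input = f"{header}.{payload}"
    sig = hmac.new(access_key.encode(), signing_input.encode(),
                   hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

# Audience for a broadcast to hub 'myhub', matching the example above.
token = make_token("https://example.service.signalr.net/api/v1/hubs/myhub",
                   "<access-key>")
# Send the request with the header: Authorization: Bearer <token>
```

The resulting token is passed as a bearer token on each REST call; tokens for different API paths need different `aud` values.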
azure-signalr | Signalr Tutorial Authenticate Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md | A step by step tutorial to build a chat room with authentication and private mes ### Technologies used -* [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages -* [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients -* [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions +- [Azure Functions](https://azure.microsoft.com/services/functions/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Backend API for authenticating users and sending chat messages +- [Azure SignalR Service](https://azure.microsoft.com/services/signalr-service/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Broadcast new messages to connected chat clients +- [Azure Storage](https://azure.microsoft.com/services/storage/?WT.mc_id=serverlesschatlab-tutorial-antchu) - Required by Azure Functions ### Prerequisites -* An Azure account with an active subscription. - * If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). -* [Node.js](https://nodejs.org/en/download/) (Version 18.x) -* [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4) +- An Azure account with an active subscription. + - If you don't have one, you can [create one for free](https://azure.microsoft.com/free/). +- [Node.js](https://nodejs.org/en/download/) (Version 18.x) +- [Azure Functions Core Tools](../azure-functions/functions-run-local.md?#install-the-azure-functions-core-tools) (Version 4) [Having issues? 
Let us know.](https://aka.ms/asrs/qsauth) ## Create essential resources on Azure+ ### Create an Azure SignalR service resource -Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal. +Your application will access a SignalR Service instance. Use the following steps to create a SignalR Service instance using the Azure portal. 1. Select the **Create a resource** (**+**) button to create a new Azure resource. Your application will access a SignalR Service instance. Use the following step 1. Enter the following information. - | Name | Value | - ||| - | **Resource group** | Create a new resource group with a unique name | - | **Resource name** | A unique name for the SignalR Service instance | - | **Region** | Select a region close to you | - | **Pricing Tier** | Free | - | **Service mode** | Serverless | + | Name | Value | + | | - | + | **Resource group** | Create a new resource group with a unique name | + | **Resource name** | A unique name for the SignalR Service instance | + | **Region** | Select a region close to you | + | **Pricing Tier** | Free | + | **Service mode** | Serverless | 1. Select **Review + Create**. 1. Select **Create**. - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create an Azure Function App and an Azure Storage account Your application will access a SignalR Service instance. Use the following step 1. Enter the following information. 
- | Name | Value | - ||| - | **Resource group** | Use the same resource group with your SignalR Service instance | - | **Function App name** | A unique name for the Function app instance | - | **Runtime stack** | Node.js | - | **Region** | Select a region close to you | + | Name | Value | + | | -- | + | **Resource group** | Use the same resource group with your SignalR Service instance | + | **Function App name** | A unique name for the Function app instance | + | **Runtime stack** | Node.js | + | **Region** | Select a region close to you | 1. By default, a new Azure Storage account will also be created in the same resource group together with your function app. If you want to use another storage account in the function app, switch to **Hosting** tab to choose an account. 1. Select **Review + Create**, then select **Create**. ## Create an Azure Functions project locally+ ### Initialize a function app -1. From a command line, create a root folder for your project and change to the folder. +1. From a command line, create a root folder for your project and change to the folder. 1. Execute the following command in your terminal to create a new JavaScript Functions project.-``` ++```bash func init --worker-runtime node --language javascript --name my-app ```-By default, the generated project includes a *host.json* file containing the extension bundles which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles). ++By default, the generated project includes a _host.json_ file containing the extension bundles which include the SignalR extension. For more information about extension bundles, see [Register Azure Functions binding extensions](../azure-functions/functions-bindings-register.md#extension-bundles). 
### Configure application settings -When running and debugging the Azure Functions runtime locally, application settings are read by the function app from *local.settings.json*. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier. +When running and debugging the Azure Functions runtime locally, application settings are read by the function app from _local.settings.json_. Update this file with the connection strings of the SignalR Service instance and the storage account that you created earlier. -1. Replace the content of *local.settings.json* with the following code: +1. Replace the content of _local.settings.json_ with the following code: - ```json - { - "IsEncrypted": false, - "Values": { - "FUNCTIONS_WORKER_RUNTIME": "node", - "AzureWebJobsStorage": "<your-storage-account-connection-string>", - "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>" - } - } - ``` + ```json + { + "IsEncrypted": false, + "Values": { + "FUNCTIONS_WORKER_RUNTIME": "node", + "AzureWebJobsStorage": "<your-storage-account-connection-string>", + "AzureSignalRConnectionString": "<your-Azure-SignalR-connection-string>" + } + } + ``` - * Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting. + - Enter the Azure SignalR Service connection string into the `AzureSignalRConnectionString` setting. - Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. + Navigate to your SignalR Service in the Azure portal. In the **Settings** section, locate the **Keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. 
- * Enter the storage account connection string into the `AzureWebJobsStorage` setting. + - Enter the storage account connection string into the `AzureWebJobsStorage` setting. Navigate to your storage account in the Azure portal. In the **Security + networking** section, locate the **Access keys** setting. Select the **Copy** button to the right of the connection string to copy it to your clipboard. You can use either the primary or secondary connection string. - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ### Create a function to authenticate users to SignalR Service When the chat app first opens in the browser, it requires valid connection crede > This function must be named `negotiate` as the SignalR client requires an endpoint that ends in `/negotiate`. 1. From the root project folder, create the `negotiate` function from a built-in template with the following command.- ```bash - func new --template "SignalR negotiate HTTP trigger" --name negotiate - ``` -1. Open *negotiate/function.json* to view the function binding configuration. + ```bash + func new --template "SignalR negotiate HTTP trigger" --name negotiate + ``` ++1. Open _negotiate/function.json_ to view the function binding configuration. The function contains an HTTP trigger binding to receive requests from SignalR clients and a SignalR input binding to generate valid credentials for a client to connect to an Azure SignalR Service hub named `default`. 
- ```json - { - "disabled": false, - "bindings": [ - { - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "methods": ["post"], - "name": "req", - "route": "negotiate" - }, - { - "type": "http", - "direction": "out", - "name": "res" - }, - { - "type": "signalRConnectionInfo", - "name": "connectionInfo", - "hubName": "default", - "connectionStringSetting": "AzureSignalRConnectionString", - "direction": "in" - } - ] - } - ``` -- There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure. --1. Close the *negotiate/function.json* file. -----1. Open *negotiate/index.js* to view the body of the function. -- ```javascript - module.exports = async function (context, req, connectionInfo) { - context.res.body = connectionInfo; - }; - ``` -- This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance. + ```json + { + "disabled": false, + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "methods": ["post"], + "name": "req", + "route": "negotiate" + }, + { + "type": "http", + "direction": "out", + "name": "res" + }, + { + "type": "signalRConnectionInfo", + "name": "connectionInfo", + "hubName": "default", + "connectionStringSetting": "AzureSignalRConnectionString", + "direction": "in" + } + ] + } + ``` ++ There's no `userId` property in the `signalRConnectionInfo` binding for local development, but you'll add it later to set the user name of a SignalR connection when you deploy the function app to Azure. ++1. Close the _negotiate/function.json_ file. ++1. Open _negotiate/index.js_ to view the body of the function. 
++ ```javascript + module.exports = async function (context, req, connectionInfo) { + context.res.body = connectionInfo; + }; + ``` ++ This function takes the SignalR connection information from the input binding and returns it to the client in the HTTP response body. The SignalR client uses this information to connect to the SignalR Service instance. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) When the chat app first opens in the browser, it requires valid connection crede The web app also requires an HTTP API to send chat messages. You'll create an HTTP triggered function named `sendMessage` that sends messages to all connected clients using SignalR Service. 1. From the root project folder, create an HTTP trigger function named `sendMessage` from the template with the command:- ```bash - func new --name sendMessage --template "Http trigger" - ``` --1. To configure bindings for the function, replace the content of *sendMessage/function.json* with the following code: - ```json - { - "disabled": false, - "bindings": [ - { - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "route": "messages", - "methods": ["post"] - }, - { - "type": "http", - "direction": "out", - "name": "res" - }, - { - "type": "signalR", - "name": "$return", - "hubName": "default", - "direction": "out" - } - ] - } - ``` - Two changes are made to the original file: - * Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method. - * Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`. --1. 
Replace the content of *sendMessage/index.js* with the following code: -- ```javascript - module.exports = async function (context, req) { - const message = req.body; - message.sender = req.headers && req.headers['x-ms-client-principal-name'] || ''; -- let recipientUserId = ''; - if (message.recipient) { - recipientUserId = message.recipient; - message.isPrivate = true; - } -- return { - 'userId': recipientUserId, - 'target': 'newMessage', - 'arguments': [ message ] - }; - }; - ``` -- This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client. -- The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial. ++ ```bash + func new --name sendMessage --template "Http trigger" + ``` ++1. To configure bindings for the function, replace the content of _sendMessage/function.json_ with the following code: ++ ```json + { + "disabled": false, + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "route": "messages", + "methods": ["post"] + }, + { + "type": "http", + "direction": "out", + "name": "res" + }, + { + "type": "signalR", + "name": "$return", + "hubName": "default", + "direction": "out" + } + ] + } + ``` ++ Two changes are made to the original file: ++ - Changes the route to `messages` and restricts the HTTP trigger to the `POST` HTTP method. + - Adds a SignalR Service output binding that sends a message returned by the function to all clients connected to a SignalR Service hub named `default`. ++1. 
Replace the content of _sendMessage/index.js_ with the following code: ++ ```javascript + module.exports = async function (context, req) { + const message = req.body; + message.sender = + (req.headers && req.headers["x-ms-client-principal-name"]) || ""; ++ let recipientUserId = ""; + if (message.recipient) { + recipientUserId = message.recipient; + message.isPrivate = true; + } ++ return { + userId: recipientUserId, + target: "newMessage", + arguments: [message], + }; + }; + ``` ++ This function takes the body from the HTTP request and sends it to clients connected to SignalR Service, invoking a function named `newMessage` on each client. ++ The function can read the sender's identity and can accept a `recipient` value in the message body to allow you to send a message privately to a single user. You'll use these functionalities later in the tutorial. 1. Save the file. The web app also requires an HTTP API to send chat messages. You'll create an HT The chat application's UI is a simple single-page application (SPA) created with the Vue JavaScript framework using [ASP.NET Core SignalR JavaScript client](/aspnet/core/signalr/javascript-client). For simplicity, the function app hosts the web page. In a production environment, you can use [Static Web Apps](https://azure.microsoft.com/products/app-service/static) to host the web page. -1. Create a new folder named *content* in the root directory of your function project. -1. In the *content* folder, create a new file named *index.html*. +1. Create a new folder named _content_ in the root directory of your function project. +1. In the _content_ folder, create a new file named _index.html_. 1. Copy and paste the content of [index.html](https://github.com/aspnet/AzureSignalR-samples/blob/da0aca70f490f3d8f4c220d0c88466b6048ebf65/samples/ServerlessChatWithAuth/content/index.html) to your file. 
Save the file. 1. From the root project folder, create an HTTP trigger function named `index` from the template with the command:- ```bash - func new --name index --template "Http trigger" - ``` ++ ```bash + func new --name index --template "Http trigger" + ``` 1. Modify the content of `index/index.js` to the following:- ```js - const fs = require('fs'); -- module.exports = async function (context, req) { - const fileContent = fs.readFileSync('content/index.html', 'utf8'); -- context.res = { - // status: 200, /* Defaults to 200 */ - body: fileContent, - headers: { - 'Content-Type': 'text/html' - }, - }; - } - ``` - The function reads the static web page and returns it to the user. --1. Open *index/function.json*, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this: - ```json - { - "bindings": [ - { - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "methods": ["get", "post"] - }, - { - "type": "http", - "direction": "out", - "name": "res" - } - ] - } - ``` ++ ```js + const fs = require("fs"); ++ module.exports = async function (context, req) { + const fileContent = fs.readFileSync("content/index.html", "utf8"); ++ context.res = { + // status: 200, /* Defaults to 200 */ + body: fileContent, + headers: { + "Content-Type": "text/html", + }, + }; + }; + ``` ++ The function reads the static web page and returns it to the user. ++1. Open _index/function.json_, change the `authLevel` of the bindings to `anonymous`. Now the whole file looks like this: ++ ```json + { + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": ["get", "post"] + }, + { + "type": "http", + "direction": "out", + "name": "res" + } + ] + } + ``` 1. Now you can test your app locally. Start the function app with the command:- ```bash - func start - ``` ++ ```bash + func start + ``` 1. 
Open **http://localhost:7071/api/index** in your web browser. You should be able to see a web page as follows: - :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface."::: + :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/local-chat-client-ui.png" alt-text="Screenshot of local chat client web user interface."::: 1. Enter a message in the chat box and press enter. The message is displayed on the web page. Because the user name of the SignalR client isn't set, we send all messages as "anonymous". - [Having issues? Let us know.](https://aka.ms/asrs/qsauth) ## Deploy to Azure and enable authentication You have been running the function app and chat application locally. You'll now So far, the chat app works anonymously. In Azure, you'll use [App Service Authentication](../app-service/overview-authentication-authorization.md) to authenticate the user. The user ID or username of the authenticated user is passed to the `SignalRConnectionInfo` binding to generate connection information authenticated as the user. -1. Open *negotiate/function.json*. +1. Open _negotiate/function.json_. 1. Insert a `userId` property to the `SignalRConnectionInfo` binding with value `{headers.x-ms-client-principal-name}`. This value is a [binding expression](../azure-functions/functions-triggers-bindings.md) that sets the user name of the SignalR client to the name of the authenticated user. The binding should now look like this. - ```json - { - "type": "signalRConnectionInfo", - "name": "connectionInfo", - "userId": "{headers.x-ms-client-principal-name}", - "hubName": "default", - "direction": "in" - } - ``` + ```json + { + "type": "signalRConnectionInfo", + "name": "connectionInfo", + "userId": "{headers.x-ms-client-principal-name}", + "hubName": "default", + "direction": "in" + } + ``` 1. Save the file. 
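The `{headers.x-ms-client-principal-name}` value above is a binding expression that the Functions runtime resolves per request. As a rough, illustrative sketch only (this is not the runtime's actual implementation), the substitution behaves like this:

```javascript
// Illustrative sketch: resolve a "{headers.<name>}" binding expression
// against an incoming request object. Not actual Azure Functions runtime code.
function resolveBindingExpression(expression, req) {
  const match = /^\{headers\.([^}]+)\}$/.exec(expression);
  if (!match) return expression; // not a headers expression: treat as a literal
  return (req.headers && req.headers[match[1]]) || "";
}

const req = { headers: { "x-ms-client-principal-name": "alice@contoso.com" } };
console.log(resolveBindingExpression("{headers.x-ms-client-principal-name}", req));
// → alice@contoso.com
```

With App Service Authentication enabled, the platform injects the `x-ms-client-principal-name` header, so each SignalR connection is issued for the signed-in user.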
- ### Deploy function app to Azure+ Deploy the function app to Azure with the following command: ```bash func azure functionapp publish <your-function-app-name> --publish-local-settings ``` -The `--publish-local-settings` option publishes your local settings from the *local.settings.json* file to Azure, so you don't need to configure them in Azure again. -+The `--publish-local-settings` option publishes your local settings from the _local.settings.json_ file to Azure, so you don't need to configure them in Azure again. ### Enable App Service Authentication -Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial. +Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. You will use **Microsoft** as the identity provider for this tutorial. 1. Go to the resource page of your function app on Azure portal. 1. Select **Settings** -> **Authentication**.-1. Select **Add identity provider**. - :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page."::: +1. Select **Add identity provider**. + :::image type="content" source="./media/signalr-tutorial-authenticate-azure-functions/function-app-authentication.png" alt-text="Screenshot of the Function App Authentication page."::: 1. 
Select **Microsoft** from the **Identity provider** list.- :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page."::: + :::image type="content" source="media/signalr-tutorial-authenticate-azure-functions/function-app-select-identity-provider.png" alt-text="Screenshot of 'Add an identity provider' page."::: - Azure Functions supports authentication with Azure Active Directory, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles: + Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. For more information about the supported identity providers, see the following articles: - - [Azure Active Directory](../app-service/configure-authentication-provider-aad.md) - - [Facebook](../app-service/configure-authentication-provider-facebook.md) - - [Twitter](../app-service/configure-authentication-provider-twitter.md) - - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md) - - [Google](../app-service/configure-authentication-provider-google.md) + - [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) + - [Facebook](../app-service/configure-authentication-provider-facebook.md) + - [Twitter](../app-service/configure-authentication-provider-twitter.md) + - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md) + - [Google](../app-service/configure-authentication-provider-google.md) 1. Select **Add** to complete the settings. An app registration will be created, which associates your identity provider with your function app. Congratulations! You've deployed a real-time, serverless chat app! To clean up the resources created in this tutorial, delete the resource group using the Azure portal. 
->[!CAUTION] +> [!CAUTION] > Deleting the resource group deletes all resources contained within it. If the resource group contains resources outside the scope of this tutorial, they will also be deleted. [Having issues? Let us know.](https://aka.ms/asrs/qsauth) To clean up the resources created in this tutorial, delete the resource group us In this tutorial, you learned how to use Azure Functions with Azure SignalR Service. Read more about building real-time serverless applications with SignalR Service bindings for Azure Functions. -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Real-time apps with Azure SignalR Service and Azure Functions](signalr-concept-azure-functions.md) [Having issues? Let us know.](https://aka.ms/asrs/qsauth) |
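The routing logic of the `sendMessage` handler shown earlier in this tutorial can also be sanity-checked outside the Functions runtime by invoking it directly with a mock request (the mock object shapes below are assumptions for illustration, not part of the tutorial):

```javascript
// The handler body from sendMessage/index.js, invoked with a mock context/request.
const sendMessage = async function (context, req) {
  const message = req.body;
  message.sender =
    (req.headers && req.headers["x-ms-client-principal-name"]) || "";

  let recipientUserId = "";
  if (message.recipient) {
    recipientUserId = message.recipient;
    message.isPrivate = true;
  }

  return {
    userId: recipientUserId,
    target: "newMessage",
    arguments: [message],
  };
};

// Mock request: "bob" sends a private message to "alice".
sendMessage(
  {},
  {
    body: { text: "hi", recipient: "alice" },
    headers: { "x-ms-client-principal-name": "bob" },
  }
).then((out) => console.log(out.userId, out.arguments[0].isPrivate));
// → alice true
```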
azure-video-indexer | Deploy With Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md | You need an Azure Media Services account. You can create one for free through [C ### Option 1: Select the button for deploying to Azure, and fill in the missing parameters -[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Quick-Start%2Favam.template.json) +[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FDeploy-Samples%2FArmTemplates%2Favam.template.json) - |
azure-video-indexer | Language Identification Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/language-identification-model.md | Title: Use Azure AI Video Indexer to auto identify spoken languages description: This article describes how the Azure AI Video Indexer language identification model is used to automatically identify the spoken language in a video. Previously updated : 04/12/2020 Last updated : 08/28/2023 # Automatically identify the spoken language with language identification model -Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language content from audio and sending the media file to be transcribed in the dominant identified language. +Azure AI Video Indexer supports automatic language identification (LID), which is the process of automatically identifying the spoken language from audio content. The media file is transcribed in the dominant identified language. -See the list of supported by Azure AI Video Indexer languages in [supported langues](language-support.md). +See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md). -Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section below. +Make sure to review the [Guidelines and limitations](#guidelines-and-limitations) section. ## Choosing auto language identification on indexing -When indexing or [re-indexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter. +When indexing or [reindexing](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Re-Index-Video) a video using the API, choose the `auto detect` option in the `sourceLanguage` parameter. 
-When using portal, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to re-index. On the right-bottom corner click the re-index button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box. +When using portal, go to your **Account videos** on the [Azure AI Video Indexer](https://www.videoindexer.ai/) home page and hover over the name of the video that you want to reindex. On the right-bottom corner, select the **Re-index** button. In the **Re-index video** dialog, choose *Auto detect* from the **Video source language** drop-down box. -![auto detect](./media/language-identification-model/auto-detect.png) ## Model output Model dominant language is available in the insights JSON as the `sourceLanguage "transcript": [...], . . . "sourceLanguageConfidence": 0.8563- }, + } ``` ## Guidelines and limitations -* Automatic language identification (LID) supports the following languages: +Automatic language identification (LID) supports the following languages: - See the list of supported by Azure AI Video Indexer languages in [supported langues](language-support.md). -* Even though Azure AI Video Indexer supports Arabic (Modern Standard and Levantine), Hindi, and Korean, these languages are not supported in LID. -* If the audio contains languages other than the supported list above, the result is unexpected. -* If Azure AI Video Indexer can't identify the language with a high enough confidence (`>0.6`), the fallback language is English. -* Currently, there isn't support for file with mixed languages audio. If the audio contains mixed languages, the result is unexpected. -* Low-quality audio may impact the model results. -* The model requires at least one minute of speech in the audio. -* The model is designed to recognize a spontaneous conversational speech (not voice commands, singing, etc.). 
+ See the list of languages supported by Azure AI Video Indexer in [supported languages](language-support.md). ++- If the audio contains languages other than the [supported list](language-support.md), the result is unexpected. +- If Azure AI Video Indexer can't identify the language with a high enough confidence (greater than 0.6), the fallback language is English. +- Currently, there isn't support for files with mixed language audio. If the audio contains mixed languages, the result is unexpected. +- Low-quality audio may affect the model results. +- The model requires at least one minute of speech in the audio. +- The model is designed to recognize spontaneous conversational speech (not voice commands, singing, and so on). ## Next steps -* [Overview](video-indexer-overview.md) -* [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md) +- [Overview](video-indexer-overview.md) +- [Automatically identify and transcribe multi-language content](multi-language-identification-transcription.md) |
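A client consuming the insights JSON can mirror the documented confidence fallback when deciding whether to trust the detected language. The following is a sketch only: the `sourceLanguage` and `sourceLanguageConfidence` field names follow the model-output example in this article, while the client-side check and the `en-US` fallback code are assumptions (the fallback itself is applied by the service):

```javascript
// Trust the detected source language only when its confidence exceeds 0.6;
// otherwise fall back to English, mirroring the documented service behavior.
function effectiveLanguage(insights) {
  if (insights.sourceLanguageConfidence > 0.6) {
    return insights.sourceLanguage;
  }
  return "en-US"; // assumed language code for the English fallback
}

console.log(
  effectiveLanguage({ sourceLanguage: "fr-FR", sourceLanguageConfidence: 0.8563 })
);
// → fr-FR
```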
azure-video-indexer | Ocr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/ocr.md | When used responsibly and carefully, Azure AI Video Indexer is a valuable tool f ## Learn more about OCR -- [Cognitive Services documentation](/azure/ai-services/computer-vision/overview-ocr)+- [Azure AI services documentation](/azure/ai-services/computer-vision/overview-ocr) - [Transparency note](/legal/cognitive-services/computer-vision/ocr-transparency-note) - [Use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note#example-use-cases) - [Capabilities and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations) |
azure-video-indexer | Video Indexer Output Json V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-output-json-v2.md | Videos that contain adult or racy content might be available for private view on ## Learn more about visualContentModeration -- [Cognitive services documentation](/azure/ai-services/computer-vision/concept-detecting-adult-content)+- [Azure AI services documentation](/azure/ai-services/computer-vision/concept-detecting-adult-content) - [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#features) - [Use cases](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#use-cases) - [Capabilities and limitations](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=%2Fazure%2Fai-services%2Fcomputer-vision%2Fcontext%2Fcontext#system-performance-and-limitations-for-image-analysis) Videos that contain adult or racy content might be available for private view on ##### Learn more about textualContentModeration -- [Cognitive Services documentation](/azure/ai-services/content-moderator/text-moderation-api)+- [Azure AI services documentation](/azure/ai-services/content-moderator/text-moderation-api) - [Supported languages](/azure/ai-services/content-moderator/language-support) - [Capabilities and limitations](/azure/ai-services/content-moderator/text-moderation-api) - [Data, privacy and security](/azure/ai-services/content-moderator/overview#data-privacy-and-security) Azure AI Video Indexer makes an inference of main topics from transcripts. When Explore the [Azure AI Video Indexer API developer portal](https://api-portal.videoindexer.ai). 
For information about how to embed widgets in your application, see [Embed Azure AI Video Indexer widgets into your applications](video-indexer-embed-widgets.md). - |
azure-vmware | Bitnami Appliances Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/bitnami-appliances-deployment.md | In this article, you'll learn how to install and configure the following virtual -## Step 1. Download the Bitnami virtual appliance OVA/OVF file +## Step 1: Download the Bitnami virtual appliance OVA/OVF file 1. Go to the [VMware Marketplace](https://marketplace.cloud.vmware.com/) and download the virtual appliance you want to install on your Azure VMware Solution private cloud: In this article, you'll learn how to install and configure the following virtual >[!NOTE] >Make sure the file is accessible from the virtual machine. -## Step 2. Access the local vCenter Server of your private cloud +## Step 2: Access the local vCenter Server of your private cloud 1. Sign in to the [Azure portal](https://portal.azure.com). In this article, you'll learn how to install and configure the following virtual :::image type="content" source="media/tutorial-access-private-cloud/ss5-vcenter-login.png" alt-text="Screenshot showing the VMware vSphere sign in page." border="true"::: -## Step 3. Install the Bitnami OVA/OVF file in vCenter Server +## Step 3: Install the Bitnami OVA/OVF file in vCenter Server 1. Right-click the cluster that you want to install the LAMP virtual appliance and select **Deploy OVF Template**. In this article, you'll learn how to install and configure the following virtual -## Step 4. Assign a static IP to the virtual appliance +## Step 4: Assign a static IP to the virtual appliance In this step, you'll modify the *bootproto* and *onboot* parameters and assign a static IP address to the Bitnami virtual appliance. In this step, you'll modify the *bootproto* and *onboot* parameters and assign a -## Step 5. Enable SSH access to the virtual appliance +## Step 5: Enable SSH access to the virtual appliance In this step, you'll enable SSH on your virtual appliance for remote access control. 
The SSH service is disabled by default. You'll also use an OpenSSH client to connect to the host console. |
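As a concrete sketch of the Step 4 edit described in this article (modifying the *bootproto* and *onboot* parameters to assign a static IP): on a CentOS-based Bitnami image the interface configuration file typically looks like the following. The file path, interface name, and all address values here are assumptions for illustration; adjust them to your environment.

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth0 (path and values are examples)
DEVICE=eth0
BOOTPROTO=static   ; changed from dhcp
ONBOOT=yes
IPADDR=10.0.10.50
NETMASK=255.255.255.0
GATEWAY=10.0.10.1
```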
azure-vmware | Configure Vmware Cloud Director Service Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-cloud-director-service-azure-vmware-solution.md | + + Title: Configure VMware Cloud Director Service in Azure VMware Solution +description: How to configure VMware Cloud Director Service in Azure VMware Solution ++++ Last updated : 06/12/2023+++# Configure VMware Cloud Director Service in Azure VMware Solution ++In this article, learn how to configure the [VMware Cloud Director](https://docs.vmware.com/en/VMware-Cloud-Director-service/index.html) service in Azure VMware Solution. ++## Prerequisites +- Plan and deploy a VMware Cloud Director service instance in your preferred region using the process described in [How Do I Create a VMware Cloud Director Instance](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-26D98BA1-CF4B-4A57-971E-E58A0B482EBB.html#GUID-26D98BA1-CF4B-4A57-971E-E58A0B482EBB). ++ >[!Note] + > VMware Cloud Director instances can establish connections to an Azure VMware Solution SDDC in regions where latency remains under 150 ms. ++- Plan and deploy an Azure VMware Solution private cloud using the following links: + - [Plan Azure VMware Solution private cloud SDDC.](plan-private-cloud-deployment.md) + - [Deploy and configure Azure VMware Solution - Azure VMware Solution.](deploy-azure-vmware-solution.md) +- After successfully gaining access to both your VMware Cloud Director instance and Azure VMware Solution SDDC, you can then proceed to the next section. ++## Plan and prepare Azure VMware Solution private cloud for VMware Reverse proxy ++- The VMware Reverse proxy VM is deployed within the Azure VMware Solution SDDC and requires outbound connectivity to your VMware Cloud Director service instance. 
[Plan how you would provide this internet connectivity.](concepts-design-public-internet-access.md) ++- A public IP on the NSX-T edge can be used to provide outbound access for the VMware Reverse proxy VM as shown in this article. Learn more in [How to configure a public IP in the Azure portal](enable-public-ip-nsx-edge.md#configure-a-public-ip-in-the-azure-portal) and [Outbound Internet access for VMs](enable-public-ip-nsx-edge.md#outbound-internet-access-for-vms). + +- The VMware Reverse proxy VM can acquire an IP address through either DHCP or manual IP configuration. +- Optionally, create a dedicated Tier-1 router for the reverse proxy VM segment. ++### Prepare your Azure VMware Solution SDDC for deploying VMware Reverse proxy VM OVA ++1. Obtain NSX-T cloud admin credentials from the Azure portal under VMware credentials. Then, log in to NSX-T Manager. +1. Create a dedicated Tier-1 router (optional) for the VMware Reverse proxy VM. + 1. Log in to the Azure VMware Solution NSX-T Manager and select **ADD Tier-1 Gateway**. + 1. Provide a name and the linked Tier-0 gateway, and then select **Save**. + 1. Configure appropriate settings under Route Advertisements. + + :::image type="content" source="./media/vmware-cloud-director-service/pic-create-gateway.png" alt-text="Screenshot showing how to create a Tier-1 Gateway." lightbox="./media/vmware-cloud-director-service/pic-create-gateway.png"::: + +1. Create a segment for the VMware Reverse proxy VM. + 1. Log in to the Azure VMware Solution NSX-T Manager and, under **Segments**, select **ADD SEGMENT**. + 1. Provide the name, connected gateway, transport zone, and subnet information, and then select **Save**. + + :::image type="content" source="./media/vmware-cloud-director-service/pic-create-reverse-proxy.png" alt-text="Screenshot showing how to create a NSX-T segment for reverse proxy VM." lightbox="./media/vmware-cloud-director-service/pic-create-reverse-proxy.png"::: + +1. Optionally, enable DHCP on the segment by creating a DHCP profile and setting the DHCP config. 
You can skip this step if you use static IPs. +1. Add two NAT rules to provide outbound access for the VMware Reverse proxy VM to reach the VMware Cloud Director service. You can also reach the management components of the Azure VMware Solution SDDC, such as vCenter Server and NSX-T, that are deployed in the management plane. + 1. Create a **NOSNAT** rule: + - Provide a name for the rule and select the source IP. You can use CIDR format or a specific IP address. + - Under the destination, use the private cloud network CIDR. + 1. Create an **SNAT** rule: + - Provide a name and select the source IP. + - Under the translated IP, provide a public IP address. + - Set the priority of this rule higher than that of the NOSNAT rule. + 1. Select **Save**. + + :::image type="content" source="./media/vmware-cloud-director-service/pic-verify-nat-rules.png" alt-text="Screenshot showing how to verify the NAT rules have been created." lightbox="./media/vmware-cloud-director-service/pic-verify-nat-rules.png"::: + ++1. On the Tier-1 gateway, ensure that NAT is enabled under route advertisement. +1. Configure gateway firewall rules to enhance security. ++## Generate and Download VMware Reverse proxy OVA ++- The following step-by-step procedure shows how to obtain the required information from the Azure portal and use it to generate the VMware Reverse proxy VM OVA. ++### Prerequisites on VMware cloud service ++- Verify that you're assigned the network administrator service role. See [Managing Roles and Permissions](https://docs.vmware.com/en/VMware-Cloud-services/services/Using-VMware-Cloud-Services/GUID-84E54AD5-A53F-416C-AEBE-783927CD66C1.html) and make changes using the VMware Cloud Services Console. +- If you're accessing the VMware Cloud Director service through VMware Cloud Partner Navigator, verify that you're a Provider Service Manager user and that you have been assigned the provider:**admin** and provider:**network service** roles. 
+- See [How do I change the roles of users in my organization](https://docs.vmware.com/en/VMware-Cloud-Partner-Navigator/services/Cloud-Partner-Navigator-Using-Provider/GUID-BF0ED645-1124-4828-9842-18F5C71019AE.html) in the VMware Cloud Partner Navigator documentation. ++### Procedure +1. Log in to VMware Cloud Director service. +1. Click **Cloud Director Instances**. +1. In the card of the VMware Cloud Director instance for which you want to configure a reverse proxy service, click **Actions** > **Generate VMware Reverse Proxy OVA**. +1. The **Generate VMware Reverse proxy OVA** wizard opens. Fill in the required information. +1. Enter the network name. + - The network name is the name of the NSX-T segment you created in the previous section for the reverse proxy VM. +1. Enter the required information, such as the vCenter Server FQDN, the management IP for vCenter Server, the NSX FQDN or IP, and more hosts within the SDDC to proxy. +1. The vCenter Server and NSX-T IP addresses of your Azure VMware Solution private cloud can be found in the **Azure portal** under **Manage** > **VMware credentials**. ++ :::image type="content" source="./media/vmware-cloud-director-service/pic-obtain-vmware-credential.png" alt-text="Screenshot showing how to obtain VMware credentials using Azure portal." lightbox="./media/vmware-cloud-director-service/pic-obtain-vmware-credential.png"::: ++1. To find the FQDN of the vCenter Server of your Azure VMware Solution private cloud, log in to vCenter Server using the VMware credentials provided in the Azure portal. +1. In the vSphere Client, select the vCenter Server, which displays the FQDN of the vCenter Server. +1. To obtain the FQDN of NSX-T, replace vc with nsx. The NSX-T FQDN in this example would be “nsx.f31ca07da35f4b42abe08e.uksouth.avs.azure.com”. + + :::image type="content" source="./media/vmware-cloud-director-service/pic-vcenter-vmware.png" alt-text="Screenshot showing how to obtain vCenter and NSX-T FQDN in Azure VMware solution private cloud." lightbox="./media/vmware-cloud-director-service/pic-vcenter-vmware.png"::: ++1. 
Obtain the ESXi management IP addresses and CIDR for adding IP addresses to the allowlist when generating the reverse proxy VM OVA. ++ :::image type="content" source="./media/vmware-cloud-director-service/pic-manage-ip-address.png" alt-text="Screenshot showing how to obtain management IP address and CIDR for ESXi hosts in Azure VMware solution private cloud." lightbox="./media/vmware-cloud-director-service/pic-manage-ip-address.png"::: +++1. Enter a list of any other IP addresses that VMware Cloud Director must be able to access through the proxy, such as ESXi hosts to use for console proxy connection. Use new lines to separate list entries. ++ > [!TIP] + > To ensure that future additions of ESXi hosts don't require updates to the allowed targets, use a CIDR notation to enter the ESXi hosts in the allow list. This way, you can provide any new host with an IP address that is already allocated as part of the CIDR block. ++1. Once you have gathered all the required information, enter it in the VMware Reverse proxy OVA generation wizard, as shown in the following screenshot. +1. Click **Generate VMware Reverse Proxy OVA**. ++ :::image type="content" source="./media/vmware-cloud-director-service/pic-reverse-proxy.png" alt-text="Screenshot showing how to generate a reverse proxy VM OVA." lightbox="./media/vmware-cloud-director-service/pic-reverse-proxy.png"::: ++1. On the **Activity log** tab, locate the task for generating an OVA and check its status. If the status of the task is **Success**, click the vertical ellipsis icon and select **View files**. +1. Download the reverse proxy OVA. ++## Deploy VMware Reverse proxy VM +1. Transfer the reverse proxy VM OVA you generated in the previous section to a location from which you can access your private cloud. +1. Deploy the reverse proxy VM using the OVA. +1. Select appropriate parameters for the OVA deployment for folder, compute resources, and storage. + - For network, select the appropriate segment for the reverse proxy.
+ - Under customize template, use DHCP or provide a static IP if you aren't planning to use DHCP. + - Enable SSH to log in to the reverse proxy VM. + - Provide the root password. +1. Once the VM is deployed, power it on and then log in using the root credentials provided during OVA deployment. +1. Log in to the VMware Reverse proxy VM and use the command **transporter-status.sh** to verify that the connection between the CDs instance and the Transporter VM is established. + - The status should indicate "UP." The command channel should display "Connected," and the allowed targets should be listed as "reachable." +1. The next step is to associate the Azure VMware Solution SDDC with the VMware Cloud Director instance. +++## Associate Azure VMware Solution private cloud SDDC with VMware Cloud Director instance via VMware Reverse proxy ++This process pools all the resources from the Azure VMware Solution SDDC and creates a provider virtual datacenter (PVDC) in CDs. ++1. Log in to VMware Cloud Director service. +1. Click **Cloud Director Instances**. +1. In the card of the VMware Cloud Director instance for which you want to associate your Azure VMware Solution SDDC, select **Actions** and then click +**Associate datacenter via VMware reverse proxy**. +1. Review the datacenter information. +1. Select a proxy network for the reverse proxy appliance to use. Ensure the correct NSX-T segment is selected where the reverse proxy VM is deployed. ++ :::image type="content" source="./media/vmware-cloud-director-service/pic-proxy-network.png" alt-text="Screenshot showing how to review a proxy network information." lightbox="./media/vmware-cloud-director-service/pic-proxy-network.png"::: ++6. In the **Data center name** text box, enter a name for the SDDC that you want to associate with the datacenter. +This name is only used to identify the data center in the VMware Cloud Director inventory, so it doesn't need to match the SDDC name entered when you generated the reverse proxy appliance OVA. +7. 
Enter the FQDN for your vCenter Server instance. +8. Enter the URL for the NSX Manager instance and wait for a connection to establish. +9. Click **Next**. +10. Under **Credentials**, enter your user name and password for the vCenter Server endpoint. +11. Enter your user name and password for NSX Manager. +12. To create infrastructure resources for your VMware Cloud Director instance, such as a network pool, an external network, and a provider VDC, select **Create Infrastructure**. +13. Click **Validate Credentials**. Ensure that validation is successful. +14. Confirm that you acknowledge the costs associated with your instance, and click **Submit**. +15. Check the activity log to monitor the progress. +16. Once this process is completed, you should see that your Azure VMware Solution SDDC is securely associated with your VMware Cloud Director instance. +17. When you open the VMware Cloud Director instance, the vCenter Server and the NSX Manager instances that you associated are visible in Infrastructure Resources. ++ :::image type="content" source="./media/vmware-cloud-director-service/pic-connect-vcenter-server.png" alt-text="Screenshot showing how the vcenter server is connected and enabled." lightbox="./media/vmware-cloud-director-service/pic-connect-vcenter-server.png"::: ++18. A newly created Provider VDC is visible in Cloud Resources. +19. In your Azure VMware Solution private cloud, when logged in to vCenter Server, you see that a resource pool is created as a result of this association. ++ :::image type="content" source="./media/vmware-cloud-director-service/pic-resource-pool.png" alt-text="Screenshot showing how resource pools are created for CDs." lightbox="./media/vmware-cloud-director-service/pic-resource-pool.png"::: ++You can use your VMware Cloud Director instance provider portal to configure tenants such as organizations and virtual datacenters.
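Two pieces of bookkeeping from the steps above lend themselves to a quick script: deriving the NSX-T FQDN from the vCenter Server FQDN by replacing `vc` with `nsx`, and checking whether an ESXi host already falls inside a CIDR allowlist entry (per the earlier tip). A minimal sketch, with helper names of my own choosing:

```python
import ipaddress


def derive_nsx_fqdn(vcenter_fqdn: str) -> str:
    """Derive the NSX-T FQDN by swapping the leading 'vc' label for 'nsx',
    as described above (vc.<id>.<region>.avs.azure.com -> nsx....)."""
    host, sep, domain = vcenter_fqdn.partition(".")
    if host != "vc" or not sep:
        raise ValueError(f"unexpected vCenter Server FQDN: {vcenter_fqdn!r}")
    return f"nsx.{domain}"


def host_in_allowlist(ip: str, allowlist: list) -> bool:
    """True if the host IP falls inside any allowed target (single IP or CIDR
    block), so new ESXi hosts inside an allowed CIDR need no allowlist edit."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(entry, strict=False) for entry in allowlist)
```

For example, `derive_nsx_fqdn("vc.f31ca07da35f4b42abe08e.uksouth.avs.azure.com")` yields the NSX-T FQDN shown earlier in this article.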
++## What’s next ++- To configure tenant networking on VMware Cloud Director service on Azure VMware Solution, see [Enable VMware Cloud Director service with Azure VMware Solution](enable-vmware-cds-with-azure.md). ++- Learn more about VMware Cloud Director service in the [VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/https://docsupdatetracker.net/index.html). ++- To learn about the VMware Cloud Director service provider admin portal, visit the [VMware Cloud Director™ Service Provider Admin Portal Guide](https://docs.vmware.com/en/VMware-Cloud-Director/10.4/VMware-Cloud-Director-Service-Provider-Admin-Portal-Guide/GUID-F8F4B534-49B2-43B2-AEEE-7BAEE8CE1844.html). |
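The **transporter-status.sh** check in the deployment steps above reports three indicators: status "UP", command channel "Connected", and allowed targets "reachable". A sketch of evaluating captured output against those indicators; the exact output format is an assumption, so adjust the parsing to what your appliance actually prints:

```python
# Hypothetical output layout for illustration only; inspect the real output
# of transporter-status.sh on your reverse proxy VM before relying on this.
SAMPLE = """\
status: UP
command channel: Connected
allowed targets: reachable
"""


def proxy_healthy(output: str) -> bool:
    """True only when all three indicators from the article are present."""
    fields = dict(
        line.split(": ", 1) for line in output.strip().splitlines() if ": " in line
    )
    return (
        fields.get("status") == "UP"
        and fields.get("command channel") == "Connected"
        and fields.get("allowed targets") == "reachable"
    )
```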
azure-vmware | Deploy Arc For Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md | Title: Deploy Arc for Azure VMware Solution (Preview) description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. Previously updated : 04/11/2022 Last updated : 08/28/2023 Before you begin checking off the prerequisites, verify the following actions ha - You deployed an Azure VMware Solution private cluster. - You have a connection to the Azure VMware Solution private cloud through your on-premises environment or your native Azure Virtual Network. -- There should be an isolated NSX-T Data Center segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center segment doesn't exist, one will be created.+- There should be an isolated NSX-T Data Center network segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center network segment doesn't exist, one will be created. ## Prerequisites The following items are needed to ensure you're set up to begin the onboarding p - Verify that your vCenter Server version is 6.7 or higher. - A resource pool with minimum-free capacity of 16 GB of RAM, 4 vCPUs. - A datastore with minimum 100 GB of free disk space that is available through the resource pool. -- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.+- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server. - Please validate the regional support before starting the onboarding. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. 
For more details, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview). - The firewall and proxy URLs below must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. [Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md) az feature show --name AzureArcForAVS --namespace Microsoft.AVS ## Onboard process to deploy Azure Arc -Use the following steps to guide you through the process to onboard in Arc for Azure VMware Solution (Preview). +Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution (Preview). 1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software. 1. Open the 'config_avs.json' file and populate all the variables. Use the following steps to guide you through the process to onboard in Arc for A > [!IMPORTANT] > You can't create the resources in a separate resource group. Make sure you use the same resource group from where the Azure VMware Solution private cloud was created to create the resources. -## Discover and project your VMware infrastructure resources to Azure +## Discover and project your VMware vSphere infrastructure resources to Azure When Arc appliance is successfully deployed on your private cloud, you can do the following actions. After the private cloud is Arc-enabled, vCenter resources should appear under ** ### Manage access to VMware resources through Azure Role-Based Access Control -After your Azure VMware Solution vCenter resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. 
You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs. +After your Azure VMware Solution vCenter Server resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs. This section will demonstrate how to use custom roles to manage granular access to VMware vSphere resources through Azure. When the extension installation steps are completed, they trigger deployment and ## Change Arc appliance credential -When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store. +When **cloudadmin** credentials are updated, use the following steps to update the credentials in the appliance store. 1. Log in to the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**. 1. Run the following command for Windows-based jumpbox VM. When you activate Arc-enabled Azure VMware Solution resources in Azure, a repres 1. Repeat steps 2, 3 and 4 for **Resourcespools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**. 1. When the deletion completes, select **Overview**. 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section.-1. Select **Remove from Azure** to remove the vCenter resource from Azure. +1. Select **Remove from Azure** to remove the vCenter Server resource from Azure. 1. Go to vCenter Server resource in Azure and delete it. 1. Go to the Custom location resource and select **Delete**. 1. Go to the Azure Arc Resource bridge resources and select **Delete**. 
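The onboarding steps earlier in this article require populating every variable in the 'config_avs.json' file before running the scripts. A pre-flight check along these lines can catch gaps early; the key names below are hypothetical placeholders, not the actual variable names shipped with the onboarding release (static IP details are included because DHCP isn't supported):

```python
import json

# Hypothetical required keys for illustration; the authoritative list is
# whatever the 'config_avs.json' in the onboarding release defines.
REQUIRED_KEYS = ("subscriptionId", "resourceGroup", "privateCloud", "staticIpNetworkDetails")


def missing_config_keys(text: str) -> list:
    """Parse a config_avs.json-style document and report required keys
    that are absent or left empty."""
    config = json.loads(text)
    return [key for key in REQUIRED_KEYS if not config.get(key)]
```

Running this before invoking the onboarding script turns a mid-run failure into an immediate, named list of missing values.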
At this point, all of your Arc-enabled VMware vSphere resources have been removed ## Delete Arc resources from vCenter Server -For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Login to vCenter and delete resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution SDDC. When you delete Arc resources from vCenter, it won't affect the Azure VMware Solution private cloud for the customer. +For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Log in to vCenter Server and delete the resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it won't affect the Azure VMware Solution private cloud for the customer. ## Preview FAQ Arc for Azure VMware Solution is supported in all regions where Arc for VMware v Standard support process for Azure VMware Solution has been enabled to support customers. -**Does Arc for Azure VMware Solution support private end point?** +**Does Arc for Azure VMware Solution support private endpoint?** -Yes. Arc for Azure VMware Solution will support private end point for general audience. However, it's not currently supported. +Private endpoint is currently not supported. **Is enabling internet the only option to enable Arc for Azure VMware Solution?** -Yes +Yes, the Azure VMware Solution private cloud and jumpbox VM must have internet access for Arc to function. **Is DHCP support available?** -DHCP support isn't available to customers at this time, we only support static IP. -->[!NOTE] -> This is Azure VMware Solution 2.0 only. It's not available for Azure VMware Solution by Cloudsimple. 
+DHCP support isn't available to customers at this time; we only support static IP addresses. ## Debugging tips for known issues Use the following tips as a self-help guide. **I'm unable to install extensions on my virtual machine.** - Check that **guest management** has been successfully installed.-- **VMtools** should be installed on the VM.+- **VMware Tools** should be installed on the VM. **I'm facing Network related issues during on-boarding.** |
azure-vmware | Deploy Vsan Stretched Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-vsan-stretched-clusters.md | Title: Deploy vSAN stretched clusters description: Learn how to deploy vSAN stretched clusters. Previously updated : 06/24/2023 Last updated : 08/16/2023 It should be noted that these types of failures, although rare, fall outside the Azure VMware Solution stretched clusters are available in the following regions: - UK South (on AV36) -- West Europe (on AV36) +- West Europe (on AV36, and AV36P) - Germany West Central (on AV36) - Australia East (on AV36P) No. A stretched cluster is created between two availability zones, while the thi ### What are the limitations I should be aware of? - Once a private cloud has been created with a stretched cluster, it can't be changed to a standard cluster private cloud. Similarly, a standard cluster private cloud can't be changed to a stretched cluster private cloud after creation.-- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment.+- Scale out and scale-in of stretched clusters can only happen in pairs. A minimum of 6 nodes and a maximum of 16 nodes are supported in a stretched cluster environment. For more details, refer to [Azure subscription and service limits, quotas, and constraints](https://learn.microsoft.com/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-vmware-solution-limits). - Customer workload VMs are restarted with a medium vSphere HA priority. Management VMs have the highest restart priority. - The solution relies on vSphere HA and vSAN for restarts and replication. Recovery time objective (RTO) is determined by the amount of time it takes vSphere HA to restart a VM on the surviving AZ after the failure of a single AZ. - Currently not supported in a stretched cluster environment: |
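The stretched-cluster scaling constraint above (scale out and scale in only in pairs, minimum 6 and maximum 16 nodes) reduces to a one-line validity check. A minimal sketch, with the limits taken directly from this section:

```python
def valid_stretched_cluster_size(nodes: int) -> bool:
    """A stretched cluster scales in pairs, with a minimum of 6 and a
    maximum of 16 nodes, per the limits stated in the article."""
    return 6 <= nodes <= 16 and nodes % 2 == 0
```

For example, growing from 6 to 7 nodes is rejected (not a pair), while 6 to 8 is allowed.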
azure-vmware | Ecosystem External Storage Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/ecosystem-external-storage-solutions.md | + + Title: External storage solutions for Azure VMware Solution (preview) +description: Learn about external storage solutions for Azure VMware Solution private cloud. ++++ Last updated : 08/07/2023+ ++# External storage solutions (preview) ++> [!NOTE] +> By using Pure Cloud Block Store, you agree to the following [Microsoft supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It is advised NOT to run production workloads with preview features. ++## External storage solutions for Azure VMware Solution (preview) ++Azure VMware Solution is a Hyperconverged Infrastructure (HCI) service that offers VMware vSAN as the primary storage option. However, a significant requirement with on-premises VMware deployments is external storage, especially block storage. Providing the same consistent external block storage architecture in the cloud is crucial for some customers. Some workloads can't be migrated or deployed to the cloud without consistent external block storage. As a key principle of Azure VMware Solution is to enable customers to continue to use their investments and their favorite VMware solutions running on Azure, we engaged storage providers with similar goals. ++Pure Cloud Block Store, offered by Pure Storage, is one such solution. It helps bridge the gap by allowing customers to provision external block storage as needed to make full use of an Azure VMware Solution deployment without the need to scale out compute resources, while helping customers migrate their on-premises workloads to Azure. Pure Cloud Block Store is a 100% software-delivered product running entirely on native Azure infrastructure that brings all the relevant Purity features and capabilities to Azure. 
++## Onboarding and support ++During preview, Pure Storage manages onboarding of Pure Cloud Block Store for Azure VMware Solution. You can join the preview by emailing [avs@purestorage.com](mailto:avs@purestorage.com). As Pure Cloud Block Store is a customer deployed and managed solution, please reach out to Pure Storage for Customer Support. ++For more information, see the following resources: ++- [Azure VMware Solution + CBS Implementation Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Azure_VMware_Solution_and_Cloud_Block_Store_Implementation_Guide) +- [CBS Deployment Guide](https://support.purestorage.com/Pure_Cloud_Block_Store/Pure_Cloud_Block_Store_on_Azure_Implementation_Guide) +- [Troubleshooting CBS Deployment](https://support.purestorage.com/Pure_Cloud_Block_Store/Pure_Cloud_Block_Store_on_Azure_-_Troubleshooting_Guide) +- [Videos](https://support.purestorage.com/Pure_Cloud_Block_Store/Azure_VMware_Solution_and_Cloud_Block_Store_Video_Demos) |
azure-vmware | Enable Public Ip Nsx Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md | Title: Enable Public IP on the NSX-T Data Center Edge for Azure VMware Solution description: This article shows how to enable internet access for your Azure VMware Solution. Previously updated : 7/6/2023 Last updated : 8/18/2023 With this capability, you have the following features: ## Reference architecture The architecture shows internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX-T Data Center Edge. >[!IMPORTANT] >The use of Public IP down to the NSX-T Data Center Edge is not compatible with reverse DNS Lookup. This includes not being able to support hosting a mail server in Azure VMware Solution. |
azure-vmware | Enable Vmware Cds With Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-vmware-cds-with-azure.md | Title: Enable VMware Cloud Director service with Azure VMware Solution (Public Preview) + Title: Enable VMware Cloud Director service with Azure VMware Solution description: This article explains how to use Azure VMware Solution to enable enterprise customers to use Azure VMware Solution for private clouds underlying resources for virtual datacenters. Last updated 08/30/2022 -# Enable VMware Cloud Director service with Azure VMware Solution (Preview) +# Enable VMware Cloud Director service with Azure VMware Solution [VMware Cloud Director service (CDs)](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/getting-started-with-vmware-cloud-director-service/GUID-149EF3CD-700A-4B9F-B58B-8EA5776A7A92.html) with Azure VMware Solution enables enterprise customers to use APIs or the Cloud Director services portal to self-service provision and manage virtual datacenters through multi-tenancy with reduced time and complexity. VMware Cloud Director Availability can be used to migrate VMware Cloud Director For more information about VMware Cloud Director Availability, see [VMware Cloud Director Availability | Disaster Recovery & Migration](https://www.vmware.com/products/cloud-director-availability.html) ## FAQs-**Question**: What are the supported Azure regions for the VMware Cloud Director service? -**Answer**: This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-milliseconds round trip time for latency with VMware Cloud Director service. +### What are the supported Azure regions for the VMware Cloud Director service? -**Question**: How do I configure VMware Cloud Director service on Microsoft Azure VMware Solutions? 
+This offering is supported in all Azure regions where Azure VMware Solution is available except for Brazil South and South Africa. Ensure that the region you wish to connect to VMware Cloud Director service is within a 150-millisecond round-trip time for latency with VMware Cloud Director service. ++### How do I configure VMware Cloud Director service on Microsoft Azure VMware Solutions? ++[Learn about how to configure CDs on Azure VMware Solutions](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html) ++### How is VMware Cloud Director service supported? ++VMware Cloud Director service (CDs) is a VMware-owned and supported product connected to Azure VMware Solution. For any support queries on CDs, please contact VMware support for assistance. Both VMware and Microsoft support teams collaborate as necessary to address and resolve Cloud Director service issues within Azure VMware Solution. -**Answer** [Learn about how to configure CDs on Azure VMware Solutions](https://docs.vmware.com/en/VMware-Cloud-Director-service/services/using-vmware-cloud-director-service/GUID-602DE9DD-E7F6-4114-BD89-347F9720A831.html) ## Next steps [VMware Cloud Director Service Documentation](https://docs.vmware.com/en/VMware-Cloud-Director-service/https://docsupdatetracker.net/index.html) [Migration to Azure VMware Solutions with Cloud Director service](https://cloudsolutions.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/docs/migration-to-azure-vmware-solution-with-cloud-director-service.pdf)++ |
azure-vmware | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md | The following table provides a detailed list of roles and responsibilities betwe | -- | - | | Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX | | Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 
routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |-| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI | +| Partner ecosystem | Support for their product/solution. 
For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy - VMware Cloud Director service (CDs), VMware Cloud Director Availability (VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI | ## Next steps The next step is to learn key [private cloud and cluster concepts](concepts-priv <!-- LINKS - external --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md--
azure-vmware | Rotate Cloudadmin Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md | description: Learn how to rotate the vCenter Server credentials for your Azure V Previously updated : 12/22/2022-#Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials. Last updated : 8/16/2023+# Customer intent: As an Azure service administrator, I want to rotate my cloudadmin credentials so that the HCX Connector has the latest vCenter Server CloudAdmin credentials. # Rotate the cloudadmin credentials for Azure VMware Solution ->[!IMPORTANT] ->Currently, rotating your NSX-T Manager *cloudadmin* credentials isn't supported. To rotate your NSX-T Manager password, submit a [support request](https://rc.portal.azure.com/#create/Microsoft.Support). This process might impact running HCX services. -In this article, you'll rotate the cloudadmin credentials (vCenter Server *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time. +In this article, you'll rotate the cloudadmin credentials (vCenter Server and NSX-T *CloudAdmin* credentials) for your Azure VMware Solution private cloud. Although the password for this account doesn't expire, you can generate a new one at any time. >[!CAUTION]->If you use your cloudadmin credentials to connect services to vCenter Server in your private cloud, those connections will stop working once you rotate your password. Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password. +>If you use your cloudadmin credentials to connect services to vCenter Server or NSX-T in your private cloud, those connections will stop working once you rotate your password. 
Those connections will also lock out the cloudadmin account unless you stop those services before rotating the password. ## Prerequisites -Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning. +Consider and determine which services connect to vCenter Server as *cloudadmin@vsphere.local* or NSX-T as cloudadmin before you rotate the password. These services may include VMware services such as HCX, vRealize Orchestrator, vRealize Operations Manager, VMware Horizon, or other third-party tools used for monitoring or provisioning. One way to determine which services authenticate to vCenter Server with the cloudadmin user is to inspect vSphere events using the vSphere Client for your private cloud. After you identify such services, and before rotating the password, you must stop these services. Otherwise, the services won't work after you rotate the password. You'll also experience temporary locks on your vCenter Server CloudAdmin account, as these services continuously attempt to authenticate using a cached version of the old credentials. -Instead of using the cloudadmin user to connect services to vCenter Server, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md). +Instead of using the cloudadmin user to connect services to vCenter Server or NSX-T Data Center, we recommend individual accounts for each service. For more information about setting up separate accounts for connected services, see [Access and Identity Concepts](./concepts-identity.md). ## Reset your vCenter Server credentials ### [Portal](#tab/azure-portal) -1. 
In your Azure VMware Solution private cloud, select **VMWare credentials**. -1. Select **Generate new password**. +1. In your Azure VMware Solution private cloud, select **VMware credentials**. +1. Select **Generate new password** under vCenter Server credentials. 1. Select the confirmation checkbox and then select **Generate password**. To begin using Azure CLI: ``` ------ -- - + - -## Update HCX Connector +### Update HCX Connector 1. Go to the on-premises HCX Connector at https://{ip of the HCX connector appliance}:443 and sign in using the new credentials. To begin using Azure CLI: 4. Provide the new vCenter Server user credentials and select **Edit**, which saves the credentials. The save should show as successful. +## Reset your NSX-T Manager credentials ++1. In your Azure VMware Solution private cloud, select **VMware credentials**. +1. Select **Generate new password** under NSX-T Manager credentials. +1. Select the confirmation checkbox and then select **Generate password**. + ## Next steps -Now that you've covered resetting your vCenter Server credentials for Azure VMware Solution, you may want to learn about: +Now that you've covered resetting your vCenter Server and NSX-T Manager credentials for Azure VMware Solution, you may want to learn about: - [Integrating Azure native services in Azure VMware Solution](integrate-azure-native-services.md) - [Deploying disaster recovery for Azure VMware Solution workloads using VMware HCX](deploy-disaster-recovery-using-vmware-hcx.md) - |
azure-vmware | Sql Server Hybrid Benefit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/sql-server-hybrid-benefit.md | Title: Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptio description: Learn about Azure Hybrid benefit for Windows server, SQL server, or Linux subscriptions. Previously updated : 3/20/2023 Last updated : 8/28/2023 Microsoft SQL Server is a core component of many business-critical applications Azure VMware Solution is an ideal solution for customers looking to migrate and modernize their vSphere-based applications to the cloud, including their Microsoft SQL databases. +Microsoft SQL Server Enterprise licenses are required for each Azure VMware Solution ESXi host core that is used by SQL Server workloads running in a cluster. This licensing requirement can be reduced further by configuring the [Azure Hybrid Benefit](enable-sql-azure-hybrid-benefit.md) feature within Azure VMware Solution, using placement policies to limit the scope of ESXi host cores that need to be licensed within a cluster. + ## Next steps Now that you've covered Azure Hybrid benefit, you may want to learn about: Now that you've covered Azure Hybrid benefit, you may want to learn about: - [Migrate Microsoft SQL Server Standalone to Azure VMware Solution](migrate-sql-server-standalone-cluster.md) - [Migrate SQL Server failover cluster to Azure VMware Solution](migrate-sql-server-failover-cluster.md) - [Migrate Microsoft SQL Server Always-On cluster to Azure VMware Solution](migrate-sql-server-always-on-cluster.md)-- [Enable SQL Azure hybrid benefit for Azure VMware Solution](migrate-sql-server-standalone-cluster.md)+- [Enable SQL Azure hybrid benefit for Azure VMware Solution](enable-sql-azure-hybrid-benefit.md) - [Configure Windows Server Failover Cluster on Azure VMware Solution vSAN](configure-windows-server-failover-cluster.md) |
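The per-core licensing rule in the change above lends itself to a quick worked example. The following is a minimal sketch (a hypothetical helper, not an official licensing calculator) showing how a placement policy that limits SQL Server workloads to a subset of hosts reduces the licensed core count:

```javascript
// Hypothetical helper: estimate SQL Server Enterprise core licenses needed for
// the ESXi hosts that are in scope for SQL Server workloads. Assumes every
// core on an in-scope host must be licensed, per the guidance above.
function requiredCoreLicenses(hostsInScope, coresPerHost) {
  return hostsInScope * coresPerHost;
}

// Example: a 6-host cluster with 36 cores per host. A placement policy that
// pins SQL Server VMs to 3 hosts halves the cores that need licensing.
console.log(requiredCoreLicenses(6, 36)); // 216 cores without a placement policy
console.log(requiredCoreLicenses(3, 36)); // 108 cores with a placement policy
```

The host and core counts are illustrative only; consult the licensing guidance for your actual cluster sizing.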
azure-web-pubsub | Concept Azure Ad Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-azure-ad-authorization.md | Title: Authorize access with Azure Active Directory for Azure Web PubSub -description: This article provides information on authorizing access to Azure Web PubSub Service resources using Azure Active Directory. + Title: Authorize access with Microsoft Entra ID for Azure Web PubSub +description: This article provides information on authorizing access to Azure Web PubSub Service resources using Microsoft Entra ID. -# Authorize access to Web PubSub resources using Azure Active Directory +# Authorize access to Web PubSub resources using Microsoft Entra ID -The Azure Web PubSub Service allows for the authorization of requests to Web PubSub resources by using Azure Active Directory (Azure AD). +The Azure Web PubSub Service enables the authorization of requests to Azure Web PubSub resources by utilizing Microsoft Entra ID. -By utilizing role-based access control (RBAC) within Azure AD, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Azure AD authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request. +By utilizing role-based access control (RBAC) with Microsoft Entra ID, permissions can be granted to a security principal<sup>[<a href="#security-principal">1</a>]</sup>. Microsoft Entra ID authenticates this security principal and returns an OAuth 2.0 token, which Web PubSub resources can then use to authorize a request. -Using Azure AD for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Azure AD authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges. 
+Using Microsoft Entra ID for authorization of Web PubSub requests offers improved security and ease of use compared to Access Key authorization. Microsoft recommends utilizing Microsoft Entra ID authorization with Web PubSub resources when possible to ensure access with the minimum necessary privileges. <a id="security-principal"></a>-*[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities.* +_[1] security principal: a user/resource group, an application, or a service principal such as system-assigned identities and user-assigned identities._ -## Overview of Azure AD for Web PubSub -Authentication is necessary to access a Web PubSub resource when using Azure AD. This authentication involves two steps: +## Overview of Microsoft Entra ID for Web PubSub +Authentication is necessary to access a Web PubSub resource when using Microsoft Entra ID. This authentication involves two steps: 1. First, Azure authenticates the security principal and issues an OAuth 2.0 token. 2. Second, the token is added to the request to the Web PubSub resource. The Web PubSub service uses the token to check if the service principal has access to the resource. -### Client-side authentication while using Azure AD +### Client-side authentication while using Microsoft Entra ID The negotiation server/Function App shares an access key with the Web PubSub resource, enabling the Web PubSub service to authenticate client connection requests using client tokens generated by the access key. -However, access key is often disabled when using Azure AD to improve security. +However, the access key is often disabled when using Microsoft Entra ID to improve security. To address this issue, we have developed a REST API that generates a client token. This token can be used to connect to the Azure Web PubSub service. 
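The token exchange described in this change can be sketched as a small simulation. `getEntraToken` and `exchangeForClientToken` below are hypothetical stubs standing in for the real Microsoft Entra token request and the Web PubSub Auth API call; only the order of operations is illustrated, not a real SDK surface:

```javascript
// Hypothetical sketch of the negotiation flow. Both functions are stand-ins
// (stubs) for real HTTP calls.
function getEntraToken() {
  // Step 1: the negotiation server authenticates itself with Microsoft Entra
  // ID. Real code would request a token scoped to the Web PubSub resource.
  return { token: "<entra-access-token>" };
}

function exchangeForClientToken(entraToken, hub, userId) {
  // Step 2: the server calls the Web PubSub Auth API with the Entra token and
  // receives a client token / connection URL to hand back to the client.
  return { url: `wss://contoso.webpubsub.azure.com/client/hubs/${hub}`, userId };
}

function negotiate(hub, userId) {
  const entra = getEntraToken();
  return exchangeForClientToken(entra.token, hub, userId);
}

const result = negotiate("hub1", "user-1");
console.log(result.url);
```

In real code, the helper functions provided by the supported SDKs wrap both steps for you; the endpoint name above is a placeholder.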
-To use this API, the negotiation server must first obtain an **Azure AD Token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Azure AD Token** to retrieve a **Client Token**. The **Client Token** is then returned to the client, who can use it to connect to the Azure Web PubSub service. +To use this API, the negotiation server must first obtain a **Microsoft Entra Token** from Azure to authenticate itself. The server can then call the Web PubSub Auth API with the **Microsoft Entra Token** to retrieve a **Client Token**. The **Client Token** is then returned to the client, who can use it to connect to the Azure Web PubSub service. We provided helper functions (for example `GenerateClientAccessUri`) for supported programming languages. ## Assign Azure roles for access rights -Azure Active Directory (Azure AD) authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure Web PubSub defines a set of Azure built-in roles that encompass common sets of permissions used to access Web PubSub resources. You can also define custom roles for access to Web PubSub resources. +Microsoft Entra ID authorizes access rights to secured resources through [Azure role-based access control](../role-based-access-control/overview.md). Azure Web PubSub defines a set of Azure built-in roles that encompass common sets of permissions used to access Web PubSub resources. You can also define custom roles for access to Web PubSub resources. ### Resource scope Before assigning an Azure RBAC role to a security principal, it's important to i You can scope access to Azure Web PubSub resources at the following levels, beginning with the narrowest scope: -- **An individual resource.** +- **An individual resource.** At this scope, a role assignment applies to only the target resource. 
-- **A resource group.** +- **A resource group.** At this scope, a role assignment applies to all of the resources in the resource group. You can scope access to Azure Web PubSub resources at the following levels, beginni At this scope, a role assignment applies to all of the resources in all of the resource groups in the subscription. -- **A management group.** +- **A management group.** At this scope, a role assignment applies to all of the resources in all of the resource groups in all of the subscriptions in the management group. -## Azure built-in roles for Web PubSub resources. +## Azure built-in roles for Web PubSub resources - `Web PubSub Service Owner` - Full access to data-plane permissions, including read/write REST APIs and Auth APIs. + Full access to data-plane permissions, including read/write REST APIs and Auth APIs. - This role is the most commonly used for building an upstream server. + This role is the most commonly used for building an upstream server. - `Web PubSub Service Reader` - Use to grant read-only REST API permissions to Web PubSub resources. + Use to grant read-only REST API permissions to Web PubSub resources. - It's used when you'd like to write a monitoring tool that calls **ONLY** Web PubSub data-plane **READONLY** REST APIs. + It's used when you'd like to write a monitoring tool that calls **ONLY** Web PubSub data-plane **READONLY** REST APIs. 
## Next steps -To learn how to create an Azure application and use Azure AD auth, see -- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)+To learn how to create an Azure application and use Microsoft Entra authorization, see -To learn how to configure a managed identity and use Azure AD auth, see -- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)+- [Authorize request to Web PubSub resources with Microsoft Entra ID from applications](howto-authorize-from-application.md) ++To learn how to configure a managed identity and use Microsoft Entra ID auth, see ++- [Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities](howto-authorize-from-managed-identity.md) ++To learn more about roles and role assignments, see -To learn more about roles and role assignments, see - [What is Azure role-based access control](../role-based-access-control/overview.md) -To learn how to create custom roles, see +To learn how to create custom roles, see + - [Steps to create a custom role](../role-based-access-control/custom-roles.md#steps-to-create-a-custom-role) -To learn how to use only Azure AD authentication, see -- [Disable local authentication](./howto-disable-local-auth.md)+To learn how to use only Microsoft Entra authorization, see ++- [Disable local authentication](./howto-disable-local-auth.md) |
azure-web-pubsub | Concept Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-disaster-recovery.md | -Resiliency and disaster recovery is a common need for online systems. Azure Web PubSub Service already guarantees 99.9% availability, but it's still a regional service. +Resiliency and disaster recovery are common needs for online systems. Azure Web PubSub Service already guarantees 99.9% availability, but it's still a regional service. When there is a region-wide outage, it is critical for the service to continue processing real-time messages in a different region. -Your service instance is a regional service and the instance is running in one region. When there is a region-wide outage, it is critical for the service to continue processing real-time messages in a different region. This article will explain some of the strategies you can use to deploy the service to allow for disaster recovery. +For regional disaster recovery, we recommend the following two approaches: ++- **Enable Geo-Replication** (the easy way). This feature handles regional failover for you automatically. When enabled, there remains just one Azure Web PubSub instance and no code changes are introduced. Check [geo-replication](howto-enable-geo-replication.md) for details. +- **Utilize Multiple Endpoints**. You'll learn how to do so in this document. ## High available architecture for Web PubSub service We haven't integrated the strategy into the SDK yet, so for now the application In summary, what the application side needs to implement is: 1. Health check. The application can either check if the service is healthy using the [service health check API](/rest/api/webpubsub/dataplane/health-api/get-service-status) periodically in the background or on demand for every **negotiate** call. 1. Negotiate logic. The application returns the healthy **primary** endpoint by default. 
When the **primary** endpoint is down, the application returns the healthy **secondary** endpoint.-1. Broadcast logic. When sending messages to multiple clients, application needs to make sure it broadcasts messages to all the **healthy** endpoints. +1. Broadcast logic. When messages are sent to multiple clients, the application needs to make sure it broadcasts them to all the **healthy** endpoints. Below is a diagram that illustrates such a topology: You'll need to handle such cases at client side to make it transparent to your e ### High available architecture for client-client pattern -For client-client pattern, currently it is not yet possible to support a zero-down-time disaster recovery. If you have high availability requirements, please consider using client-server pattern, or sending a copy of messages to the server as well. --Clients connected to one Web PubSub service are not yet able to communicate with clients connected to another Web PubSub service using client-client pattern. So when using client-client pattern, the general principles are: -1. All the app server instances return the same Web PubSub endpoint to the client **negotiate** calls. One way is to have a source-of-truth storing, checking the health status, and managing these endpoints, and returning one healthy endpoint in your primary regions. -2. Make sure there is no active client connected to other endpoints. [Close All Connections](/rest/api/webpubsub/dataplane/web-pub-sub/close-all-connections) could be used to close all the connected clients. +For the client-client pattern, it is not yet possible to support zero-downtime disaster recovery using multiple instances. If you have high availability requirements, please consider using [geo-replication](howto-enable-geo-replication.md). ## How to test a failover |
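The health check, negotiate, and broadcast logic summarized in the disaster-recovery change above can be sketched as follows. This is a minimal simulation; the `isHealthy` probe and the endpoint list are hypothetical stand-ins for calls to the service health check API:

```javascript
// Minimal sketch of the failover logic described above. `isHealthy` is a
// hypothetical stand-in for an HTTP probe against the health check API.
const endpoints = [
  { name: "primary", url: "https://primary.webpubsub.azure.com", healthy: false },
  { name: "secondary", url: "https://secondary.webpubsub.azure.com", healthy: true },
];

const isHealthy = (endpoint) => endpoint.healthy; // real code: HTTP health probe

// Negotiate logic: prefer the primary endpoint, fall back to a healthy one.
function negotiate(endpoints) {
  return endpoints.find(isHealthy) ?? null;
}

// Broadcast logic: send to every healthy endpoint, not just one.
function broadcast(endpoints, message) {
  return endpoints.filter(isHealthy).map((e) => `${e.name}: ${message}`);
}

console.log(negotiate(endpoints).name); // secondary (primary is down)
console.log(broadcast(endpoints, "hello"));
```

In a real deployment, the endpoint list would come from your configuration and the probe would run periodically in the background or per **negotiate** call, as the article describes.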
azure-web-pubsub | Concept Service Internals | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-service-internals.md | Title: Azure Web PubSub service internals -description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted. +description: Learn about Azure Web PubSub Service internals, the architecture, the connections and how data is transmitted. -# Azure Web PubSub service internals +# Azure Web PubSub service internals Azure Web PubSub Service provides an easy way to publish/subscribe messages using simple [WebSocket](https://tools.ietf.org/html/rfc6455) connections. Azure Web PubSub Service provides an easy way to publish/subscribe messages usin - The service manages the WebSocket connections for you. ## Terms-* **Service**: Azure Web PubSub Service. ++- **Service**: Azure Web PubSub Service. [!INCLUDE [Terms](includes/terms.md)] Azure Web PubSub Service provides an easy way to publish/subscribe messages usin ![Diagram showing the Web PubSub service workflow.](./media/concept-service-internals/workflow.png) Workflow as shown in the above graph:-1. A *client* connects to the service `/client` endpoint using WebSocket transport. The service forwards every WebSocket frame to the configured upstream (server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. Details are described in [client protocol](#client-protocol). ++1. A _client_ connects to the service `/client` endpoint using WebSocket transport. The service forwards every WebSocket frame to the configured upstream (server). The WebSocket connection can connect with any custom subprotocol for the server to handle, or it can connect with the service-supported subprotocol `json.webpubsub.azure.v1`, which empowers the clients to do pub/sub directly. 
Details are described in [client protocol](#client-protocol). 2. On different client events, the service invokes the server using **CloudEvents protocol**. [**CloudEvents**](https://github.com/cloudevents/spec/tree/v1.0.1) is a standardized and protocol-agnostic definition of the structure and metadata description of events hosted by the Cloud Native Computing Foundation (CNCF). Detailed implementation of CloudEvents protocol relies on the server role, described in [server protocol](#server-protocol). 3. The Web PubSub server can invoke the service using the REST API to send messages to clients or to manage the connected clients. Details are described in [server protocol](#server-protocol) Workflow as shown in the above graph: A client connection connects to the `/client` endpoint of the service using [WebSocket protocol](https://tools.ietf.org/html/rfc6455). The WebSocket protocol provides full-duplex communication channels over a single TCP connection and was standardized by the IETF as RFC 6455 in 2011. Most languages have native support to start WebSocket connections. Our service supports two kinds of clients:+ - One is called [the simple WebSocket client](#the-simple-websocket-client) - The other is called [the PubSub WebSocket client](#the-pubsub-websocket-client) ### The simple WebSocket client+ A simple WebSocket client, as the name indicates, is a simple WebSocket connection. It can also have its own custom subprotocol. 
For example, in JS, a simple WebSocket client can be created using the following code.+ ```js // simple WebSocket client1-var client1 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1'); +var client1 = new WebSocket("wss://test.webpubsub.azure.com/client/hubs/hub1"); // simple WebSocket client2 with some custom subprotocol-var client2 = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'custom.subprotocol') -+var client2 = new WebSocket( + "wss://test.webpubsub.azure.com/client/hubs/hub1", + "custom.subprotocol" +); ``` A simple WebSocket client follows a client<->server architecture, as the below sequence diagram shows: ![Diagram showing the sequence for a client connection.](./media/concept-service-internals/simple-client-sequence.png) - 1. When the client starts a WebSocket handshake, the service tries to invoke the `connect` event handler for WebSocket handshake. Developers can use this handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups. 2. When the client is successfully connected, the service invokes a `connected` event handler. It works as a notification and doesn't block the client from sending messages. Developers can use this handler to do data storage and can respond with messages to the client. The service also pushes a `connected` event to all concerning event listeners, if any. 3. When the client sends messages, the service triggers a `message` event to the event handler to handle the messages sent. This event is a general event containing the messages sent in a WebSocket frame. Your code needs to dispatch the messages inside this event handler. If the event handler returns a non-successful response code, the service drops the client connection. The service also pushes a `message` event to all concerning event listeners, if any. If the service can't find any registered servers to receive the messages, the service also drops the connection. 
4. When the client disconnects, the service tries to trigger the `disconnected` event to the event handler once it detects the disconnect. The service also pushes a `disconnected` event to all concerning event listeners, if any. #### Scenarios+ These connections can be used in a typical client-server architecture where the client sends messages to the server and the server handles incoming messages using [Event Handlers](#event-handler). It can also be used when customers apply existing [subprotocols](https://www.iana.org/assignments/websocket/websocket.xml) in their application logic. ### The PubSub WebSocket client+ The service also supports a specific subprotocol called `json.webpubsub.azure.v1`, which empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. We call the WebSocket connection with `json.webpubsub.azure.v1` subprotocol a PubSub WebSocket client. For more information, see the [Web PubSub client specification](https://github.com/Azure/azure-webpubsub/blob/main/protocols/client/client-spec.md) on GitHub. 
For example, in JS, a PubSub WebSocket client can be created using the following code.+ ```js // PubSub WebSocket client-var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.webpubsub.azure.v1'); +var pubsub = new WebSocket( + "wss://test.webpubsub.azure.com/client/hubs/hub1", + "json.webpubsub.azure.v1" +); ``` A PubSub WebSocket client can:-* Join a group, for example: - ```json - { - "type": "joinGroup", - "group": "<group_name>" - } - ``` -* Leave a group, for example: - ```json - { - "type": "leaveGroup", - "group": "<group_name>" - } - ``` -* Publish messages to a group, for example: - ```json - { - "type": "sendToGroup", - "group": "<group_name>", - "data": { "hello": "world" } - } - ``` -* Send custom events to the upstream server, for example: -- ```json - { - "type": "event", - "event": "<event_name>", - "data": { "hello": "world" } - } - ``` ++- Join a group, for example: ++ ```json + { + "type": "joinGroup", + "group": "<group_name>" + } + ``` ++- Leave a group, for example: ++ ```json + { + "type": "leaveGroup", + "group": "<group_name>" + } + ``` ++- Publish messages to a group, for example: ++ ```json + { + "type": "sendToGroup", + "group": "<group_name>", + "data": { "hello": "world" } + } + ``` ++- Send custom events to the upstream server, for example: ++ ```json + { + "type": "event", + "event": "<event_name>", + "data": { "hello": "world" } + } + ``` [PubSub WebSocket Subprotocol](./reference-json-webpubsub-subprotocol.md) contains the details of the `json.webpubsub.azure.v1` subprotocol. -You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the *server* is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. 
With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the *event* the message belongs to. +You may have noticed that for a [simple WebSocket client](#the-simple-websocket-client), the _server_ is a **must have** role to receive the `message` events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group and publish messages to a group directly. It can also route messages to different event handlers / event listeners by customizing the _event_ the message belongs to. ++#### Scenarios -#### Scenarios: Such clients can be used when clients want to talk to each other. Messages are sent from `client2` to the service and the service delivers the message directly to `client1` if the clients are authorized to do so. 
Client1: ```js-var client1 = new WebSocket("wss://xxx.webpubsub.azure.com/client/hubs/hub1", "json.webpubsub.azure.v1"); -client1.onmessage = e => { - if (e.data) { - var message = JSON.parse(e.data); - if (message.type === "message" - && message.group === "Group1"){ - // Only print messages from Group1 - console.log(message.data); - } +var client1 = new WebSocket( + "wss://xxx.webpubsub.azure.com/client/hubs/hub1", + "json.webpubsub.azure.v1" +); +client1.onmessage = (e) => { + if (e.data) { + var message = JSON.parse(e.data); + if (message.type === "message" && message.group === "Group1") { + // Only print messages from Group1 + console.log(message.data); }+ } }; -client1.onopen = e => { - client1.send(JSON.stringify({ - type: "joinGroup", - group: "Group1" - })); +client1.onopen = (e) => { + client1.send( + JSON.stringify({ + type: "joinGroup", + group: "Group1", + }) + ); }; ``` As the above example shows, `client2` sends data directly to `client1` by publis ### Client events summary Client events fall into two categories:-* Synchronous events (blocking) - Synchronous events block the client workflow. - * `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups. - * `message`: This event is triggered when a client sends a message. -* Asynchronous events (non-blocking) - Asynchronous events don't block the client workflow, it acts as some notification to server. When such an event trigger fails, the service logs the error detail. - * `connected`: This event is triggered when a client connects to the service successfully. - * `disconnected`: This event is triggered when a client disconnected with the service. ++- Synchronous events (blocking) + Synchronous events block the client workflow. 
+ - `connect`: This event is for event handler only. When the client starts a WebSocket handshake, the event is triggered and developers can use the `connect` event handler to handle the WebSocket handshake, determine the subprotocol to use, authenticate the client, and join the client to groups. + - `message`: This event is triggered when a client sends a message. +- Asynchronous events (non-blocking) + Asynchronous events don't block the client workflow; they act as notifications to the server. When such an event trigger fails, the service logs the error details. + - `connected`: This event is triggered when a client connects to the service successfully. + - `disconnected`: This event is triggered when a client disconnects from the service. ### Client message limit + The maximum allowed message size for one WebSocket frame is **1MB**. ### Client authentication The following graph describes the workflow. ![Diagram showing the client authentication workflow.](./media/concept-service-internals/client-connect-workflow.png) -As you may have noticed when we describe the PubSub WebSocket clients, that a client can publish to other clients only when it's *authorized* to. The `role`s of the client determines the *initial* permissions the client have: +As you may have noticed when we described the PubSub WebSocket clients, a client can publish to other clients only when it's _authorized_ to. The `role`s of the client determine the _initial_ permissions the client has: -| Role | Permission | -||| -| Not specified | The client can send events. -| `webpubsub.joinLeaveGroup` | The client can join/leave any group. -| `webpubsub.sendToGroup` | The client can publish messages to any group. -| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`. -| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. +| Role | Permission | +| - | | +| Not specified | The client can send events. 
| +| `webpubsub.joinLeaveGroup` | The client can join/leave any group. | +| `webpubsub.sendToGroup` | The client can publish messages to any group. | +| `webpubsub.joinLeaveGroup.<group>` | The client can join/leave group `<group>`. | +| `webpubsub.sendToGroup.<group>` | The client can publish messages to group `<group>`. | The server-side can also grant or revoke permissions of the client dynamically through [server protocol](#connection-manager) as to be illustrated in a later section. The server-side can also grant or revoke permissions of the client dynamically t Server protocol provides the functionality for the server to handle client events and manage the client connections and the groups. In general, server protocol contains three roles:+ 1. [Event handler](#event-handler) 1. [Connection manager](#connection-manager) 1. [Event listener](#event-listener) ### Event handler+ The event handler handles the incoming client events. Event handlers are registered and configured in the service through the portal or Azure CLI. When a client event is triggered, the service can identify if the event is to be handled or not. Now we use `PUSH` mode to invoke the event handler. The event handler on the server side exposes a publicly accessible endpoint for the service to invoke when the event is triggered. It acts as a **webhook**. Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md). When doing the validation, the `{event}` parameter is resolved to `validate`. Fo For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#414-webhook-request-rate) and [WebHook-Request-Callback](https://github.com/cloudevents/spec/blob/v1.0/http-webhook.md#413-webhook-request-callback). 
-#### Authentication between service and webhook +#### Authentication/Authorization between service and webhook + - Anonymous mode - Simple authentication where a `code` is provided through the configured Webhook URL.-- Use Azure Active Directory (Azure AD) authentication. For more information, see [how to use managed identity](howto-use-managed-identity.md) for details.- - Step1: Enable Identity for the Web PubSub service - - Step2: Select from existing Azure AD application that stands for your webhook web app +- Use Microsoft Entra authorization. For more information, see [how to use managed identity](howto-use-managed-identity.md). + - Step 1: Enable identity for the Web PubSub service + - Step 2: Select an existing Microsoft Entra application that represents your webhook web app ### Connection manager -The server is by nature an authorized user. With the help of the *event handler role*, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can: - - Close a client connection - - Send messages to a client - - Send messages to clients that belong to the same user - - Add a client to a group - - Add clients authenticated as the same user to a group - - Remove a client from a group - - Remove clients authenticated as the same user from a group - - Publish messages to a group +The server is by nature an authorized user. 
With the help of the _event handler role_, the server knows the metadata of the clients, for example, `connectionId` and `userId`, so it can: ++- Close a client connection +- Send messages to a client +- Send messages to clients that belong to the same user +- Add a client to a group +- Add clients authenticated as the same user to a group +- Remove a client from a group +- Remove clients authenticated as the same user from a group +- Publish messages to a group It can also grant or revoke publish/join permissions for a PubSub client:- - Grant publish/join permissions to some specific group or to all groups - - Revoke publish/join permissions for some specific group or for all groups - - Check if the client has permission to join or publish to some specific group or to all groups ++- Grant publish/join permissions to some specific group or to all groups +- Revoke publish/join permissions for some specific group or for all groups +- Check if the client has permission to join or publish to some specific group or to all groups The service provides REST APIs for the server to do connection management. You can combine an [event handler](#event-handler) and event listeners for the s Web PubSub service delivers client events to event listeners using [CloudEvents AMQP extension for Azure Web PubSub](reference-cloud-events-amqp.md). ### Summary-You may have noticed that the *event handler role* handles communication from the service to the server while *the manager role* handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol. ++You may have noticed that the _event handler role_ handles communication from the service to the server while _the manager role_ handles communication from the server to the service. After combining the two roles, the data flow between service and server looks similar to the following diagram using HTTP protocol. 
![Diagram showing the Web PubSub service bi-directional workflow.](./media/concept-service-internals/http-service-server.png) |
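The role-based permissions described in the row above are carried to a PubSub WebSocket client as claims in its access token. As a rough sketch of the idea only (not the official SDK implementation, which you should use in practice): an HS256-signed JWT whose `aud` is the client connect URL for the hub and whose `role` claim lists the granted permissions. Endpoint, hub, and key values here are illustrative placeholders.

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def build_client_token(endpoint: str, hub: str, access_key: str,
                       roles: list, ttl_seconds: int = 3600) -> str:
    # The token audience is the client connect URL for the target hub.
    audience = f"{endpoint}/client/hubs/{hub}"
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {
        "aud": audience,
        "exp": int(time.time()) + ttl_seconds,
        "role": roles,  # initial permissions, e.g. webpubsub.sendToGroup.<group>
    }
    signing_input = _b64url(json.dumps(header).encode()) + "." + _b64url(json.dumps(payload).encode())
    signature = hmac.new(access_key.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(signature)


token = build_client_token(
    "https://contoso.webpubsub.azure.com", "chat", "<access-key>",
    ["webpubsub.joinLeaveGroup.group1", "webpubsub.sendToGroup.group1"],
)
```

In real applications, SDK helpers (such as `get_client_access_token` in the Python service SDK) generate this token for you; the sketch only illustrates how the `role` claim maps onto the permission table above.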
azure-web-pubsub | Howto Authorize From Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-application.md | Title: Authorize request to Web PubSub resources with Azure AD from Azure applications -description: This article provides information about authorizing request to Web PubSub resources with Azure AD from Azure applications + Title: Authorize request to Web PubSub resources with Microsoft Entra ID from applications +description: This article provides information about authorizing request to Web PubSub resources with Microsoft Entra ID from applications -# Authorize request to Web PubSub resources with Azure AD from Azure applications +# Authorize request to Web PubSub resources with Microsoft Entra ID from Azure applications -Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Azure applications](../active-directory/develop/app-objects-and-service-principals.md). +Azure Web PubSub Service supports Microsoft Entra ID for authorizing requests from [applications](../active-directory/develop/app-objects-and-service-principals.md). This article shows how to configure your Web PubSub resource and codes to authorize the request to a Web PubSub resource from an Azure application. This article shows how to configure your Web PubSub resource and codes to author The first step is to register an Azure application. -1. On the [Azure portal](https://portal.azure.com/), search for and select **Azure Active Directory** +1. On the [Azure portal](https://portal.azure.com/), search for and select **Microsoft Entra ID** 2. Under **Manage** section, select **App registrations**. 3. Click **New registration**. - ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png) + ![Screenshot of registering an application.](./media/howto-authorize-from-application/register-an-application.png) 4. 
Enter a display **Name** for your application. 5. Click **Register** to confirm the registration. Once you have your application registered, you can find the **Application (clien ![Screenshot of an application.](./media/howto-authorize-from-application/application-overview.png) To learn more about registering an application, see+ - [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). ## Add credentials The application requires a client secret to prove its identity when requesting a 1. Under **Manage** section, select **Certificates & secrets** 1. On the **Client secrets** tab, click **New client secret**.-![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png) + ![Screenshot of creating a client secret.](./media/howto-authorize-from-application/new-client-secret.png) 1. Enter a **description** for the client secret, and choose an **expiry time**.-1. Copy the value of the **client secret** and then paste it to a secure location. - > [!NOTE] - > The secret will display only once. +1. Copy the value of the **client secret** and then paste it into a secure location. + > [!NOTE] + > The secret is displayed only once. + ### Certificate You can also upload a certificate instead of creating a client secret. To learn more about adding credentials, see ## Add role assignments on Azure portal -This sample shows how to assign a `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource. +This sample shows how to assign the `Web PubSub Service Owner` role to a service principal (application) over a Web PubSub resource. -> [!Note] +> [!NOTE] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. 
On the [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub. This sample shows how to assign a `Web PubSub Service Owner` role to a service p 1. Click **Select Members** -3. Search for and select the application that you would like to assign the role to. +1. Search for and select the application that you would like to assign the role to. 1. Click **Select** to confirm the selection. -4. Click **Next**. +1. Click **Next**. ![Screenshot of assigning role to service principals.](./media/howto-authorize-from-application/assign-role-to-service-principals.png) -5. Click **Review + assign** to confirm the change. +1. Click **Review + assign** to confirm the change. > [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.-To learn more about how to assign and manage Azure role assignments, see these articles: +> To learn more about how to assign and manage Azure role assignments, see these articles: + - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md) - [Assign Azure roles using Azure CLI](../role-based-access-control/role-assignments-cli.md) - [Assign Azure roles using Azure Resource Manager templates](../role-based-access-control/role-assignments-template.md) -## Use Postman to get the Azure AD token +## Use Postman to get the Microsoft Entra token + 1. Launch Postman 2. For the method, select **GET**. To learn more about how to assign and manage Azure role assignments, see these a 4. On the **Headers** tab, add **Content-Type** key and `application/x-www-form-urlencoded` for the value. 
-![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png) + ![Screenshot of the basic info using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman.png) 5. Switch to the **Body** tab, and add the following keys and values.- 1. Select **x-www-form-urlencoded**. - 2. Add `grant_type` key, and type `client_credentials` for the value. - 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier. - 4. Add `client_secret` key, and paste the value of client secret you noted down earlier. - 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value. + 1. Select **x-www-form-urlencoded**. + 2. Add `grant_type` key, and type `client_credentials` for the value. + 3. Add `client_id` key, and paste the value of **Application (client) ID** in the **Overview** tab of the application you created earlier. + 4. Add `client_secret` key, and paste the value of client secret you noted down earlier. + 5. Add `resource` key, and type `https://webpubsub.azure.com` for the value. -![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png) + ![Screenshot of the body parameters when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-body.png) -6. Select **Send** to send the request to get the token. You see the token in the `access_token` field. +6. Select **Send** to send the request to get the token. You see the token in the `access_token` field. 
-![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png) + ![Screenshot of the response token when using postman to get the token.](./media/howto-authorize-from-application/get-azure-ad-token-using-postman-response.png) -## Sample codes using Azure AD auth +## Sample codes using Microsoft Entra authorization We officially support 4 programming languages: We officially support 4 programming languages: See the following related articles: -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from managed identities](howto-authorize-from-managed-identity.md)-- [Disable local authentication](./howto-disable-local-auth.md)+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md) +- [Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities](howto-authorize-from-managed-identity.md) +- [Disable local authentication](./howto-disable-local-auth.md) |
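The Postman steps in the row above map directly onto a plain HTTP request. A minimal stdlib sketch of the same client-credentials call (tenant ID, client ID, and secret are placeholders from your own app registration; the request is built but not sent here):

```python
from urllib import parse, request

tenant_id = "<tenant-id>"  # placeholder values from your app registration

# Same body keys the Postman walkthrough uses for the v1 token endpoint.
body = parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "<application-client-id>",
    "client_secret": "<client-secret>",
    "resource": "https://webpubsub.azure.com",
}).encode()

req = request.Request(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)
# resp = request.urlopen(req)  # the JSON response carries the token in its "access_token" field
```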
azure-web-pubsub | Howto Authorize From Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-authorize-from-managed-identity.md | Title: Authorize request to Web PubSub resources with Azure AD from managed identities -description: This article provides information about authorizing request to Web PubSub resources with Azure AD from managed identities + Title: Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities +description: This article provides information about authorizing request to Web PubSub resources with Microsoft Entra ID from managed identities -# Authorize request to Web PubSub resources with Azure AD from managed identities -Azure Web PubSub Service supports Azure Active Directory (Azure AD) authorizing requests from [Managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). +# Authorize request to Web PubSub resources with Microsoft Entra ID from managed identities ++Azure Web PubSub Service supports Microsoft Entra ID for authorizing requests from [managed identities](../active-directory/managed-identities-azure-resources/overview.md). This article shows how to configure your Web PubSub resource and codes to authorize the request to a Web PubSub resource from a managed identity. This is an example for configuring `System-assigned managed identity` on a `Virt 1. Click the **Save** button to confirm the change. ### How to create user-assigned managed identities+ - [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity) ### How to configure managed identities on other platforms This is an example for configuring `System-assigned managed identity` on a `Virt - [How to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md). 
-## Add role assignments on Azure portal +## Add role assignments on Azure portal -This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource. +This sample shows how to assign a `Web PubSub Service Owner` role to a system-assigned identity over a Web PubSub resource. > [!Note] > A role can be assigned to any scope, including management group, subscription, resource group or a single resource. To learn more about scope, see [Understand scope for Azure RBAC](../role-based-access-control/scope-overview.md)+ 1. Open [Azure portal](https://portal.azure.com/), navigate to your Web PubSub resource. 1. Click **Access Control (IAM)** to display access control settings for the Azure Web PubSub. This sample shows how to assign a `Web PubSub Service Owner` role to a system-as 1. Click **Select** to confirm the selection. -2. Click **Next**. +1. Click **Next**. ![Screenshot of assigning role to managed identities.](./media/howto-authorize-from-managed-identity/assign-role-to-managed-identities.png) -3. Click **Review + assign** to confirm the change. +1. Click **Review + assign** to confirm the change. 
> [!IMPORTANT] > Azure role assignments may take up to 30 minutes to propagate.-To learn more about how to assign and manage Azure role assignments, see these articles: +> To learn more about how to assign and manage Azure role assignments, see these articles: + - [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) - [Assign Azure roles using the REST API](../role-based-access-control/role-assignments-rest.md) - [Assign Azure roles using Azure PowerShell](../role-based-access-control/role-assignments-powershell.md) We officially support 4 programming languages: See the following related articles: -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)-- [Authorize request to Web PubSub resources with Azure AD from Azure applications](howto-authorize-from-application.md)-- [Disable local authentication](./howto-disable-local-auth.md)+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md) +- [Authorize request to Web PubSub resources with Microsoft Entra ID from Azure applications](howto-authorize-from-application.md) +- [Disable local authentication](./howto-disable-local-auth.md) |
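Once the server app holds a Microsoft Entra token (obtained via managed identity or application credentials), it can call the Web PubSub data-plane REST API with a bearer header instead of using an SDK. A hedged sketch — the endpoint, hub, payload shape, and `api-version` value are illustrative placeholders, and the request is constructed but not sent:

```python
import json
from urllib import request

endpoint = "https://contoso.webpubsub.azure.com"  # placeholder resource endpoint
hub = "chat"
token = "<microsoft-entra-access-token>"          # acquired out of band

# Broadcast a message to every client connected to the hub.
req = request.Request(
    f"{endpoint}/api/hubs/{hub}/:send?api-version=2022-11-01",
    data=json.dumps({"from": "server", "text": "hello"}).encode(),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# request.urlopen(req)  # uncomment to actually send the broadcast
```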
azure-web-pubsub | Howto Create Serviceclient With Java And Azure Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-java-and-azure-identity.md | -This how-to guide shows you how to create a `WebPubSubServiceClient` with Java and Azure Identity. +This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in Java. ## Requirements This how-to guide shows you how to create a `WebPubSubServiceClient` with Java a 1. Create a `TokenCredential` with Azure Identity SDK. - ```java - package com.webpubsub.tutorial; + ```java + package com.webpubsub.tutorial; - import com.azure.core.credential.TokenCredential; - import com.azure.identity.DefaultAzureCredentialBuilder; + import com.azure.core.credential.TokenCredential; + import com.azure.identity.DefaultAzureCredentialBuilder; - public class App { + public class App { - public static void main(String[] args) { - TokenCredential credential = new DefaultAzureCredentialBuilder().build(); - } - } - ``` + public static void main(String[] args) { + TokenCredential credential = new DefaultAzureCredentialBuilder().build(); + } + } + ``` - `credential` can be any class that inherits from `TokenCredential` class. + `credential` can be any class that inherits from `TokenCredential` class. - - EnvironmentCredential - - ClientSecretCredential - - ClientCertificateCredential - - ManagedIdentityCredential - - VisualStudioCredential - - VisualStudioCodeCredential - - AzureCliCredential + - EnvironmentCredential + - ClientSecretCredential + - ClientCertificateCredential + - ManagedIdentityCredential + - VisualStudioCredential + - VisualStudioCodeCredential + - AzureCliCredential - To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme) + To learn more, see [Azure Identity client library for Java](/java/api/overview/azure/identity-readme) -2. 
Then create a `client` with `endpoint`, `hub`, and `credential`. +2. Then create a `client` with `endpoint`, `hub`, and `credential`. - ```Java - package com.webpubsub.tutorial; + ```Java + package com.webpubsub.tutorial; - import com.azure.core.credential.TokenCredential; - import com.azure.identity.DefaultAzureCredentialBuilder; - import com.azure.messaging.webpubsub.WebPubSubServiceClient; - import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder; + import com.azure.core.credential.TokenCredential; + import com.azure.identity.DefaultAzureCredentialBuilder; + import com.azure.messaging.webpubsub.WebPubSubServiceClient; + import com.azure.messaging.webpubsub.WebPubSubServiceClientBuilder; - public class App { - public static void main(String[] args) { + public class App { + public static void main(String[] args) { - TokenCredential credential = new DefaultAzureCredentialBuilder().build(); + TokenCredential credential = new DefaultAzureCredentialBuilder().build(); - // create the service client - WebPubSubServiceClient client = new WebPubSubServiceClientBuilder() - .endpoint("<endpoint>") - .credential(credential) - .hub("<hub>") - .buildClient(); - } - } - ``` + // create the service client + WebPubSubServiceClient client = new WebPubSubServiceClientBuilder() + .endpoint("<endpoint>") + .credential(credential) + .hub("<hub>") + .buildClient(); + } + } + ``` - Learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme) + Learn how to use this client, see [Azure Web PubSub service client library for Java](/java/api/overview/azure/messaging-webpubsub-readme) ## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp-aad)+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/java/chatapp-aad) |
azure-web-pubsub | Howto Create Serviceclient With Javascript And Azure Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-javascript-and-azure-identity.md | -This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in JavaScript. +This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in JavaScript. ## Requirements This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure 1. Create a `TokenCredential` with Azure Identity SDK. - ```javascript - const { DefaultAzureCredential } = require('@azure/identity') + ```javascript + const { DefaultAzureCredential } = require("@azure/identity"); - let credential = new DefaultAzureCredential(); - ``` + let credential = new DefaultAzureCredential(); + ``` - `credential` can be any class that inherits from `TokenCredential` class. + `credential` can be any class that inherits from `TokenCredential` class. - - EnvironmentCredential - - ClientSecretCredential - - ClientCertificateCredential - - ManagedIdentityCredential - - VisualStudioCredential - - VisualStudioCodeCredential - - AzureCliCredential + - EnvironmentCredential + - ClientSecretCredential + - ClientCertificateCredential + - ManagedIdentityCredential + - VisualStudioCredential + - VisualStudioCodeCredential + - AzureCliCredential - To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme) + To learn more, see [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme) -2. Then create a `client` with `endpoint`, `hub`, and `credential`. +2. Then create a `client` with `endpoint`, `hub`, and `credential`. 
- ```javascript - const { DefaultAzureCredential } = require('@azure/identity') + ```javascript + const { DefaultAzureCredential } = require("@azure/identity"); - let credential = new DefaultAzureCredential(); + let credential = new DefaultAzureCredential(); - let serviceClient = new WebPubSubServiceClient("<endpoint>", credential, "<hub>"); - ``` + let serviceClient = new WebPubSubServiceClient( + "<endpoint>", + credential, + "<hub>" + ); + ``` - Learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme) + Learn how to use this client, see [Azure Web PubSub service client library for JavaScript](/javascript/api/overview/azure/web-pubsub-readme) ## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp-aad)+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/chatapp-aad) |
azure-web-pubsub | Howto Create Serviceclient With Net And Azure Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-net-and-azure-identity.md | -This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in .NET. +This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in .NET. ## Requirements This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure - Install [Azure.Messaging.WebPubSub](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) from nuget.org ```bash- Install-Package Azure.Messaging.WebPubSub + Install-Package Azure.Messaging.WebPubSub ``` ## Sample codes 1. Create a `TokenCredential` with Azure Identity SDK. - ```C# - using Azure.Identity; -- namespace chatapp - { - public class Program - { - public static void Main(string[] args) - { - var credential = new DefaultAzureCredential(); - } - } - } - ``` -- `credential` can be any class that inherits from `TokenCredential` class. -- - EnvironmentCredential - - ClientSecretCredential - - ClientCertificateCredential - - ManagedIdentityCredential - - VisualStudioCredential - - VisualStudioCodeCredential - - AzureCliCredential -- To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme) --2. Then create a `client` with `endpoint`, `hub`, and `credential`. -- ```C# - using Azure.Identity; - using Azure.Messaging.WebPubSub; - - public class Program - { - public static void Main(string[] args) - { - var credential = new DefaultAzureCredential(); - var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential); - } - } - ``` -- Or inject it into `IServiceCollections` with our `BuilderExtensions`. 
-- ```C# - using System; -- using Azure.Identity; -- using Microsoft.Extensions.Azure; - using Microsoft.Extensions.Configuration; - using Microsoft.Extensions.DependencyInjection; -- namespace chatapp - { - public class Startup - { - public Startup(IConfiguration configuration) - { - Configuration = configuration; - } -- public IConfiguration Configuration { get; } -- public void ConfigureServices(IServiceCollection services) - { - services.AddAzureClients(builder => - { - var credential = new DefaultAzureCredential(); - builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential); - }); - } - } - } - ``` -- Learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme) + ```C# + using Azure.Identity; ++ namespace chatapp + { + public class Program + { + public static void Main(string[] args) + { + var credential = new DefaultAzureCredential(); + } + } + } + ``` ++ `credential` can be any class that inherits from `TokenCredential` class. ++ - EnvironmentCredential + - ClientSecretCredential + - ClientCertificateCredential + - ManagedIdentityCredential + - VisualStudioCredential + - VisualStudioCodeCredential + - AzureCliCredential ++ To learn more, see [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme) ++2. Then create a `client` with `endpoint`, `hub`, and `credential`. ++ ```C# + using Azure.Identity; + using Azure.Messaging.WebPubSub; ++ public class Program + { + public static void Main(string[] args) + { + var credential = new DefaultAzureCredential(); + var client = new WebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential); + } + } + ``` ++ Or inject it into `IServiceCollections` with our `BuilderExtensions`. 
++ ```C# + using System; ++ using Azure.Identity; ++ using Microsoft.Extensions.Azure; + using Microsoft.Extensions.Configuration; + using Microsoft.Extensions.DependencyInjection; ++ namespace chatapp + { + public class Startup + { + public Startup(IConfiguration configuration) + { + Configuration = configuration; + } ++ public IConfiguration Configuration { get; } ++ public void ConfigureServices(IServiceCollection services) + { + services.AddAzureClients(builder => + { + var credential = new DefaultAzureCredential(); + builder.AddWebPubSubServiceClient(new Uri("<endpoint>"), "<hub>", credential); + }); + } + } + } + ``` ++ Learn how to use this client, see [Azure Web PubSub service client library for .NET](/dotnet/api/overview/azure/messaging.webpubsub-readme) ## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad)+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp/chatapp-aad) |
azure-web-pubsub | Howto Create Serviceclient With Python And Azure Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-python-and-azure-identity.md | -This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure Active Directory in Python. +This how-to guide shows you how to create a `WebPubSubServiceClient` using Microsoft Entra ID in Python. ## Requirements This how-to guide shows you how to create a `WebPubSubServiceClient` using Azure 1. Create a `TokenCredential` with Azure Identity SDK. - ```python - from azure.identity import DefaultAzureCredential + ```python + from azure.identity import DefaultAzureCredential - credential = DefaultAzureCredential() - ``` + credential = DefaultAzureCredential() + ``` - `credential` can be any class that inherits from `TokenCredential` class. + `credential` can be any class that inherits from `TokenCredential` class. - - EnvironmentCredential - - ClientSecretCredential - - ClientCertificateCredential - - ManagedIdentityCredential - - VisualStudioCredential - - VisualStudioCodeCredential - - AzureCliCredential + - EnvironmentCredential + - ClientSecretCredential + - ClientCertificateCredential + - ManagedIdentityCredential + - VisualStudioCredential + - VisualStudioCodeCredential + - AzureCliCredential - To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme) + To learn more, see [Azure Identity client library for Python](/python/api/overview/azure/identity-readme) -2. Then create a `client` with `endpoint`, `hub`, and `credential`. +2. Then create a `client` with `endpoint`, `hub`, and `credential`. 
- ```python - from azure.identity import DefaultAzureCredential + ```python + from azure.identity import DefaultAzureCredential - credential = DefaultAzureCredential() + credential = DefaultAzureCredential() - client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential) - ``` + client = WebPubSubServiceClient(hub="<hub>", endpoint="<endpoint>", credential=credential) + ``` - Learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme) + Learn how to use this client, see [Azure Web PubSub service client library for Python](/python/api/overview/azure/messaging-webpubsubservice-readme) ## Complete sample -- [Simple chatroom with AAD Auth](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp-aad)+- [Simple chatroom with Microsoft Entra ID authorization](https://github.com/Azure/azure-webpubsub/tree/main/samples/python/chatapp-aad) |
azure-web-pubsub | Howto Develop Create Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-create-instance.md | Title: Create an Azure Web PubSub resource -description: Quickstart showing how to create a Web PubSub resource from Azure portal, using Azure CLI and a Bicep template +description: Quickstart showing how to create a Web PubSub resource from the Azure portal, using the Azure CLI, or using a Bicep template Last updated 03/13/2023 zone_pivot_groups: azure-web-pubsub-create-resource-methods + # Create a Web PubSub resource ## Prerequisites+ > [!div class="checklist"]-> * An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/), if don't have one already. +> +> - An Azure account with an active subscription. [Create a free Azure account](https://azure.microsoft.com/free/) if you don't have one already. > [!TIP] > Web PubSub includes a generous **free tier** that can be used for testing and production purposes.- - + ::: zone pivot="method-azure-portal"+ ## Create a resource from Azure portal -1. Select the New button found on the upper left-hand corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press enter. +1. Select the New button in the upper-left corner of the Azure portal. In the New screen, type **Web PubSub** in the search box and press Enter. - :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal."::: + :::image type="content" source="./media/create-instance-portal/search-web-pubsub-in-portal.png" alt-text="Screenshot of searching the Azure Web PubSub in portal."::: 2. Select **Web PubSub** from the search results, then select **Create**. 3. Enter the following settings. 
- | Setting | Suggested value | Description | - | | - | -- | - | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. | - | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. | - | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. | - | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. | - | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) | - | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. It is only configurable in the Standard tier. | + | Setting | Suggested value | Description | + | -- | -- | | + | **Resource name** | Globally unique name | The globally unique Name that identifies your new Web PubSub service instance. Valid characters are `a-z`, `A-Z`, `0-9`, and `-`. | + | **Subscription** | Your subscription | The Azure subscription under which this new Web PubSub service instance is created. | + | **[Resource Group]** | myResourceGroup | Name for the new resource group in which to create your Web PubSub service instance. | + | **Location** | West US | Choose a [region](https://azure.microsoft.com/regions/) near you. | + | **Pricing tier** | Free | You can first try Azure Web PubSub service for free. Learn more details about [Azure Web PubSub service pricing tiers](https://azure.microsoft.com/pricing/details/web-pubsub/) | + | **Unit count** | - | Unit count specifies how many connections your Web PubSub service instance can accept. Each unit supports 1,000 concurrent connections at most. 
It is only configurable in the Standard tier. | - :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal."::: + :::image type="content" source="./media/howto-develop-create-instance/create-web-pubsub-instance-in-portal.png" alt-text="Screenshot of creating the Azure Web PubSub instance in portal."::: 4. Select **Create** to provision your Web PubSub resource.-+ ::: zone-end ::: zone pivot="method-azure-cli"+ ## Create a resource using Azure CLI -The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. +The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure resources. The Azure CLI is available across Azure services and is designed to get you working quickly with Azure, with an emphasis on automation. > [!IMPORTANT] > This quickstart requires Azure CLI of version 2.22.0 or higher. The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure [!INCLUDE [Create a Web PubSub instance](includes/cli-awps-creation.md)] ::: zone-end - ::: zone pivot="method-bicep"+ ## Create a resource using Bicep template [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)] The template used in this quickstart is from [Azure Quickstart Templates](/sampl 1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. 
- # [CLI](#tab/CLI) + # [CLI](#tab/CLI) - ```azurecli - az group create --name exampleRG --location eastus - az deployment group create --resource-group exampleRG --template-file main.bicep - ``` + ```azurecli + az group create --name exampleRG --location eastus + az deployment group create --resource-group exampleRG --template-file main.bicep + ``` - # [PowerShell](#tab/PowerShell) + # [PowerShell](#tab/PowerShell) - ```azurepowershell - New-AzResourceGroup -Name exampleRG -Location eastus - New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep - ``` + ```azurepowershell + New-AzResourceGroup -Name exampleRG -Location eastus + New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep + ``` - + *** - When the deployment finishes, you should see a message indicating the deployment succeeded. + When the deployment finishes, you should see a message indicating the deployment succeeded. ## Review deployed resources Get-AzResource -ResourceGroupName exampleRG ``` + ## Clean up resources When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group and its resources. az group delete --name exampleRG ```azurepowershell-interactive Remove-AzResourceGroup -Name exampleRG ```+ ::: zone-end ## Next step+ Now that you have created a resource, you are ready to put it to use. Next, you will learn how to subscribe and publish messages among your clients.-> [!div class="nextstepaction"] ++> [!div class="nextstepaction"] > [PubSub among clients](quickstarts-pubsub-among-clients.md) |
azure-web-pubsub | Howto Develop Event Listener | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-event-listener.md | If you want to listen to your [client events](concept-service-internals.md#terms This tutorial shows you how to authorize your Web PubSub service to connect to Event Hubs and how to add an event listener rule to your service settings. -Web PubSub service uses Azure Active Directory (Azure AD) authentication with managed identity to connect to Event Hubs. Therefore, you should enable the managed identity of the service and make sure it has proper permissions to connect to Event Hubs. You can grant the built-in [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) role to the managed identity so that it has enough permissions. +Web PubSub service uses Microsoft Entra ID with managed identity to connect to Event Hubs. Therefore, you should enable the managed identity of the service and make sure it has proper permissions to connect to Event Hubs. You can grant the built-in [Azure Event Hubs Data sender](../role-based-access-control/built-in-roles.md#azure-event-hubs-data-sender) role to the managed identity so that it has enough permissions. To configure an Event Hubs listener, you need to: -1. [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service) -2. [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role) -3. 
[Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings) +- [Send client events to Event Hubs](#send-client-events-to-event-hubs) + - [Overview](#overview) + - [Configure an event listener](#configure-an-event-listener) + - [Add a managed identity to your Web PubSub service](#add-a-managed-identity-to-your-web-pubsub-service) + - [Grant the managed identity an `Azure Event Hubs Data sender` role](#grant-the-managed-identity-an-azure-event-hubs-data-sender-role) + - [Add an event listener rule to your service settings](#add-an-event-listener-rule-to-your-service-settings) + - [Test your configuration with live demo](#test-your-configuration-with-live-demo) + - [Next steps](#next-steps) ## Configure an event listener Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity ### Add an event listener rule to your service settings -1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, select **...** on right side will navigate to the same editing page. +1. Find your service from **Azure portal**. Navigate to **Settings**. Then select **Add** to configure your event listener. For an existing hub configuration, selecting **...** on the right side navigates to the same editing page. :::image type="content" source="media/howto-develop-event-listener/web-pubsub-settings.png" alt-text="Screenshot of Web PubSub settings"::: 1. Then, on the editing page, configure the hub name, and select **Add** to add an event listener. Find your Azure Web PubSub service from **Azure portal**. Navigate to **Identity
Finally select **Confirm** when everything is done. :::image type="content" source="media/howto-develop-event-listener/configure-event-hub-listener.png" alt-text="Screenshot of configuring Event Hubs Listener"::: - ## Test your configuration with live demo 1. Open this [Event Hubs Consumer Client](https://awpseventlistenerdemo.blob.core.windows.net/eventhub-consumer/https://docsupdatetracker.net/index.html) web app, input the Event Hubs connection string to connect to an event hub as a consumer. If you get the Event Hubs connection string from an Event Hubs namespace resource instead of an event hub instance, then you need to specify the event hub name. This event hub consumer client is connected with the mode that only reads new events; the events published before aren't seen here. You can change the consumer client connection mode to read all the available events in the production environment. 1. Use this [WebSocket Client](https://awpseventlistenerdemo.blob.core.windows.net/webpubsub-client/websocket-client.html) web app to generate client events. If you've configured to send system event `connected` to that event hub, you should be able to see a printed `connected` event in the Event Hubs consumer client after connecting to Web PubSub service successfully. 
You can also generate a user event with the app.- :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app"::: - :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="The area of the WebSocket client app to generate a user event"::: + :::image type="content" source="media/howto-develop-event-listener/eventhub-consumer-connected-event.png" alt-text="Screenshot of a printed connected event in the Event Hubs consumer client app."::: + :::image type="content" source="media/howto-develop-event-listener/web-pubsub-client-specify-event-name.png" alt-text="Screenshot showing the area of the WebSocket client app to generate a user event."::: ## Next steps In this article, you learned how event listeners work and how to configure an event listener with an event hub endpoint. To learn the data format sent to Event Hubs, read the following specification. -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Specification: CloudEvents AMQP extension for Azure Web PubSub](./reference-cloud-events-amqp.md)-<!--TODO: Add demo--> ++<!--TODO: Add demo--> |
azure-web-pubsub | Howto Develop Eventhandler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-eventhandler.md | description: Guidance about event handler concepts and integration introduction -+ Last updated 01/27/2023 # Event handler in Azure Web PubSub service -The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service now supports the event handler as the server-side, which exposes the publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**. +The event handler handles the incoming client events. Event handlers are registered and configured in the service through the Azure portal or Azure CLI. When a client event is triggered, the service can send the event to the appropriate event handler. The Web PubSub service now supports the event handler as the server-side, which exposes the publicly accessible endpoint for the service to invoke when the event is triggered. In other words, it acts as a **webhook**. The Web PubSub service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](https://github.com/cloudevents/spec/blob/v1.0.1/http-protocol-binding.md). -For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response. +For every event, the service formulates an HTTP POST request to the registered upstream endpoint and expects an HTTP response. The data sending from the service to the server is always in CloudEvents `binary` format. For now, we don't support [WebHook-Request-Rate](https://github.com/cloudevents/ You can use any of these methods to authenticate between the service and webhook. 
- Anonymous mode-- Simple Auth with `?code=<code>` is provided through the configured Webhook URL as query parameter.-- Azure Active Directory(Azure AD) authentication. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios).+- Simple authentication with `?code=<code>` is provided through the configured Webhook URL as a query parameter. +- Microsoft Entra authorization. For more information, see [Use a managed identity in client events](howto-use-managed-identity.md#use-a-managed-identity-in-client-events-scenarios). ## Configure event handler You can add an event handler to a new hub or edit an existing hub. To configure an event handler in a new hub: -1. Go to your Azure Web PubSub service page in the **Azure portal**. -1. Select **Settings** from the menu. +1. Go to your Azure Web PubSub service page in the **Azure portal**. +1. Select **Settings** from the menu. 1. Select **Add** to create a hub and configure your server-side webhook URL. Note: To add an event handler to an existing hub, select the hub and select **Edit**. :::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler."::: 1. Enter your hub name. 1. Select **Add** under **Configure Event Handlers**.-1. In the event handler page, configure the following fields: - 1. Enter the server webhook URL in the **URL Template** field. - 1. Select the **System events** that you want to subscribe to. - 1. Select the **User events** that you want to subscribe to. - 1. Select **Authentication** method to authenticate upstream requests. - 1. Select **Confirm**. +1. On the event handler page, configure the following fields: 1. Enter the server webhook URL in the **URL Template** field. 1. Select the **System events** that you want to subscribe to. 1. Select the **User events** that you want to subscribe to. 1. 
Select **Authentication** method to authenticate upstream requests. 1. Select **Confirm**. + :::image type="content" source="media/howto-develop-eventhandler/configure-event-handler.png" alt-text="Screenshot of Azure Web PubSub Configure Event Handler."::: 1. Select **Save** at the top of the **Configure Hub Settings** page. To configure an event handler in a new hub: Use the Azure CLI [**az webpubsub hub**](/cli/azure/webpubsub/hub) group commands to configure the event handler settings. -Commands | Description |---`create` | Create hub settings for WebPubSub Service. -`delete` | Delete hub settings for WebPubSub Service. -`list` | List all hub settings for WebPubSub Service. -`show` | Show hub settings for WebPubSub Service. -`update` | Update hub settings for WebPubSub Service. +| Commands | Description | +| -- | -- | +| `create` | Create hub settings for WebPubSub Service. | +| `delete` | Delete hub settings for WebPubSub Service. | +| `list` | List all hub settings for WebPubSub Service. | +| `show` | Show hub settings for WebPubSub Service. | +| `update` | Update hub settings for WebPubSub Service. | Here's an example of creating two webhook URLs for hub `MyHub` of `MyWebPubSub` resource: |
azure-web-pubsub | Howto Develop Reliable Clients | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-develop-reliable-clients.md | description: How to create reliable Websocket clients -+ Last updated 01/12/2023 The Web PubSub service supports two reliable subprotocols `json.reliable.webpubs The simplest way to create a reliable client is to use Client SDK. Client SDK implements [Web PubSub client specification](./reference-client-specification.md) and uses `json.reliable.webpubsub.azure.v1` by default. Please refer to [PubSub with client SDK](./quickstart-use-client-sdk.md) for a quick start. - ## The Hard Way - Implement by hand The following tutorial walks you through the important part of implementing the [Web PubSub client specification](./reference-client-specification.md). This guide is not for people looking for a quick start, but for those who want to understand the principles of achieving reliability. For a quick start, please use the Client SDK. To use reliable subprotocols, you must set the subprotocol when constructing Web - Use Json reliable subprotocol: - ```js - var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1'); - ``` + ```js + var pubsub = new WebSocket( + "wss://test.webpubsub.azure.com/client/hubs/hub1", + "json.reliable.webpubsub.azure.v1" + ); + ``` - Use Protobuf reliable subprotocol: - ```js - var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'protobuf.reliable.webpubsub.azure.v1'); - ``` + ```js + var pubsub = new WebSocket( + "wss://test.webpubsub.azure.com/client/hubs/hub1", + "protobuf.reliable.webpubsub.azure.v1" + ); + ``` ### Connection recovery Connection recovery is the basis of achieving reliability and must be implemented when using the `json.reliable.webpubsub.azure.v1` and `protobuf.reliable.webpubsub.azure.v1` protocols. -Websocket connections rely on TCP. 
When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery +Websocket connections rely on TCP. When the connection doesn't drop, messages are lossless and delivered in order. To prevent message loss over dropped connections, the Web PubSub service retains the connection status information, including group and message information. This information is used to restore the client on connection recovery -When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service. +When the client reconnects to the service using reliable subprotocols, the client will receive a `Connected` message containing the `connectionId` and `reconnectionToken`. The `connectionId` identifies the session of the connection in the service. ```json {- "type":"system", - "event":"connected", - "connectionId": "<connection_id>", - "reconnectionToken": "<reconnection_token>" + "type": "system", + "event": "connected", + "connectionId": "<connection_id>", + "reconnectionToken": "<reconnection_token>" } ``` Connection recovery may fail if the network issue hasn't been recovered yet. The ### Publisher -Clients that send events to event handlers or publish messages to other clients are called publishers. Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not. +Clients that send events to event handlers or publish messages to other clients are called publishers. 
Publishers should set `ackId` in the message to receive an acknowledgment from the Web PubSub service that publishing the message was successful or not. -The `ackId` is the identifier of the message, each new message should use a unique ID. The original `ackId` should be used when resending a message. +The `ackId` is the identifier of the message; each new message should use a unique ID. The original `ackId` should be used when resending a message. A sample group send message: ```json {- "type": "sendToGroup", - "group": "group1", - "dataType" : "text", - "data": "text data", - "ackId": 1 + "type": "sendToGroup", + "group": "group1", + "dataType": "text", + "data": "text data", + "ackId": 1 } ``` A sample ack response: ```json {- "type": "ack", - "ackId": 1, - "success": true + "type": "ack", + "ackId": 1, + "success": true } ``` When the Web PubSub service returns an ack response with `success: true`, the message has been processed by the service, and the client can expect the message will be delivered to all subscribers. -When the service experiences a transient internal error and the message can't be sent to subscriber, the publisher will receive an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used. +When the service experiences a transient internal error and the message can't be sent to subscribers, the publisher will receive an ack with `success: false`. The publisher should read the error to determine whether or not to resend the message. If the message is resent, the same `ackId` should be used. 
```json {- "type": "ack", - "ackId": 1, - "success": false, - "error": { - "name": "InternalServerError", - "message": "Internal server error" - } + "type": "ack", + "ackId": 1, + "success": false, + "error": { + "name": "InternalServerError", + "message": "Internal server error" + } } ``` ![Message Failure](./media/howto-develop-reliable-clients/message-failed.png) -If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message. +If the service's ack response is lost because the WebSocket connection dropped, the publisher should resend the message with the same `ackId` after recovery. When the message was previously processed by the service, it will send an ack containing a `Duplicate` error. The publisher should stop resending this message. ```json {- "type": "ack", - "ackId": 1, - "success": false, - "error": { - "name": "Duplicate", - "message": "Message with ack-id: 1 has been processed" - } + "type": "ack", + "ackId": 1, + "success": false, + "error": { + "name": "Duplicate", + "message": "Message with ack-id: 1 has been processed" + } } ``` A sample sequence ack: ```json {- "type": "sequenceAck", - "sequenceId": 1 + "type": "sequenceAck", + "sequenceId": 1 } ``` |
azure-web-pubsub | Howto Disable Local Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-disable-local-auth.md | Title: Disable local (access key) authentication with Azure Web PubSub Service -description: This article provides information about how to disable access key authentication and use only Azure AD authentication with Azure Web PubSub Service. +description: This article provides information about how to disable access key authentication and use only Microsoft Entra authorization with Azure Web PubSub Service. -There are two ways to authenticate to Azure Web PubSub Service resources: Azure Active Directory (Azure AD) and Access Key. Azure AD provides superior security and ease of use over access key. With Azure AD, thereΓÇÖs no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Azure AD with your Azure Web PubSub Service resources when possible. +There are two ways to authenticate to Azure Web PubSub Service resources: Microsoft Entra ID and Access Key. Microsoft Entra ID provides superior security and ease of use over access key. With Microsoft Entra ID, thereΓÇÖs no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Microsoft Entra ID with your Azure Web PubSub Service resources when possible. > [!IMPORTANT] > Disabling local authentication can have following influences.-> - The current set of access keys will be permanently deleted. -> - Tokens signed with current set of access keys will become unavailable. -> - Signature will **NOT** be attached in the upstream request header. Please visit *[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)* to learn how to validate requests via Azure AD token. +> +> - The current set of access keys will be permanently deleted. +> - Tokens signed with current set of access keys will become unavailable. 
+> - Signature will **NOT** be attached in the upstream request header. Please visit _[how to validate access token](./howto-use-managed-identity.md#validate-access-tokens)_ to learn how to validate requests via Microsoft Entra token. ## Use Azure portal You can disable local authentication by setting `disableLocalAuth` property to t ```json {- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", - "contentVersion": "1.0.0.0", - "parameters": { - "resource_name": { - "defaultValue": "test-for-disable-aad", - "type": "String" - } - }, - "variables": {}, - "resources": [ - { - "type": "Microsoft.SignalRService/WebPubSub", - "apiVersion": "2022-08-01-preview", - "name": "[parameters('resource_name')]", - "location": "eastus", - "sku": { - "name": "Premium_P1", - "tier": "Premium", - "size": "P1", - "capacity": 1 - }, - "properties": { - "tls": { - "clientCertEnabled": false - }, - "networkACLs": { - "defaultAction": "Deny", - "publicNetwork": { - "allow": [ - "ServerConnection", - "ClientConnection", - "RESTAPI", - "Trace" - ] - }, - "privateEndpoints": [] - }, - "publicNetworkAccess": "Enabled", - "disableLocalAuth": true, - "disableAadAuth": false - } - } - ] + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "resource_name": { + "defaultValue": "test-for-disable-aad", + "type": "String" + } + }, + "variables": {}, + "resources": [ + { + "type": "Microsoft.SignalRService/WebPubSub", + "apiVersion": "2022-08-01-preview", + "name": "[parameters('resource_name')]", + "location": "eastus", + "sku": { + "name": "Premium_P1", + "tier": "Premium", + "size": "P1", + "capacity": 1 + }, + "properties": { + "tls": { + "clientCertEnabled": false + }, + "networkACLs": { + "defaultAction": "Deny", + "publicNetwork": { + "allow": [ + "ServerConnection", + "ClientConnection", + "RESTAPI", + "Trace" + ] + }, + "privateEndpoints": [] + }, + 
"publicNetworkAccess": "Enabled", + "disableLocalAuth": true, + "disableAadAuth": false + } + } + ] } ``` You can assign the [Azure Web PubSub Service should have local authentication me See the following docs to learn about authentication methods. -- [Overview of Azure AD for Web PubSub](concept-azure-ad-authorization.md)+- [Overview of Microsoft Entra ID for Web PubSub](concept-azure-ad-authorization.md) - [Authenticate with Azure applications](./howto-authorize-from-application.md) - [Authenticate with managed identities](./howto-authorize-from-managed-identity.md) |
azure-web-pubsub | Howto Enable Geo Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md | Mission critical apps often need to have a robust failover system and serve user ### Contoso, a social media company Contoso is a social media company with its customer base spread across the US and Canada. Contoso provides a mobile and web app to its users so that they can connect with each other. Contoso application is deployed in Central US. As part of Contoso's architecture, Web PubSub is used to establish persistent WebSocket connections between client apps and the application server. Contoso **likes** that they can offload managing WebSocket connections to Web PubSub, but **doesn't** like reading reports of users in Canada experiencing higher latency. Furthermore, Contoso's development team wants to insure the app against regional outage so that the users can access the app with no interruptions. -![Screenshot of using one Azure WebPubSub instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-single.png "Single WebPubSub Example") +![Diagram of using one Azure WebPubSub instance to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-single.png "Single WebPubSub Example") Contoso **could** set up another Web PubSub resource in Canada Central which is geographically closer to its users in Canada. However, managing multiple Web PubSub resources brings some challenges: 1. A cross-region communication mechanism would need to be implemented so that users in Canada and US can interact with each other. Contoso **could** set up another Web PubSub resource in Canada Central which is All of the above takes engineering resources away from focusing on product innovation. -![Screenshot of using two Azure Web PubSub instances to handle traffic from two countries. 
](./media/howto-enable-geo-replication/web-pubsub-multiple.png "Mutiple Web PubSub Example") +![Diagram of using two Azure Web PubSub instances to handle traffic from two countries. ](./media/howto-enable-geo-replication/web-pubsub-multiple.png "Multiple Web PubSub Example") ### Harnessing the geo-replication feature With the geo-replication feature, Contoso can now establish a replica in Canada Central, effectively overcoming the above-mentioned challenges. The developer team is glad to find out that they don't need to make any code changes. It's as easy as clicking a few buttons on the Azure portal. The developer team is also happy to share with the stakeholders that as Contoso plans to enter the European market, they simply need to add another replica in Europe. -![Screenshot of using one Azure Web PubSub instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/web-pubsub-replica.png "Replica Example") +![Diagram of using one Azure Web PubSub instance with replica to handle traffic from two countries.](./media/howto-enable-geo-replication/web-pubsub-replica.png "Replica Example") ## How to enable geo-replication in a Web PubSub resource To create a replica in an Azure region, go to your Web PubSub resource and find the **Replicas** blade on the Azure portal and click **Add** to create a replica. It will be automatically enabled upon creation. After creation, you would be able to view/edit your replica on the portal by cli > [!NOTE] > * Geo-replication is a feature available in premium tier.-> * A replica is considered a separate resource when it comes to billing. See [Pricing](concept-billing-model.md#how-replica-is-billed) for more details. +> * A replica is considered a separate resource when it comes to billing. See [Pricing and resource unit](#pricing-and-resource-unit) for more details. ++## Pricing and resource unit +Each replica has its **own** `unit` and `autoscale settings`. 
++Replica is a feature of [Premium tier](https://azure.microsoft.com/pricing/details/web-pubsub/) of Azure Web PubSub Service. Each replica is billed **separately** according to its own unit and outbound traffic. Free message quota is also calculated separately. ++In the preceding example, Contoso added one replica in Canada Central. Contoso would pay for the replica in Canada Central according to its own units and messages at the Premium tier price. ## Delete a replica After you've created a replica for a Web PubSub resource, you can delete it at any time if it's no longer needed. To delete a replica in the Azure portal: 1. Navigate to your Web PubSub resource, and select the **Replicas** blade. Click the replica you want to delete. 2. Click the Delete button on the replica overview blade. -## Impact on performance after enabling geo-replication feature -After a replica is created, your clients will be distributed across selected Azure regions based on their geographical locations. Web PubSub service handles synchronizing data across these replicas automatically and this synchronization incurs a low level of cost. The cost is negligible if your use case primarily involves `sendToGroup()` where the group has more than 100 connections. However, the cost may become more apparent when sending to smaller groups (connection count < 10) or a single user. --For more performance evaluation, refer to [Performance](concept-performance.md). --## Best practices -To ensure effective failover management, it is recommended to enable [autoscaling](howto-scale-autoscale.md) for the resource and its replicas. If there are two replicas in a Web PubSub resource and one of the replicas is not available due to an outage, the available replica will receive all the traffic and handle all the WebSocket connections. Auto-scaling can scale up to meet the demand automatically. -> [!NOTE] -> * Autoscaling for replica is configured on its own resource level. 
Scaling primary resource won't change the unit size of the replica. - ## Understand how the geo-replication feature works -![Screenshot of the arch of Azure Web PubSub replica. ](./media/howto-enable-geo-replication/web-pubsub-replica-arch.png "Replica Arch") +![Diagram of the architecture of Azure Web PubSub replica. ](./media/howto-enable-geo-replication/web-pubsub-replica-arch.png "Replica Arch") -1. The client resolves the Fully Qualified Domain Name (FQDN) `contoso.webpubsub.azure.com` of the Web PubSub service. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional Web PubSub instance. -2. With this CNAME, the client establishes a websocket connection to the regional instance. +1. The client resolves the Fully Qualified Domain Name (FQDN) `contoso.webpubsub.azure.com` of the Web PubSub service. This FQDN points to a Traffic Manager, which returns the Canonical Name (CNAME) of the nearest regional Web PubSub instance. +2. With this CNAME, the client establishes a WebSocket connection to the regional instance (replica). 3. The two replicas will synchronize data with each other. Messages sent to one replica would be transferred to other replicas if necessary.-4. In case a replica fails the health check conducted by the Traffic Manager (TM), the TM will exclude the failed instance's endpoint from its domain resolution results. +4. If a replica fails the health check conducted by the Traffic Manager (TM), the TM will exclude the failed instance's endpoint from its domain resolution results. For details, see [Resiliency and disaster recovery](#resiliency-and-disaster-recovery) below. > [!NOTE] > * In the data plane, a primary Azure Web PubSub resource functions identically to its replicas -++## Resiliency and disaster recovery ++Azure Web PubSub Service utilizes a traffic manager for health checks and DNS resolution towards its replicas.
Under normal circumstances, when all replicas are functioning properly, clients will be directed to the closest replica. For instance: ++- Clients close to `eastus` will be directed to the replica located in `eastus`. +- Similarly, clients close to `westus` will be directed to the replica in `westus`. ++In the event of a **regional outage** in `eastus` (illustrated below), the traffic manager will detect the health check failure for that region. Then, this faulty replica's DNS will be excluded from the traffic manager's DNS resolution results. After a DNS Time-to-Live (TTL) duration, which is set to 90 seconds, clients in `eastus` will be redirected to connect to the replica in `westus`. ++![Diagram of Azure Web PubSub replica failover. ](./media/howto-enable-geo-replication/web-pubsub-replica-failover.png "Replica Failover") ++Once the issue in `eastus` is resolved and the region is back online, the health check will succeed. Clients in `eastus` will then, once again, be directed to the replica in their region. This transition is smooth because connected clients aren't impacted until their existing connections are closed. ++![Diagram of Azure Web PubSub replica failover recovery. ](./media/howto-enable-geo-replication/web-pubsub-replica-failover-recovery.png "Replica Failover Recovery") +++This failover and recovery process is **automatic** and requires no manual intervention. ++## Impact on performance after enabling geo-replication feature +After replicas are enabled, clients will naturally distribute based on their geographical locations. Web PubSub synchronizes data across these replicas automatically, and the associated overhead on [Server Load](concept-performance.md#quick-evaluation-using-metrics) is minimal for most common use cases. ++Specifically, if your application typically broadcasts to larger groups (size >10) or a single connection, the performance impact of synchronization is barely noticeable.
If you're messaging small groups (size < 10), you might notice a bit more synchronization overhead. ++To ensure effective failover management, it is recommended to set each replica's unit size to handle all traffic. Alternatively, you could enable [autoscaling](howto-scale-autoscale.md) to manage this. ++For more performance evaluation, refer to [Performance](concept-performance.md). ++## Breaking issues +* **Using replica and event handler together** ++ If you use the Web PubSub event handler with Web PubSub C# server SDK or an Azure Function that utilizes the Web PubSub extension, you may encounter issues with the abuse protection once replicas are enabled. To address this, you can either **disable the abuse protection** or **upgrade to the latest SDK/extension versions**. + + For a detailed explanation and potential solutions, please refer to this [issue](https://github.com/Azure/azure-webpubsub/issues/598). + |
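The resolution-and-failover behavior this article describes — nearest healthy replica wins, and unhealthy replicas are dropped from DNS answers after a failed health check — can be sketched in a few lines. This is an illustrative model only, not service code; the replica names and CNAMEs below are hypothetical.

```python
# Minimal sketch of Traffic Manager-style replica selection.
# Replica names and CNAMEs are hypothetical examples.
replicas = {
    "eastus": {"cname": "contoso-eastus.webpubsub.azure.com", "healthy": True},
    "westus": {"cname": "contoso-westus.webpubsub.azure.com", "healthy": True},
}

def resolve(client_region: str) -> str:
    """Return the CNAME of the closest healthy replica, falling back
    to any other healthy replica when the local one fails health checks."""
    local = replicas.get(client_region)
    if local and local["healthy"]:
        return local["cname"]
    for replica in replicas.values():
        if replica["healthy"]:
            return replica["cname"]
    raise RuntimeError("no healthy replica available")

# Normal operation: clients connect to their nearest replica.
print(resolve("eastus"))  # contoso-eastus.webpubsub.azure.com

# Regional outage in eastus: health checks fail, DNS excludes it, and
# (after the 90-second DNS TTL) eastus clients connect to westus instead.
replicas["eastus"]["healthy"] = False
print(resolve("eastus"))  # contoso-westus.webpubsub.azure.com
```

In the real service this selection happens in DNS, so clients need no code changes; the model above only illustrates the decision order.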
azure-web-pubsub | Howto Generate Client Access Url | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-generate-client-access-url.md | -A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL. +A client, be it a browser 💻, a mobile app 📱, or an IoT device 💡, uses a **Client Access URL** to connect and authenticate with your resource. This URL follows a pattern of `wss://<service_name>.webpubsub.azure.com/client/hubs/<hub_name>?access_token=<token>`. This article shows you several ways to get the Client Access URL. - For quick start, copy one from the Azure portal - For development, generate the value using [Web PubSub server SDK](./reference-server-sdk-js.md)-- If you're using Azure AD, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)+- If you're using Microsoft Entra ID, you can also invoke the [Generate Client Token REST API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) ## Copy from the Azure portal+ In the Keys tab in Azure portal, there's a Client URL Generator tool to quickly generate a Client Access URL for you, as shown in the following diagram. Values input here aren't stored. :::image type="content" source="./media/howto-websocket-connect/generate-client-url.png" alt-text="Screenshot of the Web PubSub Client URL Generator."::: ## Generate from service SDK+ The same Client Access URL can be generated by using the Web PubSub server SDK. # [JavaScript](#tab/javascript) The same Client Access URL can be generated by using the Web PubSub server SDK. 1. 
Follow [Getting started with server SDK](./reference-server-sdk-js.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:- * Configure user ID - ```js - let token = await serviceClient.getClientAccessToken({ userId: "user1" }); - ``` - * Configure the lifetime of the token - ```js - let token = await serviceClient.getClientAccessToken({ expirationTimeInMinutes: 5 }); - ``` - * Configure a role that can join group `group1` directly when it connects using this Client Access URL - ```js - let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.joinLeaveGroup.group1"] }); - ``` - * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL - ```js - let token = await serviceClient.getClientAccessToken({ roles: ["webpubsub.sendToGroup.group1"] }); - ``` - * Configure a group `group1` that the client joins once it connects using this Client Access URL - ```js - let token = await serviceClient.getClientAccessToken({ groups: ["group1"] }); - ``` ++ - Configure user ID ++ ```js + let token = await serviceClient.getClientAccessToken({ userId: "user1" }); + ``` ++ - Configure the lifetime of the token ++ ```js + let token = await serviceClient.getClientAccessToken({ + expirationTimeInMinutes: 5, + }); + ``` ++ - Configure a role that can join group `group1` directly when it connects using this Client Access URL ++ ```js + let token = await serviceClient.getClientAccessToken({ + roles: ["webpubsub.joinLeaveGroup.group1"], + }); + ``` ++ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL ++ ```js + let token = await serviceClient.getClientAccessToken({ + roles: ["webpubsub.sendToGroup.group1"], + }); + ``` ++ - Configure a group `group1` that the client joins once it connects using this Client Access URL ++ ```js + 
let token = await serviceClient.getClientAccessToken({ + groups: ["group1"], + }); + ``` # [C#](#tab/csharp) 1. Follow [Getting started with server SDK](./reference-server-sdk-csharp.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.GetClientAccessUri`:- * Configure user ID - ```csharp - var url = service.GetClientAccessUri(userId: "user1"); - ``` - * Configure the lifetime of the token - ```csharp - var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5)); - ``` - * Configure a role that can join group `group1` directly when it connects using this Client Access URL - ```csharp - var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" }); - ``` - * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL - ```csharp - var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" }); - ``` - * Configure a group `group1` that the client joins once it connects using this Client Access URL - ```csharp - var url = service.GetClientAccessUri(groups: new string[] { "group1" }); - ``` ++ - Configure user ID ++ ```csharp + var url = service.GetClientAccessUri(userId: "user1"); + ``` ++ - Configure the lifetime of the token ++ ```csharp + var url = service.GetClientAccessUri(expiresAfter: TimeSpan.FromMinutes(5)); + ``` ++ - Configure a role that can join group `group1` directly when it connects using this Client Access URL ++ ```csharp + var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.joinLeaveGroup.group1" }); + ``` ++ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL ++ ```csharp + var url = service.GetClientAccessUri(roles: new string[] { "webpubsub.sendToGroup.group1" }); + ``` ++ - Configure a group `group1` that the client joins once it 
connects using this Client Access URL ++ ```csharp + var url = service.GetClientAccessUri(groups: new string[] { "group1" }); + ``` # [Python](#tab/python) 1. Follow [Getting started with server SDK](./reference-server-sdk-python.md#install-the-package) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.get_client_access_token`:- * Configure user ID - ```python - token = service.get_client_access_token(user_id="user1") - ``` - * Configure the lifetime of the token - ```python - token = service.get_client_access_token(minutes_to_expire=5) - ``` - * Configure a role that can join group `group1` directly when it connects using this Client Access URL - ```python - token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"]) - ``` - * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL - ```python - token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"]) - ``` - * Configure a group `group1` that the client joins once it connects using this Client Access URL - ```python - token = service.get_client_access_token(groups=["group1"]) - ``` ++ - Configure user ID ++ ```python + token = service.get_client_access_token(user_id="user1") + ``` ++ - Configure the lifetime of the token ++ ```python + token = service.get_client_access_token(minutes_to_expire=5) + ``` ++ - Configure a role that can join group `group1` directly when it connects using this Client Access URL ++ ```python + token = service.get_client_access_token(roles=["webpubsub.joinLeaveGroup.group1"]) + ``` ++ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL ++ ```python + token = service.get_client_access_token(roles=["webpubsub.sendToGroup.group1"]) + ``` ++ - Configure a group `group1` that the client joins once it connects using this Client Access URL ++ 
```python + token = service.get_client_access_token(groups=["group1"]) + ``` # [Java](#tab/java) 1. Follow [Getting started with server SDK](./reference-server-sdk-java.md#getting-started) to create a `WebPubSubServiceClient` object `service` 2. Generate Client Access URL by calling `WebPubSubServiceClient.getClientAccessToken`:- * Configure user ID - ```java - GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); - option.setUserId(id); - WebPubSubClientAccessToken token = service.getClientAccessToken(option); - ``` - * Configure the lifetime of the token - ```java - GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); - option.setExpiresAfter(Duration.ofDays(1)); - WebPubSubClientAccessToken token = service.getClientAccessToken(option); - ``` - * Configure a role that can join group `group1` directly when it connects using this Client Access URL - ```java - GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); - option.addRole("webpubsub.joinLeaveGroup.group1"); - WebPubSubClientAccessToken token = service.getClientAccessToken(option); - ``` - * Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL - ```java - GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); - option.addRole("webpubsub.sendToGroup.group1"); - WebPubSubClientAccessToken token = service.getClientAccessToken(option); - ``` - * Configure a group `group1` that the client joins once it connects using this Client Access URL - ```java - GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); - option.setGroups(Arrays.asList("group1")), - WebPubSubClientAccessToken token = service.getClientAccessToken(option); - ``` ++ - Configure user ID ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.setUserId(id); + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` ++ - Configure the 
lifetime of the token ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.setExpiresAfter(Duration.ofDays(1)); + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` ++ - Configure a role that can join group `group1` directly when it connects using this Client Access URL ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.addRole("webpubsub.joinLeaveGroup.group1"); + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` ++ - Configure a role that the client can send messages to group `group1` directly when it connects using this Client Access URL ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.addRole("webpubsub.sendToGroup.group1"); + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` ++ - Configure a group `group1` that the client joins once it connects using this Client Access URL ++ ```java + GetClientAccessTokenOptions option = new GetClientAccessTokenOptions(); + option.setGroups(Arrays.asList("group1")), + WebPubSubClientAccessToken token = service.getClientAccessToken(option); + ``` + In real-world code, we usually have a server side to host the logic generating the Client Access URL. When a client request comes in, the server side can use the general authentication/authorization workflow to validate the client request. Only valid client requests can get the Client Access URL back. ## Invoke the Generate Client Token REST API -You can enable Azure AD in your service and use the Azure AD token to invoke [Generate Client Token rest API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use. --1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Azure AD. -2. 
Follow [Get Azure AD token](./howto-authorize-from-application.md#use-postman-to-get-the-azure-ad-token) to get the Azure AD token with Postman. -3. Use the Azure AD token to invoke `:generateToken` with Postman: - 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01` - 2. On the **Auth** tab, select **Bearer Token** and paste the Azure AD token fetched in the previous step - 3. Select **Send** and you see the Client Access Token in the response: - ```json - { - "token": "ABCDEFG.ABC.ABC" - } - ``` -4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>` +You can enable Microsoft Entra ID in your service and use the Microsoft Entra token to invoke [Generate Client Token rest API](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token) to get the token for the client to use. ++1. Follow [Authorize from application](./howto-authorize-from-application.md) to enable Microsoft Entra ID. +2. Follow [Get Microsoft Entra token](./howto-authorize-from-application.md#use-postman-to-get-the-microsoft-entra-token) to get the Microsoft Entra token with Postman. +3. Use the Microsoft Entra token to invoke `:generateToken` with Postman: + 1. For the URI, enter `https://{Endpoint}/api/hubs/{hub}/:generateToken?api-version=2022-11-01` + 2. On the **Auth** tab, select **Bearer Token** and paste the Microsoft Entra token fetched in the previous step + 3. Select **Send** and you see the Client Access Token in the response: ++ ```json + { + "token": "ABCDEFG.ABC.ABC" + } + ``` ++4. The Client Access URI is in the format of `wss://<endpoint>/client/hubs/<hub_name>?access_token=<token>` |
azure-web-pubsub | Howto Monitor Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-monitor-azure-policy.md | -This article describes the built-in policies for Azure Web PubSub Service. +This article describes the built-in policies for Azure Web PubSub Service. ## Built-in policy definitions - The following table contains an index of Azure Policy built-in policy definitions for Azure Web PubSub. For Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md). The name of each built-in policy definition links to the policy definition in the Azure portal. Use the link in the Version column to view the source on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). The name of each built-in policy definition links to the policy definition in th When assigning a policy definition: -* You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs. -* Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). -* You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time. -* Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope. +- You can assign policy definitions using the [Azure portal](../governance/policy/assign-policy-portal.md), [Azure CLI](../governance/policy/assign-policy-azurecli.md), a [Resource Manager template](../governance/policy/assign-policy-template.md), or the Azure Policy SDKs. 
+- Policy assignments can be scoped to a resource group, a subscription, or an [Azure management group](../governance/management-groups/overview.md). +- You can enable or disable [policy enforcement](../governance/policy/concepts/assignment-structure.md#enforcement-mode) at any time. +- Web PubSub policy assignments apply to existing and new Web PubSub resources within the scope. > [!NOTE] > After you assign or update a policy, it takes some time for the assignment to be applied to resources in the defined scope. See information about [policy evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers). When a resource is non-compliant, there are many possible reasons. To determine 1. Open the Azure portal and search for **Policy**. 1. Select **Policy**. 1. Select **Compliance**.-1. Use the filters to display by **Scope**, **Type** or **Compliance state**. Use search list by name or - ID. - [ ![Policy compliance in portal](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox) -1. Select a policy to review aggregate compliance details and events. +1. Use the filters to display by **Scope**, **Type**, or **Compliance state**, or search for a policy by name or ID. + [ ![Screenshot showing policy compliance in portal.](./media/howto-monitor-azure-policy/azure-policy-compliance.png) ](./media/howto-monitor-azure-policy/azure-policy-compliance.png#lightbox) +1. Select a policy to review aggregate compliance details and events. 1. Select a specific Web PubSub resource to view its compliance.
### Policy compliance in the Azure CLI az policy state list \ ## Next steps -* Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md) --* Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md) +- Learn more about Azure Policy [definitions](../governance/policy/concepts/definition-structure.md) and [effects](../governance/policy/concepts/effects.md) -* Learn more about [governance capabilities](../governance/index.yml) in Azure +- Create a [custom policy definition](../governance/policy/tutorials/create-custom-policy-definition.md) +- Learn more about [governance capabilities](../governance/index.yml) in Azure <!-- LINKS - External -->-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ ++[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/ |
azure-web-pubsub | Howto Secure Shared Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-secure-shared-private-endpoints.md | When the `properties.provisioningState` is `Succeeded` and `properties.status` ( At this point, the private endpoint between Azure Web PubSub Service and Azure Function is established. -### Step 4: Verify upstream calls are from a private IP +## Step 4: Verify upstream calls are from a private IP Once the private endpoint is set up, you can verify incoming calls are from a private IP by checking the `X-Forwarded-For` header at upstream side. |
azure-web-pubsub | Howto Troubleshoot Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-troubleshoot-resource-logs.md | description: Learn what resource logs are and how to use them for troubleshootin -+ Last updated 07/21/2022 # How to troubleshoot with resource logs -This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis. +This how-to guide provides an overview of Azure Web PubSub resource logs and some tips for using the logs to troubleshoot certain problems. Logs can be used for issue identification, connection tracking, message tracing, HTTP request tracing, and analysis. -## What are resource logs? +## What are resource logs? ++There are three types of resource logs: _Connectivity_, _Messaging_, and _HTTP requests_. -There are three types of resource logs: *Connectivity*, *Messaging*, and *HTTP requests*. - **Connectivity** logs provide detailed information for Azure Web PubSub hub connections. For example, basic information (user ID, connection ID, and so on) and event information (connect, disconnect, and so on). - **Messaging** logs provide tracing information for the Azure Web PubSub hub messages received and sent via Azure Web PubSub service. For example, tracing ID and message type of the message. - **HTTP requests** logs provide tracing information for HTTP requests to the Azure Web PubSub service. For example, HTTP method and status code. Typically the HTTP request is recorded when it arrives at or leave from service. The Azure Web PubSub service live trace tool has ability to collect resource log > [!NOTE] > The following considerations apply to using the live trace tool:-> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic). 
-> The live trace tool does not currently support Azure Active Directory authentication. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**. -> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit. +> +> - The real-time resource logs captured by live trace tool will be billed as messages (outbound traffic). +> - The live trace tool does not currently support Microsoft Entra authorization. You must enable access keys to use live trace. Under **Settings**, select **Keys**, and then enable **Access Key**. +> - The Azure Web PubSub service Free Tier instance has a daily limit of 20,000 messages (outbound traffic). Live trace can cause you to unexpectedly reach the daily limit. ++## Launch the live trace tool ++> [!NOTE] +> When access key is enabled, you'll use an access token to authenticate the live trace tool. +> Otherwise, you'll use Microsoft Entra ID to authenticate the live trace tool. +> You can check whether access key is enabled on your Web PubSub resource's **Keys** page in the Azure portal. ++### Steps for access key enabled ++1. Go to the Azure portal and your Web PubSub service page. +1. From the menu on the left, under **Monitoring**, select **Live trace settings**. +1. Select **Enable Live Trace**. +1. Select the **Save** button. It will take a moment for the changes to take effect. +1. When updating is complete, select **Open Live Trace Tool**. ++ :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool."::: ++### Steps for access key disabled ++#### Assign live trace tool API permission to yourself +1. Go to the Azure portal and your Web PubSub service page. +1. Select **Access control (IAM)**. +1.
On the new page, click **+Add**, then click **Role assignment**. +1. On the new page, select the **Job function roles** tab, select the **Web PubSub Service Owner** role, and then click **Next**. +1. On the **Members** page, click **+Select members**. +1. In the new panel, search for and select members, and then click **Select**. +1. Click **Review + assign**, and wait for the completion notification. -### Launch the live trace tool -1. Go to the Azure portal and your Web PubSub service. -1. On the left menu, under **Monitoring**, select **Live trace settings.** -1. On the **Live trace settings** page, select **Enable Live Trace**. -1. Choose the log categories to collect. -1. Select **Save** and then wait until the settings take effect. -1. Select **Open Live Trace Tool**. +#### Visit live trace tool +1. Go to the Azure portal and your Web PubSub service page. +1. From the menu on the left, under **Monitoring**, select **Live trace settings**. +1. Select **Enable Live Trace**. +1. Select the **Save** button. It will take a moment for the changes to take effect. +1. When updating is complete, select **Open Live Trace Tool**. :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-logs-with-live-trace-tool.png" alt-text="Screenshot of launching the live trace tool."::: +#### Sign in with your Microsoft account ++1. The live trace tool opens a Microsoft sign-in window. If no window appears, check that pop-up windows are allowed in your browser. +1. Wait for **Ready** to appear in the status bar. + ### Capture the resource logs The live trace tool provides functionality to help you capture the resource logs for troubleshooting. -* **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub. -* **Clear**: Clear the captured real-time resource logs. -* **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific key word. The common separators (for example, space, comma, semicolon, and so on) will be treated as part of the key word.
-* **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance. +- **Capture**: Begin to capture the real-time resource logs from Azure Web PubSub. +- **Clear**: Clear the captured real-time resource logs. +- **Log filter**: The live trace tool lets you filter the captured real-time resource logs with one specific key word. The common separators (for example, space, comma, semicolon, and so on) will be treated as part of the key word. +- **Status**: The status shows whether the live trace tool is connected or disconnected with the specific instance. :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/live-trace-tool-capture.png" alt-text="Screenshot of capturing resource logs with live trace tool."::: -The real-time resource logs captured by live trace tool contain detailed information for troubleshooting. --| Name | Description | -| | | -| Time | Log event time | -| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] | -| Event Name | Operation name of the event | -| Message | Detailed message for the event | -| Exception | The run-time exception of Azure Web PubSub service | -| Hub | User-defined hub name | -| Connection ID | Identity of the connection | -| User ID | User identity| -| IP | Client IP address | -| Route Template | The route template of the API | -| Http Method | The Http method (POST/GET/PUT/DELETE) | -| URL | The uniform resource locator | -| Trace ID | The unique identifier to the invocation | -| Status Code | The Http response code | -| Duration | The duration between receiving the request and processing the request | -| Headers | The additional information passed by the client and the server with an HTTP request or response | +The real-time resource logs captured by live trace tool contain detailed information for troubleshooting. 
++| Name | Description | +| -- | -- | +| Time | Log event time | +| Log Level | Log event level, can be [Trace \| Debug \| Informational \| Warning \| Error] | +| Event Name | Operation name of the event | +| Message | Detailed message for the event | +| Exception | The run-time exception of Azure Web PubSub service | +| Hub | User-defined hub name | +| Connection ID | Identity of the connection | +| User ID | User identity | +| IP | Client IP address | +| Route Template | The route template of the API | +| Http Method | The Http method (POST/GET/PUT/DELETE) | +| URL | The uniform resource locator | +| Trace ID | The unique identifier to the invocation | +| Status Code | The Http response code | +| Duration | The duration between receiving the request and processing the request | +| Headers | The additional information passed by the client and the server with an HTTP request or response | ## Capture resource logs with Azure Monitor Currently Azure Web PubSub supports integration with [Azure Storage](../azure-mo 1. Go to Azure portal. 1. On **Diagnostic settings** page of your Azure Web PubSub service instance, select **+ Add diagnostic setting**.- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one"::: + :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-list.png" alt-text="Screenshot of viewing diagnostic settings and create a new one"::: 1. In **Diagnostic setting name**, input the setting name. 1. In **Category details**, select any log category you need. 1. In **Destination details**, check **Archive to a storage account**. 
- :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail"::: + :::image type="content" source="./media/howto-troubleshoot-diagnostic-logs/diagnostic-settings-details.png" alt-text="Screenshot of configuring diagnostic setting detail"::: + 1. Select **Save** to save the diagnostic setting.-> [!NOTE] -> The storage account should be in the same region as Azure Web PubSub service. + > [!NOTE] + > The storage account should be in the same region as Azure Web PubSub service. ### Archive to an Azure Storage Account All logs are stored in JavaScript Object Notation (JSON) format. Each entry has Archive log JSON strings include elements listed in the following tables: -**Format** --Name | Description -- | --time | Log event time -level | Log event level -resourceId | Resource ID of your Azure SignalR Service -location | Location of your Azure SignalR Service -category | Category of the log event -operationName | Operation name of the event -callerIpAddress | IP address of your server or client -properties | Detailed properties related to this log event. For more detail, see the properties table below --**Properties Table** --Name | Description -- | --collection | Collection of the log event. 
Allowed values are: `Connection`, `Authorization` and `Throttling` -connectionId | Identity of the connection -userId | Identity of the user -message | Detailed message of log event -hub | User-defined Hub Name | -routeTemplate | The route template of the API | -httpMethod | The Http method (POST/GET/PUT/DELETE) | -url | The uniform resource locator | -traceId | The unique identifier to the invocation | -statusCode | The Http response code | -duration | The duration between the request is received and processed | -headers | The additional information passed by the client and the server with an HTTP request or response | +#### Format ++| Name | Description | +| | - | +| time | Log event time | +| level | Log event level | +| resourceId | Resource ID of your Azure Web PubSub service | +| location | Location of your Azure Web PubSub service | +| category | Category of the log event | +| operationName | Operation name of the event | +| callerIpAddress | IP address of your server or client | +| properties | Detailed properties related to this log event. For more detail, see the properties table below | ++#### Properties Table ++| Name | Description | +| - | -- | +| collection | Collection of the log event. 
Allowed values are: `Connection`, `Authorization` and `Throttling` | +| connectionId | Identity of the connection | +| userId | Identity of the user | +| message | Detailed message of log event | +| hub | User-defined Hub Name | +| routeTemplate | The route template of the API | +| httpMethod | The Http method (POST/GET/PUT/DELETE) | +| url | The uniform resource locator | +| traceId | The unique identifier to the invocation | +| statusCode | The Http response code | +| duration | The duration between receiving the request and processing it | +| headers | The additional information passed by the client and the server with an HTTP request or response | The following code is an example of an archive log JSON string: The following code is an example of an archive log JSON string: ### Archive to Azure Log Analytics To send logs to a Log Analytics workspace:-1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace. ++1. On the **Diagnostic setting** page, under **Destination details**, select **Send to Log Analytics workspace**. 1. Select the **Subscription** you want to use. 1. Select the **Log Analytics workspace** to use as the destination for the logs. To view the resource logs, follow these steps: 1. Select `Logs` in your target Log Analytics. - :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png"::: + :::image type="content" alt-text="Log Analytics menu item" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-menu-item.png"::: 1. Enter `WebPubSubConnectivity`, `WebPubSubMessaging` or `WebPubSubHttpRequest`, and then select the time range to query the log. 
For advanced queries, see [Get started with Log Analytics in Azure Monitor](../azure-monitor/logs/log-analytics-tutorial.md). - :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png"::: -+ :::image type="content" alt-text="Query log in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/query-log-in-log-analytics.png"::: To use a sample query for SignalR service, follow the steps below.+ 1. Select `Logs` in your target Log Analytics. 1. Select `Queries` to open query explorer. 1. Select `Resource type` to group sample queries in resource type. 1. Select `Run` to run the script.- :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png"::: -+ :::image type="content" alt-text="Sample query in Log Analytics" source="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png" lightbox="./media/howto-troubleshoot-diagnostic-logs/log-analytics-sample-query.png"::: Archive log columns include elements listed in the following table. -Name | Description -- | - -TimeGenerated | Log event time -Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` -OperationName | Operation name of the event -Location | Location of your Azure SignalR Service -Level | Log event level -CallerIpAddress | IP address of your server/client -Message | Detailed message of log event -UserId | Identity of the user -ConnectionId | Identity of the connection -ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. 
`Server`: connection from server side; `Client`: connection from client side -TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling` +| Name | Description | +| | - | +| TimeGenerated | Log event time | +| Collection | Collection of the log event. Allowed values are: `Connection`, `Authorization` and `Throttling` | +| OperationName | Operation name of the event | +| Location | Location of your Azure Web PubSub service | +| Level | Log event level | +| CallerIpAddress | IP address of your server/client | +| Message | Detailed message of log event | +| UserId | Identity of the user | +| ConnectionId | Identity of the connection | +| ConnectionType | Type of the connection. Allowed values are: `Server` \| `Client`. `Server`: connection from server side; `Client`: connection from client side | +| TransportType | Transport type of the connection. Allowed values are: `Websockets` \| `ServerSentEvents` \| `LongPolling` | ## Troubleshoot with the resource logs The difference between `ConnectionAborted` and `ConnectionEnded` is that `Connec The abort reasons are listed in the following table: -| Reason | Description | -| - | - | -| Connection count reaches limit | Connection count reaches limit of your current price tier. Consider scale up service unit -| Service reloading, reconnect | Azure Web PubSub service is reloading. You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service | -| Internal server transient error | Transient error occurs in Azure Web PubSub service, should be auto recovered +| Reason | Description | +| - | - | +| Connection count reaches limit | Connection count reaches the limit of your current pricing tier. Consider scaling up your service unit | +| Service reloading, reconnect | Azure Web PubSub service is reloading. 
You need to implement your own reconnect mechanism or manually reconnect to Azure Web PubSub service | +| Internal server transient error | A transient error occurred in Azure Web PubSub service; it should be automatically recovered | #### Unexpected increase in connections If you get 401 Unauthorized returned for client requests, check your resource lo ### Throttling -If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/). +If you find that you can't establish client connections to Azure Web PubSub service, check your resource logs. If you see `Connection count reaches limit` in the resource log, you established too many connections to Azure Web PubSub service and reached the connection count limit. Consider scaling up your Azure Web PubSub service instance. If you see `Message count reaches limit` in the resource log and you're using the Free tier, it means you used up the quota of messages. If you want to send more messages, consider changing your Azure Web PubSub service instance to Standard tier. For more information, see [Azure Web PubSub service Pricing](https://azure.microsoft.com/pricing/details/web-pubsub/). |
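The `Service reloading, reconnect` abort reason above asks clients to bring their own reconnect logic. A minimal Node.js sketch of one common approach — capped exponential backoff; the schedule (`baseMs` doubling up to `capMs`) is an assumption for illustration, not a service requirement:

```javascript
// Hypothetical reconnect helper for the "Service reloading, reconnect"
// abort reason: compute retry delays with capped exponential backoff.
function backoffDelays(maxRetries, baseMs = 1000, capMs = 30000) {
  const delays = [];
  for (let i = 0; i < maxRetries; i++) {
    // Double the delay on each attempt, but never exceed capMs.
    delays.push(Math.min(baseMs * 2 ** i, capMs));
  }
  return delays;
}

// Example schedule for six reconnect attempts:
console.log(backoffDelays(6)); // [1000, 2000, 4000, 8000, 16000, 30000]
```

A real client would sleep for each delay in turn before redialing the service, and reset the schedule once a connection succeeds.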
azure-web-pubsub | Howto Use Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-use-managed-identity.md | -> [!Important] -> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity. +> [!Important] +> Azure Web PubSub Service can support only one managed identity. That means you can add either a system-assigned identity or a user-assigned identity. ## Add a system-assigned identity To set up a managed identity in the Azure portal, you'll first create an Azure W 2. Select **Identity**. -4. On the **System assigned** tab, switch **Status** to **On**. Select **Save**. +3. On the **System assigned** tab, switch **Status** to **On**. Select **Save**. - :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal"::: + :::image type="content" source="media/howto-use-managed-identity/system-identity-portal.png" alt-text="Add a system-assigned identity in the portal"::: ## Add a user-assigned identity Creating an Azure Web PubSub Service instance with a user-assigned identity requ 5. Search for the identity that you created earlier and select it. Select **Add**. - :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal"::: + :::image type="content" source="media/howto-use-managed-identity/user-identity-portal.png" alt-text="Add a user-assigned identity in the portal"::: ## Use a managed identity in client events scenarios Azure Web PubSub Service is a fully managed service, so you can't use a managed 2. Navigate to the rule and switch on the **Authentication**. 
- :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting"::: + :::image type="content" source="media/howto-use-managed-identity/msi-settings.png" alt-text="msi-setting"::: 3. Select application. The application ID will become the `aud` claim in the obtained access token, which can be used as a part of validation in your event handler. You can choose one of the following:- - Use default AAD application. - - Select from existing AAD applications. The application ID of the one you choose will be used. - - Specify an AAD application. The value should be [Resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) - > [!NOTE] - > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests. + - Use default Microsoft Entra application. + - Select from existing Microsoft Entra applications. The application ID of the one you choose will be used. + - Specify a Microsoft Entra application. The value should be [Resource ID of an Azure service](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-azure-ad-authentication) ++ > [!NOTE] + > If you validate an access token by yourself in your service, you can choose any one of the resource formats. If you use Azure role-based access control (Azure RBAC) for a data plane, you must use the resource that the service provider requests. ### Validate access tokens The token in the `Authorization` header is a [Microsoft identity platform access To validate access tokens, your app should also validate the audience and the signing tokens. These need to be validated against the values in the OpenID discovery document. 
For example, see the [tenant-independent version of the document](https://login.microsoftonline.com/common/.well-known/openid-configuration). -The Azure Active Directory (Azure AD) middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice. +The Microsoft Entra middleware has built-in capabilities for validating access tokens. You can browse through our [samples](../active-directory/develop/sample-v2-code.md) to find one in the language of your choice. -We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Azure AD authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). +We provide libraries and code samples that show how to handle token validation. There are also several open-source partner libraries available for JSON Web Token (JWT) validation. There's at least one option for almost every platform and language out there. For more information about Microsoft Entra authentication libraries and code samples, see [Microsoft identity platform authentication libraries](../active-directory/develop/reference-v2-libraries.md). -Specially, if the event handler hosts in Azure Function or Web Apps, an easy way is to [Configure Azure AD login](../app-service/configure-authentication-provider-aad.md). +Specifically, if the event handler is hosted in Azure Functions or Web Apps, an easy way is to [Configure Microsoft Entra login](../app-service/configure-authentication-provider-aad.md). ## Use a managed identity for Key Vault reference |
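The audience check described above can be sketched in Node.js. This decodes only the JWT payload (base64url) and compares its `aud` claim with the expected application ID; signature, issuer, and expiry validation are deliberately omitted — in production, use the Microsoft Entra middleware or a JWT library as recommended. The token below is fabricated for illustration:

```javascript
// Sketch of the `aud` claim check: decode the JWT payload and compare it
// with the expected application ID. NOT a full validation — signature,
// issuer, and expiry checks are omitted here on purpose.
function audienceMatches(token, expectedAud) {
  const payloadB64 = token.split(".")[1];
  const payload = JSON.parse(
    Buffer.from(payloadB64, "base64url").toString("utf8")
  );
  return payload.aud === expectedAud;
}

// Fabricated, unsigned token for illustration only:
const payload = Buffer.from(
  JSON.stringify({ aud: "api://my-app" })
).toString("base64url");
const token = `header.${payload}.signature`;
console.log(audienceMatches(token, "api://my-app")); // true
```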
azure-web-pubsub | Quickstart Live Demo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-live-demo.md | In this quickstart, we use the *Client URL Generator* to generate a temporarily In real-world applications, you can use SDKs in various languages to build your own application. We also provide Function extensions for you to build serverless applications easily. |
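The client URL produced by the *Client URL Generator* typically has the shape `wss://<resource>.webpubsub.azure.com/client/hubs/<hub>?access_token=<jwt>` (an assumed shape; the host, hub, and token below are placeholders). A small Node.js sketch that pulls the hub name and access token out of such a URL with the built-in `URL` class:

```javascript
// Parse a Web PubSub client URL into its hub name and access token.
// The sample URL is a placeholder, not a real endpoint.
function parseClientUrl(clientUrl) {
  const url = new URL(clientUrl);
  const hub = url.pathname.split("/").pop(); // last path segment is the hub
  const accessToken = url.searchParams.get("access_token");
  return { hub, accessToken };
}

const sample =
  "wss://demo.webpubsub.azure.com/client/hubs/simplechat?access_token=abc123";
console.log(parseClientUrl(sample)); // { hub: 'simplechat', accessToken: 'abc123' }
```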
azure-web-pubsub | Quickstart Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md | description: A tutorial to walk through how to use Azure Web PubSub service and -+ Last updated 05/05/2023 The Azure Web PubSub service helps you build real-time messaging web application In this tutorial, you learn how to: > [!div class="checklist"]-> * Build a serverless real-time chat app -> * Work with Web PubSub function trigger bindings and output bindings -> * Deploy the function to Azure Function App -> * Configure Azure Authentication -> * Configure Web PubSub Event Handler to route events and messages to the application +> +> - Build a serverless real-time chat app +> - Work with Web PubSub function trigger bindings and output bindings +> - Deploy the function to Azure Function App +> - Configure Azure Authentication +> - Configure Web PubSub Event Handler to route events and messages to the application ## Prerequisites # [JavaScript](#tab/javascript) -* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/) +- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/) -* [Node.js](https://nodejs.org/en/download/), version 10.x. - > [!NOTE] - > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages). -* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure. +- [Node.js](https://nodejs.org/en/download/), version 10.x. + > [!NOTE] + > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages). 
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure. -* The [Azure CLI](/cli/azure) to manage Azure resources. +- The [Azure CLI](/cli/azure) to manage Azure resources. # [C# in-process](#tab/csharp-in-process) -* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/). +- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/). -* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure. +- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure. -* The [Azure CLI](/cli/azure) to manage Azure resources. +- The [Azure CLI](/cli/azure) to manage Azure resources. # [C# isolated process](#tab/csharp-isolated-process) -* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/). +- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/). -* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure. +- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure. -* The [Azure CLI](/cli/azure) to manage Azure resources. +- The [Azure CLI](/cli/azure) to manage Azure resources. In this tutorial, you learn how to: 1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. And then create an empty directory for the project. Run command under this working directory. 
- # [JavaScript](#tab/javascript) - ```bash - func init --worker-runtime javascript - ``` + # [JavaScript](#tab/javascript) ++ ```bash + func init --worker-runtime javascript + ``` ++ # [C# in-process](#tab/csharp-in-process) ++ ```bash + func init --worker-runtime dotnet + ``` - # [C# in-process](#tab/csharp-in-process) - ```bash - func init --worker-runtime dotnet - ``` + # [C# isolated process](#tab/csharp-isolated-process) - # [C# isolated process](#tab/csharp-isolated-process) - ```bash - func init --worker-runtime dotnet-isolated - ``` + ```bash + func init --worker-runtime dotnet-isolated + ``` 2. Install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`.- - # [JavaScript](#tab/javascript) - Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support. - ```json - { - "version": "2.0", - "extensionBundle": { - "id": "Microsoft.Azure.Functions.ExtensionBundle", - "version": "[3.3.*, 4.0.0)" - } - } - ``` - - # [C# in-process](#tab/csharp-in-process) - ```bash - dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub - ``` -- # [C# isolated process](#tab/csharp-isolated-process) - ```bash - dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease - ``` ++ # [JavaScript](#tab/javascript) ++ Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support. ++ ```json + { + "version": "2.0", + "extensionBundle": { + "id": "Microsoft.Azure.Functions.ExtensionBundle", + "version": "[3.3.*, 4.0.0)" + } + } + ``` ++ # [C# in-process](#tab/csharp-in-process) ++ ```bash + dotnet add package Microsoft.Azure.WebJobs.Extensions.WebPubSub + ``` ++ # [C# isolated process](#tab/csharp-isolated-process) ++ ```bash + dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease + ``` 3. 
Create an `index` function to read and host a static web page for clients.- ```bash - func new -n index -t HttpTrigger - ``` ++ ```bash + func new -n index -t HttpTrigger + ``` + # [JavaScript](#tab/javascript)+ - Update `index/function.json` and copy following json codes.- ```json - { - "bindings": [ - { - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req", - "methods": [ - "get", - "post" - ] - }, - { - "type": "http", - "direction": "out", - "name": "res" - } - ] - } - ``` + ```json + { + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req", + "methods": ["get", "post"] + }, + { + "type": "http", + "direction": "out", + "name": "res" + } + ] + } + ``` - Update `index/index.js` and copy following codes.- ```js - var fs = require('fs'); - var path = require('path'); -- module.exports = function (context, req) { - var index = context.executionContext.functionDirectory + '/../https://docsupdatetracker.net/index.html'; - context.log("https://docsupdatetracker.net/index.html path: " + index); - fs.readFile(index, 'utf8', function (err, data) { - if (err) { - console.log(err); - context.done(err); - } - context.res = { - status: 200, - headers: { - 'Content-Type': 'text/html' - }, - body: data - }; - context.done(); - }); - } - ``` ++ ```js + var fs = require("fs"); + var path = require("path"); ++ module.exports = function (context, req) { + var index = + context.executionContext.functionDirectory + "/../https://docsupdatetracker.net/index.html"; + context.log("https://docsupdatetracker.net/index.html path: " + index); + fs.readFile(index, "utf8", function (err, data) { + if (err) { + console.log(err); + context.done(err); + } + context.res = { + status: 200, + headers: { + "Content-Type": "text/html", + }, + body: data, + }; + context.done(); + }); + }; + ``` # [C# in-process](#tab/csharp-in-process)+ - Update `index.cs` and replace `Run` function with following codes.- ```c# 
- [FunctionName("index")] - public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log) - { - var indexFile = Path.Combine(context.FunctionAppDirectory, "https://docsupdatetracker.net/index.html"); - log.LogInformation($"https://docsupdatetracker.net/index.html path: {indexFile}."); - return new ContentResult - { - Content = File.ReadAllText(indexFile), - ContentType = "text/html", - }; - } - ``` - - # [C# isolated process](#tab/csharp-isolated-process) + ```c# + [FunctionName("index")] + public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log) + { + var indexFile = Path.Combine(context.FunctionAppDirectory, "https://docsupdatetracker.net/index.html"); + log.LogInformation($"https://docsupdatetracker.net/index.html path: {indexFile}."); + return new ContentResult + { + Content = File.ReadAllText(indexFile), + ContentType = "text/html", + }; + } + ``` ++ # [C# isolated process](#tab/csharp-isolated-process) + - Update `index.cs` and replace `Run` function with following codes.- ```c# - [Function("index")] - public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context) - { - var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../https://docsupdatetracker.net/index.html"); - _logger.LogInformation($"https://docsupdatetracker.net/index.html path: {path}."); -- var response = req.CreateResponse(); - response.WriteString(File.ReadAllText(path)); - response.Headers.Add("Content-Type", "text/html"); - return response; - } - ``` ++ ```c# + [Function("index")] + public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, FunctionContext context) + { + var path = Path.Combine(context.FunctionDefinition.PathToAssembly, "../https://docsupdatetracker.net/index.html"); + 
_logger.LogInformation($"https://docsupdatetracker.net/index.html path: {path}."); ++ var response = req.CreateResponse(); + response.WriteString(File.ReadAllText(path)); + response.Headers.Add("Content-Type", "text/html"); + return response; + } + ``` 4. Create a `negotiate` function to help clients get service connection url with access token.- ```bash - func new -n negotiate -t HttpTrigger - ``` - > [!NOTE] - > In this sample, we use [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. And this won't work in a local function. You can make it empty or change to other ways to get or generate `userId` when playing in local. For example, let client type a user name and pass it in query like `?user={$username}` when call `negotiate` function to get service connection url. And in the `negotiate` function, set `userId` with value `{query.user}`. - - # [JavaScript](#tab/javascript) - - Update `negotiate/function.json` and copy following json codes. - ```json - { - "bindings": [ - { - "authLevel": "anonymous", - "type": "httpTrigger", - "direction": "in", - "name": "req" - }, - { - "type": "http", - "direction": "out", - "name": "res" - }, - { - "type": "webPubSubConnection", - "name": "connection", - "hub": "simplechat", - "userId": "{headers.x-ms-client-principal-name}", - "direction": "in" - } - ] - } - ``` - - Update `negotiate/index.js` and copy following codes. - ```js - module.exports = function (context, req, connection) { - context.res = { body: connection }; - context.done(); - }; - ``` -- # [C# in-process](#tab/csharp-in-process) - - Update `negotiate.cs` and replace `Run` function with following codes. 
- ```c# - [FunctionName("negotiate")] - public static WebPubSubConnection Run( - [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, - [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection, - ILogger log) - { - log.LogInformation("Connecting..."); - return connection; - } - ``` - - Add `using` statements in header to resolve required dependencies. - ```c# - using Microsoft.Azure.WebJobs.Extensions.WebPubSub; - ``` -- # [C# isolated process](#tab/csharp-isolated-process) - - Update `negotiate.cs` and replace `Run` function with following codes. - ```c# - [Function("negotiate")] - public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, - [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo) - { - var response = req.CreateResponse(HttpStatusCode.OK); - response.WriteAsJsonAsync(connectionInfo); - return response; - } - ``` ++ ```bash + func new -n negotiate -t HttpTrigger + ``` ++ > [!NOTE] + > In this sample, we use the [AAD](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. This won't work in a local function. You can leave it empty, or get or generate `userId` another way when running locally. For example, let the client type a user name and pass it in the query, like `?user={$username}`, when calling the `negotiate` function to get the service connection URL. Then, in the `negotiate` function, set `userId` to `{query.user}`. ++ # [JavaScript](#tab/javascript) ++ - Update `negotiate/function.json` and copy following json codes. 
+ ```json + { + "bindings": [ + { + "authLevel": "anonymous", + "type": "httpTrigger", + "direction": "in", + "name": "req" + }, + { + "type": "http", + "direction": "out", + "name": "res" + }, + { + "type": "webPubSubConnection", + "name": "connection", + "hub": "simplechat", + "userId": "{headers.x-ms-client-principal-name}", + "direction": "in" + } + ] + } + ``` + - Update `negotiate/index.js` and copy following codes. + ```js + module.exports = function (context, req, connection) { + context.res = { body: connection }; + context.done(); + }; + ``` ++ # [C# in-process](#tab/csharp-in-process) ++ - Update `negotiate.cs` and replace `Run` function with following codes. + ```c# + [FunctionName("negotiate")] + public static WebPubSubConnection Run( + [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, + [WebPubSubConnection(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connection, + ILogger log) + { + log.LogInformation("Connecting..."); + return connection; + } + ``` + - Add `using` statements in header to resolve required dependencies. + ```c# + using Microsoft.Azure.WebJobs.Extensions.WebPubSub; + ``` ++ # [C# isolated process](#tab/csharp-isolated-process) ++ - Update `negotiate.cs` and replace `Run` function with following codes. + ```c# + [Function("negotiate")] + public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req, + [WebPubSubConnectionInput(Hub = "simplechat", UserId = "{headers.x-ms-client-principal-name}")] WebPubSubConnection connectionInfo) + { + var response = req.CreateResponse(HttpStatusCode.OK); + response.WriteAsJsonAsync(connectionInfo); + return response; + } + ``` 5. Create a `message` function to broadcast client messages through service.+ ```bash func new -n message -t HttpTrigger ``` In this tutorial, you learn how to: > This function is actually using `WebPubSubTrigger`. 
However, the `WebPubSubTrigger` is not integrated in function's template. We use `HttpTrigger` to initialize the function template and change trigger type in code. # [JavaScript](#tab/javascript)+ - Update `message/function.json` and copy following json codes.- ```json - { - "bindings": [ - { - "type": "webPubSubTrigger", - "direction": "in", - "name": "data", - "hub": "simplechat", - "eventName": "message", - "eventType": "user" - }, - { - "type": "webPubSub", - "name": "actions", - "hub": "simplechat", - "direction": "out" - } - ] - } - ``` + ```json + { + "bindings": [ + { + "type": "webPubSubTrigger", + "direction": "in", + "name": "data", + "hub": "simplechat", + "eventName": "message", + "eventType": "user" + }, + { + "type": "webPubSub", + "name": "actions", + "hub": "simplechat", + "direction": "out" + } + ] + } + ``` - Update `message/index.js` and copy following codes.- ```js - module.exports = async function (context, data) { - context.bindings.actions = { - "actionName": "sendToAll", - "data": `[${context.bindingData.request.connectionContext.userId}] ${data}`, - "dataType": context.bindingData.dataType - }; - // UserEventResponse directly return to caller - var response = { - "data": '[SYSTEM] ack.', - "dataType" : "text" - }; - return response; - }; - ``` -- # [C# in-process](#tab/csharp-in-process) - - Update `message.cs` and replace `Run` function with following codes. 
- ```c# - [FunctionName("message")] - public static async Task<UserEventResponse> Run( - [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request, - BinaryData data, - WebPubSubDataType dataType, - [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions) - { - await actions.AddAsync(WebPubSubAction.CreateSendToAllAction( - BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"), - dataType)); - return new UserEventResponse - { - Data = BinaryData.FromString("[SYSTEM] ack"), - DataType = WebPubSubDataType.Text - }; - } - ``` - - Add `using` statements in header to resolve required dependencies. - ```c# - using Microsoft.Azure.WebJobs.Extensions.WebPubSub; - using Microsoft.Azure.WebPubSub.Common; - ``` - - # [C# isolated process](#tab/csharp-isolated-process) - - Update `message.cs` and replace `Run` function with following codes. - ```c# - [Function("message")] - [WebPubSubOutput(Hub = "simplechat")] - public SendToAllAction Run( - [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request) - { - return new SendToAllAction - { - Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"), - DataType = request.DataType - }; - } - ``` + ```js + module.exports = async function (context, data) { + context.bindings.actions = { + actionName: "sendToAll", + data: `[${context.bindingData.request.connectionContext.userId}] ${data}`, + dataType: context.bindingData.dataType, + }; + // UserEventResponse directly return to caller + var response = { + data: "[SYSTEM] ack.", + dataType: "text", + }; + return response; + }; + ``` ++ # [C# in-process](#tab/csharp-in-process) ++ - Update `message.cs` and replace the `Run` function with the following code.
+ ```c# + [FunctionName("message")] + public static async Task<UserEventResponse> Run( + [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request, + BinaryData data, + WebPubSubDataType dataType, + [WebPubSub(Hub = "simplechat")] IAsyncCollector<WebPubSubAction> actions) + { + await actions.AddAsync(WebPubSubAction.CreateSendToAllAction( + BinaryData.FromString($"[{request.ConnectionContext.UserId}] {data.ToString()}"), + dataType)); + return new UserEventResponse + { + Data = BinaryData.FromString("[SYSTEM] ack"), + DataType = WebPubSubDataType.Text + }; + } + ``` + - Add `using` statements at the top of the file to resolve the required dependencies. + ```c# + using Microsoft.Azure.WebJobs.Extensions.WebPubSub; + using Microsoft.Azure.WebPubSub.Common; + ``` ++ # [C# isolated process](#tab/csharp-isolated-process) ++ - Update `message.cs` and replace the `Run` function with the following code. + ```c# + [Function("message")] + [WebPubSubOutput(Hub = "simplechat")] + public SendToAllAction Run( + [WebPubSubTrigger("simplechat", WebPubSubEventType.User, "message")] UserEventRequest request) + { + return new SendToAllAction + { + Data = BinaryData.FromString($"[{request.ConnectionContext.UserId}] {request.Data.ToString()}"), + DataType = request.DataType + }; + } + ``` 6.
Add the client single page `index.html` in the project root folder and copy the following content.- ```html - <html> - <body> - <h1>Azure Web PubSub Serverless Chat App</h1> - <div id="login"></div> - <p></p> - <input id="message" placeholder="Type to chat..."> - <div id="messages"></div> - <script> - (async function () { - let authenticated = window.location.href.includes('?authenticated=true'); - if (!authenticated) { - // auth - let login = document.querySelector("#login"); - let link = document.createElement('a'); - link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`; - link.text = "login"; - login.appendChild(link); - } - else { - // negotiate - let messages = document.querySelector('#messages'); - let res = await fetch(`${window.location.origin}/api/negotiate`, { - credentials: "include" - }); - let url = await res.json(); - // connect - let ws = new WebSocket(url.url); - ws.onopen = () => console.log('connected'); - ws.onmessage = event => { - let m = document.createElement('p'); - m.innerText = event.data; - messages.appendChild(m); - }; - let message = document.querySelector('#message'); - message.addEventListener('keypress', e => { - if (e.charCode !== 13) return; - ws.send(message.value); - message.value = ''; - }); - } - })(); - </script> - </body> - </html> - ``` -- # [JavaScript](#tab/javascript) -- # [C# in-process](#tab/csharp-in-process) - Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it. - ```xml - <ItemGroup> - <None Update="index.html"> - <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> - </None> - </ItemGroup> - ``` -- # [C# isolated process](#tab/csharp-isolated-process) - Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
- ```xml - <ItemGroup> - <None Update="index.html"> - <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> - </None> - </ItemGroup> - ``` ++ ```html + <html> + <body> + <h1>Azure Web PubSub Serverless Chat App</h1> + <div id="login"></div> + <p></p> + <input id="message" placeholder="Type to chat..." /> + <div id="messages"></div> + <script> + (async function () { + let authenticated = window.location.href.includes( + "?authenticated=true" + ); + if (!authenticated) { + // auth + let login = document.querySelector("#login"); + let link = document.createElement("a"); + link.href = `${window.location.origin}/.auth/login/aad?post_login_redirect_url=/api/index?authenticated=true`; + link.text = "login"; + login.appendChild(link); + } else { + // negotiate + let messages = document.querySelector("#messages"); + let res = await fetch(`${window.location.origin}/api/negotiate`, { + credentials: "include", + }); + let url = await res.json(); + // connect + let ws = new WebSocket(url.url); + ws.onopen = () => console.log("connected"); + ws.onmessage = (event) => { + let m = document.createElement("p"); + m.innerText = event.data; + messages.appendChild(m); + }; + let message = document.querySelector("#message"); + message.addEventListener("keypress", (e) => { + if (e.charCode !== 13) return; + ws.send(message.value); + message.value = ""; + }); + } + })(); + </script> + </body> + </html> + ``` ++ # [JavaScript](#tab/javascript) ++ # [C# in-process](#tab/csharp-in-process) ++ Since the C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
++ ```xml + <ItemGroup> + <None Update="index.html"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + </ItemGroup> + ``` ++ # [C# isolated process](#tab/csharp-isolated-process) ++ Since the C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it. ++ ```xml + <ItemGroup> + <None Update="index.html"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + </ItemGroup> + ``` ## Create and Deploy the Azure Function App Before you can deploy your function code to Azure, you need to create three resources:-* A resource group, which is a logical container for related resources. -* A storage account, which is used to maintain state and other information about your functions. -* A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment and sharing of resources. -Use the following commands to create these items. +- A resource group, which is a logical container for related resources. +- A storage account, which is used to maintain state and other information about your functions. +- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources. ++Use the following commands to create these items. 1. If you haven't done so already, sign in to Azure: - ```azurecli - az login - ``` + ```azurecli + az login + ``` 1. Create a resource group, or skip this step by reusing the resource group of your Azure Web PubSub service: - ```azurecli - az group create -n WebPubSubFunction -l <REGION> - ``` + ```azurecli + az group create -n WebPubSubFunction -l <REGION> + ``` 1.
Create a general-purpose storage account in your resource group and region: - ```azurecli - az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction - ``` + ```azurecli + az storage account create -n <STORAGE_NAME> -l <REGION> -g WebPubSubFunction + ``` 1. Create the function app in Azure: - # [JavaScript](#tab/javascript) + # [JavaScript](#tab/javascript) ++ ```azurecli + az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> + ``` - ```azurecli - az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> - ``` - > [!NOTE] - > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value. + > [!NOTE] + > Check the [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set the `--runtime-version` parameter to a supported value.
- # [C# in-process](#tab/csharp-in-process) + # [C# in-process](#tab/csharp-in-process) - ```azurecli - az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> - ``` + ```azurecli + az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> + ``` - # [C# isolated process](#tab/csharp-isolated-process) + # [C# isolated process](#tab/csharp-isolated-process) - ```azurecli - az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME> - ``` + ```azurecli + az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime dotnet-isolated --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME> + ``` 1. Deploy the function project to Azure: - After you have successfully created your function app in Azure, you're now ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command. + After you have successfully created your function app in Azure, you're ready to deploy your local functions project by using the [func azure functionapp publish](./../azure-functions/functions-run-local.md) command. ++ ```bash + func azure functionapp publish <FUNCTIONAPP_NAME> + ``` - ```bash - func azure functionapp publish <FUNCIONAPP_NAME> - ``` 1. Configure the `WebPubSubConnectionString` for the function app: First, find your Web PubSub resource in the **Azure portal** and copy the connection string under **Keys**. Then, navigate to Function App settings in the **Azure portal** -> **Settings** -> **Configuration**.
Add a new item under **Application settings** with the name `WebPubSubConnectionString` and the value set to your Web PubSub resource connection string. Go to **Azure portal** -> Find your Function App resource -> **App keys** -> **S Set `Event Handler` in Azure Web PubSub service. Go to **Azure portal** -> Find your Web PubSub resource -> **Settings**. Add a new hub setting that maps to the function in use. Replace `<FUNCTIONAPP_NAME>` and `<APP_KEY>` with yours. - - Hub Name: `simplechat` - - URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>** - - User Event Pattern: * - - System Events: -(No need to configure in this sample) +- Hub Name: `simplechat` +- URL Template: **https://<FUNCTIONAPP_NAME>.azurewebsites.net/runtime/webhooks/webpubsub?code=<APP_KEY>** +- User Event Pattern: \* +- System Events: -(No need to configure in this sample) :::image type="content" source="media/quickstart-serverless/set-event-handler.png" alt-text="Screenshot of setting the event handler."::: Go to **Azure portal** -> Find your Function App resource -> **Authentication**. Here we choose `Microsoft` as the identity provider, which uses `x-ms-client-principal-name` as `userId` in the `negotiate` function. Besides, you can configure other identity providers following the links, and don't forget to update the `userId` value in the `negotiate` function accordingly.
-* [Microsoft(Azure AD)](../app-service/configure-authentication-provider-aad.md) -* [Facebook](../app-service/configure-authentication-provider-facebook.md) -* [Google](../app-service/configure-authentication-provider-google.md) -* [Twitter](../app-service/configure-authentication-provider-twitter.md) +- [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) +- [Facebook](../app-service/configure-authentication-provider-facebook.md) +- [Google](../app-service/configure-authentication-provider-google.md) +- [Twitter](../app-service/configure-authentication-provider-twitter.md) ## Try the application Now you're able to test your page from your function app: `https://<FUNCTIONAPP_NAME>.azurewebsites.net/api/index`. See snapshot.+ 1. Click `login` to authenticate yourself. 2. Type a message in the input box to chat. If you're not going to continue to use this app, delete all resources created by ## Next steps -In this quickstart, you learned how to run a serverless chat application. Now, you could start to build your own application. +In this quickstart, you learned how to run a serverless chat application. Now, you can start building your own application. -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Azure Web PubSub bindings for Azure Functions](./reference-functions-bindings.md) -> [!div class="nextstepaction"] +> [!div class="nextstepaction"] > [Quick start: Create a simple chatroom with Azure Web PubSub](./tutorial-build-chat.md) -> [!div class="nextstepaction"] -> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples) +> [!div class="nextstepaction"] +> [Explore more Azure Web PubSub samples](https://github.com/Azure/azure-webpubsub/tree/main/samples) |
azure-web-pubsub | Reference Rest Api Data Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-rest-api-data-plane.md | As illustrated by the above workflow graph, and also detailed workflow described In each HTTP request, an authorization header with a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is required to authenticate with Azure Web PubSub Service. <a name="signing"></a>+ #### Signing Algorithm and Signature `HS256`, namely HMAC-SHA256, is used as the signing algorithm. You should use the `AccessKey` in Azure Web PubSub Service instance's connection The following claims must be included in the JWT token. -Claim Type | Is Required | Description -|| -`aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`. -`exp` | true | Epoch time when this token will be expired. +| Claim Type | Is Required | Description | +| - | -- | - | +| `aud` | true | Should be the **SAME** as your HTTP request url. For example, a broadcast request's audience looks like: `https://example.webpubsub.azure.com/api/hubs/myhub/:send?api-version=2022-11-01`. | +| `exp` | true | Epoch time when this token expires. | Pseudo code in JS:+ ```js const bearerToken = jwt.sign({}, connectionString.accessKey, {- audience: request.url, - expiresIn: "1h", - algorithm: "HS256", - }); + audience: request.url, + expiresIn: "1h", + algorithm: "HS256", +}); ``` -### Authenticate via Azure Active Directory Token (Azure AD Token) +### Authenticate via Microsoft Entra token -Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request. +Like using `AccessKey`, a [JSON Web Token (JWT)](https://en.wikipedia.org/wiki/JSON_Web_Token) is also required to authenticate the HTTP request.
-**The difference is**, in this scenario, JWT Token is generated by Azure Active Directory. +**The difference is**, in this scenario, JWT Token is generated by Microsoft Entra ID. -[Learn how to generate Azure AD Tokens](../active-directory/develop/reference-v2-libraries.md) +[Learn how to generate Microsoft Entra tokens](../active-directory/develop/reference-v2-libraries.md) The credential scope used should be `https://webpubsub.azure.com/.default`. You could also use **Role Based Access Control (RBAC)** to authorize the request [Learn how to configure Role Based Access Control roles for your resource](./howto-authorize-from-application.md#add-role-assignments-on-azure-portal) -## APIs +## APIs -| Operation Group | Description | -|--|-| -|[Service Status](/rest/api/webpubsub/dataplane/health-api)| Provides operations to check the service status | -|[Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub)| Provides operations to manage the connections and send messages to them. | +| Operation Group | Description | +| -- | | +| [Service Status](/rest/api/webpubsub/dataplane/health-api) | Provides operations to check the service status | +| [Hub Operations](/rest/api/webpubsub/dataplane/web-pub-sub) | Provides operations to manage the connections and send messages to them. | |
azure-web-pubsub | Reference Server Sdk Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-csharp.md | You can use this library in your app server side to manage the WebSocket client Use this library to: -- Send messages to hubs and groups. +- Send messages to hubs and groups. - Send messages to particular users and connections. - Organize users and connections into groups. - Close connections You can also [enable console logging](https://github.com/Azure/azure-sdk-for-net [azure_sub]: https://azure.microsoft.com/free/dotnet/ [samples_ref]: https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/webpubsub/Azure.Messaging.WebPubSub/tests/Samples/-[awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp +[awps_sample]: https://github.com/Azure/azure-webpubsub/tree/main/samples/csharp |
azure-web-pubsub | Reference Server Sdk Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-java.md | Use this library to: For more information, see: -- [Azure Web PubSub client library Java SDK][source_code] -- [Azure Web PubSub client library reference documentation][api] +- [Azure Web PubSub client library Java SDK][source_code] +- [Azure Web PubSub client library reference documentation][api] - [Azure Web PubSub client library samples for Java][samples_readme] - [Azure Web PubSub service documentation][product_documentation] For more information, see: ### Include the Package -[//]: # ({x-version-update-start;com.azure:azure-messaging-webpubsub;current}) +[//]: # "{x-version-update-start;com.azure:azure-messaging-webpubsub;current}" ```xml <dependency> For more information, see: </dependency> ``` -[//]: # ({x-version-update-end}) +[//]: # "{x-version-update-end}" ### Create a `WebPubSubServiceClient` using connection string <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L21-L24 -->+ ```java WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder() .connectionString("{connection-string}") WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde ### Create a `WebPubSubServiceClient` using access key <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L31-L35 -->+ ```java WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilder() .credential(new AzureKeyCredential("{access-key}")) WebPubSubServiceClient webPubSubServiceClient = new WebPubSubServiceClientBuilde ### Broadcast message to entire hub <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L47-L47 -->+ ```java webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN); ``` webPubSubServiceClient.sendToAll("Hello world!", WebPubSubContentType.TEXT_PLAIN ### 
Broadcast message to a group <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L59-L59 -->+ ```java webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.TEXT_PLAIN); ``` webPubSubServiceClient.sendToGroup("java", "Hello Java!", WebPubSubContentType.T ### Send message to a connection <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L71-L71 -->+ ```java webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", WebPubSubContentType.TEXT_PLAIN); ``` webPubSubServiceClient.sendToConnection("myconnectionid", "Hello connection!", W <a name="send-to-user"></a> ### Send message to a user+ <!-- embedme ./src/samples/java/com/azure/messaging/webpubsub/ReadmeSamples.java#L83-L83 -->+ ```java webPubSubServiceClient.sendToUser("Andy", "Hello Andy!", WebPubSubContentType.TEXT_PLAIN); ``` the client library to use the Netty HTTP client. Configuring or changing the HTT By default, all client libraries use the Tomcat-native Boring SSL library to enable native-level performance for SSL operations. The Boring SSL library is an uber jar containing native libraries for Linux / macOS / Windows, and provides-better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning][https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning]. +better performance compared to the default SSL implementation within the JDK. For more information, including how to reduce the dependency size, see [performance tuning][https://github.com/Azure/azure-sdk-for-java/wiki/Performance-Tuning]. [!INCLUDE [next step](includes/include-next-step.md)] better performance compared to the default SSL implementation within the JDK. 
Fo [coc]: https://opensource.microsoft.com/codeofconduct/ [coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/ [coc_contact]: mailto:opencode@microsoft.com-[api]: /java/api/overview/azure/messaging-webpubsub-readme +[api]: /java/api/overview/azure/messaging-webpubsub-readme |
azure-web-pubsub | Reference Server Sdk Js | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-js.md | npm install @azure/web-pubsub ```js const { WebPubSubServiceClient } = require("@azure/web-pubsub"); -const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); ``` You can also authenticate the `WebPubSubServiceClient` using an endpoint and an `AzureKeyCredential`: ```js-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub"); +const { + WebPubSubServiceClient, + AzureKeyCredential, +} = require("@azure/web-pubsub"); const key = new AzureKeyCredential("<Key>");-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<Endpoint>", + key, + "<hubName>" +); ``` -Or authenticate the `WebPubSubServiceClient` using [Azure Active Directory][aad_doc] +Or authenticate the `WebPubSubServiceClient` using [Microsoft Entra ID][microsoft_entra_id_doc] 1. Install the `@azure/identity` dependency npm install @azure/identity 1. 
Update the source code to use `DefaultAzureCredential`: ```js-const { WebPubSubServiceClient, AzureKeyCredential } = require("@azure/web-pubsub"); +const { + WebPubSubServiceClient, + AzureKeyCredential, +} = require("@azure/web-pubsub"); const key = new DefaultAzureCredential();-const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<Endpoint>", + key, + "<hubName>" +); ``` ### Examples const serviceClient = new WebPubSubServiceClient("<Endpoint>", key, "<hubName>") ```js const { WebPubSubServiceClient } = require("@azure/web-pubsub"); -const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); // Get the access token for the WebSocket client connection to use let token = await serviceClient.getClientAccessToken(); token = await serviceClient.getClientAccessToken({ userId: "user1" }); ```js const { WebPubSubServiceClient } = require("@azure/web-pubsub"); -const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); // Send a JSON message await serviceClient.sendToAll({ message: "Hello world!" 
}); await serviceClient.sendToAll(payload.buffer); ```js const { WebPubSubServiceClient } = require("@azure/web-pubsub"); -const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); const groupClient = serviceClient.group("<groupName>"); await groupClient.sendToAll(payload.buffer); ```js const { WebPubSubServiceClient } = require("@azure/web-pubsub"); -const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); // Send a JSON message await serviceClient.sendToUser("user1", { message: "Hello world!" }); // Send a plain text message-await serviceClient.sendToUser("user1", "Hi there!", { contentType: "text/plain" }); +await serviceClient.sendToUser("user1", "Hi there!", { + contentType: "text/plain", +}); // Send a binary message const payload = new Uint8Array(10); await serviceClient.sendToUser("user1", payload.buffer); const { WebPubSubServiceClient } = require("@azure/web-pubsub"); const WebSocket = require("ws"); -const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); const groupClient = serviceClient.group("<groupName>"); const { WebPubSubServiceClient } = require("@azure/web-pubsub"); function onResponse(rawResponse: FullOperationResponse): void { console.log(rawResponse); }-const serviceClient = new WebPubSubServiceClient("<ConnectionString>", "<hubName>"); +const serviceClient = new WebPubSubServiceClient( + "<ConnectionString>", + "<hubName>" +); await serviceClient.sendToAll({ message: "Hello world!" 
}, { onResponse }); ``` const app = express(); app.use(handler.getMiddleware()); app.listen(3000, () =>- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`) + console.log( + `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}` + ) ); ``` const handler = new WebPubSubEventHandler("chat", { handleConnect: (req, res) => { // auth the connection and set the userId of the connection res.success({- userId: "<userId>" + userId: "<userId>", }); },- allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"] + allowedEndpoints: ["https://<yourAllowedService>.webpubsub.azure.com"], }); const app = express(); const app = express(); app.use(handler.getMiddleware()); app.listen(3000, () =>- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`) + console.log( + `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}` + ) ); ``` const { WebPubSubEventHandler } = require("@azure/web-pubsub-express"); const handler = new WebPubSubEventHandler("chat", { allowedEndpoints: [ "https://<yourAllowedService1>.webpubsub.azure.com",- "https://<yourAllowedService2>.webpubsub.azure.com" - ] + "https://<yourAllowedService2>.webpubsub.azure.com", + ], }); const app = express(); const app = express(); app.use(handler.getMiddleware()); app.listen(3000, () =>- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`) + console.log( + `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}` + ) ); ``` const express = require("express"); const { WebPubSubEventHandler } = require("@azure/web-pubsub-express"); const handler = new WebPubSubEventHandler("chat", {- path: "/customPath1" + path: "/customPath1", }); const app = express(); app.use(handler.getMiddleware()); app.listen(3000, () => // Azure WebPubSub Upstream ready at http://localhost:3000/customPath1- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`) + 
console.log( + `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}` + ) ); ``` const handler = new WebPubSubEventHandler("chat", { // You can also set the state here res.setState("calledTime", calledTime); res.success();- } + }, }); const app = express(); const app = express(); app.use(handler.getMiddleware()); app.listen(3000, () =>- console.log(`Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}`) + console.log( + `Azure WebPubSub Upstream ready at http://localhost:3000${handler.path}` + ) ); ``` For more detailed instructions on how to enable logs, see [@azure/logger package Use **Live Trace** from the Web PubSub service portal to view the live traffic. -[aad_doc]: howto-authorize-from-application.md +[microsoft_entra_id_doc]: howto-authorize-from-application.md [azure_sub]: https://azure.microsoft.com/free/ [samples_ref]: https://github.com/Azure/azure-webpubsub/tree/main/samples/javascript/ ## Next steps |
azure-web-pubsub | Reference Server Sdk Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-server-sdk-python.md | description: Learn about the Python server SDK for the Azure Web PubSub service. -+ Last updated 05/23/2022 Or use the service endpoint and the access key: >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=AzureKeyCredential("<access_key>")) ``` -Or use [Azure Active Directory][aad_doc] (Azure AD): +Or use [Microsoft Entra ID][microsoft_entra_id_doc]: 1. [pip][pip] install [`azure-identity`][azure_identity_pip].-2. [Enable Azure AD authentication on your Webpubsub resource][aad_doc]. +2. [Enable Microsoft Entra authorization on your Webpubsub resource][microsoft_entra_id_doc]. 3. Update code to use [DefaultAzureCredential][default_azure_credential]. - ```python - >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient - >>> from azure.identity import DefaultAzureCredential - >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential()) - ``` + ```python + >>> from azure.messaging.webpubsubservice import WebPubSubServiceClient + >>> from azure.identity import DefaultAzureCredential + >>> service = WebPubSubServiceClient(endpoint='<endpoint>', hub='hub', credential=DefaultAzureCredential()) + ``` ## Examples When you submit a pull request, a CLA-bot automatically determines whether you n This project has adopted the Microsoft Open Source Code of Conduct. For more information, see [Code of Conduct][code_of_conduct] FAQ or contact [Open Source Conduct Team](mailto:opencode@microsoft.com) with questions or comments. <!-- LINKS -->+ [webpubsubservice_docs]: ./index.yml [azure_cli]: /cli/azure [azure_sub]: https://azure.microsoft.com/free/ This project has adopted the Microsoft Open Source Code of Conduct. 
For more inf [connection_string]: howto-websocket-connect.md#authorization [azure_portal]: howto-develop-create-instance.md [azure-key-credential]: https://aka.ms/azsdk-python-core-azurekeycredential-[aad_doc]: howto-authorize-from-application.md +[microsoft_entra_id_doc]: howto-authorize-from-application.md [samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/webpubsub/azure-messaging-webpubsubservice/samples |
azure-web-pubsub | Samples Authenticate And Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/samples-authenticate-and-connect.md | Title: Azure Web PubSub samples - authenticate and connect -description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s) +description: A list of code samples showing how to authenticate and connect to Web PubSub resource(s) Last updated 05/15/2023 zone_pivot_groups: azure-web-pubsub-samples-authenticate-and-connect + # Azure Web PubSub samples - Authenticate and connect To make use of your Azure Web PubSub resource, you need to authenticate and connect to the service first. Azure Web PubSub service distinguishes two roles and they're given a different set of capabilities.- + ## Client-The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages. ++The client can be a browser, a mobile app, an IoT device or even an EV charging point as long as it supports WebSocket. A client is limited to publishing and subscribing to messages. ## Application server-While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource. ++While the client's role is often limited, the application server's role goes beyond simply receiving and publishing messages. Before a client tries to connect with your Web PubSub resource, it goes to the application server for a Client Access Token first. The token is used to establish a persistent WebSocket connection with your Web PubSub resource. 
::: zone pivot="method-sdk-csharp"-| Use case | Description | | | -- |-| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only. -| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/index.html#L13) | Applies to client only. Client Access Token is generated on the application server. -| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. -| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point. +| [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/Startup.cs#L29) | Applies to application server only. +| [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp/wwwroot/index.html#L13) | Applies to client only. Client Access Token is generated on the application server. +| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/chatapp-aad/Startup.cs#L26) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization. 
+| [Anonymous connection](https://github.com/Azure/azure-webpubsub/blob/main/samples/csharp/clientWithCert/client/Program.cs#L15) | Anonymous connection allows clients to connect with Azure Web PubSub directly without going to an application server for a Client Access Token first. This is useful for clients that have limited networking capabilities, like an EV charging point. ::: zone-end ::: zone pivot="method-sdk-javascript"-| Use case | Description | | | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/server.js#L9) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp/sdk/src/index.js#L5) | Applies to client only. Client Access Token is generated on the application server.-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. +| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/main/samples/javascript/chatapp-aad/server.js#L24) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization. ::: zone-end ::: zone pivot="method-sdk-java"-| Use case | Description | | | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/java/com/webpubsub/tutorial/App.java#L21) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp/src/main/resources/public/index.html#L12) | Applies to client only. 
Client Access Token is generated on the application server.-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. +| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/java/chatapp-aad/src/main/java/com/webpubsub/tutorial/App.java#L22) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization. ::: zone-end ::: zone pivot="method-sdk-python"-| Use case | Description | | | -- | | [Using connection string](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/server.py#L19) | Applies to application server only. | [Using Client Access Token](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp/public/index.html#L13) | Applies to client only. Client Access Token is generated on the application server.-| [Using Azure Active Directory](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Azure AD for authorization offers improved security and ease of use compared to Access Key authorization. +| [Using Microsoft Entra ID](https://github.com/Azure/azure-webpubsub/blob/eb60438ff9e0735d90a6e7e6370b9d38aa6bc730/samples/python/chatapp-aad/server.py#L21) | Using Microsoft Entra ID for authorization offers improved security and ease of use compared to Access Key authorization. ::: zone-end |
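Across all of the SDK pivots above, the negotiate step ultimately hands the client a WebSocket URL to connect with. A minimal sketch of building that payload, assuming a placeholder endpoint and an already-minted token (real apps obtain the token from a server SDK rather than hardcoding it):

```javascript
// Sketch of what a /negotiate response can contain: the URL a client uses
// to open its persistent connection. Endpoint and token are placeholders.
function negotiatePayload(endpoint, hub, token) {
  const host = new URL(endpoint).host;
  return {
    url: `wss://${host}/client/hubs/${hub}?access_token=${encodeURIComponent(token)}`,
  };
}

const payload = negotiatePayload(
  "https://contoso.webpubsub.azure.com", "chat", "<token>"
);
```

A browser client would then simply call `io(payload.url)` or `new WebSocket(payload.url)`, never handling the access key itself.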
azure-web-pubsub | Socketio Build Realtime Code Streaming App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-build-realtime-code-streaming-app.md | Title: Build a real-time code streaming app using Socket.IO and host it on Azure -description: An end-to-end tutorial demonstrating how to build an app that allows coders to share coding activities with their audience in real time using Web PubSub for Socket.IO + Title: Build a real-time code-streaming app by using Socket.IO and host it on Azure +description: Learn how to build an app that allows coders to share coding activities with their audience in real time by using Web PubSub for Socket.IO. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO Last updated 08/01/2023-# Build a real-time code streaming app using Socket.IO and host it on Azure +# Build a real-time code-streaming app by using Socket.IO and host it on Azure -Building a real-time experience like the cocreation feature from [Microsoft Word](https://www.microsoft.com/microsoft-365/word) can be challenging. +Building a real-time experience like the co-creation feature in [Microsoft Word](https://www.microsoft.com/microsoft-365/word) can be challenging. -Through its easy-to-use APIs, [Socket.IO](https://socket.io/) has proven itself as a battle-tested library for real-time communication between clients and server. However, Socket.IO users often report difficulty around scaling Socket.IO's connections. With Web PubSub for Socket.IO, developers no longer need to worry about managing persistent connections. +Through its easy-to-use APIs, [Socket.IO](https://socket.io/) has proven itself as a library for real-time communication between clients and a server. However, Socket.IO users often report difficulty around scaling Socket.IO's connections. With Web PubSub for Socket.IO, developers no longer need to worry about managing persistent connections. 
## Overview-This tutorial shows how to build an app that allows a coder to stream his/her coding activities to an audience. We build this application using ++This article shows how to build an app that allows a coder to stream coding activities to an audience. You build this application by using: + >[!div class="checklist"]-> * Monitor Editor, the code editor that powers VS code -> * [Express](https://expressjs.com/), a Node.js web framework -> * APIs provided by Socket.IO library for real-time communication -> * Host Socket.IO connections using Web PubSub for Socket.IO +> * Monaco Editor, the code editor that powers Visual Studio Code. +> * [Express](https://expressjs.com/), a Node.js web framework. +> * APIs that the Socket.IO library provides for real-time communication. +> * Host Socket.IO connections that use Web PubSub for Socket.IO. ### The finished app-The finished app allows a code editor user to share a web link through which people can watch him/her typing. +The finished app allows the user of a code editor to share a web link through which people can watch the typing. 
+++To keep the procedures focused and digestible in around 15 minutes, this article defines two user roles and what they can do in the editor: -To keep this tutorial focused and digestible in around 15 minutes, we define two user roles and what they can do in the editor -- a writer, who can type in the online editor and the content is streamed-- viewers, who receive real-time content typed by the writer and cannot edit the content+* A writer, who can type in the online editor and the content is streamed +* Viewers, who receive real-time content typed by the writer and can't edit the content ### Architecture-| / | Purpose | Benefits | ++| Item | Purpose | Benefits | |-|-||-|[Socket.IO library](https://socket.io/) | Provides low-latency, bi-directional data exchange mechanism between the backend application and clients | Easy-to-use APIs that cover most real-time communication scenarios -|Web PubSub for Socket.IO | Host WebSocket or poll-based persistent connections with Socket.IO clients | 100 K concurrent connections built-in; Simplify application architecture; +|[Socket.IO library](https://socket.io/) | Provides a low-latency, bidirectional data exchange mechanism between the back-end application and clients | Easy-to-use APIs that cover most real-time communication scenarios +|Web PubSub for Socket.IO | Hosts WebSocket or poll-based persistent connections with Socket.IO clients | Support for 100,000 concurrent connections; simplified application architecture +++## Prerequisites +To follow all the steps in this article, you need: -## Prerequisites -In order to follow the step-by-step guide, you need > [!div class="checklist"] > * An [Azure](https://portal.azure.com/) account. 
If you don't have an Azure subscription, create an [Azure free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.-> * [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or higher) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources. -> * Basic familiarity of [Socket.IO's APIs](https://socket.io/docs/v4/) +> * The [Azure CLI](/cli/azure/install-azure-cli) (version 2.29.0 or later) or [Azure Cloud Shell](../cloud-shell/quickstart.md) to manage Azure resources. +> * Basic familiarity with [Socket.IO's APIs](https://socket.io/docs/v4/). ## Create a Web PubSub for Socket.IO resource-We are going to use Azure CLI to create the resource. ++Use the Azure CLI to create the resource: + ```bash az webpubsub create -n <resource-name> \ -l <resource-location> \ az webpubsub create -n <resource-name> \ --kind SocketIO \ --sku Free_F1 ```-## Get connection string -A connection string allows you to connect with Web PubSub for Socket.IO. Keep the returned connection string somewhere for use as we need it when we run the application at the end of the tutorial. ++## Get a connection string ++A connection string allows you to connect with Web PubSub for Socket.IO. ++Run the following commands. Keep the returned connection string somewhere, because you'll need it when you run the application later in this article. + ```bash az webpubsub key show -n <resource-name> \ -g <resource-group> \ az webpubsub key show -n <resource-name> \ -o tsv ``` -## Write the application ->[!NOTE] -> This tutorial focuses on explaining the core code for implementing real-time communication. Complete code can be found in the [samples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream). +## Write the application's server-side code ++Start writing your application's code by working on the server side. 
++### Build an HTTP server ++1. Create a Node.js project: -### Server-side code -#### Build an HTTP server -1. Create a Node.js project ```bash mkdir codestream cd codestream npm init ``` -2. Install server SDK and Express +2. Install the server SDK and Express: + ```bash npm install @azure/web-pubsub-socket.io npm install express ``` -3. Import required packages and create an HTTP server to serve static files +3. Import required packages and create an HTTP server to serve static files: + ```javascript- /* server.js*/ + /*server.js*/ // Import required packages const express = require('express'); const path = require('path'); - // Create a HTTP server based on Express + // Create an HTTP server based on Express const app = express(); const server = require('http').createServer(app); app.use(express.static(path.join(__dirname, 'public'))); ``` -4. Define an endpoint called `/negotiate`. A **writer** client hits this endpoint first. This endpoint returns an HTTP response, which contains -- an endpoint the client should establish a persistent connection with, -- `room` the client is assigned to+4. Define an endpoint called `/negotiate`. A writer client hits this endpoint first. This endpoint returns an HTTP response. The response contains an endpoint that the client should use to establish a persistent connection. It also returns a `room` value that the client is assigned to. - ```javascript - /* server.js*/ + ```javascript + /*server.js*/ app.get('/negotiate', async (req, res) => { res.json({ url: endpoint az webpubsub key show -n <resource-name> \ }); ``` -#### Create Web PubSub for Socket.IO server -1. Import Web PubSub for Socket.IO SDK and define options +### Create the Web PubSub for Socket.IO server ++1. 
Import the Web PubSub for Socket.IO SDK and define options: + ```javascript- /* server.js*/ + /*server.js*/ const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io"); const wpsOptions = { hub: "codestream", connectionString: process.argv[2]- }; + } ``` -2. Create a Web PubSub for Socket.IO server +2. Create a Web PubSub for Socket.IO server: + ```javascript- /* server.js*/ + /*server.js*/ const io = require("socket.io")(); useAzureSocketIO(io, wpsOptions); ``` -The two steps are slightly different than how you would normally create a Socket.IO server as [described here](https://socket.io/docs/v4/server-installation/). With these two steps, your server-side code can offload managing persistent connections to an Azure service. With the help of an Azure service, your application server acts **only** as a lightweight HTTP server. +The two steps are slightly different from how you would normally create a Socket.IO server, as described in [this Socket.IO documentation](https://socket.io/docs/v4/server-installation/). With these two steps, your server-side code can offload managing persistent connections to an Azure service. With the help of an Azure service, your application server acts *only* as a lightweight HTTP server. ++### Implement business logic -Now that we've created a Socket.IO server hosted by Web PubSub, we can define how the clients and server communicate using Socket.IO's APIs. This process is referred to as implementing business logic. +Now that you've created a Socket.IO server hosted by Web PubSub, you can define how the clients and server communicate by using Socket.IO's APIs. This process is called implementing business logic. -#### Implement business logic -1. After a client is connected, the application server tells the client that "you are logged in" by sending a custom event named `login`. +1. After a client is connected, the application server tells the client that it's logged in by sending a custom event named `login`. 
```javascript- /* server.js*/ + /*server.js*/ io.on('connection', socket => { socket.emit("login"); }); ``` -2. Each client emits two events `joinRoom` and `sendToRoom` that the server can respond to. After the server getting the `room_id` a client wishes to join, we use Socket.IO's API `socket.join` to join the target client to the specified room. +2. Each client emits two events that the server can respond to: `joinRoom` and `sendToRoom`. After the server gets the `room_id` value that a client wants to join, you use `socket.join` from Socket.IO's API to join the target client to the specified room. ```javascript- /* server.js*/ + /*server.js*/ socket.on('joinRoom', async (message) => { const room_id = message["room_id"]; await socket.join(room_id); }); ``` -3. After a client has successfully been joined, the server informs the client of the successful result with the `message` event. Upon receiving an `message` event with a type of `ackJoinRoom`, the client can ask the server to send the latest editor state. +3. After a client is joined, the server informs the client of the successful result by sending a `message` event. When the client receives a `message` event with a type of `ackJoinRoom`, the client can ask the server to send the latest editor state. ```javascript- /* server.js*/ + /*server.js*/ socket.on('joinRoom', async (message) => { // ... socket.emit("message", { Now that we've created a Socket.IO server hosted by Web PubSub, we can define ho ``` ```javascript- /* client.js*/ + /*client.js*/ socket.on("message", (message) => { let data = message; if (data.type === 'ackJoinRoom' && data.success) { sendToRoom(socket, `${room_id}-control`, { data: 'sync'}); } // ... - }) + }); ``` -4. When a client sends `sendToRoom` event to the server, the server broadcasts the **changes to the code editor state** to the specified room. All clients in the room can now receive the latest update. +4. 
When a client sends a `sendToRoom` event to the server, the server broadcasts the *changes to the code editor state* to the specified room. All clients in the room can now receive the latest update. ```javascript socket.on('sendToRoom', (message) => { Now that we've created a Socket.IO server hosted by Web PubSub, we can define ho }); ``` -Now that the server-side is finished. Next, we work on the client-side. +## Write the application's client-side code -### Client-side code -#### Initial setup -1. On the client side, we need to create an Socket.IO client to communicate with the server. The question is which server the client should establish a persistent connection with. Since we use Web PubSub for Socket.IO, the server is an Azure service. Recall that we defined [`/negotiate`](#build-an-http-server) route to serve clients an endpoint to Web PubSub for Socket.IO. +Now that the server-side procedures are finished, you can work on the client side. - ```javascript - /*client.js*/ +### Initial setup - async function initialize(url) { - let data = await fetch(url).json() +You need to create a Socket.IO client to communicate with the server. The question is which server the client should establish a persistent connection with. Because you're using Web PubSub for Socket.IO, the server is an Azure service. Recall that you defined a [/negotiate](#build-an-http-server) route to serve clients an endpoint to Web PubSub for Socket.IO. - updateStreamId(data.room_id); +```javascript +/*client.js*/ - let editor = createEditor(...); // Create a editor component +async function initialize(url) { + let data = await fetch(url).json() - var socket = io(data.url, { - path: "/clients/socketio/hubs/codestream", - }); + updateStreamId(data.room_id); - return [socket, editor, data.room_id]; - } - ``` -The `initialize(url)` organizes a few setup operations together. 
-- fetches the endpoint to an Azure service from your HTTP server,-- creates a Monoca editor instance,-- establishes a persistent connection with Web PubSub for Socket.IO+ let editor = createEditor(...); // Create an editor component ++ var socket = io(data.url, { + path: "/clients/socketio/hubs/codestream", + }); ++ return [socket, editor, data.room_id]; +} +``` ++The `initialize(url)` function organizes a few setup operations together: ++* Fetches the endpoint to an Azure service from your HTTP server +* Creates a Monaco Editor instance +* Establishes a persistent connection with Web PubSub for Socket.IO -#### Writer client -[As mentioned earlier](#the-finished-app), we have two user roles on the client side. The first one is the writer and another one is viewer. Anything written by the writer is streamed to the viewer's screen. +### Writer client ++As mentioned [earlier](#the-finished-app), you have two user roles on the client side: writer and viewer. Anything that the writer types is streamed to the viewer's screen. ++1. Get the endpoint to Web PubSub for Socket.IO and the `room_id` value: -##### Writer client -1. Get the endpoint to Web PubSub for Socket.IO and the `room_id`. ```javascript /*client.js*/ let [socket, editor, room_id] = await initialize('/negotiate'); ``` -2. When the writer client is connected with server, the server sends a `login` event to him. The writer can respond by asking the server to join itself to a specified room. Importantly, every 200 ms the writer sends its latest editor state to the room. A function aptly named `flush` organizes the sending logic. +2. When the writer client is connected with the server, the server sends a `login` event to the writer. The writer can respond by asking the server to join itself to a specified room. Every 200 milliseconds, the writer client sends the latest editor state to the room. A function named `flush` organizes the sending logic. 
```javascript /*client.js*/ The `initialize(url)` organizes a few setup operations together. }); ``` -3. If a writer doesn't make any edits, `flush()` does nothing and simply returns. Otherwise, the **changes to the editor state** are sent to the room. +3. If a writer doesn't make any edits, `flush()` does nothing and simply returns. Otherwise, the *changes to the editor state* are sent to the room. + ```javascript /*client.js*/ function flush() {- // No change from editor need to be flushed + // No changes from editor need to be flushed if (changes.length === 0) return; // Broadcast the changes made to editor content The `initialize(url)` organizes a few setup operations together. } ``` -4. When a new viewer client is connected, the viewer needs to get the latest **complete state** of the editor. To achieve this, a message containing `sync` data will be sent to the writer client, asking the writer client to send the complete editor state. +4. When a new viewer client is connected, the viewer needs to get the latest *complete state* of the editor. To achieve this, a message that contains `sync` data is sent to the writer client. The message asks the writer client to send the complete editor state. + ```javascript /*client.js*/ The `initialize(url)` organizes a few setup operations together. }); ``` -##### Viewer client -1. Same with the writer client, the viewer client creates its Socket.IO client through `initialize()`. When the viewer client is connected and received a `login` event from server, it asks the server to join itself to the specified room. The query `room_id` specifies the room . +### Viewer client ++1. Like the writer client, the viewer client creates its Socket.IO client through `initialize()`. When the viewer client is connected and receives a `login` event from the server, it asks the server to join itself to the specified room. The query `room_id` specifies the room. 
```javascript /*client.js*/ The `initialize(url)` organizes a few setup operations together. }); ``` -2. When a viewer client receives a `message` event from server and the data type is `ackJoinRoom`, the viewer client asks the writer client in the room to send over the complete editor state. +2. When a viewer client receives a `message` event from the server and the data type is `ackJoinRoom`, the viewer client asks the writer client in the room to send the complete editor state. ```javascript /*client.js*/ The `initialize(url)` organizes a few setup operations together. }); ``` -3. If the data type is `editorMessage`, the viewer client **updates the editor** according to its actual content. +3. If the data type is `editorMessage`, the viewer client *updates the editor* according to its actual content. ```javascript /*client.js*/ The `initialize(url)` organizes a few setup operations together. }); ``` -4. Implement `joinRoom()` and `sendToRoom()` using Socket.IO's APIs +4. Implement `joinRoom()` and `sendToRoom()` by using Socket.IO's APIs: + ```javascript /*client.js*/ The `initialize(url)` organizes a few setup operations together. ``` ## Run the application+ ### Locate the repo-We dived deep into the core logic related to synchronizing editor state between viewers and writer. The complete code can be found in [ examples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream). ++The preceding sections covered the core logic related to synchronizing the editor state between viewers and the writer. You can find the complete code in the [examples repository](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/codestream). ### Clone the repo+ You can clone the repo and run `npm install` to install project dependencies. 
### Start the server+ ```bash node index.js <web-pubsub-connection-string> ```-> [!NOTE] -> This is the connection string you received from [a previous step](#get-connection-string). ++This is the connection string that you received in [an earlier step](#get-a-connection-string). ### Play with the real-time code editor-Open `http://localhost:3000` in a browser tab and open another tab with the url displayed in the first web page. -If you write code in the first tab, you should see your typing reflected real-time in the other tab. Web PubSub for Socket.IO facilitates message passing in the cloud. Your `express` server only serves the static `index.html` and `/negotiate` endpoint. +Open `http://localhost:3000` on a browser tab. Open another tab with the URL displayed on the first webpage. ++If you write code on the first tab, you should see your typing reflected in real time on the other tab. Web PubSub for Socket.IO facilitates message passing in the cloud. Your `express` server only serves the static `index.html` file and the `/negotiate` endpoint. |
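The writer-side buffering that `flush()` performs in the code-streaming tutorial can be sketched independently of the editor and the Socket.IO connection: edits accumulate in a buffer, and each 200 ms tick either drains them into one batched payload or does nothing. The payload shape below is illustrative, not the sample's exact wire format.

```javascript
// Minimal sketch of the writer's change buffering: onEdit collects editor
// deltas, flush drains them into one payload (which the real app would
// emit to the room via sendToRoom), or returns null when there is nothing to send.
const changes = [];

function onEdit(change) {
  changes.push(change);
}

function flush() {
  if (changes.length === 0) return null;       // no edits since last tick
  const payload = { type: "delta", changes: changes.slice() };
  changes.length = 0;                          // buffer is drained after sending
  return payload;
}

onEdit({ range: [0, 0], text: "hello" });
const first = flush();   // one batched payload with the pending edit
const second = flush();  // null: nothing accumulated since the last flush
```

Batching like this keeps the message rate bounded by the timer interval rather than by typing speed.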
azure-web-pubsub | Socketio Migrate From Self Hosted | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-migrate-from-self-hosted.md | Title: How to migrate a self-hosted Socket.IO to be fully managed on Azure -description: A tutorial showing how to migrate an Socket.IO chat app to Azure + Title: Migrate a self-hosted Socket.IO app to be fully managed on Azure +description: Learn how to migrate a Socket.IO chat app to Azure. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO Last updated 07/21/2023-# How to migrate a self-hosted Socket.IO app to be fully managed on Azure ->[!NOTE] -> Web PubSub for Socket.IO is in "Private Preview" and is available to selected customers only. To register your interest, please write to us awps@microsoft.com. +# Migrate a self-hosted Socket.IO app to be fully managed on Azure ++In this article, you migrate a Socket.IO chat app to Azure by using Web PubSub for Socket.IO. ## Prerequisites+ > [!div class="checklist"]-> * An Azure account with an active subscription. If you don't have one, you can [create a free accout](https://azure.microsoft.com/free/). -> * Some familiarity with Socket.IO library. +> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/). +> * Some familiarity with the Socket.IO library. ## Create a Web PubSub for Socket.IO resource-Head over to Azure portal and search for `socket.io`. -## Migrate an official Socket.IO sample app -To focus this guide to the migration process, we're going to use a sample chat app provided on [Socket.IO's website](https://github.com/socketio/socket.io/tree/4.6.2/examples/chat). We need to make some minor changes to both the **server-side** and **client-side** code to complete the migration. +1. Go to the [Azure portal](https://portal.azure.com/). +1. Search for **socket.io**, and then select **Web PubSub for Socket.IO**. +1. 
Select a plan, and then select **Create**. ++ :::image type="content" source="./media/socketio-migrate-from-self-hosted/create-resource.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the Azure portal."::: ++## Migrate the app ++For the migration process in this guide, you use a sample chat app provided on [Socket.IO's website](https://github.com/socketio/socket.io/tree/4.6.2/examples/chat). You need to make some minor changes to both the server-side and client-side code to complete the migration. ### Server side-Locate `index.js` in the server-side code. -1. Add package `@azure/web-pubsub-socket.io` +1. Locate `index.js` in the server-side code. ++2. Add the `@azure/web-pubsub-socket.io` package: + ```bash npm install @azure/web-pubsub-socket.io ``` -2. Import package in server code `index.js` +3. Import the package: + ```javascript const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io"); ``` -3. Add configuration so that the server can connect with your Web PubSub for Socket.IO resource. +4. Locate in your server-side code where you created the Socket.IO server, and append useAzureSocketIO(...): + ```javascript- const wpsOptions = { + const io = require("socket.io")(); + useAzureSocketIO(io, { hub: "eio_hub", // The hub name can be any valid string. connectionString: process.argv[2]- }; + }); ```- -4. Locate in your server-side code where Socket.IO server is created and append `.useAzureSocketIO(wpsOptions)`: - ```javascript - const io = require("socket.io")(); - useAzureSocketIO(io, wpsOptions); - ``` ->[!IMPORTANT] -> `useAzureSocketIO` is an asynchronous method. Here we `await`. So you need to wrap it and related code in an asynchronous function. -5. If you use the following server APIs, add `async` before using them as they're asynchronous with Web PubSub for Socket.IO. 
-- [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms)-- [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms)-- [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom)-- [socket.leave](https://socket.io/docs/v4/server-api/#socketleaveroom)+ >[!IMPORTANT] + > The `useAzureSocketIO` method is asynchronous, and it does initialization steps to connect to Web PubSub. You can use `await useAzureSocketIO(...)` or use `useAzureSocketIO(...).then(...)` to make sure your app server starts to serve requests after the initialization succeeds. ++5. If you use the following server APIs, add `async` before using them, because they're asynchronous with Web PubSub for Socket.IO: ++ * [server.socketsJoin](https://socket.io/docs/v4/server-api/#serversocketsjoinrooms) + * [server.socketsLeave](https://socket.io/docs/v4/server-api/#serversocketsleaverooms) + * [socket.join](https://socket.io/docs/v4/server-api/#socketjoinroom) + * [socket.leave](https://socket.io/docs/v4/server-api/#socketleaveroom) ++ For example, if there's code like this: - For example, if there's code like: ```javascript io.on("connection", (socket) => { socket.join("room abc"); }); ```- you should update it to: ++ Update it to: + ```javascript io.on("connection", async (socket) => { await socket.join("room abc"); }); ``` - In this chat example, none of them are used. So no changes are needed. + This chat example doesn't use any of those APIs. So you don't need to make any changes. ++### Client side ++1. Find the endpoint to your resource on the Azure portal. -### Client Side -In client-side code found in `./public/main.js` + :::image type="content" source="./media/socketio-migrate-from-self-hosted/get-resource-endpoint.png" alt-text="Screenshot of getting the endpoint to a Web PubSub for Socket.IO resource."::: +1. Go to `./public/main.js` in the client-side code. 
-Find where Socket.IO client is created, then replace its endpoint with Azure Socket.IO endpoint and add an `path` option. You can find the endpoint to your resource on Azure portal. -```javascript -const socket = io("<web-pubsub-for-socketio-endpoint>", { - path: "/clients/socketio/hubs/eio_hub", -}); -``` +1. Find where the Socket.IO client is created. Replace its endpoint with the Socket.IO endpoint in Azure, and add a `path` option: + ```javascript + const socket = io("<web-pubsub-for-socketio-endpoint>", { + path: "/clients/socketio/hubs/eio_hub", + }); + ``` |
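The important note in this entry — that `useAzureSocketIO` is asynchronous and the server should only start serving after initialization succeeds — can be sketched with a stub. The stub below only models the promise-based initialization; the real function comes from `@azure/web-pubsub-socket.io` and connects to the service.

```javascript
// Stand-in stub for useAzureSocketIO from @azure/web-pubsub-socket.io;
// it models only the promise-based initialization, not the real handshake.
function useAzureSocketIO(io, options) {
  return Promise.resolve(io);
}

// Because initialization is asynchronous, the setup is wrapped in an async
// function so nothing runs before initialization resolves.
async function startServer() {
  const io = { hub: null };                 // stand-in for require("socket.io")()
  await useAzureSocketIO(io, { hub: "eio_hub" });
  io.hub = "eio_hub";                       // runs only after the await resolves
  return io;
}

startServer().then((io) => console.log("server ready:", io.hub));
// prints: server ready: eio_hub
```

The same `await`-before-serve shape applies to the asynchronous room APIs (`socket.join`, `server.socketsJoin`, and so on) listed in the entry.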
azure-web-pubsub | Socketio Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-overview.md | Title: Overview of Web PubSub for Socket.IO -description: An overview of Web PubSub's support for the open-source Socket.IO library +description: Get an overview of Azure Web PubSub support for the open-source Socket.IO library. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO Last updated 07/27/2023-Web PubSub for Socket.IO is a fully managed cloud service for [Socket.IO](https://socket.io/), which is a widely popular open-source library for real-time messaging between clients and server. -Managing stateful and persistent connections between clients and server is often a source of frustration for Socket.IO users. The problem is more acute when there are multiple Socket.IO instances spread across servers. +Web PubSub for Socket.IO is a fully managed cloud offering for [Socket.IO](https://socket.io/). Socket.IO is a widely popular open-source library for real-time messaging between clients and a server. Web PubSub for Socket.IO is a feature of the Azure Web PubSub service. -Web PubSub for Socket.IO removes the burden of deploying, hosting and coordinating Socket.IO instances for developers, allowing development team to focus on building real-time experiences using their familiar APIs provided by Socket.IO library. +Managing stateful and persistent connections between clients and a server is often a source of frustration for Socket.IO users. The problem is more acute when multiple Socket.IO instances are spread across servers. +Web PubSub for Socket.IO removes the burden of deploying, hosting, and coordinating Socket.IO instances for developers. Development teams can then focus on building real-time experiences by using familiar APIs from the Socket.IO library. -## Benefits over hosting Socket.IO app yourself ->[!NOTE] -> - **Socket.IO** refers to the open-source library. 
-> - **Web PubSub for Socket.IO** refers to a fully managed Azure service. +## Benefits over hosting a Socket.IO app yourself -| / | Hosting Socket.IO app yourself | Using Web PubSub for Socket.IO| +The following table shows the benefits of using the fully managed Azure service. ++| Item | Hosting a Socket.IO app yourself | Using Web PubSub for Socket.IO| |||| | Deployment | Customer managed | Azure managed | | Hosting | Customer needs to provision enough server resources to serve and maintain persistent connections | Azure managed |-| Scaling connections | Customer managed by using a server-side component called ["adapter"](https://socket.io/docs/v4/adapter/) | Azure managed with **100k+** client connections out-of-the-box | -| Uptime guarantee | Customer managed | Azure managed with **99.9%+** uptime | -| Enterprise-grade security | Customer managed | Azure managed | -| Ticket support system | N/A | Azure managed | +| Scaling connections | Customer managed by using a server-side component called an [adapter](https://socket.io/docs/v4/adapter/) | Azure managed with more than 100,000 client connections out of the box | +| Uptime guarantee | Customer managed | Azure managed with more than 99.9 percent uptime | +| Enterprise-grade security | Customer managed | Azure managed | +| Ticket support system | Not applicable | Azure managed | -When you host Socket.IO app yourself, clients establish WebSocket or long-polling connections directly with your server. Maintaining such **stateful** connections places a heavy burden to your Socket.IO server, which limits the number of concurrent connections and increases messaging latency. +When you host a Socket.IO app yourself, clients establish WebSocket or long-polling connections directly with your server. Maintaining such *stateful* connections places a heavy burden on your Socket.IO server. This burden limits the number of concurrent connections and increases messaging latency. 
-A common approach to meeting the concurrent and latency challenge is to [scale out to multiple Socket.IO servers](https://socket.io/docs/v4/adapter/). Scaling out requires a server-side component called "adapter" like the Redis adapter provided by Socket.IO library. However, such adapter introduces an extra component you need to deploy and manage on top of writing extra code logic to get things to work properly. +A common approach to meeting the concurrency and latency challenge is to [scale out to multiple Socket.IO servers](https://socket.io/docs/v4/adapter/). Scaling out requires a server-side component called an *adapter*, like the Redis adapter that the Socket.IO library provides. However, such an adapter introduces an extra component that you need to deploy and manage. It also requires you to write extra code logic to get things to work properly. With Web PubSub for Socket.IO, you're freed from handling scaling issues and implementing code logic related to using an adapter. ## Same programming model-To migrate a self-hosted Socket.IO app to Azure, you only need to add a few lines of code with **no need** to change the rest of the application code. In other words, the programming model remains the same and the complexity of managing a real-time app is reduced. ++To migrate a self-hosted Socket.IO app to Azure, you add only a few lines of code. There's no need to change the rest of the application code. In other words, the programming model remains the same, and the complexity of managing a real-time app is reduced. > [!div class="nextstepaction"] > [Quickstart for Socket.IO users](./socketio-quickstart.md) >-> [Quickstart: Mirgrate an self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md) -----------+> [Migrate a self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md) |
azure-web-pubsub | Socketio Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-quickstart.md | Title: Quick start of Web PubBub for Socket.IO -description: A quickstart demonstrating how to use Web PubSub for Socket.IO + Title: 'Quickstart: Incorporate Web PubSub for Socket.IO in your app' +description: In this quickstart, you learn how to use Web PubSub for Socket.IO on an existing Socket.IO app. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO Last updated 08/01/2023 -+ -# Quickstart for Socket.IO users -This quickstart is aimed for existing Socket.IO users. It demontrates how quickly Socket.IO users can incorporate Web PubSub for Socket.IO in their app to simplify development, speed up deployment and achieve scalability without complexity. +# Quickstart: Incorporate Web PubSub for Socket.IO in your app ++This quickstart is for existing Socket.IO users. It demonstrates how quickly you can incorporate Web PubSub for Socket.IO in your app to simplify development, speed up deployment, and achieve scalability without complexity. ++Code shown in this quickstart is in CommonJS. If you want to use an ECMAScript module, see the [chat demo for Socket.IO with Azure Web PubSub](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/chat). ## Prerequisites+ > [!div class="checklist"]-> * An Azure account with an active subscription. If you don't have one, you can [create a free accout](https://azure.microsoft.com/free/). -> * Some familiarity with Socket.IO library. +> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/). +> * Some familiarity with the Socket.IO library. ## Create a Web PubSub for Socket.IO resource-Head over to Azure portal and search for `socket.io`. ++1. Go to the [Azure portal](https://portal.azure.com/). +1. 
Search for **socket.io**, and then select **Web PubSub for Socket.IO**. +1. Select a plan, and then select **Create**. ++ :::image type="content" source="./media/socketio-migrate-from-self-hosted/create-resource.png" alt-text="Screenshot of the Web PubSub for Socket.IO service in the Azure portal."::: ## Initialize a Node project and install required packages+ ```bash mkdir quickstart cd quickstart npm install @azure/web-pubsub-socket.io socket.io-client ``` ## Write server code-1. Import required packages and create a configuration for Web PubSub ++1. Import required packages and create a configuration for Web PubSub: + ```javascript- /* server.js */ + /*server.js*/ const { Server } = require("socket.io"); const { useAzureSocketIO } = require("@azure/web-pubsub-socket.io"); - // Add a Web PubSub Option + // Add a Web PubSub option const wpsOptions = { hub: "eio_hub", // The hub name can be any valid string. connectionString: process.argv[2] || process.env.WebPubSubConnectionString } ``` -2. Create a Socket.IO server supported by Web PubSub for Socket.IO +2. Create a Socket.IO server that Web PubSub for Socket.IO supports: + ```javascript- /* server.js */ + /*server.js*/ let io = new Server(3000); useAzureSocketIO(io, wpsOptions); ``` -3. Write server logic +3. Write server logic: + ```javascript- /* server.js */ + /*server.js*/ io.on("connection", (socket) => {- // send a message to the client + // Sends a message to the client socket.emit("hello", "world"); - // receive a message from the client + // Receives a message from the client socket.on("howdy", (arg) => {- console.log(arg); // prints "stranger" + console.log(arg); // Prints "stranger" }) }); ``` ## Write client code-1. Create a Socket.IO client ++1. 
Create a Socket.IO client: + ```javascript- /* client.js */ + /*client.js*/ const io = require("socket.io-client"); const webPubSubEndpoint = process.argv[2] || "<web-pubsub-socketio-endpoint>"; npm install @azure/web-pubsub-socket.io socket.io-client }); ``` -2. Define the client behavior +2. Define the client behavior: + ```javascript- /* client.js */ + /*client.js*/ // Receives a message from the server socket.on("hello", (arg) => { npm install @azure/web-pubsub-socket.io socket.io-client ``` ## Run the app-1. Run the server app ++1. Run the server app: + ```bash node server.js "<web-pubsub-connection-string>" ``` -2. Run the client app in another terminal +2. Run the client app in another terminal: + ```bash node client.js "<web-pubsub-endpoint>" ```--Note: Code shown in this quickstart is in CommonJS. If you'd like to use ES Module, please refer to [quickstart-esm](https://github.com/Azure/azure-webpubsub/tree/main/experimental/sdk/webpubsub-socketio-extension/examples/chat). |
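The `connectionString: process.argv[2] || process.env.WebPubSubConnectionString` fallback in the quickstart's server code can be isolated as a small helper; this is illustrative only, with no real connection string involved.

```javascript
// Sketch of the fallback used for connectionString in server.js above:
// prefer the CLI argument, then the environment variable, else null.
function resolveConnectionString(argv, env) {
  return argv[2] || env.WebPubSubConnectionString || null;
}

console.log(resolveConnectionString(["node", "server.js", "from-cli"], {})); // from-cli
console.log(
  resolveConnectionString(["node", "server.js"], { WebPubSubConnectionString: "from-env" })
); // from-env
```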
azure-web-pubsub | Socketio Service Internal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-service-internal.md | Title: Service internal - how does Web PubSub support Socket.IO library -description: An article explaining how Web PubSub supports Socket.IO library + Title: How does Azure Web PubSub support the Socket.IO library? +description: This article explains how Azure Web PubSub supports the Socket.IO library. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO Last updated 08/1/2023-# Service internal - how does Web PubSub support Socket.IO library +# How does Azure Web PubSub support the Socket.IO library? -> [!NOTE] -> This article peels back the curtain from an engieerning perspective of how self-hosted Socket.IO apps can migrate to Azure with minimal code change to simplify app architecture and deployment, while achieving 100 K+ concurrent connections out-of-the-box. It's not necessary to understand everything in this article to use Web PubSub for Socket.IO effectively. +This article provides an engineering perspective on how you can migrate self-hosted Socket.IO apps to Azure by using Web PubSub for Socket.IO with minimal code changes. You can then take advantage of simplified app architecture and deployment, while achieving 100,000 concurrent connections. You don't need to understand everything in this article to use Web PubSub for Socket.IO effectively. -## A typical architecture of a self-hosted Socket.IO app +## Architecture of a self-hosted Socket.IO app -The diagram shows a typical architecture of a self-hosted Socket.IO app. To ensure that an app is scalable and reliable, Socket.IO users often have an architecture involving multiple Socket.IO servers. Client connections are distributed among Socket.IO servers to balance load on the system. 
A setup of multiple Socket.IO servers introduces the challenge when developers need to send the same message to clients connected to different server. This use case is often referred to as "broadcasting messages" by developers. +The following diagram shows a typical architecture of a self-hosted Socket.IO app. -The official recommendation from Soket.IO library is to introduce a server-side component called ["adapter"](https://socket.io/docs/v4/using-multiple-nodes/)to coordinate Socket.IO servers. What an adapter does is to figure out which servers clients are connected to and instruct those servers to send messages. -Adding an adapter component introduces complexity to both development and deployment. For example, if the [Redis adapter](https://socket.io/docs/v4/redis-adapter/) is used, it means developers need to -- implement sticky session-- deploy and maintain Redis instance(s)+To ensure that an app is scalable and reliable, Socket.IO users often have an architecture that involves multiple Socket.IO servers. Client connections are distributed among Socket.IO servers to balance load on the system. -The engineering effort and time of getting a real-time communication channel in place distracts developers from working on features that make an app or system unique and valuable to end users. +A setup of multiple Socket.IO servers introduces a challenge when developers need to send the same message to clients that are connected to a different server. Developers often refer to this use case as "broadcasting messages." ++The official recommendation from the Socket.IO library is to introduce a server-side component called an [adapter](https://socket.io/docs/v4/using-multiple-nodes/) to coordinate Socket.IO servers. An adapter figures out which servers the clients are connected to and instructs those servers to send messages. ++Adding an adapter component introduces complexity to both development and deployment. 
For example, if an architecture uses the [Redis adapter](https://socket.io/docs/v4/redis-adapter/), developers need to: ++- Implement sticky sessions. +- Deploy and maintain Redis instances. ++The engineering effort and time in getting a real-time communication channel in place distracts developers from working on features that make an app or system unique and valuable to users. ## What Web PubSub for Socket.IO aims to solve for developers-Although setting up a reliable and scalable app built with Socket.IO library is often reported as challenging by developers, developers **enjoy** the intuitive APIs offered and the wide range of clients the library supports. Web PubSub for Socket.IO builds on the values the library brings, while relieving developers the complexity of managing persistent connections reliably and at scale. -In practice, developers can continue using the APIs offered by Socket.IO library, but don't need to provision server resources to maintain WebSocket or long-polling based connections, which can be resource intensive. Also, developers don't need to manage and deploy an "adapter" component. The app server only needs to send a **single** operation and the Web PubSub for Socket.IO broadcasts the messages to relevant clients. +Although developers often report that setting up a reliable and scalable app that's built with the Socket.IO library is challenging, developers can benefit from the intuitive APIs and the wide range of clients that the library supports. Web PubSub for Socket.IO builds on the value that the library brings, while relieving developers of the complexity in managing persistent connections reliably and at scale. -## How does it work under the hood? -Web PubSub for Socket.IO builds upon Socket.IO protocols by implementing the Adapter and Engine.IO. The diagram describes the typical architecture when you use the Web PubSub for Socket.IO with your Socket.IO server. 
+In practice, developers can continue to use the Socket.IO library's APIs without needing to provision server resources to maintain WebSocket or long-polling-based connections, which can be resource intensive. Also, developers don't need to manage and deploy an adapter component. The app server needs to send only a single operation, and Web PubSub for Socket.IO broadcasts the messages to relevant clients. ++## How it works ++Web PubSub for Socket.IO builds on Socket.IO protocols by implementing the adapter and Engine.IO. The following diagram shows the typical architecture when you use Web PubSub for Socket.IO with your Socket.IO server. :::image type="content" source="./media/socketio-service-internal/typical-architecture-managed-socketio.jpg" alt-text="Screenshot of a typical architecture of a fully managed Socket.IO app."::: -Like a self-hosted Socket.IO app, you still need to host your Socket.IO application logic on your own server. However, with Web PubSub for Socket.IO**(the service)**, your server no longer manages client connections directly. -- **Your clients** establish persistent connections with the service, which we call "client connections". -- **Your servers** also establish persistent connections with the service, which we call "server connections". +Like a self-hosted Socket.IO app, you still need to host your Socket.IO application logic on your own server. However, with the Web PubSub for Socket.IO service: ++- Your server no longer manages client connections directly. +- Your clients establish persistent connections with the service (*client connections*). +- Your servers also establish persistent connections with the service (*server connections*). -When your server logic uses `send to client`, `broadcast`, and `add client to rooms`, these operations are sent to the service through established server connection. Messages from your server are translated to Socket.IO operations that Socket.IO clients can understand. 
As a result, any existing Socket.IO implementation can work without modification. The only modification needed is to change the endpoint your clients connect to. Refer to this article of [how to migrate a self-hosted Socket.IO app to Azure](./socketio-migrate-from-self-hosted.md). +When your server logic uses `send to client`, `broadcast`, and `add client to rooms`, these operations are sent to the service through an established server connection. Messages from your server are translated to Socket.IO operations that Socket.IO clients can understand. As a result, any existing Socket.IO implementation can work without major modifications. The only modification that you need to make is to change the endpoint that your clients connect to. For more information, see [Migrate a self-hosted Socket.IO app to be fully managed on Azure](./socketio-migrate-from-self-hosted.md). -When a client connects to the service, the service -- forwards Engine.IO connection `connect` to the server-- handles transport upgrade of client connections -- forwards all Socket.IO messages to server+When a client connects to the service, the service: +- Forwards the Engine.IO connection (`connect`) to the server. +- Handles the transport upgrade of client connections. +- Forwards all Socket.IO messages to the server. |
azure-web-pubsub | Socketio Supported Server Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-supported-server-apis.md | Title: Supported server APIs of Socket.IO -description: An article listing out Socket.IO server APIs that are partially supported or unsupported by Web PubSub for Socekt.IO +description: This article lists Socket.IO server APIs that are partially supported or unsupported in Web PubSub for Socket.IO. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO APIs Last updated 07/27/2023 -# Server APIs supported by Web PubSub for Socket.IO +# Supported server APIs of Socket.IO -Socket.IO library provides a set of [server API](https://socket.io/docs/v4/server-api/). -Note the following server APIs that are partially supported or unsupported by Web PubSub for Socket.IO. +The Socket.IO library provides a set of [server APIs](https://socket.io/docs/v4/server-api/). The following server APIs are partially supported or unsupported by Web PubSub for Socket.IO. | Server API | Support | |--|-| | [fetchSockets](https://socket.io/docs/v4/server-api/#serverfetchsockets) | Local only | | [serverSideEmit](https://socket.io/docs/v4/server-api/#serverserversideemiteventname-args) | Unsupported | | [serverSideEmitWithAck](https://socket.io/docs/v4/server-api/#serverserversideemitwithackeventname-args) | Unsupported |--Apart from the mentioned server APIs, all other server APIs from Socket.IO are fully supported. |
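Since `fetchSockets` is listed as supported local-only, one way to keep the query explicitly scoped to the current server is the Socket.IO `local` flag (`io.local.fetchSockets()`). The `io` object below is a stub standing in for a real Socket.IO server so the shape of the call can be shown without the service.

```javascript
// Stub standing in for a Socket.IO server; only the local fetchSockets
// query used below is modeled. With a real server this would be
// require("socket.io")() and io.local.fetchSockets().
const io = {
  local: {
    fetchSockets: async () => [{ id: "socket-1" }, { id: "socket-2" }],
  },
};

(async () => {
  const sockets = await io.local.fetchSockets(); // query this server only
  console.log(sockets.map((s) => s.id)); // [ 'socket-1', 'socket-2' ]
})();
```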
azure-web-pubsub | Socketio Troubleshoot Common Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-common-issues.md | Title: How to troubleshoot Socket.IO common issues -description: Learn how to troubleshoot Socket.IO common issues + Title: Troubleshoot Socket.IO common problems +description: Learn how to troubleshoot common problems with the Socket.IO library and the Azure Web PubSub service. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO issues Last updated 08/01/2023 -# Troubleshooting for common issues +# Troubleshoot Socket.IO common problems -Web PubSub for Socket.IO builds on Socket.IO library. When you use this Azure service, issues may lie with Socket.IO library itself or the service. +Web PubSub for Socket.IO builds on the Socket.IO library. When you're using the Azure service, problems might lie with the service or with the library. -## Issues with Socket.IO library +To find the origin of problems, you can isolate the Socket.IO library by temporarily removing Web PubSub for Socket.IO from your application. If the application works as expected after the removal, the root cause is probably with the Azure service. -To determine if the issues are with Socket.IO library, you can isolate it by temporarily removing Web PubSub for Socket.IO from your application. If the application works as expected after the removal, the root cause is probably with the Azure service. +Use this article to find solutions to common problems with the service. Additionally, you can [enable logging on the server side](./socketio-troubleshoot-logging.md#server-side) to examine the behavior of your Socket.IO app, if none of the listed solutions help. -If you suspect the issues are with Socket.IO library, refer to [Socket.IO library's documentation](https://socket.io/docs/v4/troubleshooting-connection-issues/) for common connection issues. 
+If you suspect that the problems are with the Socket.IO library, refer to the [Socket.IO library's documentation](https://socket.io/docs/v4/troubleshooting-connection-issues/). -## Issues with Web PubSub for Socket.IO -If you suspect that the issues are with the Azure service after investigation, take a look at the list of common issues. +## Server side -Additionally, you can [enable logging on the server side](./socketio-troubleshoot-logging.md#server-side) to examine closely the behavior of your Socket.IO app, if none of the listed issues helps. +### Improper package import -### Server side +#### Possible error -#### `useAzureSocketIO is not a function` -##### Possible error -- `TypeError: (intermediate value).useAzureSocketIO is not a function`+`TypeError: (intermediate value).useAzureSocketIO is not a function` -##### Root cause -If you use TypeScript in your project, you may observe this error. It's due to the improper package import. +#### Root cause ++If you use TypeScript in your project, you might observe this error. It's due to improper package import. ```typescript // Bad example import * as wpsExt from "@azure/web-pubsub-socket.io" ```-If a package isn't used or referenced after importing, the default behavior of TypeScript compiler is not to emit the package in the compiled `.js` file. -##### Solution -Use `import "@azure/web-pubsub-socket.io"`, instead. This import statement forces TypeScript compiler to include a package in the compiled `.js` file even if the package isn't referenced anywhere in the source code. [Read more](https://github.com/Microsoft/TypeScript/wiki/FAQ#why-are-imports-being-elided-in-my-emit)about this frequently asked question from the TypeScript community. +If a package isn't used or referenced after importing, the default behavior of the TypeScript compiler is not to emit the package in the compiled *.js* file. ++#### Solution ++Use `import "@azure/web-pubsub-socket.io"` instead. 
This import statement forces the TypeScript compiler to include a package in the compiled *.js* file, even if the package isn't referenced anywhere in the source code. [Read more](https://github.com/Microsoft/TypeScript/wiki/FAQ#why-are-imports-being-elided-in-my-emit) about this frequently asked question from the TypeScript community. + ```typescript // Good example. -// It forces TypeScript to include the package in compiled `.js` file. +// It forces TypeScript to include the package in compiled .js file. import "@azure/web-pubsub-socket.io" ``` -### Client side +## Client side ++### Incorrect path option ++#### Possible error ++`GET <web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found -#### `404 Not Found in client side with AWPS endpoint` -##### Possible Error - `GET <web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found +#### Root cause ++The Socket.IO client was created without a correct `path` option. -##### Root cause -Socket.IO client is created without a correct `path` option. ```javascript // Bad example const socket = io(endpoint) ``` -##### Solution -Add the correct `path` option with value `/clients/socketio/hubs/eio_hub` +#### Solution ++Add the correct `path` option with the value `/clients/socketio/hubs/eio_hub`. + ```javascript // Good example const socket = io(endpoint, { const socket = io(endpoint, { }); ``` -#### `404 Not Found in client side with non-AWPS endpoint` +### Incorrect Web PubSub for Socket.IO endpoint ++#### Possible error -##### Possible Error - `GET <non-web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found +`GET <non-web-pubsub-endpoint>/socket.io/?EIO=4&transport=polling&t=OcmE4Ni` 404 Not Found -##### Root cause -Socket.IO client is created without correct Web PubSub for Socket.IO endpoint. For example, +#### Root cause ++The Socket.IO client was created without a correct Web PubSub for Socket.IO endpoint. 
For example: ```javascript // Bad example. const socket = io(endpoint, { }); ``` -When you use Web PubSub for Socket.IO, your clients establish connections with an Azure service. When creating a Socket.IO client, you need use the endpoint to your Web PubSub for Socket.IO resource. +When you use Web PubSub for Socket.IO, your clients establish connections with an Azure service. When you create a Socket.IO client, you need to use the endpoint for your Web PubSub for Socket.IO resource. ++#### Solution -##### Solution -Let Socket.IO client use the endpoint of your Web PubSub for Socket.IO resource. +Let the Socket.IO client use the endpoint for your Web PubSub for Socket.IO resource. ```javascript // Good example. const webPubSubEndpoint = "<web-pubsub-endpoint>"; const socket = io(webPubSubEndpoint, { path: "/clients/socketio/hubs/<Your hub name>", });-``` +``` |
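The two 404 fixes in this entry share one shape: the client must target the Web PubSub endpoint and pass the hub path option. A small helper sketch, where the endpoint and hub name are placeholders:

```javascript
// Sketch: both 404 causes above are avoided by targeting the Web PubSub
// endpoint AND passing the hub path. Values below are placeholders.
function clientTarget(endpoint, hub) {
  return { endpoint, path: `/clients/socketio/hubs/${hub}` };
}

const target = clientTarget("https://contoso.webpubsub.azure.com", "eio_hub");
console.log(target.endpoint + target.path);
// prints: https://contoso.webpubsub.azure.com/clients/socketio/hubs/eio_hub
```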
azure-web-pubsub | Socketio Troubleshoot Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-troubleshoot-logging.md | Title: How to collect logs in Azure Socket.IO -description: This article explains how to collect logs when using Web PubSub for Socket.IO + Title: Collect logs in Web PubSub for Socket.IO +description: This article explains how to collect logs when you're using Web PubSub for Socket.IO. +keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO logging, Socket.IO debugging Last updated 08/01/2023 -# How to collect logs using Web PubSub for Socket.IO +# Collect logs in Web PubSub for Socket.IO -Like when you self-host Socket.IO library, you can collect logs on both the server and client side when you use Web PubSub for Socket.IO. +Like when you self-host the Socket.IO library, you can collect logs on both the server and client side when you use Web PubSub for Socket.IO. -## Server-side -On the server-side, two utilities are included that provide debugging -capabilities. -- [DEBUG](https://github.com/debug-js/debug), which is used by Socket.IO library and extension library provided by Web PubSub for certain logging.-- [@azure/logger](https://www.npmjs.com/package/@azure/logger), which provides more low-level network-related logging. Conveniently, it also allows you to set a log level. +## Server side ++The server side includes two utilities that provide debugging capabilities: ++- [DEBUG](https://github.com/debug-js/debug), which the Socket.IO library and extension library provided by Web PubSub use for certain logging. +- [@azure/logger](https://www.npmjs.com/package/@azure/logger), which provides lower-level network-related logging. Conveniently, it also allows you to set a log level. 
### `DEBUG` JavaScript utility -#### Logs all debug information +#### Log all debug information + ```bash DEBUG=* node yourfile.js ``` -#### Logs debug information of specific packages. +#### Log the debug information from specific packages + ```bash-# Logs debug information of "socket.io" package +# Logs debug information from the "socket.io" package DEBUG=socket.io:* node yourfile.js -# Logs debug information of "engine.io" package +# Logs debug information from the "engine.io" package DEBUG=engine:* node yourfile.js -# Logs debug information of extention library "wps-sio-ext" provided by Web PubSub +# Logs debug information from the extension library "wps-sio-ext" provided by Web PubSub DEBUG=wps-sio-ext:* node yourfile.js -# Logs debug information of mulitple packages +# Logs debug information from multiple packages DEBUG=engine:*,socket.io:*,wps-sio-ext:* node yourfile.js ```+ :::image type="content" source="./media/socketio-troubleshoot-logging/log-debug.png" alt-text="Screenshot of logging information from DEBUG JavaScript utility"::: ### `@azure/logger` utility-You can enable logging from this utility to get more low-level network-related information by setting the environmental variable `AZURE_LOG_LEVEL`. ++You can enable logging from the `@azure/logger` utility to get lower-level network-related information by setting the environment variable `AZURE_LOG_LEVEL`. ```bash AZURE_LOG_LEVEL=verbose node yourfile.js ``` -`Azure_LOG_LEVEL` has four levels: `verbose`, `info`, `warning` and `error`. +`AZURE_LOG_LEVEL` has four levels: `verbose`, `info`, `warning`, and `error`. + ## Client side-Using Web PubSub for Socket.IO doesn't change how you debug Socket.IO library. [Refer to the documentation](https://socket.io/docs/v4/logging-and-debugging/) from Socket.IO library. -### Debug Socket.IO client in Node +Using Web PubSub for Socket.IO doesn't change how you debug the Socket.IO library.
[Refer to the documentation](https://socket.io/docs/v4/logging-and-debugging/) from the Socket.IO library. ++### Debug the Socket.IO client in Node + ```bash # Logs all debug information DEBUG=* node yourfile.js -# Logs debug information from "socket.io-client" package +# Logs debug information from the "socket.io-client" package DEBUG=socket.io-client:* node yourfile.js -# Logs debug information from "engine.io-client" package +# Logs debug information from the "engine.io-client" package DEBUG=engine.io-client:* node yourfile.js # Logs debug information from multiple packages DEBUG=socket.io-client:*,engine.io-client:* node yourfile.js ``` -### Debug Socket.IO client in browser -In browser, use `localStorage.debug = '<scope>'`. +### Debug the Socket.IO client in a browser ++In a browser, use `localStorage.debug = '<scope>'`. ```bash # Logs all debug information localStorage.debug = '*'; -# Logs debug information from "socket.io-client" package +# Logs debug information from the "socket.io-client" package localStorage.debug = 'socket.io-client'; ``` |
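The `DEBUG` patterns shown in the row above select logger namespaces by wildcard matching. As a rough illustration of that selection (a simplified sketch for this digest, not the actual `debug` package implementation — the real package also supports comma-separated lists and `-`-prefixed exclusions):

```javascript
// Escape regex metacharacters so everything except "*" matches literally.
const escapeLiteral = (s) => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&");

// Returns true when a logger namespace is selected by a single DEBUG pattern.
function matches(pattern, namespace) {
  // "*" is the only wildcard; it expands to ".*" in the compiled regex.
  const re = new RegExp("^" + pattern.split("*").map(escapeLiteral).join(".*") + "$");
  return re.test(namespace);
}

console.log(matches("socket.io:*", "socket.io:client")); // true
console.log(matches("engine:*", "socket.io:client"));    // false
```

So `DEBUG=socket.io:*` enables every namespace the Socket.IO server library creates under the `socket.io:` prefix, while leaving `engine:` and `wps-sio-ext:` namespaces silent.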
backup | About Restore Microsoft Azure Recovery Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/about-restore-microsoft-azure-recovery-services.md | Title: Restore options with Microsoft Azure Recovery Services (MARS) agent description: Learn about the restore options available with the Microsoft Azure Recovery Services (MARS) agent. Previously updated : 05/07/2021 Last updated : 08/14/2023 Using the MARS agent you can: - **[Restore all backed up files in a volume](restore-all-files-volume-mars.md):** This option recovers all backed up data in a specified volume from the recovery point in Azure Backup. It allows a faster transfer speed (up to 40 MBps).<br>We recommend using this option to recover large amounts of data, or entire volumes. - **[Restore a specific set of backed up files and folders in a volume using PowerShell](backup-client-automation.md#restore-data-from-azure-backup):** If the paths to the files and folders relative to the volume root are known, this option allows you to restore the specified set of files and folders from a recovery point, using the faster transfer speed of the full volume restore. However, this option doesn't provide the convenience of browsing files and folders in the recovery point using the Instant Restore option. - **[Restore individual files and folders using Instant Restore](backup-azure-restore-windows-server.md):** This option allows quick access to the backup data by mounting the volume in the recovery point as a drive. You can then browse and copy files and folders. This option offers a copy speed of up to 6 MBps, which is suitable for recovering individual files and folders of total size less than 80 GB.
Once the required files are copied, you can unmount the recovery point.+- **Cross Region Restore for MARS (preview)**: If your Recovery Services vault uses GRS resiliency and has the [Cross Region Restore setting turned on](backup-create-recovery-services-vault.md#set-cross-region-restore), you can restore the backup data from the secondary region. ++## Cross Region Restore (preview) ++Cross Region Restore (CRR) allows you to restore MARS backup data from a secondary region, which is an Azure paired region. This enables you to conduct drills for audit and compliance, and to recover data when the primary region in Azure is unavailable in the event of a disaster. ++To use this feature: ++1. [Turn on Cross Region Restore in your Recovery Services vault](backup-create-rs-vault.md#set-cross-region-restore). Once Cross Region Restore is enabled, you can't disable it. +2. After you turn on the feature, it can take up to *48 hours* for the backup items to be available in the secondary region. Currently, the secondary region RPO is *36 hours*, because the RPO in the primary region is *24 hours*, and it can take up to *12 hours* to replicate the backup data from the primary to the secondary region. +3. To restore the backup data for the original machine, you can directly select **Secondary Region** as the source of the backup data in the wizard. ++ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection for secondary region as the backup data source during Cross Region Restore."::: ++4. To restore backup data for an alternate server from the secondary region, you need to download the *Secondary Region vault credential* from the Azure portal. ++ :::image type="content" source="./media/about-restore-microsoft-azure-recovery-services/download-vault-credentials-for-cross-region-restore.png" alt-text="Screenshot shows how to download vault credentials for secondary region."::: ++5.
To automate recovery from the secondary region for audit or compliance drills, [use this command](backup-client-automation.md#cross-region-restore). ++>[!Note] +>- Recovery Services vaults with a private endpoint are currently not supported for Cross Region Restore with MARS. +>- Recovery Services vaults enabled with Cross Region Restore will be automatically charged at RA-GRS rates for the MARS backups stored in the vault once the feature is generally available. ## Next steps -- For more frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml).+- For additional frequently asked questions, see [MARS agent FAQs](backup-azure-file-folder-backup-faq.yml). - For information about supported scenarios and limitations, see [Support Matrix for the backup with the MARS agent](backup-support-matrix-mars-agent.md). |
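The secondary-region RPO stated in the row above follows from simple addition. A quick sketch of the worst case, using only the figures given in the article:

```javascript
// Worst-case secondary-region RPO for MARS Cross Region Restore:
// data can already be up to 24 hours old in the primary region (primary RPO),
// plus up to 12 hours of replication lag from the primary to the secondary region.
const primaryRpoHours = 24;
const maxReplicationLagHours = 12;
const secondaryRpoHours = primaryRpoHours + maxReplicationLagHours;

console.log(secondaryRpoHours); // 36
```

In other words, a recovery point restored from the secondary region can lag the live data by up to 36 hours even though the primary-region RPO is only 24 hours.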
backup | Azure Kubernetes Service Cluster Backup Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-concept.md | Title: Azure Kubernetes Service (AKS) backup using Azure Backup prerequisites description: This article explains the prerequisites for Azure Kubernetes Service (AKS) backup. Previously updated : 07/27/2023 Last updated : 08/17/2023 Azure Backup now allows you to back up AKS clusters (cluster resources and persi - Extension agent and extension operator are the core platform components in AKS, which are installed when an extension of any type is installed for the first time in an AKS cluster. They provide the capabilities to deploy first-party (*1P*) and third-party (*3P*) extensions. The backup extension also relies on these for installation and upgrades. -- Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster.---+ >[!Note] + >Both of these core components are deployed with aggressive hard limits on CPU and memory, with CPU *less than 0.5% of a core* and memory limit ranging from *50-200 MB*. So, the *COGS impact* of these components is very low. Because they are core platform components, there is no workaround available to remove them once installed in the cluster. Learn [how to manage the operation to install Backup Extension using Azure CLI](azure-kubernetes-service-cluster-manage-backups.md#backup-extension-related-operations). |
backup | Azure Kubernetes Service Cluster Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-backup-support-matrix.md | Title: Azure Kubernetes Service (AKS) backup support matrix description: This article provides a summary of support settings and limitations of Azure Kubernetes Service (AKS) backup. Previously updated : 03/27/2023 Last updated : 08/17/2023 AKS backup is available in all the Azure public cloud regions: East US, North Eu ## Limitations -- AKS backup supports AKS clusters with Kubernetes version 1.21.1 or later. This version has Container Storage Interface (CSI) drivers installed.+- AKS backup supports AKS clusters with Kubernetes version *1.22* or later. This version has Container Storage Interface (CSI) drivers installed. -- A CSI driver supports performing backup and restore operations for persistent volumes.+- Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot are enabled for your cluster. If they're disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster). -- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). If you're using Azure Files shares and Azure Blob Storage persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [About Azure file share backup](azure-file-share-backup-overview.md) and [Overview of Azure Blob Storage backup](blob-backup-overview.md).+- AKS backups don't support in-tree volumes. You can back up only CSI driver-based volumes. You can [migrate from tree volumes to CSI driver-based persistent volumes](../aks/csi-migrate-in-tree-volumes.md). -- AKS backups don't support tree volumes. You can back up only CSI driver-based volumes. 
You can [migrate from tree volumes to CSI driver-based persistent volumes](../aks/csi-migrate-in-tree-volumes.md).+- Currently, an AKS backup supports only the backup of Azure disk-based persistent volumes (enabled by the CSI driver). Also, these persistent volumes should be dynamically provisioned; static volumes aren't supported. -- Before you install the backup extension in an AKS cluster, ensure that the CSI drivers and snapshot are enabled for your cluster. If they're disabled, [enable these settings](../aks/csi-storage-drivers.md#enable-csi-storage-drivers-on-an-existing-cluster).+- Azure Files shares and Azure Blob Storage persistent volumes are currently not supported by AKS backup due to the lack of CSI driver-based snapshotting capability. If you're using these persistent volumes in your AKS clusters, you can configure backups for them via the Azure Backup solutions. For more information, see [Azure file share backup](azure-file-share-backup-overview.md) and [Azure Blob Storage backup](blob-backup-overview.md). ++- Any unsupported persistent volume type is skipped while a backup is being created for the AKS cluster. -- The backup extension uses the AKS cluster's managed system identity to perform backup operations. So, an AKS backup doesn't support AKS clusters that use a service principal. You can [update your AKS cluster to use a managed system identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster).+- The backup extension uses the AKS cluster's system identity to do the backup operations. Currently, AKS clusters using a User Identity or a Service Principal aren't supported. If your AKS cluster uses a Service Principal, you can [update your AKS cluster to use a System Identity](../aks/use-managed-identity.md#enable-managed-identities-on-an-existing-aks-cluster). - You must install the backup extension in the AKS cluster.
If you're using Azure CLI to install the backup extension, ensure that the version is 2.41 or later. Use the `az upgrade` command to upgrade the Azure CLI. |
backup | Backup Azure About Mars | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-about-mars.md | Title: About the MARS Agent description: Learn how the MARS Agent supports the backup scenarios Previously updated : 11/28/2022 Last updated : 08/18/2023 -In this article, you'll learn about: --> [!div class="checklist"] -> - Backup scenarios -> - Recovery scenarios -> - Backup process - ## Backup scenarios The MARS agent supports the following backup scenarios: The MARS agent supports the following recovery scenarios: | Server | Recovery scenario | Description | | | | |-| **Same Server** | | The server on which the backup was originally created. | +| **Same Server** | | Server on which the backup was originally created. | | | **Files and Folders** | Choose the individual files and folders that you want to restore. | | | **Volume Level** | Choose the volume and recovery point that you want to restore, and then restore it to the same location or an alternate location on the same machine. Create a copy of existing files, overwrite existing files, or skip recovering existing files. | | | **System Level** | Choose the system state and recovery point to restore to the same machine at a specified location. | The MARS agent supports the following recovery scenarios: ## Backup process 1. From the Azure portal, create a [Recovery Services vault](install-mars-agent.md#create-a-recovery-services-vault), and choose files, folders, and the system state from the **Backup goals**.-2. [Download the Recovery Services vault credentials and agent installer](./install-mars-agent.md#download-the-mars-agent) to an on-premises machine. --3. [Install the agent](./install-mars-agent.md#install-and-register-the-agent) and use the downloaded vault credentials to register the machine to the Recovery Services vault. -4. 
From the agent console on the client, [configure the backup](./backup-windows-with-mars-agent.md#create-a-backup-policy) to specify what to back up, when to back up (the schedule), how long the backups should be retained in Azure (the retention policy) and start protecting. +2. [Configure your Recovery Services vault to securely save the backup passphrase to Azure Key Vault](save-backup-passphrase-securely-in-azure-key-vault.md). +3. [Download the Recovery Services vault credentials and agent installer](./install-mars-agent.md#download-the-mars-agent) to an on-premises machine. +4. [Install the agent](./install-mars-agent.md#install-and-register-the-agent) and use the downloaded vault credentials to register the machine to the Recovery Services vault. +5. From the agent console on the client, [configure the backup](./backup-windows-with-mars-agent.md#create-a-backup-policy) to specify what to back up, when to back up (the schedule), how long the backups should be retained in Azure (the retention policy), and start protecting. The following diagram shows the backup flow: |
backup | Backup Azure Arm Restore Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md | Title: Restore VMs by using the Azure portal description: Restore an Azure virtual machine from a recovery point by using the Azure portal, including the Cross Region Restore feature. Previously updated : 07/13/2023 Last updated : 08/21/2023 Azure Backup provides several ways to restore a VM. **Restore disk** | Restores a VM disk, which can then be used to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings, and create a VM.<br/><br/> The disks are copied to the Resource Group you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured using the template or PowerShell. **Replace existing** | You can restore a disk, and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. If it's been deleted, this option can't be used.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk. The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> When you choose a Vault-Standard recovery point, a VHD file with the content of the chosen recovery point is also created in the staging location you specify. Existing disks connected to the VM are replaced with the selected restore point. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. 
<br/><br/>Replace existing is supported for unencrypted managed VMs, including VMs [created using custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's unsupported for classic VMs, unmanaged VMs, and [generalized VMs](../virtual-machines/windows/upload-generalized-managed.md).<br/><br/> If the restore point has more or fewer disks than the current VM, then the number of disks in the restore point will only reflect the VM configuration.<br><br> Replace existing is also supported for VMs with linked resources, like [user-assigned managed-identity](../active-directory/managed-identities-azure-resources/overview.md) or [Key Vault](../key-vault/general/overview.md). **Cross Region (secondary region)** | Cross Region restore can be used to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> During the backup, snapshots aren't replicated to the secondary region. Only the data stored in the vault is replicated. So secondary region restores are only [vault tier](about-azure-vm-restore.md#concepts) restores. The restore time for the secondary region will be almost the same as the vault tier restore time for the primary region. <br><br> This feature is available for the options below:<br><br> - [Create a VM](#create-a-vm) <br> - [Restore Disks](#restore-disks) <br><br> We don't currently support the [Replace existing disks](#replace-existing-disks) option.<br><br> Permissions<br> The restore operation on the secondary region can be performed by Backup Admins and App admins.-**Cross Subscription Restore (preview)** | Allows you to restore Azure Virtual Machines or disks to any subscription (as per the Azure RBAC capabilities) from restore points.
<br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed), [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup). -**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy). <br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [Trusted Launch VMs](backup-support-matrix-iaas.md#tvm-backup). 
+**Cross Subscription Restore** | Allows you to restore Azure Virtual Machines or disks to a different subscription within the same tenant as the source subscription (as per the Azure RBAC capabilities) from restore points. <br><br> Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> Works with [Cross Region Restore](backup-azure-arm-restore-vms.md#cross-region-restore) and [Cross Zonal Restore](backup-azure-arm-restore-vms.md#create-a-vm). <br><br> You can trigger Cross Subscription Restore for managed virtual machines only. <br><br> Cross Subscription Restore is supported for [Restore with Managed System Identities (MSI)](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities). <br><br> It's unsupported for [snapshots tier](backup-azure-vms-introduction.md#snapshot-creation) recovery points. <br><br> It's unsupported for [unmanaged VMs](#restoring-unmanaged-vms-and-disks-as-managed) and [ADE encrypted VMs](backup-azure-vms-encryption.md#encryption-support-using-ade). +**Cross Zonal Restore** | Allows you to restore Azure Virtual Machines or disks pinned to any zone to different available zones (as per the Azure RBAC capabilities) from restore points. Note that when you select a zone to restore, it selects the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (and not the physical zone) based on the Azure subscription you'll use for the restore. <br><br> You can trigger Cross Zonal Restore for managed virtual machines only. <br><br> Cross Zonal Restore is supported for [Restore with Managed System Identities (MSI)](#restore-vms-with-managed-identities). <br><br> Cross Zonal Restore supports restore of an Azure zone pinned/non-zone pinned VM from a vault with Zonal-redundant storage (ZRS) enabled. Learn [how to set Storage Redundancy](backup-create-rs-vault.md#set-storage-redundancy).
<br><br> It's supported to restore an Azure zone pinned VM only from a [vault with Cross Region Restore (CRR)](backup-create-rs-vault.md#set-storage-redundancy) (if the secondary region supports zones) or Zone Redundant Storage (ZRS) enabled. <br><br> Cross Zonal Restore is supported from [secondary regions](#restore-in-secondary-region). <br><br> It's unsupported from [snapshots](backup-azure-vms-introduction.md#snapshot-creation) restore point. <br><br> It's unsupported for [Encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups). >[!Tip] >To receive alerts/notifications when a restore operation fails, use [Azure Monitor alerts for Azure Backup](backup-azure-monitoring-built-in-monitor.md#azure-monitor-alerts-for-azure-backup). This helps you to monitor such failures and take necessary actions to remediate the issues. |
backup | Backup Azure Delete Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-delete-vault.md | Choose a client: To delete a vault, follow these steps: -- **Step 1**: Go to **vault Overview**, click **Delete**, and then follow the instructions to complete the removal of Azure Backup and Azure Site Recovery items for vault deletion as shown below. Each link calls the respective _blade_ to perform the corresponding vault deletion steps.+- **Step 1:** Go to **vault Overview**, click **Delete**, and then follow the instructions to complete the removal of Azure Backup and Azure Site Recovery items for vault deletion as shown below. Each link calls the respective _blade_ to perform the corresponding vault deletion steps. See the instructions in the following steps to understand the process. Also, you can go to each blade to delete vaults. To delete a vault, follow these steps: Alternately, go to the blades manually by following the steps below. -- <a id="portal-mua">**Step 2**</a>: If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)+- <a id="portal-mua">**Step 2:**</a> If Multi-User Authorization (MUA) is enabled, seek necessary permissions from the security administrator before vault deletion. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management) -- <a id="portal-disable-soft-delete">**Step 3**</a>: Disable the soft delete and Security features+- <a id="portal-disable-soft-delete">**Step 3:**</a> Disable the soft delete and Security features 1. Go to **Properties** -> **Security Settings** and disable the **Soft Delete** feature if enabled. See [how to disable soft delete](./backup-azure-security-feature-cloud.md#enabling-and-disabling-soft-delete). 
1. Go to **Properties** -> **Security Settings** and disable **Security Features**, if enabled. [Learn more](./backup-azure-security-feature.md) -- <a id="portal-delete-cloud-protected-items">**Step 4**</a>: Delete Cloud protected items+- <a id="portal-delete-cloud-protected-items">**Step 4:**</a> Delete Cloud protected items 1. **Delete Items in soft-deleted state**: After disabling soft delete, check if there are any items previously remaining in the soft deleted state. If there are items in soft deleted state, then you need to *undelete* and *delete* them again. [Follow these steps](./backup-azure-security-feature-cloud.md#using-azure-portal) to find soft delete items and permanently delete them. To delete a vault, follow these steps: 1. Go to the vault dashboard menu -> **Backup Items**. Click **Stop Backup** to stop the backups of all listed items, and then click **Delete Backup Data** to delete. [Follow these steps](#delete-protected-items-in-the-cloud) to remove those items. -- <a id="portal-delete-backup-servers">**Step 5**</a>: Delete Backup Servers+- <a id="portal-delete-backup-servers">**Step 5:**</a> Delete Backup Servers 1. Go to the vault dashboard menu > **Backup Infrastructure** > **Protected Servers**. In Protected Servers, select the server to unregister. To delete the vault, you must unregister all the servers. Right-click each protected server and select **Unregister**. To delete a vault, follow these steps: >[!Note] >Deleting MARS/MABS/DPM servers also removes the corresponding backup items protected in the vault. -- <a id="portal-unregister-storage-accounts">**Step 6**</a>: Unregister Storage Accounts+- <a id="portal-unregister-storage-accounts">**Step 6:**</a> Unregister Storage Accounts Ensure all registered storage accounts are unregistered for successful vault deletion. Go to the vault dashboard menu > **Backup Infrastructure** > **Storage Accounts**. If you have storage accounts listed here, you must unregister all of them.
Learn how to [Unregister a storage account](manage-afs-backup.md#unregister-a-storage-account). -- <a id="portal-remove-private-endpoints">**Step 7**</a>: Remove Private Endpoints+- <a id="portal-remove-private-endpoints">**Step 7:**</a> Remove Private Endpoints Ensure there are no Private endpoints created for the vault. Go to Vault dashboard menu > **Private endpoint Connections** under 'Settings' > if the vault has any Private endpoint connections created or attempted to be created, ensure they are removed before proceeding with vault deletion. -- **Step 8**: Delete vault+- **Step 8:** Delete vault After you've completed these steps, you can continue to [delete the vault](?tabs=portal#delete-the-recovery-services-vault). If you're sure that all the items backed up in the vault are no longer required Follow these steps: -- **Step 1**: Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management)+- **Step 1:** Seek the necessary permissions from the security administrator to delete the vault if Multi-User Authorization has been enabled against the vault. [Learn more](./multi-user-authorization.md#authorize-critical-protected-operations-using-azure-ad-privileged-identity-management) -- <a id="powershell-install-az-module">**Step 2**</a>: Upgrade to PowerShell 7 version by performing these steps:+- <a id="powershell-install-az-module">**Step 2:**</a> Upgrade to PowerShell 7 version by performing these steps: 1. Upgrade to PowerShell 7: Run the following command in your console: Follow these steps: 1. Open PowerShell 7 as administrator. -- **Step 3**: Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`.
This recursively deletes all backup items and eventually the entire Recovery Services vault.+- **Step 3:** Save the PowerShell script in .ps1 format. Then, to run the script in your PowerShell console, type `./NameOfFile.ps1`. This recursively deletes all backup items and eventually the entire Recovery Services vault. >[!Note] >To access the PowerShell script for vault deletion, see the [PowerShell script for vault deletion](./scripts/delete-recovery-services-vault.md) article. |
backup | Backup Azure Restore System State | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-system-state.md | Title: Restore System State to a Windows Server description: Step-by-step explanation for restoring Windows Server System State from a backup in Azure. Previously updated : 12/09/2022 Last updated : 08/14/2023 This article explains how to restore Windows Server System State backups from an 1. Restore System State as files from Azure Backup. When restoring System State as files from Azure Backup, you can either: * Restore System State to the same server where the backups were taken, or * Restore System State file to an alternate server.+ * If you have Cross Region Restore enabled in your vault, you can restore the backup data from a secondary region. 2. Apply the restored System State files to a Windows Server using the Windows Server Backup utility. The following steps explain how to roll back your Windows Server configuration t ![Choose this server option to restore the data to the same machine](./media/backup-azure-restore-system-state/samemachine.png) + If you have enabled Cross Region Restore (preview) and want to restore from the secondary region, select **Secondary Region**. Otherwise, select **Primary Region**. ++ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection of the source region of recovery point."::: + 4. On the **Select Recovery Mode** pane, choose **System State** and then select **Next**. ![Browse files](./media/backup-azure-restore-system-state/recover-type-selection.png) The terminology used in these steps includes: 5. Provide the vault credential file that corresponds to the *Sample vault*. If the vault credential file is invalid (or expired), download a new vault credential file from the *Sample vault* in the Azure portal.
Once the vault credential file is provided, the Recovery Services vault associated with the vault credential file appears. + If you want to use Cross Region Restore to restore the backup data from the secondary region, you need to download the *Secondary Region vault credential file* from the Azure portal, and then pass the file in the MARS agent. ++ :::image type="content" source="./media/backup-azure-restore-windows-server/pass-vault-credentials-in-mars-agent.png" alt-text="Screenshot shows the secondary vault credentials passed in MARS agent."::: + 6. On the Select Backup Server pane, select the *Source machine* from the list of displayed machines. 7. On the Select Recovery Mode pane, choose **System State** and select **Next**. |
backup | Backup Azure Restore Windows Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-windows-server.md | Title: Restore files to Windows Server using the MARS Agent description: In this article, learn how to restore data stored in Azure to a Windows server or Windows computer with the Microsoft Azure Recovery Services (MARS) Agent.- Previously updated : 09/07/2018+ Last updated : 08/14/2023 This article explains how to restore data from a backup vault. To restore data, * Restore data to the same machine from which the backups were taken. * Restore data to an alternate machine.+* If you have Cross Region Restore enabled on your vault, you can restore the backup data from the secondary region. -Use the Instant Restore feature to mount a writeable recovery point snapshot as a recovery volume. You can then explore the recovery volume and copy files to a local computer, in that way selectively restoring files. +Use the Instant Restore feature to mount a writeable recovery point snapshot as a recovery volume. You can then explore the recovery volume and copy files to a local computer, thus selectively restoring files. > [!NOTE] > The [January 2017 Azure Backup update](https://support.microsoft.com/help/3216528/azure-backup-update-for-microsoft-azure-recovery-services-agent-januar) is required if you want to use Instant Restore to restore data. Also, the backup data must be protected in vaults in locales listed in the support article. Consult the [January 2017 Azure Backup update](https://support.microsoft.com/help/3216528/azure-backup-update-for-microsoft-azure-recovery-services-agent-januar) for the latest list of locales that support Instant Restore. 
If you accidentally deleted a file and want to restore it to the same machine (f ![Screenshot of Recover Data Wizard Getting Started page (restore to same machine)](./media/backup-azure-restore-windows-server/samemachine_gettingstarted_instantrestore.png) + If you have enabled Cross Region Restore (preview) and want to restore from the secondary region, select **Secondary Region**. Otherwise, select **Primary Region**. ++ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection of the source region of recovery point."::: ++ 4. On the **Select Recovery Mode** page, choose **Individual files and folders** > **Next**. These steps include the following terminology: ![Screenshot of Recover Data Wizard Getting Started page (restore to alternate machine)](./media/backup-azure-restore-windows-server/alternatemachine_gettingstarted_instantrestore.png) -5. Provide the vault credential file that corresponds to the sample vault, and select **Next**. +5. Provide the vault credential file that corresponds to the sample vault. If the vault credential file is invalid (or expired), [download a new vault credential file from the sample vault](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) in the Azure portal. After you provide a valid vault credential, the name of the corresponding backup vault appears. + If you want to use Cross Region Restore to restore backup data from the secondary region, you need to download the *Secondary Region vault credential file* from the Azure portal, and then pass the file in the MARS agent. ++ :::image type="content" source="./media/backup-azure-restore-windows-server/pass-vault-credentials-in-mars-agent.png" alt-text="Screenshot shows the vault credentials added to MARS agent."::: ++ Select **Next** to continue. + 6. 
On the **Select Backup Server** page, select the source machine from the list of displayed machines, and provide the passphrase. Then select **Next**. ![Screenshot of Recover Data Wizard Select Backup Server page (restore to alternate machine)](./media/backup-azure-restore-windows-server/alternatemachine_selectmachine_instantrestore.png) |
backup | Backup Azure Vms Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md | Title: Back up and restore encrypted Azure VMs description: Describes how to back up and restore encrypted Azure VMs with the Azure Backup service. Previously updated : 12/14/2022 Last updated : 08/28/2023 Azure Backup needs read-only access to back up the keys and secrets, along with - Your Key Vault is associated with the Azure AD tenant of the Azure subscription. If you're a **Member user**, Azure Backup acquires access to the Key Vault without further action. - If you're a **Guest user**, you must provide permissions for Azure Backup to access the key vault. You need to have access to key vaults to configure Backup for encrypted VMs. +To provide Azure RBAC permissions on Key Vault, see [this article](../key-vault/general/rbac-guide.md?tabs=azure-cli#enable-azure-rbac-permissions-on-key-vault). + To set permissions: 1. In the Azure portal, select **All services**, and search for **Key vaults**. |
backup | Backup Azure Vms Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-troubleshoot.md | The backup operation failed because the VM is in Failed state. For a successful Error code: UserErrorFsFreezeFailed <br/> Error message: Failed to freeze one or more mount-points of the VM to take a file-system consistent snapshot. -**Step 1** +**Step 1:** * Unmount the devices for which the file system state wasn't cleaned, using the **umount** command. * Run a file system consistency check on these devices by using the **fsck** command. MountsToSkip = /mnt/resource SafeFreezeWaitInSeconds=600 ``` -**Step 2** +**Step 2:** * Check if there are duplicate mount points present. Error message: Snapshot operation failed because VSS writers were in a bad state This error occurs because the VSS writers were in a bad state. Azure Backup extensions interact with VSS Writers to take snapshots of the disks. To resolve this issue, follow these steps: -**Step 1**: Check the **Free Disk Space**, **VM resources as RAM and page file**, and **CPU utilization percentage**. +**Step 1:** Check the **Free Disk Space**, **VM resources as RAM and page file**, and **CPU utilization percentage**. - Increase the VM size to increase vCPUs and RAM space. - Increase the disk size if the free disk space is low. -**Step 2**: Restart VSS writers that are in a bad state. +**Step 2:** Restart VSS writers that are in a bad state. * From an elevated command prompt, run `vssadmin list writers`. * The output contains all VSS writers and their state. For every VSS writer with a state that's not **[1] Stable**, restart the respective VSS writer's service. This error occurs because the VSS writers were in a bad state. Azure Backup exte > [!NOTE] > Restarting some services can have an impact on your production environment. Ensure the approval process is followed and the service is restarted at the scheduled downtime. 
-**Step 3**: If restarting the VSS writers did not resolve the issue, then run the following command from an elevated command-prompt (as an administrator) to prevent the threads from being created for blob-snapshots. +**Step 3:** If restarting the VSS writers did not resolve the issue, then run the following command from an elevated command-prompt (as an administrator) to prevent the threads from being created for blob-snapshots. ```console REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" /v SnapshotWithoutThreads /t REG_SZ /d True /f ``` -**Step 4**: If steps 1 and 2 did not resolve the issue, then the failure could be due to VSS writers timing out due to limited IOPS.<br> +**Step 4:** If steps 1 and 2 did not resolve the issue, then the failure could be due to VSS writers timing out due to limited IOPS.<br> To verify, navigate to ***System and Event Viewer Application logs*** and check for the following error message:<br> *The shadow copy provider timed out while holding writes to the volume being shadow copied. This is probably due to excessive activity on the volume by an application or a system service. Try again later when activity on the volume is reduced.*<br> Error message: Snapshot operation failed due to inadequate VM resources. The backup operation on the VM failed due to delay in network calls while performing the snapshot operation. To resolve this issue, perform Step 1. If the issue persists, try steps 2 and 3. -**Step 1**: Create snapshot through Host +**Step 1:** Create snapshot through Host From an elevated (admin) command-prompt, run the following command: REG ADD "HKLM\SOFTWARE\Microsoft\BcdrAgentPersistentKeys" /v CalculateSnapshotTi This will ensure the snapshots are taken through host instead of Guest. Retry the backup operation. 
-**Step 2**: Try changing the backup schedule to a time when the VM is under less load (like less CPU or IOPS) +**Step 2:** Try changing the backup schedule to a time when the VM is under less load (like less CPU or IOPS) -**Step 3**: Try [increasing the size of the VM](../virtual-machines/resize-vm.md) and retry the operation +**Step 3:** Try [increasing the size of the VM](../virtual-machines/resize-vm.md) and retry the operation ### 320001, ResourceNotFound - Could not perform the operation as VM no longer exists / 400094, BCMV2VMNotFound - The virtual machine doesn't exist / An Azure virtual machine wasn't found |
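The VSS-writer check described in the troubleshooting steps above amounts to scanning `vssadmin list writers` output for any writer whose state isn't **[1] Stable**. A minimal illustrative sketch of that scan (in Python, against a hypothetical sample of the output; this is not part of any Azure Backup tooling):

```python
import re

# Hypothetical sample in the format produced by `vssadmin list writers`.
SAMPLE_OUTPUT = """
Writer name: 'System Writer'
   Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
   State: [1] Stable
   Last error: No error

Writer name: 'SqlServerWriter'
   Writer Id: {a65faa63-5ea8-4ebc-9dbd-a0c4db26912a}
   State: [8] Failed
   Last error: Non-retryable error
"""

def writers_needing_restart(vssadmin_output: str) -> list:
    """Return names of VSS writers whose state is not '[1] Stable'."""
    bad = []
    # Pair each "Writer name" line with the "State" line that follows it.
    pattern = re.compile(r"Writer name: '([^']+)'.*?State: (\[\d+\] \w+)", re.S)
    for name, state in pattern.findall(vssadmin_output):
        if state != "[1] Stable":
            bad.append(name)
    return bad

print(writers_needing_restart(SAMPLE_OUTPUT))  # ['SqlServerWriter']
```

Any writer this flags is a candidate for restarting its corresponding service, as the steps above describe.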
backup | Backup Client Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-client-automation.md | Title: Use PowerShell to back up Windows Server to Azure description: In this article, learn how to use PowerShell to set up Azure Backup on Windows Server or a Windows client, and manage backup and recovery.- Previously updated : 08/24/2021 -+ Last updated : 08/29/2021 + $CredsPath = "C:\downloads" $CredsFilename = Get-AzRecoveryServicesVaultSettingsFile -Backup -Vault $Vault1 -Path $CredsPath ``` -### Registering using the PS Az module +### Register using the PowerShell Az module > [!NOTE] > A bug with generation of vault certificate is fixed in Az 3.5.0 release. Use Az 3.5.0 release version or greater to download a vault certificate. Server properties updated successfully All backups from Windows Servers and clients to Azure Backup are governed by a policy. The policy includes three parts: -1. A **backup schedule** that specifies when backups need to be taken and synchronized with the service. -2. A **retention schedule** that specifies how long to retain the recovery points in Azure. -3. A **file inclusion/exclusion specification** that dictates what should be backed up. +- A **backup schedule** that specifies when backups need to be taken and synchronized with the service. +- A **retention schedule** that specifies how long to retain the recovery points in Azure. +- A **file inclusion/exclusion specification** that dictates what should be backed up. In this document, since we're automating backup, we'll assume nothing has been configured. We begin by creating a new backup policy using the [New-OBPolicy](/powershell/module/msonlinebackup/new-obpolicy) cmdlet. Job completed. The recovery operation completed successfully. ``` -## Uninstalling the Azure Backup agent +## Cross Region Restore ++Cross Region Restore (CRR) allows you to restore MARS backup data from a secondary region, which is an Azure paired region. 
This enables you to conduct drills for audit and compliance, and recover data during the unavailability of the primary region in Azure in the case of a disaster. ++### Original server restore ++If you're performing restore for the original server from the secondary region (Cross Region Restore), use the flag `UseSecondaryRegion` while getting the `OBRecoverableSource` object. ++```azurepowershell +$sources = Get-OBRecoverableSource -UseSecondaryRegion +$RP = Get-OBRecoverableItem -Source $sources[0] +$RO = New-OBRecoveryOption -DestinationPath $RecoveryPath -OverwriteType Overwrite +Start-OBRecovery -RecoverableItem $RP -RecoveryOption $RO -Async | ConvertTo-Json ++``` ++### Alternate server restore ++If you're performing restore for an alternate server from the secondary region (Cross Region Restore), download the *secondary region vault credential file* from the Azure portal and pass the secondary region vault credential for restore. ++```azurepowershell +$serverName = 'myserver.mycompany.com' +$secVaultCred = "C:\Users\myuser\Downloads\myvault_Mon Jul 17 2023.VaultCredentials" +$passphrase = 'Default Passphrase' +$alternateServers = Get-OBAlternateBackupServer -VaultCredentials $secVaultCred +$altServer = $alternateServers[2] | Where-Object {$_.ServerName -Like $serverName} +$pwd = ConvertTo-SecureString -String $passphrase -AsPlainText -Force +$sources = Get-OBRecoverableSource $altServer +$RP = Get-OBRecoverableItem -Source $sources[0] +$RO = New-OBRecoveryOption +Start-OBRecoveryMount -RecoverableItem $RP -RecoveryOption $RO -EncryptionPassphrase $pwd -Async | ConvertTo-Json ++``` ++## Uninstall the Azure Backup agent Uninstalling the Azure Backup agent can be done by using the following command: Invoke-Command -Session $Session -Script { param($D, $A) Start-Process -FilePath For more information about Azure Backup for Windows Server/Client: * [Introduction to Azure Backup](./backup-overview.md)-* [Back up Windows 
Servers](backup-windows-with-mars-agent.md) +* [Back up Windows Servers](backup-windows-with-mars-agent.md) |
backup | Backup Create Recovery Services Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-create-recovery-services-vault.md | Title: Create and configure Recovery Services vaults description: Learn how to create and configure Recovery Services vaults, and how to restore in a secondary region by using Cross Region Restore. Previously updated : 07/21/2023 Last updated : 08/14/2023 Azure Backup automatically handles storage for the vault. You need to specify ho 1. For **Storage replication type**, select **Geo-redundant**, **Locally-redundant**, or **Zone-redundant**. Then select **Save**. - ![Set the storage configuration for new vault](./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png) + :::image type="content" source="./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png" alt-text="Screenshot shows how to set the storage configuration for a new vault." lightbox="./media/backup-create-rs-vault/recovery-services-vault-backup-configuration.png"::: Here are our recommendations for choosing a storage replication type: Before you begin, consider the following information: - Cross Region Restore is supported only for a Recovery Services vault that uses the [GRS replication type](#set-storage-redundancy). - Virtual machines (VMs) created through Azure Resource Manager and encrypted Azure VMs are supported. VMs created through the classic deployment model aren't supported. You can restore the VM or its disk. - SQL Server or SAP HANA databases hosted on Azure VMs are supported. You can restore databases or their files.+- MARS Agent is supported for vaults without private endpoint (preview). - Review the [support matrix](backup-support-matrix.md#cross-region-restore) for a list of supported managed types and regions.-- Using Cross Region Restore will incur additional charges. 
[Learn more](https://azure.microsoft.com/pricing/details/backup/).-- After you opt in, it might take up to 48 hours for the backup items to be available in secondary regions.+- Using Cross Region Restore will incur additional charges. Once you enable Cross Region Restore, it might take up to 48 hours for the backup items to be available in secondary regions. [Learn more about pricing](https://azure.microsoft.com/pricing/details/backup/). - Cross Region Restore currently can't be reverted to GRS or LRS after the protection starts for the first time. - Currently, secondary region RPO is 36 hours. This is because the RPO in the primary region is 24 hours, and it can take up to 12 hours to replicate the backup data from the primary to the secondary region. - Review the [permissions required to use Cross Region Restore](backup-rbac-rs-vault.md#minimum-role-requirements-for-azure-vm-backup). For more information about backup and restore with Cross Region Restore, see the - [Cross Region Restore for Azure VMs](backup-azure-arm-restore-vms.md#cross-region-restore) - [Cross Region Restore for SQL Server databases](restore-sql-database-azure-vm.md#cross-region-restore) - [Cross Region Restore for SAP HANA databases](sap-hana-db-restore.md#cross-region-restore)+- [Cross Region Restore for MARS (Preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview) ## Set encryption settings |
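The 36-hour secondary-region RPO quoted above is simply the sum of the two worst cases the article states. A one-line sketch of that arithmetic (the figures are the article's stated values, not queried from Azure):

```python
# Worst-case figures stated in the article (assumed constants for illustration).
PRIMARY_RPO_HOURS = 24          # a recovery point exists at least every 24 hours
MAX_REPLICATION_LAG_HOURS = 12  # primary-to-secondary replication can take up to 12 hours

def worst_case_secondary_rpo(primary_rpo: int, replication_lag: int) -> int:
    """A recovery point may be up to `primary_rpo` hours old before it starts
    replicating, and replication itself may add `replication_lag` hours."""
    return primary_rpo + replication_lag

print(worst_case_secondary_rpo(PRIMARY_RPO_HOURS, MAX_REPLICATION_LAG_HOURS))  # 36
```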
backup | Backup Managed Disks Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-managed-disks-cli.md | Title: Back up Azure Managed Disks using Azure CLI description: Learn how to back up Azure Managed Disks using Azure CLI. Previously updated : 08/14/2023 Last updated : 08/25/2023 az dataprotection backup-instance list-from-resourcegraph --datasource-type Azur ] ``` -You can specify a rule and tagname while triggering backup. To view the rules in policy, look through the policy JSON. In the below example, the rule with the name BackupDaily, and tag name "default" is displayed and we'll use that rule for the on-demand backup. +You can specify a rule and tagname while triggering backup. To view the rules in policy, look through the policy JSON. In the following example, the rule with the name `"BackupDaily"`, and tag name `"default"` is displayed, and we'll use that rule for the on-demand backup. ```json "name": "BackupDaily", |
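Picking the rule name and tag name out of the policy JSON, as the managed-disks CLI change above describes, can be sketched like this. The pared-down policy document and its field names are assumptions for illustration only; the real policy JSON returned by `az dataprotection` carries many more fields:

```python
import json

# Pared-down backup policy document (assumed shape, for illustration).
policy_json = """
{
  "properties": {
    "policyRules": [
      {"name": "BackupDaily", "objectType": "AzureBackupRule",
       "trigger": {"taggingCriteria": [{"tagInfo": {"tagName": "default"}}]}},
      {"name": "Default", "objectType": "AzureRetentionRule"}
    ]
  }
}
"""

def backup_rules(policy: dict) -> list:
    """Return (rule name, tag names) for each AzureBackupRule in the policy."""
    rules = []
    for rule in policy["properties"]["policyRules"]:
        if rule.get("objectType") != "AzureBackupRule":
            continue  # retention rules and others are not triggerable on demand
        tags = [c["tagInfo"]["tagName"]
                for c in rule.get("trigger", {}).get("taggingCriteria", [])]
        rules.append((rule["name"], tags))
    return rules

print(backup_rules(json.loads(policy_json)))  # [('BackupDaily', ['default'])]
```

The rule/tag pair this yields is what you would pass when triggering the on-demand backup.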
backup | Backup Sql Server Azure Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-azure-troubleshoot.md | The VM is not able to contact Azure Backup service due to internet connectivity | Error message | Possible cause | Recommendation | | | | |-| Operation failing with `UserErrorWindowsWLExtFailedToStartPluginService` error. | Azure Backup workload extension is unable to start the workload backup plugin service on the Azure Virtual Machine due to service account misconfiguration. | **Step 1** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** user has **Read** permissions on: <br> - C:\windows\Microsoft.NET \assembly\GAC_32 <br> - C:\windows\Microsoft.NET \assembly\GAC_64 <br> - C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config. <br><br> If the permissions are missing, assign **Read** permissions on these directories. <br><br> **Step 2** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** has the **Bypass traverse chekcing** rights by going to **Local Security Policy** > **User Right Assignment** > **Bypass traverse checking**. **Everyone** must be selected by default. <br><br> If **Everyone** and **NT Service\AzureWLBackupPluginSvc** are missing, add **NT Service\AzureWLBackupPluginSvc** user, and then try to restart the service or trigger a backup or restore operation for a datasource. | +| Operation failing with `UserErrorWindowsWLExtFailedToStartPluginService` error. | Azure Backup workload extension is unable to start the workload backup plugin service on the Azure Virtual Machine due to service account misconfiguration. | **Step 1:** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** user has **Read** permissions on: <br> - C:\windows\Microsoft.NET \assembly\GAC_32 <br> - C:\windows\Microsoft.NET \assembly\GAC_64 <br> - C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config. 
<br><br> If the permissions are missing, assign **Read** permissions on these directories. <br><br> **Step 2:** <br><br> Verify if **NT Service\AzureWLBackupPluginSvc** has the **Bypass traverse checking** rights by going to **Local Security Policy** > **User Rights Assignment** > **Bypass traverse checking**. **Everyone** must be selected by default. <br><br> If **Everyone** and **NT Service\AzureWLBackupPluginSvc** are missing, add **NT Service\AzureWLBackupPluginSvc** user, and then try to restart the service or trigger a backup or restore operation for a datasource. | |
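The directory-permission check in Step 1 above can be scripted. This is an illustrative Python sketch, not part of the Azure tooling; it runs as the current user (whereas the step requires checking the **NT Service\AzureWLBackupPluginSvc** account) and reports paths that are missing or unreadable:

```python
import os
import tempfile

def unreadable_dirs(paths):
    """Return the subset of `paths` that don't exist or aren't readable
    by the user running this script."""
    problems = []
    for p in paths:
        if not os.path.isdir(p) or not os.access(p, os.R_OK):
            problems.append(p)
    return problems

# Demo against one directory that exists and one that does not.
ok_dir = tempfile.mkdtemp()
missing_dir = os.path.join(ok_dir, "does-not-exist")
print(unreadable_dirs([ok_dir, missing_dir]))
```

On a real server you would pass the three GAC/`machine.config` paths listed in Step 1.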
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 07/05/2023 Last updated : 08/21/2023 Back up Linux Azure VMs with the Linux Azure VM agent | Supported for file-consi Back up Linux Azure VMs with the MARS agent | Not supported.<br/><br/> The MARS agent can be installed only on Windows machines. Back up Linux Azure VMs with DPM or MABS | Not supported. Back up Linux Azure VMs with Docker mount points | Currently, Azure Backup doesn't support exclusion of Docker mount points because these are mounted at different paths every time.+Back up Linux Azure VMs with ZFS Pool Configuration | Not supported. ## Operating system support (Linux) Recovery points on DPM or MABS disk | 64 for file servers, and 448 for app serve **Restore disk** | This option restores a VM disk, which you can then use to create a new VM.<br/><br/> Azure Backup provides a template to help you customize and create a VM. <br/><br> The restore job generates a template that you can download and use to specify custom VM settings and create a VM.<br/><br/> The disks are copied to the resource group that you specify.<br/><br/> Alternatively, you can attach the disk to an existing VM, or create a new VM by using PowerShell.<br/><br/> This option is useful if you want to customize the VM, add configuration settings that weren't there at the time of backup, or add settings that must be configured via the template or PowerShell. **Replace existing** | You can restore a disk and use it to replace a disk on the existing VM.<br/><br/> The current VM must exist. 
If it has been deleted, you can't use this option.<br/><br/> Azure Backup takes a snapshot of the existing VM before replacing the disk, and it stores the snapshot in the staging location that you specify. Existing disks connected to the VM are replaced with the selected restore point.<br/><br/> The snapshot is copied to the vault and retained in accordance with the retention policy. <br/><br/> After the replace disk operation, the original disk is retained in the resource group. You can choose to manually delete the original disks if they aren't needed. <br/><br/>This option is supported for unencrypted managed VMs and for VMs [created from custom images](https://azure.microsoft.com/resources/videos/create-a-custom-virtual-machine-image-in-azure-resource-manager-with-powershell/). It's not supported for unmanaged disks and VMs, classic VMs, and [generalized VMs](../virtual-machines/windows/capture-image-resource.md).<br/><br/> If the restore point has more or fewer disks than the current VM, the number of disks in the restore point will only reflect the VM configuration.<br><br> This option is also supported for VMs with linked resources, like [user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md) and [Azure Key Vault](../key-vault/general/overview.md). 
**Cross Region (secondary region)** | You can use cross-region restore to restore Azure VMs in the secondary region, which is an [Azure paired region](../availability-zones/cross-region-replication-azure.md).<br><br> You can restore all the Azure VMs for the selected recovery point if the backup is done in the secondary region.<br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> We don't currently support the [Replace existing disks](./backup-azure-arm-restore-vms.md#replace-existing-disks) option.<br><br> Backup admins and app admins have permissions to perform the restore operation on a secondary region.-**Cross Subscription (preview)** | You can use cross-subscription restore to restore Azure managed VMs in different subscriptions.<br><br> You can restore Azure VMs or disks to any subscription (within the same tenant as the source subscription) from restore points. This is one of the Azure role-based access control (RBAC) capabilities. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-subscription restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) and [secondary region](backup-azure-arm-restore-vms.md#restore-in-secondary-region) restores. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed), [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups), and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup). 
-**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones.<br><br> You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups) and [trusted launch VMs](backup-support-matrix-iaas.md#tvm-backup). +**Cross Subscription** | Allowed only if the [Cross Subscription Restore property](backup-azure-arm-restore-vms.md#cross-subscription-restore-preview) is enabled for your Recovery Services vault. <br><br> You can restore Azure Virtual Machines or disks to a different subscription within the same tenant as the source subscription (as per the Azure RBAC capabilities) from restore points. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross Subscription Restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) tier recovery points. It's also unsupported for [unmanaged VMs](backup-azure-arm-restore-vms.md#restoring-unmanaged-vms-and-disks-as-managed) and [VMs with disks having Azure Encryptions (ADE)](backup-azure-vms-encryption.md#encryption-support-using-ade). +**Cross Zonal Restore** | You can use cross-zonal restore to restore Azure zone-pinned VMs in available zones. You can restore Azure VMs or disks to different zones (one of the Azure RBAC capabilities) from restore points. 
Note that the zone you select for restore maps to the [logical zone](../reliability/availability-zones-overview.md#availability-zones) (not the physical zone) of the Azure subscription that you use for the restore. <br><br> This feature is available for the following options:<br> - [Create a VM](./backup-azure-arm-restore-vms.md#create-a-vm) <br> - [Restore disks](./backup-azure-arm-restore-vms.md#restore-disks) <br><br> Cross-zonal restore is unsupported for [snapshots](backup-azure-vms-introduction.md#snapshot-creation) of restore points. It's also unsupported for [encrypted Azure VMs](backup-azure-vms-introduction.md#encryption-of-azure-vm-backups). ## Support for file-level restore The following table summarizes support for backup during VM management tasks, su **Restore** | **Supported** | -<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore (preview)](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs. +<a name="backup-azure-cross-subscription-restore">Restore across a subscription</a> | [Cross-subscription restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs. [Restore across a region](backup-azure-arm-restore-vms.md#cross-region-restore) | Supported. <a name="backup-azure-cross-zonal-restore">Restore across a zone</a> | [Cross-zonal restore](backup-azure-arm-restore-vms.md#restore-options) is now supported in Azure VMs. Restore to an existing VM | Use the replace disk option. Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. 
[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.-<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, Central US, North Central US, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. -<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, Central US, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. +<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, Central US, North Central US, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. 
This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - ZRS type vaults cannot be used for enabling backup. +<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, Central US, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. <br><br> - ZRS type vaults cannot be used for enabling backup. [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS. |
backup | Backup Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix.md | Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 10/21/2022 Last updated : 08/14/2023 Backup supports the compression of backup traffic, as summarized in the followin ## Cross Region Restore -Azure Backup has added the Cross Region Restore feature to strengthen data availability and resiliency capability, giving you full control to restore data to a secondary region. To configure this feature, visit [the Set Cross Region Restore article.](backup-create-rs-vault.md#set-cross-region-restore). This feature is supported for the following management types: +Azure Backup has added the Cross Region Restore feature to strengthen data availability and resiliency capability, giving you full control to restore data to a secondary region. To configure this feature, see [Set Cross Region Restore](backup-create-rs-vault.md#set-cross-region-restore). This feature is supported for the following management types: | Backup Management type | Supported | Supported Regions | | - | | -- | | Azure VM | Supported for Azure VMs (including encrypted Azure VMs) with both managed and unmanaged disks. Not supported for classic VMs. | Available in all Azure public regions and sovereign regions, except for UG IOWA. | | SQL /SAP HANA | Available | Available in all Azure public regions and sovereign regions, except for France Central and UG IOWA. |-| MARS Agent/On premises | No | N/A | +| MARS Agent (Preview) | Available in preview. <br><br> Not supported for vaults with Private Endpoint enabled. | Available in all Azure public regions. | +| DPM/MABS | No | N/A | | AFS (Azure file shares) | No | N/A | ## Resource health |
backup | Backup Windows With Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-windows-with-mars-agent.md | Title: Back up Windows machines by using the MARS agent description: Use the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines. Previously updated : 06/23/2023 Last updated : 08/18/2023 To run an on-demand backup, follow these steps: [ ![Screenshot shows the Back up now option in Windows Server.](./media/backup-configure-vault/backup-now.png) ](./media/backup-configure-vault/backup-now.png#lightbox) -1. If the MARS agent version is 2.0.9169.0 or newer, then you can set a custom retention date. In the **Retain Backup Till** section, choose a date from the calendar. +1. If the MARS agent version is *2.0.9254.0 or newer*, select a *subset of the volumes backed up periodically* for on-demand backup. Only the files/folders configured for periodic backup can be backed up on demand. ++ :::image type="content" source="./media/backup-configure-vault/select-subset-of-volumes-backed-up-periodically-for-mars-on-demand-backup.png" alt-text="Screenshot shows how to select a subset of volumes backed up periodically for on-demand backup."::: ++ If the MARS agent version is *2.0.9169.0 or newer*, set a custom retention date. In the **Retain Backup Till** section, choose a date from the calendar. [ ![Screenshot shows how to use the calendar to customize a retention date.](./media/backup-configure-vault/mars-ondemand.png) ](./media/backup-configure-vault/mars-ondemand.png#lightbox) |
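The entry above gates two on-demand backup features on the MARS agent version (*2.0.9254.0* for selecting a subset of volumes, *2.0.9169.0* for a custom retention date). That version gating can be sketched as a numeric tuple comparison — a minimal illustration, assuming dotted four-part version strings; the feature names are hypothetical labels, not product identifiers:

```python
def parse_version(v: str) -> tuple:
    """Turn '2.0.9254.0' into (2, 0, 9254, 0) so versions compare numerically."""
    return tuple(int(p) for p in v.split("."))

# Minimum MARS agent versions for the on-demand backup features described above.
FEATURE_MIN_VERSION = {
    "on_demand_volume_subset": "2.0.9254.0",
    "custom_retention_date": "2.0.9169.0",
}

def available_features(agent_version: str) -> set:
    v = parse_version(agent_version)
    return {f for f, m in FEATURE_MIN_VERSION.items() if v >= parse_version(m)}

print(sorted(available_features("2.0.9200.0")))  # ['custom_retention_date']
```

Comparing tuples element by element avoids the classic pitfall of string comparison, where "2.0.9254.0" < "2.0.999.0" lexicographically.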
backup | Disk Backup Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/disk-backup-overview.md | Title: Overview of Azure Disk Backup description: Learn about the Azure Disk backup solution. Previously updated : 07/21/2023 Last updated : 08/17/2023 The retention period for a backup also follows the maximum limit of 450 snapshots. For example, if the scheduling frequency for backups is set as Daily, then you can set the retention period for backups at a maximum value of 450 days. Similarly, if the scheduling frequency for backups is set as Hourly with a 1-hour frequency, then you can set the retention for backups at a maximum value of 18 days. +## Why do I see more snapshots than my retention policy? ++If a retention policy is set as *1*, you may find two snapshots. This configuration ensures that at least one latest recovery point is always present in the vault, even if all subsequent backups fail. ++So, if the policy is for *n* snapshots, you may at times find *n+1* snapshots, or even more if there's a delay in the deletion of recovery points whose retention period is over (garbage collection). This can happen at rare times when: ++- You clean up snapshots that are past their retention period. +- The garbage collector (GC) in the backend is under heavy load. + ## Pricing Azure Backup uses [incremental snapshots](../virtual-machines/disks-incremental-snapshots.md) of the managed disk. Incremental snapshots are charged per GiB of the storage occupied by the delta changes since the last snapshot. For example, if you're using a managed disk with a provisioned size of 128 GiB, with 100 GiB used, the first incremental snapshot is billed only for the used size of 100 GiB. 20 GiB of data is added on the disk before you create the second snapshot. Now, the second incremental snapshot is billed for only 20 GiB. |
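The retention arithmetic described in the entry above — the maximum retention period is bounded by the 450-snapshot limit divided by the backup frequency — can be checked with a short calculation. A minimal sketch (the function name is illustrative, not an Azure API):

```python
SNAPSHOT_LIMIT = 450  # maximum number of snapshots retained per disk, per the limit above

def max_retention_days(backups_per_day: int) -> int:
    """Longest retention (in days) that keeps the snapshot count within the limit."""
    return SNAPSHOT_LIMIT // backups_per_day

print(max_retention_days(1))   # Daily schedule            -> 450 days
print(max_retention_days(24))  # Hourly, 1-hour frequency  -> 18 days
```

The two printed values reproduce the worked examples in the text: 450 days for a Daily schedule, and 18 days for an Hourly schedule with a 1-hour frequency.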
backup | Encryption At Rest With Cmk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md | Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 08/02/2023 Last updated : 08/25/2023 -Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK) instead of using platform-managed keys, which are enabled by default. Your keys encrypt the backup data must be stored in [Azure Key Vault](../key-vault/index.yml). +Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK) instead of platform-managed keys, which are enabled by default. Your keys that encrypt the backup data must be stored in [Azure Key Vault](../key-vault/index.yml). -The encryption key used for encrypting backups may be different from the one used for the source. The data is protected using an AES 256 based data encryption key (DEK), which in turn, is protected using your key encryption keys (KEK). This provides you with full control over the data and the keys. To allow encryption, you must grant Recovery Services vault the permissions to access the encryption key in the Azure Key Vault. You can change the key when required. +The encryption key used for encrypting backups may be different from the one used for the source. The data is protected using an AES 256-based data encryption key (DEK), which in turn, is protected using your key encryption keys (KEK). This provides you with full control over the data and the keys. To allow encryption, you must grant Recovery Services vault the permissions to access the encryption key in the Azure Key Vault. You can change the key when required. In this article, you'll learn how to: In this article, you'll learn how to: >Use Az module 5.3.0 or later to use customer managed keys for backups in the Recovery Services vault. 
>[!Warning]- >If you're using PowerShell for managing encryption keys for Backup, we don't recommend to update the keys from the portal. <br> If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal. + >If you're using PowerShell for managing encryption keys for Backup, we don't recommend updating the keys from the portal. <br> If you update the key from the portal, you can't use PowerShell to update the encryption key further till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal. If you haven't created and configured your Recovery Services vault, see [how to do so here](backup-create-rs-vault.md). To configure a vault, perform the following actions in the given sequence to ac 3. Enable soft-delete and purge protection on Azure Key Vault. -4. Assign the encryption key to the Recovery Services vault, +4. Assign the encryption key to the Recovery Services vault. ### Enable managed identity for your Recovery Services vault Choose a client: # [Azure portal](#tab/portal) -1. Go to your Recovery Services vault -> **Identity** +1. Go to your Recovery Services vault -> **Identity**. ![Identity settings](media/encryption-at-rest-with-cmk/enable-system-assigned-managed-identity-for-vault.png) Choose a client: 3. Change the **Status** to **On**. -4. Click **Save** to enable the identity for the vault. +4. Select **Save** to enable the identity for the vault. An Object ID is generated, which is the system-assigned managed identity of the vault. To assign the user-assigned managed identity for your Recovery Services vault, c # [Azure portal](#tab/portal) -1. Go to your Recovery Services vault -> **Identity**. 
![Assign user-assigned managed identity to the vault](media/encryption-at-rest-with-cmk/assign-user-assigned-managed-identity-to-vault.png) 2. Navigate to the **User assigned** tab. -3. Click **+Add** to add a user-assigned managed identity. +3. Select **+Add** to add a user-assigned managed identity. 4. In the **Add user assigned managed identity** blade that opens, select the subscription for your identity. 5. Select the identity from the list. You can also filter by the name of the identity or the resource group. -6. Once done, click **Add** to finish assigning the identity. +6. Once done, select **Add** to finish assigning the identity. # [PowerShell](#tab/powershell) Choose a client: ![Add Access Policies](./media/encryption-at-rest-with-cmk/access-policies.png) -2. Under **Key Permissions**, select **Get**, **List**, **Unwrap Key** and **Wrap Key** operations. This specifies the actions on the key that will be permitted. +2. Under **Key Permissions**, select **Get**, **List**, **Unwrap Key**, and **Wrap Key** operations. This specifies the actions on the key that will be permitted. ![Assign key permissions](./media/encryption-at-rest-with-cmk/key-permissions.png) You can do this from the Azure Key Vault UI as shown below. Alternatively, you c Set-AzContext -SubscriptionId SubscriptionId ``` -3. Enable soft delete +3. Enable soft delete. ```azurepowershell ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true" You can do this from the Azure Key Vault UI as shown below. Alternatively, you c Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties ``` -4. Enable purge protection +4. Enable purge protection. 
```azurepowershell ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enablePurgeProtection" -Value "true" You can do this from the Azure Key Vault UI as shown below. Alternatively, you c az account set --subscription "Subscription1" ``` -3. Enable soft delete +3. Enable soft delete. ```azurecli az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-soft-delete true ``` -4. Enable purge protection +4. Enable purge protection. ```azurecli az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-purge-protection true You can do this from the Azure Key Vault UI as shown below. Alternatively, you c >Before proceeding further, ensure the following: > >- All the steps mentioned above have been completed successfully:-> - The Recovery Services vault's managed identity has been enabled, and has been assigned required permissions -> - The Azure Key Vault has soft-delete and purge-protection enabled ->- The Recovery Services vault for which you want to enable CMK encryption **does not** have any items protected or registered to it +> - The Recovery Services vault's managed identity has been enabled and has been assigned the required permissions. +> - The Azure Key Vault has soft-delete and purge-protection enabled. +>- The Recovery Services vault for which you want to enable CMK encryption **does not** have any items protected or registered to it. Once the above are ensured, continue with selecting the encryption key for your vault. To assign the key and follow the steps, choose a client: # [Azure portal](#tab/portal) -1. Go to your Recovery Services vault -> **Properties** +1. Go to your Recovery Services vault -> **Properties**. ![Encryption settings](./media/encryption-at-rest-with-cmk/encryption-settings.png) To assign the key and follow the steps, choose a client: 1. 
Enter the **Key URI** with which you want to encrypt the data in this Recovery Services vault. You also need to specify the subscription in which the Azure Key Vault (that contains this key) is present. This key URI can be obtained from the corresponding key in your Azure Key Vault. Ensure the key URI is copied correctly. It's recommended that you use the **Copy to clipboard** button provided with the key identifier. >[!NOTE]- >When specifying the encryption key using the full Key URI, the key will not be auto-rotated, and you need to perform key updates manually by specifying the new key when required. Alternatively, remove the Version component of the Key URI to get automatic rotation. + >When specifying the encryption key using the full Key URI, the key will not be autorotated, and you need to perform key updates manually by specifying the new key when required. Alternatively, remove the Version component of the Key URI to get automatic rotation. ![Enter key URI](./media/encryption-at-rest-with-cmk/key-uri.png) 2. Browse and select the key from the Key Vault in the key picker pane. >[!NOTE]- >When specifying the encryption key using the key picker pane, the key will be auto-rotated whenever a new version for the key is enabled. [Learn more](#enable-auto-rotation-of-encryption-keys) on enabling auto-rotation of encryption keys. + >When specifying the encryption key using the key picker pane, the key will be autorotated whenever a new version for the key is enabled. [Learn more](#enable-autorotation-of-encryption-keys) on enabling autorotation of encryption keys. ![Select key from key vault](./media/encryption-at-rest-with-cmk/key-vault.png) InfrastructureEncryptionState : Disabled # [CLI](#tab/cli) -Use the [az backup vault encryption update](/cli/azure/backup/vault/encryption#az-backup-vault-encryption-update) command to enable encryption using customer-managed keys, and to assign or update the encryption key to be used. 
+Use the [az backup vault encryption update](/cli/azure/backup/vault/encryption#az-backup-vault-encryption-update) command to enable encryption using customer-managed keys and to assign or update the encryption key to be used. Example: InfrastructureEncryptionState : Disabled ``` >[!NOTE]->This process remains the same when you wish to update or change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), make sure that: +>This process remains the same when you wish to update or change the encryption key. If you wish to update and use a key from another Key Vault (different from the one that's being currently used), ensure that: >->- The key vault is located in the same region as the Recovery Services vault +>- The key vault is located in the same region as the Recovery Services vault. >->- The key vault has soft-delete and purge protection enabled +>- The key vault has soft-delete and purge protection enabled. > >- The Recovery Services vault has the required permissions to access the key Vault. InfrastructureEncryptionState : Disabled ## Back up to a vault encrypted with customer-managed keys -Before proceeding to configure protection, we strongly recommend you ensure the following checklist is adhered to. This is important since once an item has been configured to be backed up (or attempted to be configured) to a non-CMK encrypted vault, encryption using customer-managed keys can't be enabled on it and it will continue to use platform-managed keys. +Before proceeding to configure protection, we strongly recommend you adhere to the following checklist. This is important since once an item has been configured to be backed up (or attempted to be configured) to a non-CMK encrypted vault, encryption using customer-managed keys can't be enabled on it and it will continue to use platform-managed keys. 
>[!IMPORTANT] > Before proceeding to configure protection, you must have **successfully** completed the following steps: Before proceeding to configure protection, we strongly recommend you ensure the > >If all the above steps have been confirmed, only then proceed with configuring backup. -The process to configure and perform backups to a Recovery Services vault encrypted with customer-managed keys is the same as to a vault that uses platform-managed keys, with **no changes to the experience**. This holds true for [backup of Azure VMs](./quick-backup-vm-portal.md) as well as backup of workloads running inside a VM (for example, [SAP HANA](./tutorial-backup-sap-hana-db.md), [SQL Server](./tutorial-sql-backup.md) databases). +The process to configure and perform backups to a Recovery Services vault encrypted with customer-managed keys is the same as to a vault that uses platform-managed keys with **no changes to the experience**. This holds true for [backup of Azure VMs](./quick-backup-vm-portal.md) as well as backup of workloads running inside a VM (for example, [SAP HANA](./tutorial-backup-sap-hana-db.md), [SQL Server](./tutorial-sql-backup.md) databases). ## Restore data from backup The process to configure and perform backups to a Recovery Services vault encryp Data stored in the Recovery Services vault can be restored according to the steps described [here](./backup-azure-arm-restore-vms.md). When restoring from a Recovery Services vault encrypted using customer-managed keys, you can choose to encrypt the restored data with a Disk Encryption Set (DES). >[!Note]->The experience described in this section only applies to restore of data from CMK encrypted vaults. When you restore data from a vault that isn't using CMK encryption, the restored data would be encrypted using Platform Managed Keys. If you restore from an instant recovery snapshot, it would be encrypted using the mechanism used for encrypting the source disk. 
+>The experience described in this section only applies when you restore data from CMK encrypted vaults. When you restore data from a vault that isn't using CMK encryption, the restored data would be encrypted using Platform Managed Keys. If you restore from an instant recovery snapshot, it would be encrypted using the mechanism used for encrypting the source disk. #### Restore VM/disk -1. When you recover disk / VM from a *Snapshot* recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks. +1. When you recover disk/VM from a *Snapshot* recovery point, the restored data will be encrypted with the DES used for encrypting the source VM's disks. -1. When restoring disk / VM from a recovery point with Recovery Type as "Vault", you can choose to have the restored data encrypted using a DES, specified at the time of restore. Alternatively, you can choose to continue with the restore the data without specifying a DES, in which case it will be encrypted using Microsoft-managed keys. +1. When restoring disk/VM from a recovery point with Recovery Type as **Vault**, you can choose to have the restored data encrypted using a DES specified at the time of restore. Alternatively, you can choose to continue to restore the data without specifying a DES, in which case the encryption setting on the VM will be applied. -1. During Cross Region Restore, CMK (customer-managed keys) enabled Azure VMs, which aren't backed-up in a CMK enabled Recovery Services vault, is restored as non-CMK enabled VMs in the secondary region. +1. During Cross Region Restore, CMK (customer-managed keys) enabled Azure VMs, which aren't backed up in a CMK enabled Recovery Services vault, are restored as non-CMK enabled VMs in the secondary region. -You can encrypt the restored disk / VM after the restore is complete, regardless of the selection made while initiating the restore. 
+You can encrypt the restored disk/VM after the restore is complete, regardless of the selection made while initiating the restore. ![Restore points](./media/encryption-at-rest-with-cmk/restore-points.png) When your subscription is allow-listed, the **Backup Encryption** tab will displ 1. To specify the key to be used for encryption, select the appropriate option. - You can provide the URI for the encryption key, or browse and select the key. When you specify the key using the **Select the Key Vault** option, auto-rotation of the encryption key will enable automatically. [Learn more on auto-rotation](#enable-auto-rotation-of-encryption-keys). + You can provide the URI for the encryption key, or browse and select the key. When you specify the key using the **Select the Key Vault** option, autorotation of the encryption key will enable automatically. [Learn more on autorotation](#enable-autorotation-of-encryption-keys). 1. Specify the user assigned managed identity to manage encryption with customer-managed keys. Click **Select** to browse and select the required identity. 1. Proceed to add Tags (optional) and continue creating the vault. -### Enable auto-rotation of encryption keys +### Enable autorotation of encryption keys When you specify the customer-managed key that must be used to encrypt backups, use the following methods to specify it: - Enter the key URI - Select from Key Vault -Using the **Select from Key Vault** option helps to enable auto-rotation for the selected key. This eliminates the manual effort to update to the next version. However, using this option: +Using the **Select from Key Vault** option helps to enable autorotation for the selected key. This eliminates the manual effort to update to the next version. However, using this option: - Key version update may take up to an hour to take effect. 
- When a new version of the key takes effect, the old version should also be available (in enabled state) for at least one subsequent backup job after the key update has taken effect. +> [!NOTE] +> When specifying the encryption key using the full Key URI, the key won't be autorotated, and you need to perform key updates manually by specifying the new key when required. To enable automatic rotation, remove the Version component of the Key URI. + ### Use Azure Policies to audit and enforce encryption with customer-managed keys (in preview) Azure Backup allows you to use Azure Policies to audit and enforce encryption, using customer-managed keys, of data in the Recovery Services vault. Using the Azure Policies: -- The audit policy can be used for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021. For vaults with the CMK encryption enabled before this date, the policy may fail to apply or may show false negative results (that is, these vaults may be reported as non-compliant, despite having **CMK encryption** enabled).+- The audit policy can be used for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021. For vaults with the CMK encryption enabled before this date, the policy may fail to apply or may show false negative results (that is, these vaults may be reported as noncompliant despite having **CMK encryption** enabled). - To use the audit policy for auditing vaults with **CMK encryption** enabled before 04/01/2021, use the Azure portal to update an encryption key. This helps to upgrade to the new model. If you don't want to change the encryption key, provide the same key again through the key URI or the key selection option. 
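The note above says that removing the Version component of a Key Vault key URI opts the vault into automatic key rotation. A Key Vault key URI has the shape `https://<vault>.vault.azure.net/keys/<name>/<version>`; a minimal sketch of trimming it to the versionless form (vault and key names here are hypothetical):

```python
from urllib.parse import urlparse

def strip_key_version(key_uri: str) -> str:
    """Drop the trailing Version segment of a Key Vault key URI so the
    versionless URI can be used to opt in to automatic key rotation."""
    parts = urlparse(key_uri)
    segments = [s for s in parts.path.split("/") if s]
    # A versioned key URI has the path form /keys/<name>/<version>.
    if len(segments) == 3 and segments[0] == "keys":
        segments = segments[:2]
    return f"{parts.scheme}://{parts.netloc}/" + "/".join(segments)

uri = "https://contoso-kv.vault.azure.net/keys/backup-key/0123456789abcdef"
print(strip_key_version(uri))
# https://contoso-kv.vault.azure.net/keys/backup-key
```

A URI that is already versionless passes through unchanged, so the helper is safe to apply unconditionally.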
>[!Warning]- >If you are using PowerShell for managing encryption keys for Backup, we do not recommend to update the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal. + >If you're using PowerShell for managing encryption keys for Backup, we don't recommend updating the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal. ## Frequently asked questions ### Can I encrypt an existing Backup vault with customer-managed keys? -No, CMK encryption can be enabled for new vaults only. So the vault must never have had any items protected to it. In fact, no attempts to protect any items to the vault must be made before enabling encryption using customer-managed keys. +No, CMK encryption can be enabled for new vaults only. So, the vault must never have had any items protected to it. In fact, no attempts to protect any items to the vault must be made before enabling encryption using customer-managed keys. ### I tried to protect an item to my vault, but it failed, and the vault still doesn't contain any items protected to it. Can I enable CMK encryption for this vault? -No, the vault must haven't had any attempts to protect any items to it in the past. +No, the vault must not have had any attempts to protect any items to it in the past. ### I have a vault that's using CMK encryption. Can I later revert to encryption using platform-managed keys even if I have backup items protected to the vault? Using CMK encryption for Backup doesn't incur any additional costs to you. 
You m ## Next steps -- [Overview of security features in Azure Backup](security-overview.md)+[Overview of security features in Azure Backup](security-overview.md). |
backup | Install Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/install-mars-agent.md | Title: Install the Microsoft Azure Recovery Services (MARS) agent description: Learn how to install the Microsoft Azure Recovery Services (MARS) agent to back up Windows machines.- Previously updated : 11/15/2022+ Last updated : 08/18/2023 To modify the storage replication type: > You can't modify the storage replication type after the vault is set up and contains backup items. If you want to do this, you need to re-create the vault. > +## Configure Recovery Services vault to save the passphrase to Azure Key Vault (preview) ++Azure Backup using the Recovery Services agent (MARS) allows you to back up files, folders, and system state data to an Azure Recovery Services vault. This data is encrypted using a passphrase provided during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location, such as Azure Key Vault. ++We recommend that you create a Key Vault and grant permissions to your Recovery Services vault to save the passphrase to the Key Vault. [Learn more](save-backup-passphrase-securely-in-azure-key-vault.md). + ### Verify internet access [!INCLUDE [Configuring network connectivity](../../includes/backup-network-connectivity.md)] If you've already installed the agent on any machines, ensure you're running the * Save the passphrase in a secure location. You need it to restore a backup. * If you lose or forget the passphrase, Microsoft can't help you recover the backup data. + The MARS agent can automatically save the passphrase securely to Azure Key Vault. So, we recommend you create a Key Vault and grant permissions to your Recovery Services vault to save the passphrase to the Key Vault before registering your first MARS agent to the vault. [Learn more](save-backup-passphrase-securely-in-azure-key-vault.md). 
++ After granting the required permissions, you can save the passphrase to the Key Vault by copying the *Key Vault URI* from the Azure portal to the Register Server Wizard. + :::image type="content" source="./media/backup-configure-vault/encryption-settings-passphrase-to-encrypt-decrypt-backups.png" alt-text="Screenshot showing to specify a passphrase to be used to encrypt and decrypt backups for machines."::: 1. Select **Finish**. The agent is now installed, and your machine is registered to the vault. You're ready to configure and schedule your backup. If you've already installed the agent on any machines, ensure you're running the If you are running into issues during vault registration, see the [troubleshooting guide](backup-azure-mars-troubleshoot.md#invalid-vault-credentials-provided). >[!Note]- >We strongly recommend you save your passphrase in an alternate secure location, such as the Azure key vault. Microsoft can't recover the data without the passphrase. [Learn](../key-vault/secrets/quick-create-portal.md) how to store a secret in a key vault. + >We recommend saving your passphrase in an alternate secure location, such as the Azure key vault. Microsoft can't recover the data without the passphrase. [Learn](save-backup-passphrase-securely-in-azure-key-vault.md) how to store a secret in a Key Vault. 
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
backup | Restore All Files Volume Mars | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-all-files-volume-mars.md | Title: Restore all files in a volume with MARS description: Learn how to restore all the files in a volume using the MARS Agent.- Previously updated : 01/17/2021+ Last updated : 08/14/2023 This article explains how to restore all backed-up files in an entire volume usi - Restore all backed-up files in a volume to the same machine from which the backups were taken. - Restore all backed-up files in a volume to an alternate machine.+- If you have Cross Region Restore enabled in your vault, you can restore the backup data from the secondary region. +- If you want to use Cross Region Restore to restore the backup data from the secondary region, you need to download the Secondary Region vault credential file from the Azure portal, and then pass the file in the MARS agent. >[!TIP]->The **Volume** option recovers all backed up data in a specified volume. This option provides faster transfer speeds (up to 40 MBps), and is recommended for recovering large-sized data or entire volumes. +>The **Volume** option recovers all backed up data in a specified volume. This option provides faster transfer speeds (up to 40 Mbps), and is recommended for recovering large-sized data or entire volumes. >->The **Individual files and folders option** allows for quick access to the recovery point data. It's suitable for recovering individual files, and is recommended for a total size of less than 80 GB. It offers transfer or copy speeds up to 6 MBps during recovery. +>The **Individual files and folders option** allows for quick access to the recovery point data. It's suitable for recovering individual files, and is recommended for a total size of less than 80 GB. It offers transfer or copy speeds of up to 6 MBps during recovery. 
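The tip above recommends the **Volume** option for large recoveries and the **Individual files and folders** option for quick access to data under roughly 80 GB. That decision rule can be sketched in a few lines (the 80 GB cutoff is taken from the tip above; the function name is illustrative):

```python
def recommended_recovery_mode(total_size_gb: float) -> str:
    """Pick a MARS recovery mode per the guidance above: 'Individual files
    and folders' suits quick access to data totalling less than ~80 GB;
    'Volume' suits larger recoveries or entire volumes."""
    if total_size_gb < 80:
        return "Individual files and folders"
    return "Volume"

print(recommended_recovery_mode(5))    # Individual files and folders
print(recommended_recovery_mode(500))  # Volume
```

This is only the size heuristic from the tip; in practice you would also prefer **Volume** whenever you need everything on the volume, regardless of size.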
## Volume level restore to the same machine The following steps will help you recover all backed-up files in a volume: ![Getting started page](./media/restore-all-files-volume-mars/same-machine-instant-restore.png) + If you have enabled Cross Region Restore (preview) and want to restore from the secondary region, select **Secondary Region**. Otherwise, select **Primary Region**. ++ :::image type="content" source="./media/backup-azure-restore-windows-server/select-source-region-for-restore.png" alt-text="Screenshot shows the selection of the source region of recovery point."::: + 1. On the **Select Recovery Mode** page, choose **Volume** > **Next**. ![Select recovery mode](./media/restore-all-files-volume-mars/select-recovery-mode.png) These steps include the following terminology: 1. On the **Getting Started** page, select **Another server**. - ![Screenshot of Recover Data Wizard Getting Started page (restore to alternate machine)](./media/backup-azure-restore-windows-server/alternatemachine_gettingstarted_instantrestore.png) + ![Screenshot of Recover Data Wizard Getting Started page (restore to alternate machine).](./media/backup-azure-restore-windows-server/alternatemachine_gettingstarted_instantrestore.png) ++1. Provide the vault credential file that corresponds to the sample vault. ++ If the vault credential file is invalid (or expired), [download a new vault credential file from the sample vault](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) in the Azure portal. After you provide a valid vault credential, the name of the corresponding backup vault appears. -1. Provide the vault credential file that corresponds to the sample vault, and select **Next**. + >[!Note] + >If you want to use Cross Region Restore to restore the backup data from the secondary region, you need to download the *Secondary Region vault credential file* from the Azure portal, and then pass the file in the MARS agent. 
+ > + > :::image type="content" source="./media/backup-azure-restore-windows-server/pass-vault-credentials-in-mars-agent.png" alt-text="Screenshot shows the secondary vault credentials passed in MARS agent."::: - If the vault credential file is invalid (or expired), [download a new vault credential file from the sample vault](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) in the Azure portal. After you provide a valid vault credential, the name of the corresponding backup vault appears. + Select **Next** to continue. 1. On the **Select Backup Server** page, select the source machine from the list of displayed machines, and provide the passphrase. Then select **Next**. |
backup | Restore Azure Encrypted Virtual Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-encrypted-virtual-machines.md | Encrypted VMs can only be restored by restoring the VM disk and creating a virtu Follow below steps to restore encrypted VMs: -### **Step 1**: Restore the VM disk +### Step 1: Restore the VM disk 1. In **Restore configuration** > **Create new** > **Restore Type** select **Restore disks**. 1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name. When your virtual machine uses unmanaged disks, they're restored as blobs to the > [!NOTE] > After you restore the VM disk, you can manually swap the OS disk of the original VM with the restored VM disk without re-creating it. [Learn more](https://azure.microsoft.com/blog/os-disk-swap-managed-disks/). -### **Step 2**: Recreate the virtual machine instance +### Step 2: Recreate the virtual machine instance Do one of the following actions: Do one of the following actions: >While deploying the template, verify the storage account containers and the public/private settings. - Create a new VM from the restored disks using PowerShell. [Learn more](backup-azure-vms-automation.md#create-a-vm-from-restored-disks). -### **Step 3**: Restore an encrypted Linux VM +### Step 3: Restore an encrypted Linux VM Reinstall the ADE extension so the data disks are open and mounted. |
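The "create a new VM from the restored disks using PowerShell" step above can be sketched as follows. This is a hedged outline, not the full flow from the linked automation article: resource names, the VM size, and the NIC are placeholders, and encrypted VMs need additional settings (key vault references, the ADE extension) beyond what's shown.

```azurepowershell
# Sketch: attach a restored managed OS disk to a new VM configuration.
$disk = Get-AzDisk -ResourceGroupName "restoredRG" -DiskName "restored-osdisk"
$vmConfig = New-AzVMConfig -VMName "restoredVM" -VMSize "Standard_D2s_v3"
# -Windows assumes a Windows OS disk; use -Linux for the encrypted Linux VM case.
$vmConfig = Set-AzVMOSDisk -VM $vmConfig -ManagedDiskId $disk.Id -CreateOption Attach -Windows
# A NIC must exist and be attached before the VM can be created.
$nic = Get-AzNetworkInterface -ResourceGroupName "restoredRG" -Name "restoredVM-nic"
$vmConfig = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id
New-AzVM -ResourceGroupName "restoredRG" -Location $disk.Location -VM $vmConfig
```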
backup | Save Backup Passphrase Securely In Azure Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/save-backup-passphrase-securely-in-azure-key-vault.md | + + Title: Save and manage MARS agent passphrase securely in Azure Key Vault (preview) +description: Learn how to save the MARS agent passphrase securely in Azure Key Vault and retrieve it during restore. + Last updated : 08/18/2023+++++++# Save and manage MARS agent passphrase securely in Azure Key Vault (preview) ++Azure Backup using the Recovery Services agent (MARS) allows you to back up files/folders and system state data to an Azure Recovery Services vault. This data is encrypted using a passphrase you provide during the installation and registration of the MARS agent. This passphrase is required to retrieve and restore the backup data and needs to be saved in a secure external location. +++>[!Important] +>If this passphrase is lost, Microsoft will not be able to retrieve backup data stored in the Recovery Services vault. We recommend that you store this passphrase in a secure external location, such as Azure Key Vault. ++Now, you can save your encryption passphrase securely in Azure Key Vault as a Secret from the MARS console during installation for new machines and by changing the passphrase for existing machines. To allow saving the passphrase to Azure Key Vault, you must grant the Recovery Services vault permissions to create a Secret in the Azure Key Vault. ++## Before you start ++- [Create a Recovery Services vault](backup-create-recovery-services-vault.md) in case you don't have one. +- You should use a single Azure Key Vault to store all your passphrases. [Create a Key Vault](../key-vault/general/quick-create-portal.md) in case you don't have one. + - [Azure Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/) is applicable when you create a new Azure Key Vault to store your passphrase.
+ - After you create the Key Vault, to protect against accidental or malicious deletion of the passphrase, [ensure that soft-delete and purge protection is turned on](../key-vault/general/soft-delete-overview.md). +- This feature is supported only in Azure public regions with MARS agent version *2.0.9254.0* or above. ++## Configure the Recovery Services vault to store passphrase to Azure Key Vault ++Before you can save your passphrase to Azure Key Vault, configure your Recovery Services vault and Azure Key Vault. ++To configure a vault, follow these steps in the given sequence to achieve the intended results. Each action is discussed in detail in the sections below: ++1. Enable system-assigned managed identity for the Recovery Services vault. +2. Assign permissions to the Recovery Services vault to save the passphrase as a Secret in Azure Key Vault. +3. Enable soft-delete and purge protection on the Azure Key Vault. ++>[!Note] +>- Once you enable this feature, you must not disable the managed identity (even temporarily). Disabling the managed identity may lead to inconsistent behavior. +>- User-assigned managed identity is currently not supported for saving the passphrase in Azure Key Vault. +++### Enable system-assigned managed identity for the Recovery Services vault ++**Choose a client**: ++# [Azure portal](#tab/azure-portal) ++Follow these steps: ++1. Go to your *Recovery Services vault* > **Identity**. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/recovery-services-vault-identity.png" alt-text="Screenshot shows how to go to Identity in Recovery Services vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/recovery-services-vault-identity.png"::: ++2. Select the **System assigned** tab. +3. Change the **Status** to **On**. +4. Select **Save** to enable the identity for the vault. ++An Object ID is generated, which is the system-assigned managed identity of the vault.
+++# [PowerShell](#tab/powershell) ++To enable system-assigned managed identity for the Recovery Services vault, use the [Update-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/update-azrecoveryservicesvault) cmdlet. ++**Example** ++```azurepowershell +$vault=Get-AzRecoveryServicesVault -ResourceGroupName "testrg" -Name "testvault" +Update-AzRecoveryServicesVault -IdentityType SystemAssigned -ResourceGroupName TestRG -Name TestVault +$vault.Identity | fl ++``` ++```Output +PrincipalId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx +TenantId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx +Type : SystemAssigned ++``` +++# [CLI](#tab/cli) ++To enable system-assigned managed identity for the Recovery Services vault, use the `az backup vault identity assign` command. ++**Example** ++```azurecli +az backup vault identity assign --system-assigned --resource-group MyResourceGroup --name MyVault ++``` ++```Output +PrincipalId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx +TenantId : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx +Type : SystemAssigned ++``` ++++### Assign permissions to save the passphrase in Azure Key Vault ++Based on the Key Vault permission model (either role-based access permissions or access policy-based permission model) configured for Key Vault, refer to the following sections. ++#### Enable permissions using role-based access permission model for Key Vault ++**Choose a client:** ++# [Azure portal](#tab/azure-portal) ++To assign the permissions, follow these steps: ++1. Go to your *Azure Key Vault* > **Settings** > **Access Configuration** to ensure that the permission model is **RBAC**. + +2. Select **Access control (IAM)** > **+Add** to add role assignment. ++3. The Recovery Services vault identity requires the **Set permission on Secret** to create and add the passphrase as a Secret to the Key Vault. 
++ You can select a *built-in role* such as **Key Vault Secrets Officer** that has the permission (along with other permissions not required for this feature) or [create a custom role](../key-vault/general/rbac-guide.md?tabs=azurepowershell#creating-custom-roles) with only Set permission on Secret. ++ Select **Details** to view the permissions granted by the role and ensure Set permission on Secret is available. + +4. Select **Next** to proceed to select Members for assignment. ++5. Select **Managed identity** and then **+ Select members**. Choose the **Subscription** of the target Recovery Services vault, and then select the Recovery Services vault under **System-assigned managed identity**. ++ Search and select the *name of the Recovery Services vault*. + +6. Select **Next**, review the assignment, and select **Review + assign**. + +7. Go to **Access control (IAM)** in the Key Vault, select **Role assignments** and ensure that the Recovery Services vault is listed. + ++# [PowerShell](#tab/powershell) ++To assign the permissions, run the following cmdlet: ++```azurepowershell +#Find the application id for your recovery services vault +Get-AzADServicePrincipal -SearchString <principalName> +#Identify a role with Set permission on Secret, like Key Vault Secrets Officer +Get-AzRoleDefinition | Format-Table -Property Name, IsCustom, Id +#Assign role to Recovery Services Vault identity +Get-AzRoleDefinition -Name <roleName> +#Assign by Service Principal ApplicationId +New-AzRoleAssignment -RoleDefinitionName 'Key Vault Secrets Officer' -ApplicationId {i.e 8ee5237a-816b-4a72-b605-446970e5f156} -Scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name} ++``` ++# [CLI](#tab/cli) ++To assign the permissions, run the following command: ++```azurecli +#Find the application id for your recovery services vault +az ad sp list --all --filter "displayname eq '<my recovery vault name>' and servicePrincipalType eq
'ManagedIdentity'" +#Identify a role with Set permission on Secret, like Key Vault Secrets Officer +az role definition list --query "[].{name:name, roleType:roleType, roleName:roleName}" --output tsv +az role definition list --name "{roleName}" +#Assign role to Recovery Services Vault identity +az role assignment create --role "Key Vault Secrets Officer" --assignee "<application id>" {i.e "55555555-5555-5555-5555-555555555555"} --scope /subscriptions/{subscriptionid}/resourcegroups/{resource-group-name}/providers/Microsoft.KeyVault/vaults/{key-vault-name} ++``` ++++#### Enable permissions using Access Policy permission model for Key Vault ++**Choose a client**: ++# [Azure portal](#tab/azure-portal) ++Follow these steps: ++1. Go to your *Azure Key Vault* > **Access Policies** > **Access policies**, and then select **+ Create**. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/create-access-policies.png" alt-text="Screenshot shows how to start creating a Key Vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/create-access-policies.png"::: ++2. Under **Secret Permissions**, select **Set operation**. ++ This specifies the allowed actions on the Secret. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/set-secret-permissions.png" alt-text="Screenshot shows how to start setting permissions." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/set-secret-permissions.png"::: ++3. Go to **Select Principal** and search for your *vault* in the search box using its name or managed identity. ++ Select the *vault* from the search result and choose **Select**. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/assign-principal.png" alt-text="Screenshot shows the assignment of permission to a selected vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/assign-principal.png"::: ++4.
Go to **Review + create**, ensure that **Set permission** is available and **Principal** is the correct *Recovery Services vault*, and then select **Create**. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/review-and-create-access-policy.png" alt-text="Screenshot shows the verification of the assigned Recovery Services vault and create the Key Vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/review-and-create-access-policy.png"::: ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/check-access-policies.png" alt-text="Screenshot shows how to verify the access present." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/check-access-policies.png"::: +++# [PowerShell](#tab/powershell) ++To get the Principal ID of the Recovery Services vault, use the [Get-AzADServicePrincipal](/powershell/module/az.resources/get-azadserviceprincipal) cmdlet. Then use this ID in the [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) cmdlet to set an access policy for the Key Vault. ++**Example** ++```azurepowershell +$sp = Get-AzADServicePrincipal -DisplayName MyVault +Set-AzKeyVaultAccessPolicy -VaultName myKeyVault -ObjectId $sp.Id -PermissionsToSecrets set ++``` ++# [CLI](#tab/cli) ++To get the principal ID of the Recovery Services vault, use the `az ad sp list` command. Then use this ID in the `az keyvault set-policy` command to set an access policy for the Key Vault. ++**Example** ++```azurecli +az ad sp list --display-name MyVault +az keyvault set-policy --name myKeyVault --object-id <object-id> --secret-permissions set ++``` +++++### Enable soft-delete and purge protection on Azure Key Vault ++You need to enable soft-delete and purge protection on your Azure Key Vault that stores your encryption passphrase.
++**Choose a client** ++# [Azure portal](#tab/azure-portal) ++You can enable soft-delete and purge protection from the Azure Key Vault. ++Alternatively, you can set these properties while creating the Key Vault. [Learn more](../key-vault/general/soft-delete-overview.md) about these Key Vault properties. +++# [PowerShell](#tab/powershell) +++1. Sign in to your Azure account. ++ ```azurepowershell + Login-AzAccount + ``` ++2. Select the *subscription* that contains your vault. ++ ```azurepowershell + Set-AzContext -SubscriptionId SubscriptionId + ``` ++3. Enable *soft-delete*. ++ ```azurepowershell + ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enableSoftDelete" -Value "true" + Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties + ``` ++4. Enable *purge protection*. ++ ```azurepowershell + ($resource = Get-AzResource -ResourceId (Get-AzKeyVault -VaultName "AzureKeyVaultName").ResourceId).Properties | Add-Member -MemberType "NoteProperty" -Name "enablePurgeProtection" -Value "true" + Set-AzResource -resourceid $resource.ResourceId -Properties $resource.Properties ++ ``` ++# [CLI](#tab/cli) ++1. Sign in to your Azure account. ++ ```azurecli + az login + ``` ++2. Select the subscription that contains your vault. ++ ```azurecli + az account set --subscription "Subscription1" + ``` ++3. Enable soft delete. ++ ```azurecli + az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-soft-delete true + ``` ++4.
Enable purge protection. ++ ```azurecli + az keyvault update --subscription {SUBSCRIPTION ID} -g {RESOURCE GROUP} -n {VAULT NAME} --enable-purge-protection true + ``` ++++++## Save passphrase to Azure Key Vault for a new MARS installation ++Before proceeding to install the MARS agent, ensure that you have [configured the Recovery Services vault to store passphrase to Azure Key Vault](#configure-the-recovery-services-vault-to-store-passphrase-to-azure-key-vault) and you have successfully: ++1. Created your Recovery Services vault. +2. Enabled the Recovery Services vault's system-assigned managed identity. +3. Assigned permissions to your Recovery Services vault to create a Secret in your Key Vault. +4. Enabled soft delete and purge protection for your Key Vault. ++Then follow these steps: ++1. To install the MARS agent on a machine, download the MARS installer from the Azure portal, and then [use the installation wizard](install-mars-agent.md). ++2. After providing the *Recovery Services vault credentials* during registration, in the **Encryption Setting**, select the option to save the passphrase to Azure Key Vault. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase.png" alt-text="Screenshot shows the option to save the passphrase to Azure Key Vault to be selected." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase.png"::: ++3. Enter your *passphrase* or select **Generate Passphrase**. +4. In the *Azure portal*, open your *Key Vault*, and copy the *Key Vault URI*. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png" alt-text="Screenshot shows how to copy the Key Vault URI." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png"::: ++5. Paste the *Key Vault URI* in the *MARS console*, and then select **Register**.
++ If you encounter an error, [check the troubleshooting section](#troubleshoot-common-scenarios) for more information. ++6. Once the registration succeeds, the option to *copy the identifier to the Secret* is created and the passphrase is NOT saved to a file locally. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/server-registration-success.png" alt-text="Screenshot shows the option to copy the identifier to the Secret gets created." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/server-registration-success.png"::: ++ If you change the passphrase in the future for this MARS agent, a new version of the Secret will be added with the latest passphrase. ++You can automate this process by using the new `KeyVaultUri` option in the `Set-OBMachineSetting` command in the [installation script](./scripts/register-microsoft-azure-recovery-services-agent.md). ++## Save passphrase to Azure Key Vault for an existing MARS installation ++If you have an existing MARS agent installation and want to save your passphrase to Azure Key Vault, [update your agent](upgrade-mars-agent.md) to version *2.0.9254.0* or above and perform a change passphrase operation. ++After updating your MARS agent, ensure that you have [configured the Recovery Services vault to store passphrase to Azure Key Vault](#configure-the-recovery-services-vault-to-store-passphrase-to-azure-key-vault) and you have successfully: ++1. Created your Recovery Services vault. +2. Enabled the Recovery Services vault's system-assigned managed identity. +3. Assigned permissions to your Recovery Services vault to create a Secret in your Key Vault. +4. Enabled soft delete and purge protection for your Key Vault. ++To save the passphrase to Key Vault: ++1. Open the *MARS agent console*. ++ You should see a banner asking you to select a link to save the passphrase to Azure Key Vault. ++ Alternatively, select **Change Properties** > **Change Passphrase** to proceed.
++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase-key-vault.png" alt-text="Screenshot shows how to start changing passphrase for an existing MARS installation." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/save-passphrase-key-vault.png"::: ++2. In the **Change Properties** dialog box, the option to *save passphrase to Key Vault by providing a Key Vault URI* appears. ++ >[!Note] + >If the machine is already configured to save passphrase to Key Vault, the Key Vault URI will be populated in the text box automatically. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/enter-key-vault-url.png" alt-text="Screenshot shows the option to save passphrase to Key Vault by providing a Key Vault URI gets generated." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/enter-key-vault-url.png"::: ++3. Open the *Azure portal*, open your *Key Vault*, and then *copy the Key Vault URI*. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png" alt-text="Screenshot shows how to copy the Key Vault URI." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-key-vault-url.png"::: ++4. *Paste the Key Vault URI* in the *MARS console*, and then select **OK**. ++ If you encounter an error, [check the troubleshooting section](#troubleshoot-common-scenarios) for more information. ++5. Once the change passphrase operation succeeds, an option to *copy the identifier to the Secret* gets created and the passphrase is NOT saved to a file locally. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/passphrase-saved-to-key-vault.png" alt-text="Screenshot shows an option to copy the identifier to the Secret gets created." 
lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/passphrase-saved-to-key-vault.png"::: ++ If you change the passphrase in the future for this MARS agent, a new version of the *Secret* will be added with the latest passphrase. ++You can automate this step by using the new `KeyVaultUri` option in the [Set-OBMachineSetting](/powershell/module/msonlinebackup/set-obmachinesetting?view=msonlinebackup-ps&preserve-view=true) cmdlet. ++## Retrieve passphrase from Azure Key Vault for a machine ++If your machine becomes unavailable and you need to restore backup data from the Recovery Services vault via [alternate location restore](restore-all-files-volume-mars.md#volume-level-restore-to-an-alternate-machine), you need the machine's passphrase to proceed. ++The passphrase is saved to Azure Key Vault as a Secret. One Secret is created per machine and a new version is added to the Secret when the passphrase for the machine is changed. The Secret is named `AzBackup-<machine fully qualified name>-<vault name>`. ++To locate the machine's passphrase: ++1. In the *Azure portal*, open the *Key Vault used to save the passphrase for the machine*. ++ We recommend that you use one Key Vault to save all your passphrases. ++2. Select **Secrets** and search for the secret named `AzBackup-<machine name>-<vaultname>`. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/locate-passphrase.png" alt-text="Screenshot shows how to check for the secret name." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/locate-passphrase.png"::: + +3. Select the **Secret**, open the latest version, and *copy the value of the Secret*. ++ This is the passphrase of the machine to be used during recovery. ++ :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-passphrase-from-secret.png" alt-text="Screenshot shows selection of the secret."
lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/copy-passphrase-from-secret.png"::: ++ If you have a large number of Secrets in the Key Vault, use the Key Vault CLI to list and search for the secret. ++```azurecli +az keyvault secret list --vault-name 'myvaultname' | jq '.[] | select(.name|test("AzBackup-<myvmname>"))' ++``` ++## Troubleshoot common scenarios ++This section lists commonly encountered errors when saving the passphrase to Azure Key Vault. ++### System identity isn't configured - 391224 ++**Cause**: This error occurs if the Recovery Services vault doesn't have a system-assigned managed identity configured. ++**Recommended action**: Ensure that system-assigned managed identity is configured correctly for the Recovery Services vault as per the [prerequisites](#before-you-start). ++### Permissions aren't configured - 391225 ++**Cause**: The Recovery Services vault has a system-assigned managed identity, but it doesn't have **Set permission** to create a Secret in the target Key Vault. ++**Recommended action**: ++1. Ensure that the vault credential used corresponds to the intended Recovery Services vault. +2. Ensure that the Key Vault URI corresponds to the intended Key Vault. +3. Ensure that the Recovery Services vault name is listed under **Key Vault** > **Access policies** > **Application**, with Secret Permissions as Set. + + :::image type="content" source="./media/save-backup-passphrase-securely-in-azure-key-vault/check-secret-permissions-is-set.png" alt-text="Screenshot shows the Recovery Services vault name is listed under Key Vault." lightbox="./media/save-backup-passphrase-securely-in-azure-key-vault/check-secret-permissions-is-set.png"::: ++ If it's not listed, [configure the permission again](#assign-permissions-to-save-the-passphrase-in-azure-key-vault). ++### Azure Key Vault URI is incorrect - 100272 ++**Cause**: The Key Vault URI entered isn't in the right format.
++**Recommended action**: Ensure that you have entered a Key Vault URI copied from the Azure portal. For example, `https://myvault.vault.azure.net/`. + ++### Registration is incomplete ++**Cause**: You didn't complete the MARS registration by registering the passphrase. So, you won't be able to configure backups until you register. ++**Recommended action**: Select the warning message and complete the registration. ++++++++ |
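The `KeyVaultUri` automation mentioned in this article might look like the following sketch. `Set-OBMachineSetting` ships with the MARS agent's MSOnlineBackup module; treat the exact parameter combination shown here as an assumption based on the article's description rather than a verified signature:

```azurepowershell
# Hedged sketch: set the MARS encryption passphrase and save it to Key Vault
# in one step. Passphrase and vault URI values are placeholders.
$passphrase = ConvertTo-SecureString -String "<your-passphrase>" -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassphrase $passphrase -KeyVaultUri "https://myvault.vault.azure.net/"
```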
backup | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
backup | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md | Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 07/14/2023 Last updated : 08/30/2023 You can learn more about the new releases by bookmarking this page or by [subscr ## Updates summary +- August 2023 + - [Save your MARS backup passphrase securely to Azure Key Vault (preview)](#save-your-mars-backup-passphrase-securely-to-azure-key-vault-preview) + - [Cross Region Restore for MARS Agent (preview)](#cross-region-restore-for-mars-agent-preview) - July 2023 - [SAP HANA System Replication database backup support is now generally available](#sap-hana-system-replication-database-backup-support-is-now-generally-available) - [Cross Region Restore for PostgreSQL (preview)](#cross-region-restore-for-postgresql-preview) You can learn more about the new releases by bookmarking this page or by [subscr - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview) +## Save your MARS backup passphrase securely to Azure Key Vault (preview) ++Azure Backup now enables you to save the MARS passphrase to Azure Key Vault automatically from the MARS console, during registration or when you change the passphrase. ++The MARS agent from Azure Backup requires a passphrase provided by the user to encrypt the backups sent to and stored on an Azure Recovery Services vault. This passphrase is not shared with Microsoft and needs to be saved in a secure location to ensure that the backups can be retrieved if the server backed up with MARS goes down. ++For more information, see [Save and manage MARS agent passphrase securely in Azure Key Vault](save-backup-passphrase-securely-in-azure-key-vault.md). ++## Cross Region Restore for MARS Agent (preview) ++You can now restore data from the secondary region for MARS Agent backups using Cross Region Restore on Recovery Services vaults with Geo-redundant storage (GRS) replication.
You can use this capability to do recovery drills from the secondary region for audit or compliance. If disasters cause partial or complete unavailability of the primary region, you can directly access the backup data from the secondary region. ++For more information, see [Cross Region Restore for MARS (preview)](about-restore-microsoft-azure-recovery-services.md#cross-region-restore-preview). + ## SAP HANA System Replication database backup support is now generally available Azure Backup now supports backup of HANA database with HANA System Replication. Now, the log backups from the new primary node are accepted immediately, thus providing continuous automatic protection of the database, |
baremetal-infrastructure | Solution Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-on-azure/solution-design.md | The following table describes the network topologies supported by each network f |Connectivity over Active/Passive VPN gateways| Yes | |Connectivity over Active/Active VPN gateways| No | |Connectivity over Active/Active Zone Redundant gateways| No |-|Transit connectivity via vWAN for Spoke Delegated VNETS| No | +|Transit connectivity via vWAN for Spoke Delegated VNETS| Yes | |On-premises connectivity to Delegated subnet via vWAN attached SD-WAN| No| |On-premises connectivity via Secured HUB(Az Firewall NVA) | No| |Connectivity from UVMs on NC2 nodes to Azure resources|Yes| |
bastion | Bastion Connect Vm Rdp Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-rdp-linux.md | - Title: 'Connect to a Linux VM using RDP'- -description: Learn how to use Azure Bastion to connect to Linux VM using RDP. --- Previously updated : 04/26/2023-----# Create an RDP connection to a Linux VM using Azure Bastion --This article shows you how to securely and seamlessly create an RDP connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also [connect to a Linux VM using SSH](bastion-connect-vm-ssh-linux.md). --Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see [What is Azure Bastion?](bastion-overview.md) --## Prerequisites --Before you begin, verify that you've met the following criteria: --* Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network. --* To use RDP with a Linux virtual machine, you must also ensure that you have xrdp installed and configured on the Linux VM. To learn how to do this, see [Use xrdp with Linux](../virtual-machines/linux/use-remote-desktop.md). --* This configuration isn't available for the **Basic** SKU. To use this feature, [Upgrade the SKU](upgrade-sku.md) to the Standard SKU tier. --* You must use username/password authentication. 
--### Required roles --In order to make a connection, the following roles are required: --* Reader role on the virtual machine -* Reader role on the NIC with private IP of the virtual machine -* Reader role on the Azure Bastion resource -* Reader role on the virtual network of the target virtual machine (if the Bastion deployment is in a peered virtual network). --### Ports --To connect to the Linux VM via RDP, you must have the following ports open on your VM: --* Inbound port: RDP (3389) *or* -* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion) --## <a name="rdp"></a>Connect ---## Next steps --Read the [Bastion FAQ](bastion-faq.md) for more information. |
bastion | Bastion Connect Vm Ssh Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md | -This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Linux VM using RDP. For information, see [Create an RDP connection to a Linux VM](bastion-connect-vm-rdp-linux.md). +This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md) overview article. |
bastion | Bastion Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-faq.md | description: Learn about frequently asked questions for Azure Bastion. Previously updated : 08/08/2023 Last updated : 08/16/2023 # Azure Bastion FAQ No. You don't need to install an agent or any software on your browser or your A See [About VM connections and features](vm-about.md) for supported features. +### <a name="shareable-links-passwords"></a>Is Reset Password available for local users connecting via shareable link? ++No. Some organizations have company policies that require a password reset when a user logs into a local account for the first time. When using shareable links, the user can't change the password, even though a "Reset Password" button may appear. + ### <a name="audio"></a>Is remote audio available for VMs? Yes. See [About VM connections and features](vm-about.md#audio). This may be due to the Private DNS zone for privatelink.azure.com linked to the ## Next steps -For more information, see [What is Azure Bastion](bastion-overview.md). +For more information, see [What is Azure Bastion](bastion-overview.md). |
bastion | Connect Ip Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-ip-address.md | description: Learn how to connect to your virtual machines using a specified pri Previously updated : 06/26/2023 Last updated : 08/23/2023 IP-based connection lets you connect to your on-premises, non-Azure, and Azure v **Limitations** -IP-based connection wonΓÇÖt work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the Internet and force tunneling, or the default route advertisement will result in traffic blackholing. +* IP-based connection wonΓÇÖt work with force tunneling over VPN, or when a default route is advertised over an ExpressRoute circuit. Azure Bastion requires access to the Internet and force tunneling, or the default route advertisement will result in traffic blackholing. ++* Azure Active Directory authentication and custom ports and protocols aren't currently supported when connecting to a VM via native client. ## Prerequisites Before you begin these steps, verify that you have the following environment set ## Connect to VM - native client -You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunnelling. Note that this feature does not support Azure Active Directory authentication or custom port and protocol at the moment. To learn more about configuring native client support, see [Configure Bastion native client support](native-client.md). 
Use the following commands as examples: -- **RDP:** - - ```azurecli - az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" - ``` - - **SSH:** - - ```azurecli - az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" - ``` - - **Tunnel:** - - ```azurecli - az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" - ``` +You can connect to VMs using a specified IP address with native client via SSH, RDP, or tunneling. To learn more about configuring native client support, see [Configure Bastion native client support](native-client.md). ++> [!NOTE] +> This feature does not currently support Azure Active Directory authentication or custom port and protocol. ++Use the following commands as examples: ++**RDP:** ++```azurecli +az network bastion rdp --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" +``` ++**SSH:** ++```azurecli +az network bastion ssh --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --auth-type "ssh-key" --username "<Username>" --ssh-key "<Filepath>" +``` ++**Tunnel:** +```azurecli +az network bastion tunnel --name "<BastionName>" --resource-group "<ResourceGroupName>" --target-ip-address "<VMIPAddress>" --resource-port "<TargetVMPort>" --port "<LocalMachinePort>" +``` ## Next steps |
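As a minimal sketch of the tunnel variant, the command can be parameterized in a script and the resulting local port used with a standard SSH client. All names, the IP address, and the ports below are hypothetical placeholders, not values from this article; actually opening the tunnel assumes the Azure CLI is installed and signed in.

```shell
# All values here are hypothetical placeholders -- substitute your own.
BASTION_NAME="bastion-example"
RESOURCE_GROUP="rg-example"
TARGET_IP="10.0.0.4"     # private IP of the target VM
LOCAL_PORT=2222          # free port on the local machine

# Compose the tunnel command; running it (with az installed and signed in)
# would forward localhost:$LOCAL_PORT to port 22 on the VM.
TUNNEL_CMD="az network bastion tunnel --name \"$BASTION_NAME\" --resource-group \"$RESOURCE_GROUP\" --target-ip-address \"$TARGET_IP\" --resource-port 22 --port $LOCAL_PORT"
echo "$TUNNEL_CMD"

# Once the tunnel is up, any standard SSH client can connect through it:
echo "ssh -p $LOCAL_PORT azureuser@127.0.0.1"
```

Swapping the resource port for 3389 gives the equivalent path for RDP tooling through the same tunnel.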
batch | Batch Automatic Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-automatic-scaling.md | Title: Autoscale compute nodes in an Azure Batch pool description: Enable automatic scaling on an Azure Batch cloud pool to dynamically adjust the number of compute nodes in the pool. Previously updated : 05/26/2023 Last updated : 08/23/2023 pendingTaskSamples = pendingTaskSamplePercent < 70 ? startingNumberOfVMs : avg($ $TargetDedicatedNodes=min(maxNumberofVMs, pendingTaskSamples); $NodeDeallocationOption = taskcompletion; ```+> [!IMPORTANT] +> Currently, the Batch service has limitations with the resolution of pending tasks. When a task is added to the job, it's also added into an internal queue used by the Batch service for scheduling. If the task is deleted before it can be scheduled, the task might persist within the queue, causing it to still be counted in `$PendingTasks`. This deleted task will eventually be cleared from the queue when Batch gets a chance to pull tasks from the queue to schedule with idle nodes in the Batch pool. #### Preempted nodes |
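A pending-task formula like the one excerpted above can also be applied from a script. This is a hedged sketch: the pool ID is a placeholder, the formula is reconstructed from the standard pending-task autoscale sample, and applying it assumes the Azure CLI is signed in to a Batch account. Single quotes keep the Batch `$`-prefixed service variables literal in the shell.

```shell
POOL_ID="mypool"   # placeholder pool ID

# Single quotes prevent the shell from expanding Batch's $-variables.
FORMULA='startingNumberOfVMs = 1;
maxNumberofVMs = 25;
pendingTaskSamplePercent = $PendingTasks.GetSamplePercent(180 * TimeInterval_Second);
pendingTaskSamples = pendingTaskSamplePercent < 70 ? startingNumberOfVMs : avg($PendingTasks.GetSample(180 * TimeInterval_Second));
$TargetDedicatedNodes = min(maxNumberofVMs, pendingTaskSamples);
$NodeDeallocationOption = taskcompletion;'

# With the Azure CLI signed in to a Batch account, the formula could be applied with:
#   az batch pool autoscale enable --pool-id "$POOL_ID" --auto-scale-formula "$FORMULA"
printf '%s\n' "$FORMULA"
```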
batch | Batch Docker Container Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md | You should be familiar with container concepts and how to create a Batch pool an > - Batch Python SDK version 14.0.0 > - Batch Java SDK version 11.0.0 > - Batch Node.js SDK version 11.0.0-> the `containerConfiguration` requires `Type` property to be passed and the supported values are: `ContainerType.DockerCompatible` and `ContainerType.CriCompatible`. ++Currently, the `containerConfiguration` requires the `Type` property to be passed, and the supported values are: `ContainerType.DockerCompatible` and `ContainerType.CriCompatible`. Keep in mind the following limitations: |
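In REST/JSON terms, the same requirement shows up as a `type` field inside `containerConfiguration`. A hypothetical pool spec sketch follows (the pool ID, image reference, VM size, and container image are placeholders, not values from this article):

```shell
# Write a hypothetical pool spec showing where the required "type" field sits.
cat > pool.json <<'EOF'
{
  "id": "container-pool",
  "vmSize": "STANDARD_D2S_V3",
  "virtualMachineConfiguration": {
    "imageReference": {
      "publisher": "microsoft-azure-batch",
      "offer": "ubuntu-server-container",
      "sku": "20-04-lts",
      "version": "latest"
    },
    "nodeAgentSKUId": "batch.node.ubuntu 20.04",
    "containerConfiguration": {
      "type": "dockerCompatible",
      "containerImageNames": ["busybox"]
    }
  },
  "targetDedicatedNodes": 1
}
EOF

# With the Azure CLI signed in to a Batch account, the spec could be applied with:
#   az batch pool create --json-file pool.json
grep '"type"' pool.json
```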
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
batch | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
cdn | Cdn Create A Storage Account With Cdn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-a-storage-account-with-cdn.md | To create a storage account, you must be either the service administrator or a c ## Enable Azure CDN for the storage account -1. On the page for your storage account, select **Blob service** > **Azure CDN** from the left menu. The **Azure CDN** page appears. +1. On the page for your storage account, select **Security + Networking** > **Front Door and CDN** from the left menu. The **Front Door and CDN** page appears. - :::image type="content" source="./media/cdn-create-a-storage-account-with-cdn/cdn-storage-endpoint-configuration.png" alt-text="Screenshot of create a CDN endpoint."::: + :::image type="content" source="./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-endpoint-configuration.png" alt-text="Screenshot of create a CDN endpoint." lightbox="./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-endpoint-configuration.png"::: 1. In the **New endpoint** section, enter the following information: | Setting | Value | | -- | -- |- | **CDN profile** | Select **Create new** and enter your profile name, for example, *cdn-profile-123*. A profile is a collection of endpoints. | - | **Pricing tier** | Select one of the **Standard** options, such as **Microsoft CDN (classic)**. | - | **CDN endpoint name** | Enter your endpoint hostname, such as *cdn-endpoint-123*. This name must be globally unique across Azure because it's to access your cached resources at the URL _<endpoint-name>_.azureedge.net. | - | **Origin hostname** | By default, a new CDN endpoint uses the hostname of your storage account as the origin server. | -+ | **Service type** | **Azure CDN** | + | **Create new/use existing profile** | **Create new** | + | **Profile name** | Enter your profile name, for example, *cdn-profile-123*. A profile is a collection of endpoints. 
| + | **CDN endpoint name** | Enter your endpoint hostname, such as *cdn-endpoint-123*. This name must be globally unique across Azure because it's used to access your cached resources at the URL _<endpoint-name>_.azureedge.net. | + | **Origin hostname** | By default, a new CDN endpoint uses the hostname of your storage account as the origin server. | + | **Pricing tier** | Select one of the options, such as **Microsoft CDN (classic)**. | + 1. Select **Create**. After the endpoint is created, it appears in the endpoint list. - ![Storage new CDN endpoint](./media/cdn-create-a-storage-account-with-cdn/cdn-storage-new-endpoint-list.png) + [ ![Screenshot of a storage new CDN endpoint.](./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-new-endpoint-list.png) ](./media/cdn-create-a-storage-account-with-cdn/azure-cdn-storage-new-endpoint-list.png#lightbox) > [!TIP] > If you want to specify advanced configuration settings for your CDN endpoint, such as [large file download optimization](cdn-optimization-overview.md#large-file-download), you can instead use the [Azure CDN extension](cdn-create-new-endpoint.md) to create a CDN profile and endpoint. |
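The portal steps above have rough Azure CLI equivalents; this is a hedged sketch, not the article's own procedure. All names below are placeholders, and the storage account is assumed to already exist with the CLI signed in.

```shell
# Placeholder names -- substitute your own.
PROFILE="cdn-profile-123"
ENDPOINT="cdn-endpoint-123"
RESOURCE_GROUP="rg-example"
STORAGE_ACCOUNT="mystorageaccount"

# With the Azure CLI installed and signed in, the portal flow maps roughly to:
#   az cdn profile create --name "$PROFILE" --resource-group "$RESOURCE_GROUP" --sku Standard_Microsoft
#   az cdn endpoint create --name "$ENDPOINT" --profile-name "$PROFILE" \
#       --resource-group "$RESOURCE_GROUP" --origin "$STORAGE_ACCOUNT.blob.core.windows.net"

# The cached content would then be reachable at:
echo "https://$ENDPOINT.azureedge.net"
```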
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates ## August 2023 Guest OS ->[!NOTE] -->The August Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the August Guest OS. This list is subject to change. | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-08 | [5029247] | Latest Cumulative Update(LCU) | 6.61 | Aug 8, 2023 | -| Rel 23-08 | [5029250] | Latest Cumulative Update(LCU) | 7.29 | Aug 8, 2023 | -| Rel 23-08 | [5029242] | Latest Cumulative Update(LCU) | 5.85 | Aug 8, 2023 | -| Rel 23-08 | [5028969] | .NET Framework 3.5 Security and Quality Rollup | 2.141 | Aug 8, 2023 | -| Rel 23-08 | [5028963] | .NET Framework 4.7.2 Security and Quality Rollup | 2.141 | Aug 8, 2023 | -| Rel 23-08 | [5028970] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.121 | Aug 8, 2023 | -| Rel 23-08 | [5028962] | .NET Framework 4.7.2 Cumulative Update LKG | 4.121 | Aug 8, 2023 | -| Rel 23-08 | [5028967] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.129 | Aug 8, 2023 | -| Rel 23-08 | [5028961] | .NET Framework 4.7.2 Cumulative Update LKG | 3.129 | Aug 8, 2023 | -| Rel 23-08 | [5028960] | .NET Framework DotNet | 6.61 | Aug 8, 2023 | -| Rel 23-08 | [5028956] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.29 | Aug 8, 2023 | -| Rel 23-08 | [5029296] | Monthly Rollup | 2.141 | Aug 8, 2023 | -| Rel 23-08 | [5029295] | Monthly Rollup | 3.129 | Aug 8, 2023 | -| Rel 23-08 | [5029312] | Monthly Rollup | 4.121 | Aug 8, 2023 | -| Rel 23-08 | [5029369] | Servicing Stack 
Update | 3.129 | Aug 8, 2023 | -| Rel 23-08 | [5029368] | Servicing Stack Update LKG | 4.121 | Aug 8, 2023 | -| Rel 23-08 | [4578013] | OOB Standalone Security Update | 4.121 | Aug 19, 2020 | -| Rel 23-08 | [5023788] | Servicing Stack Update LKG | 5.85 | Mar 14, 2023 | -| Rel 23-08 | [5028264] | Servicing Stack Update LKG | 2.141 | Jul 11, 2023 | -| Rel 23-08 | [4494175] | Microcode | 5.85 | Sep 1, 2020 | -| Rel 23-08 | [4494174] | Microcode | 6.61 | Sep 1, 2020 | -| Rel 23-08 | 5029395 | Servicing Stack Update | 7.29 | | -| Rel 23-08 | 5028316 | Servicing Stack Update | 6.61 | | +| Rel 23-08 | [5029247] | Latest Cumulative Update(LCU) | [6.61] | Aug 8, 2023 | +| Rel 23-08 | [5029250] | Latest Cumulative Update(LCU) | [7.30] | Aug 8, 2023 | +| Rel 23-08 | [5029242] | Latest Cumulative Update(LCU) | [5.85] | Aug 8, 2023 | +| Rel 23-08 | [5028969] | .NET Framework 3.5 Security and Quality Rollup | [2.141] | Aug 8, 2023 | +| Rel 23-08 | [5028963] | .NET Framework 4.7.2 Security and Quality Rollup | [2.141] | Aug 8, 2023 | +| Rel 23-08 | [5028970] | .NET Framework 3.5 Security and Quality Rollup LKG | [4.121] | Aug 8, 2023 | +| Rel 23-08 | [5028962] | .NET Framework 4.7.2 Cumulative Update LKG | [4.121] | Aug 8, 2023 | +| Rel 23-08 | [5028967] | .NET Framework 3.5 Security and Quality Rollup LKG | [3.129] | Aug 8, 2023 | +| Rel 23-08 | [5028961] | .NET Framework 4.7.2 Cumulative Update LKG | [3.129] | Aug 8, 2023 | +| Rel 23-08 | [5028960] | .NET Framework DotNet | [6.61] | Aug 8, 2023 | +| Rel 23-08 | [5028956] | .NET Framework 4.8 Security and Quality Rollup LKG | [7.30] | Aug 8, 2023 | +| Rel 23-08 | [5029296] | Monthly Rollup | [2.141] | Aug 8, 2023 | +| Rel 23-08 | [5029295] | Monthly Rollup | [3.129] | Aug 8, 2023 | +| Rel 23-08 | [5029312] | Monthly Rollup | [4.121] | Aug 8, 2023 | +| Rel 23-08 | [5029369] | Servicing Stack Update | [3.129] | Aug 8, 2023 | +| Rel 23-08 | [5029368] | Servicing Stack Update LKG | [4.121] | Aug 8, 2023 | +| Rel 23-08 | [4578013] | 
OOB Standalone Security Update | [4.121] | Aug 19, 2020 | +| Rel 23-08 | [5023788] | Servicing Stack Update LKG | [5.85] | Mar 14, 2023 | +| Rel 23-08 | [5028264] | Servicing Stack Update LKG | [2.141] | Jul 11, 2023 | +| Rel 23-08 | [4494175] | Microcode | [5.85] | Sep 1, 2020 | +| Rel 23-08 | [4494174] | Microcode | [6.61] | Sep 1, 2020 | +| Rel 23-08 | 5029395 | Servicing Stack Update | [7.30] | | +| Rel 23-08 | 5028316 | Servicing Stack Update | [6.61] | | [5029247]: https://support.microsoft.com/kb/5029247 [5029250]: https://support.microsoft.com/kb/5029250 The following tables show the Microsoft Security Response Center (MSRC) updates [4494174]: https://support.microsoft.com/kb/4494174 [5029395]: https://support.microsoft.com/kb/5029395 [5028316]: https://support.microsoft.com/kb/5028316+[2.141]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.129]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.121]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.85]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.61]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.30]: ./cloud-services-guestos-update-matrix.md#family-7-releases ## July 2023 Guest OS |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **August 21, 2023** +The August Guest OS has released. + ###### **July 27, 2023** The July Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.30 | -| WA-GUEST-OS-7.27_202306-02 | July 8, 2023 | Post 7.29 | +| WA-GUEST-OS-7.30_202308-01 | August 21, 2023 | Post 7.32 | +| WA-GUEST-OS-7.28_202307-01 | July 27, 2023 | Post 7.31 | +|~~WA-GUEST-OS-7.27_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-7.25_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-7.24_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-7.23_202303-01~~| March 28, 2023 | May 19, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-6.61_202308-01 | August 21, 2023 | Post 6.63 | | WA-GUEST-OS-6.60_202307-01 | July 27, 2023 | Post 6.62 |-| WA-GUEST-OS-6.59_202306-02 | July 8, 2023 | Post 6.61 | +|~~WA-GUEST-OS-6.59_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-6.57_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-6.56_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-6.55_202303-01~~| March 28, 2023 | May 19, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-5.85_202308-01 | August 21, 2023 | Post 5.87 | | WA-GUEST-OS-5.84_202307-01 | July 27, 2023 | Post 5.86 | -| WA-GUEST-OS-5.83_202306-02 | July 8, 2023 | Post 5.85 | +|~~WA-GUEST-OS-5.83_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-5.81_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-5.80_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-5.79_202303-01~~| March 28, 2023 | May 19, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-4.121_202308-01 | August 21, 2023 | Post 4.123 | | WA-GUEST-OS-4.120_202307-01 | July 27, 2023 | Post 4.122 |-| WA-GUEST-OS-4.119_202306-02 | July 8, 2023 | Post 4.121 | +|~~WA-GUEST-OS-4.119_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-4.117_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-4.116_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-4.115_202303-01~~| March 28, 2023 | May 19, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-3.129_202308-01 | August 21, 2023 | Post 3.131 | | WA-GUEST-OS-3.128_202307-01 | July 27, 2023 | Post 3.130 |-| WA-GUEST-OS-3.127_202306-02 | July 8, 2023 | Post 3.129 | +|~~WA-GUEST-OS-3.127_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-3.125_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-3.124_202304-02~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-3.122_202303-01~~| March 28, 2023 | May 19, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |+| WA-GUEST-OS-2.141_202308-01 | August 21, 2023 | Post 2.143 | | WA-GUEST-OS-2.140_202307-01 | July 27, 2023 | Post 2.142 |-| WA-GUEST-OS-2.139_202306-02 | July 8, 2023 | Post 2.141 | +|~~WA-GUEST-OS-2.139_202306-02~~| July 8, 2023 | August 21, 2023 | |~~WA-GUEST-OS-2.137_202305-01~~| May 19, 2023 | July 27, 2023 | |~~WA-GUEST-OS-2.136_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-2.135_202303-01~~| March 28, 2023 | May 19, 2023 | |
cloud-shell | Embed Cloud Shell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/embed-cloud-shell.md | Large sized button The location of these image files is subject to change. We recommend that you download the files for use in your applications. -## Customize experience --Set a specific shell experience by augmenting your URL. --| Experience | URL | -| | | -| Most recently used shell | `https://shell.azure.com` | -| Bash | `https://shell.azure.com/bash` | -| PowerShell | `https://shell.azure.com/powershell` | - ## Next steps - [Bash in Cloud Shell quickstart][07] - [PowerShell in Cloud Shell quickstart][06] <!-- updated link references -->-[01]: https://shell.azure.com [06]: quickstart-powershell.md [07]: quickstart.md |
cloud-shell | Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/features.md | Cloud Shell comes with the following Azure command-line tools preinstalled: | Tool | Version | Command | | - | -- | |-| [Azure CLI][08] | 2.45.0 | `az --version` | -| [Azure PowerShell][06] | 9.4.0 | `Get-Module Az -ListAvailable` | +| [Azure CLI][08] | 2.51.0 | `az --version` | +| [Azure PowerShell][06] | 10.2.0 | `Get-Module Az -ListAvailable` | | [AzCopy][04] | 10.15.0 | `azcopy --version` |-| [Azure Functions CLI][01] | 4.0.3971 | `func --version` | +| [Azure Functions CLI][01] | 4.0.5198 | `func --version` | | [Service Fabric CLI][03] | 11.2.0 | `sfctl --version` | | [Batch Shipyard][09] | 3.9.1 | `shipyard --version` | | [blobxfer][10] | 1.11.0 | `blobxfer --version` | Cloud Shell comes with the following languages preinstalled: | Language | Version | Command | | - | - | |-| .NET Core | [6.0.405][16] | `dotnet --version` | -| Go | 1.17.13 | `go version` | -| Java | 11.0.18 | `java --version` | -| Node.js | 16.18.1 | `node --version` | -| PowerShell | [7.3.2][07] | `pwsh -Version` | +| .NET Core | [7.0.400][16] | `dotnet --version` | +| Go | 1.19.11 | `go version` | +| Java | 17.0.8 | `java --version` | +| Node.js | 16.20.1 | `node --version` | +| PowerShell | [7.3.6][07] | `pwsh -Version` | | Python | 3.9.14 | `python --version` |-| Ruby | 3.1.3p185 | `ruby --version` | +| Ruby | 3.1.4p223 | `ruby --version` | You can verify the version of the language using the command listed in the table. 
You can verify the version of the language using the command listed in the table [13]: https://docs.cloudfoundry.org/cf-cli/ [14]: https://docs.d2iq.com/dkp/2.3/azure-quick-start [15]: https://docs.docker.com/desktop/-[16]: https://dotnet.microsoft.com/download/dotnet/6.0 +[16]: https://dotnet.microsoft.com/download/dotnet/7.0 [17]: https://github.com/Azure/CloudShell/issues [18]: https://github.com/microsoft/mssql-scripter/blob/dev/doc/usage_guide.md [19]: https://helm.sh/docs/ |
cloud-shell | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/troubleshooting.md | ms.contributor: jahelmic Last updated 05/03/2023 tags: azure-resource-manager+ Title: Azure Cloud Shell troubleshooting # Troubleshooting & Limitations of Azure Cloud Shell |
communication-services | Azure Communication Services Azure Cognitive Services Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md | ->[!IMPORTANT] ->Functionality described on this document is currently in private preview. Private preview includes access to SDKs and documentation for testing purposes that are not yet available publicly. ->Apply to become an early adopter by filling out the form for [preview access to Azure Communication Services](https://aka.ms/acs-tap-invite). Azure Communication Services Call Automation APIs provide developers the ability to steer and control the Azure Communication Services Telephony, VoIP or WebRTC calls using real-time event triggers to perform actions based on custom business logic specific to their domain. Within the Call Automation APIs developers can use simple AI powered APIs, which can be used to play personalized greeting messages, recognize conversational voice inputs to gather information on contextual questions to drive a more self-service model with customers, use sentiment analysis to improve customer service overall. These content specific APIs are orchestrated through **Azure Cognitive Services** with support for customization of AI models without developers needing to terminate media streams on their services and streaming back to Azure for AI functionality. 
BYO Azure AI services can be easily integrated into any application regardless o ### Build applications that can play and recognize speech -With the ability to connect your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural sounding audio to users. Through the Azure AI services connection, you can also use the Speech-To-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Azure AI services that are bespoke to your domain and region through the ability to choose which languages are spoken and recognized, custom voices and custom models built based on your experience. +With the ability to connect your Azure AI services to Azure Communication Services, you can enable custom play functionality, using [Text-to-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md) and [SSML](../../../../articles/cognitive-services/Speech-Service/speech-synthesis-markup.md) configuration, to play more customized and natural sounding audio to users. Through the Azure AI services connection, you can also use the Speech-To-Text service to incorporate recognition of voice responses that can be converted into actionable tasks through business logic in the application. These functions can be further enhanced through the ability to create custom models within Azure AI services that are bespoke to your domain and region, through the ability to choose which languages are spoken and recognized, custom voices and custom models built based on your experience. 
## Run time flow-[![Run time flow](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox) +[![Screen shot of integration run time flow.](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox) ## Azure portal experience-You can also configure and bind your Communication Services and Azure AI services through the Azure portal. +You can configure and bind your Communication Services and Azure AI services through the Azure portal. -### Add a Managed Identity to the Azure Communication Services Resource +## Prerequisites +- Azure account with an active subscription and access to Azure portal, for details see [Create an account for free](https://azure.microsoft.com/free/). +- Azure Communication Services resource. See [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp). +- An Azure Cognitive Services resource. -1. Navigate to your Azure Communication Services Resource in the Azure portal. -2. Select the Identity tab. -3. Enable system assigned identity. This action begins the creation of the identity; A pop-up notification appears notifying you that the request is being processed. +### Connecting through the Azure portal -[![Enable managed identiy](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox) +1. Open your Azure Communication Services resource and click on the Cognitive Services tab. +2. If system-assigned managed identity isn't enabled, there are two ways to enable it. -<a name='option-1-add-role-from-azure-cognitive-services-in-the-azure-portal'></a> + 2.1. In the Cognitive Services tab, click on "Enable Managed Identity" button. + + [![Screenshot of Enable Managed Identity button.](./media/enabled-identity.png)](./media/enabled-identity.png#lightbox) -### Option 1: Add role from Azure AI services in the Azure portal -1. Navigate to your Azure Cognitive Service resource. -2. Select the "Access control (IAM)" tab. -3. 
Click the "+ Add" button. -4. Select "Add role assignments" from the menu. + or -[![Add role from IAM](./media/add-role.png)](./media/add-role.png#lightbox) + 2.2. Navigate to the identity tab. + + 2.3. Enable system assigned identity. This action begins the creation of the identity; a pop-up notification appears notifying you that the request is being processed. + [![Screenshot of enable managed identity.](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox) -5. Choose the "Cognitive Services User" role to assign, then click "Next". + 2.4. Once the identity is enabled you should see something similar. + [![Screenshot of enabled identity.](./media/identity-saved.png)](./media/identity-saved.png#lightbox) -[![Cognitive Services User](./media/cognitive-service-user.png)](./media/cognitive-service-user.png#lightbox) +3. When managed identity is enabled, the Cognitive Service tab should show a button 'Connect cognitive service' to connect the two services. +[![Screenshot of Connect cognitive services button.](./media/cognitive-services.png)](./media/cog-svc.png#lightbox) -6. For the field "Assign access to" choose the "User, group or service principal". -7. Press "+ Select members" and a side tab opens. -8. Search for your Azure Communication Services resource name in the text box and click it when it shows up, then click "Select". ++4. Click on 'Connect cognitive service', select the Subscription, Resource Group and Resource and click 'Connect' in the context pane that opens up. + [![Screenshot of Subscription, Resource Group and Resource in pane.](./media/choose-options.png)](./media/choose-options.png#lightbox) +5. If the connection is successful, you should see a green banner confirming successful connection. -[![Select ACS resource](./media/select-acs-resource.png)](./media/select-acs-resource.png#lightbox) + [![Screenshot of successful connection.](./media/connected.png)](./media/connected.png#lightbox) -9. 
Click "Review + assign", this assigns the role to the managed identity. --### Option 2: Add role through Azure Communication Services Identity tab --1. Navigate to your Azure Communication Services resource in the Azure portal. -2. Select Identity tab. -3. Click on "Azure role assignments". --[![ACS role assignment](./media/add-role-acs.png)](./media/add-role-acs.png#lightbox) --4. Click the "Add role assignment (Preview)" button, which opens the "Add role assignment (Preview)" tab. -5. Select the "Resource group" for "Scope". -6. Select the "Subscription". -7. Select the "Resource Group" containing the Cognitive Service. -8. Select the "Role" "Cognitive Services User". --[![ACS role information](./media/acs-roles-cognitive-services.png)](./media/acs-roles-cognitive-services.png#lightbox) --10. Click Save. --Your Communication Service has now been linked to your Azure Cognitive Service resource. --<a name='azure-cognitive-services-regions-supported'></a> +6. Now in the Cognitive Service tab you should see your connected services showing up. 
+[![Screenshot of connected cognitive service on main page.](./media/new-entry-created.png)](./media/new-entry-created.png#lightbox) ## Azure AI services regions supported -This integration between Azure Communication Services and Azure AI services is only supported in the following regions at this point in time: +This integration between Azure Communication Services and Azure AI services is only supported in the following regions: - westus - westus2 - westus3 This integration between Azure Communication Services and Azure AI services is o - southcentralus - westcentralus - westeu+- uksouth ## Next Steps-- Learn about [playing audio](../../concepts/call-automation/play-ai-action.md) to callers using Text-to-Speech.-- Learn about [gathering user input](../../concepts/call-automation/recognize-ai-action.md) with Speech-to-Text.+- Learn about [playing audio](../../concepts/call-automation/play-action.md) to callers using Text-to-Speech. +- Learn about [gathering user input](../../concepts/call-automation/recognize-action.md) with Speech-to-Text. |
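The supported-regions list above lends itself to a simple client-side guard before attempting to connect the two resources. A minimal sketch, assuming the region names exactly as listed above (the list may change as the preview expands, and the helper name is illustrative, not part of any SDK):

```python
# Regions where the Communication Services / Azure AI services
# integration is currently available, copied from the list above.
SUPPORTED_REGIONS = {
    "westus", "westus2", "westus3",
    "southcentralus", "westcentralus", "westeu", "uksouth",
}

def is_integration_supported(region: str) -> bool:
    """Return True if the integration is available in the given Azure region."""
    return region.strip().lower() in SUPPORTED_REGIONS

print(is_integration_supported("uksouth"))  # True
print(is_integration_supported("eastus"))   # False
```

Checking this before provisioning avoids a failed 'Connect cognitive service' attempt in an unsupported region.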
communication-services | Call Automation Teams Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation-teams-interop.md | |
communication-services | Incoming Call Notification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/incoming-call-notification.md | -Azure Communication Services Call Automation provides developers the ability to build applications, which can make and receive calls. Azure Communication Services relies on Event Grid subscriptions to deliver each `IncomingCall` event, so setting up your environment to receive these notifications is critical to your application being able to redirect or answer a call. +Azure Communication Services Call Automation enables developers to create applications that can make and receive calls. It leverages Event Grid subscriptions to deliver `IncomingCall` events, making it crucial to configure your environment to receive these notifications for your application to redirect or answer a call effectively. Therefore, understanding the fundamentals of incoming calls is essential for leveraging the full potential of Azure Communication Services Call Automation. ## Calling scenarios -First, we need to define which scenarios can trigger an `IncomingCall` event. The primary concept to remember is that a call to an Azure Communication Services identity or Public Switched Telephone Network (PSTN) number triggers an `IncomingCall` event. The following are examples of these resources: +Before setting up your environment, it's important to understand the scenarios that can trigger an `IncomingCall` event. To trigger an `IncomingCall` event, a call must be made to either an Azure Communication Services identity or a Public Switched Telephone Network (PSTN) number associated with your Azure Communication Services resource. The following are examples of these resources: 1. An Azure Communication Services identity 2. 
A PSTN phone number owned by your Azure Communication Services resource Given these examples, the following scenarios trigger an `IncomingCall` event se | Public PSTN | PSTN number owned by your Azure Communication Services resource | Call, Redirect, Add Participant, Transfer > [!NOTE]-> An important concept to remember is that an Azure Communication Services identity can be a user or application. Although there is no ability to explicitly assign an identity to a user or application in the platform, this can be done by your own application or supporting infrastructure. Please review the [identity concepts guide](../identity-model.md) for more information on this topic. +> It's important to understand that an Azure Communication Services identity can represent either a user or an application. While the platform does not have a built-in feature to explicitly assign an identity to a user or application, your application or supporting infrastructure can accomplish this. To learn more about this topic, refer to the [identity concepts guide](../identity-model.md). ## Register an Event Grid resource provider If you haven't previously used Event Grid in your Azure subscription, you might ## Receiving an incoming call notification from Event Grid -Since Azure Communication Services relies on Event Grid to deliver the `IncomingCall` notification through a subscription, how you choose to handle the notification is up to you. Additionally, since the Call Automation API relies specifically on Webhook callbacks for events, a common Event Grid subscription used would be a 'Webhook'. However, you could choose any one of the available subscription types offered by the service. +In Azure Communication Services, receiving an `IncomingCall` notification is made possible through an Event Grid subscription. As the receiver of the notification, you have the flexibility to choose how to handle it. 
Since the Call Automation API leverages Webhook callbacks for events, it's common to use a 'Webhook' Event Grid subscription. However, the service offers various subscription types, and you have the liberty to choose the most suitable one for your needs. This architecture has the following benefits: - PSTN number assignment and routing logic can exist in your application versus being statically configured online. - As identified in the [calling scenarios](#calling-scenarios) section, your application can be notified even when users make calls between each other. You can then combine this scenario together with the [Call Recording APIs](../voice-video-calling/call-recording.md) to meet compliance needs. -To check out a sample payload for the event and to learn about other calling events published to Event Grid, check out this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall). +For a sample payload of the event and more information on other calling events published to Event Grid, refer to this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall). Here is an example of an Event Grid Webhook subscription where the event type filter is listening only to the `IncomingCall` event. ![Image showing IncomingCall subscription.](./media/subscribe-incoming-call-event-grid.png) -## Call routing in Call Automation or Event Grid +## Call routing options with Call Automation and Event Grid -You can use [advanced filters](../../../event-grid/event-filtering.md) in your Event Grid subscription to subscribe to an `IncomingCall` notification for a specific source/destination phone number or Azure Communication Services identity and sent it to an endpoint such as a Webhook subscription.
That endpoint application can then make a decision to **redirect** the call using the Call Automation SDK to another Azure Communication Services identity or to the PSTN. +In Call Automation and Event Grid, call routing can be tailored to your specific needs. By using [advanced filters](../../../event-grid/event-filtering.md) within your Event Grid subscription, you can subscribe to an `IncomingCall` notification that pertains to a specific source/destination phone number or an Azure Communication Services identity. This notification can then be directed to an endpoint, such as a Webhook subscription. Using the Call Automation SDK, the endpoint application can then make a decision to **redirect** the call to another Azure Communication Services identity or to the PSTN. > [!NOTE]-> In many cases you will want to configure filtering in Event Grid due to the scenarios described above generating an `IncomingCall` event so that your application only receives events it should be responding to. For example, if you want to redirect an inbound PSTN call to an ACS endpoint and you don't use a filter, your Event Grid subscription will receive two `IncomingCall` events; one for the PSTN call and one for the ACS user even though you had not intended to receive the second notification. Failure to handle these scenarios using filters or some other mechanism in your application can cause infinite loops and/or other undesired behavior. +> To ensure that your application receives only the necessary events, it is recommended to configure filtering in Event Grid. This is particularly crucial in scenarios that generate `IncomingCall` events, such as redirecting an inbound PSTN call to an Azure Communication Services endpoint. If a filter isn't used, your Event Grid subscription receives two `IncomingCall` events - one for the PSTN call and one for the Azure Communication Services user - even though you intended to receive only the first notification. 
Neglecting to handle such scenarios using filters or other mechanisms in your application can result in infinite loops and other undesirable behavior. Here is an example of an advanced filter on an Event Grid subscription watching for the `data.to.PhoneNumber.Value` string starting with a PSTN phone number of `+18005551212`. ## Number assignment -Since the `IncomingCall` notification doesn't have a specific destination other than the Event Grid subscription you've created, you're free to associate any particular number to any endpoint in Azure Communication Services. For example, if you acquired a PSTN phone number of `+14255551212` and want to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you can maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent matching the phone number in the **to** field, invoke the `Redirect` API and supply the identity of the user. In other words, you maintain the number assignment within your application and route or answer calls at runtime. +When using the `IncomingCall` notification in Azure Communication Services, you have the freedom to associate any particular number with any endpoint. For example, if you obtained a PSTN phone number of `+14255551212` and wish to assign it to a user with an identity of `375f0e2f-e8db-4449-9bf7-2054b02e42b4` in your application, you should maintain a mapping of that number to the identity. When an `IncomingCall` notification is sent that matches the phone number in the **to** field, you can invoke the `Redirect` API and provide the user's identity. In other words, you can manage the number assignment within your application and route or answer calls at runtime. ## Best Practices-1. Event Grid requires you to prove ownership of your Webhook endpoint before it starts delivering events to that endpoint.
This requirement prevents a malicious user from flooding your endpoint with events. If you're facing issues with receiving events, ensure the webhook configured is verified by handling `SubscriptionValidationEvent`. For more information, see this [guide](../../../event-grid/webhook-event-delivery.md). -2. Upon the receipt of an incoming call event, if your application doesn't respond back with 200Ok to Event Grid in time, Event Grid uses exponential backoff retry to send the again. However, an incoming call only rings for 30 seconds, and acting on a call after that won't work. To avoid retries for expired or stale calls, we recommend setting the retry policy as - Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. These settings can be found under Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md). --3. We recommend you to enable logging for your Event Grid resource to monitor events that failed to deliver. Navigate to the system topic under Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in 'AegDeliveryFailureLogs' table. +1. To ensure that Event Grid delivers events to your Webhook endpoint and prevents malicious users from flooding your endpoint with events, you need to prove ownership of your endpoint. To address any issues with receiving events, confirm that the Webhook you configured is verified by handling `SubscriptionValidationEvent`. For more information, refer to this [guide](../../../event-grid/webhook-event-delivery.md). +2. When an incoming call event is received, if your application fails to respond back with a 200Ok status code to Event Grid within the required time frame, Event Grid utilizes exponential backoff retry to send the event again. However, an incoming call only rings for 30 seconds, and responding to a call after that time won't be effective. 
To prevent retries for expired or stale calls, we recommend setting the retry policy as Max Event Delivery Attempts to 2 and Event Time to Live to 1 minute. You can find these settings under the Additional Features tab of the event subscription. Learn more about retries [here](../../../event-grid/delivery-and-retry.md). +3. We recommend enabling logging for your Event Grid resource to monitor events that fail to deliver. To do this, navigate to the system topic under the Events tab of your Communication resource and enable logging from the Diagnostic settings. Failure logs can be found in the 'AegDeliveryFailureLogs' table. ```sql AegDeliveryFailureLogs ``` |
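The best practices above (the `SubscriptionValidationEvent` handshake, plus the number-to-identity mapping from the Number assignment section) can be sketched as a single event-dispatch function. This is a minimal illustration, not the Call Automation SDK: the payload shapes are simplified from the Event Grid schema, and the identity and validation code values are made up.

```python
# Hypothetical payloads for the two Event Grid event types a webhook
# endpoint must handle: the ownership-validation handshake and IncomingCall.
VALIDATION_EVENT = {
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "data": {"validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"},
}
INCOMING_CALL_EVENT = {
    "eventType": "Microsoft.Communication.IncomingCall",
    "data": {"to": {"phoneNumber": {"value": "+14255551212"}}},
}

# Number-to-identity mapping maintained by your application, as described
# in the Number assignment section; the identity here is illustrative.
NUMBER_ASSIGNMENTS = {"+14255551212": "375f0e2f-e8db-4449-9bf7-2054b02e42b4"}

def handle_event(event: dict) -> dict:
    """Return the webhook response body for a single Event Grid event."""
    if event["eventType"] == "Microsoft.EventGrid.SubscriptionValidationEvent":
        # Echo the validation code back to prove endpoint ownership.
        return {"validationResponse": event["data"]["validationCode"]}
    if event["eventType"] == "Microsoft.Communication.IncomingCall":
        number = event["data"]["to"]["phoneNumber"]["value"]
        target = NUMBER_ASSIGNMENTS.get(number)
        # A real application would call the Call Automation SDK's
        # redirect/answer APIs here instead of returning a dict.
        if target:
            return {"action": "redirect", "target": target}
    return {"action": "ignore"}

print(handle_event(VALIDATION_EVENT))
print(handle_event(INCOMING_CALL_EVENT))
```

Responding quickly from this handler (well within Event Grid's delivery window) is what makes the retry-policy recommendations above effective.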
communication-services | Play Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/play-action.md | description: Conceptual information about playing audio in call using Call Autom Previously updated : 09/06/2022 Last updated : 08/11/2023 # Playing audio in call -The play action provided through the call automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. The play action allows you to provide Azure Communication Services access to your pre-recorded audio files with support for authentication. +The play action provided through the Azure Communication Services Call Automation SDK allows you to play audio prompts to participants in the call. This action can be accessed through the server-side implementation of your application. You can play audio to call participants through one of two methods: +- Providing Azure Communication Services access to prerecorded audio files in WAV format, which ACS can access with support for authentication +- Regular text that can be converted into speech output through the integration with Azure AI services. ++You can use the newly announced integration between [Azure Communication Services and Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) to play personalized responses using Azure [Text-To-Speech](../../../../articles/cognitive-services/Speech-Service/text-to-speech.md). You can use human-like prebuilt neural voices out of the box or create custom neural voices that are unique to your product or brand. For more information on supported voices, languages, and locales, see [Language and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md).
(Supported in public preview) > [!NOTE] > Azure Communication Services currently only supports WAV files formatted as mono channel audio recorded at 16KHz. You can create your own audio files using [Speech synthesis with Audio Content Creation tool](../../../ai-services/Speech-Service/how-to-audio-content-creation.md). -The Play action allows you to provide access to a pre-recorded audio file of WAV format that Azure Communication Services can access with support for authentication. +## Prebuilt Neural Text to Speech voices +Microsoft uses deep neural networks to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis occur simultaneously, resulting in a more fluid and natural sounding output. You can use these neural voices to make interactions with your chatbots and voice assistants more natural and engaging. There are over 100 prebuilt voices to choose from. Learn more about [Azure Text-to-Speech voices](../../../../articles/cognitive-services/Speech-Service/language-support.md). ## Common use cases -The play action can be used in many ways, below are some examples of how developers may wish to use the play action in their applications. +The play action can be used in many ways, some examples of how developers may wish to use the play action in their applications are listed here. ### Announcements Your application might want to play some sort of announcement when a participant joins or leaves the call, to notify other users. In scenarios with IVRs and virtual assistants, you can use your application or b The play action can also be used to play hold music for callers. This action can be set up in a loop so that the music keeps playing until an agent is available to assist the caller. 
### Playing compliance messages-As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call will be recorded for quality purposes". +As part of compliance requirements in various industries, vendors are expected to play legal or compliance messages to callers, for example, "This call is recorded for quality purposes." ++## Sample architecture for playing audio in call using Text-To-Speech (Public preview) ++![Diagram showing sample architecture for Play with AI.](./media/play-ai.png) ## Sample architecture for playing audio in a call ## Next Steps - Check out our how-to guide to learn [how-to play custom voice prompts](../../how-tos/call-automation/play-action.md) to users. - Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.+- Learn about [gathering customer input](./recognize-action.md). |
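The note above says ACS play currently supports only WAV files with mono audio at 16 kHz. That constraint can be checked before uploading a prompt; a minimal sketch using only the Python standard library (the helper name is illustrative, not part of any SDK):

```python
import io
import wave

def is_acs_compatible_wav(data: bytes) -> bool:
    """Return True if the WAV payload is mono audio sampled at 16 kHz."""
    with wave.open(io.BytesIO(data), "rb") as wf:
        return wf.getnchannels() == 1 and wf.getframerate() == 16000

# Build a one-second silent clip: mono, 16 kHz, 16-bit samples.
buf = io.BytesIO()
with wave.open(buf, "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)
    wf.setframerate(16000)
    wf.writeframes(b"\x00\x00" * 16000)

print(is_acs_compatible_wav(buf.getvalue()))  # True
```

Running a check like this in your upload pipeline catches incompatible prompt files before a call ever plays them.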
communication-services | Recognize Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md | description: Conceptual information about using Recognize action to gather user Previously updated : 09/16/2022 Last updated : 08/09/2023 # Gathering user input -With the Recognize action developers will be able to enhance their IVR or contact center applications to gather user input. One of the most common scenarios of recognition is to play a message and request user input. This input is received in the form of DTMF (input via the digits on their calling device) which then allows the application to navigate the user to the next action. +With the release of ACS Call Automation Recognize action, developers can now enhance their IVR or contact center applications to recognize user input. One of the most common scenarios of recognition is playing a message for the user, which prompts them to provide a response that then gets recognized by the application; once recognized, the application carries out a corresponding action. Input from callers can be received in several ways, which include DTMF (user input via the digits on their calling device), speech, or a combination of both DTMF and speech. ++**Voice recognition with speech-to-text (Public Preview)** ++[Azure Communication Services integration with Azure AI services](./azure-communication-services-azure-cognitive-services-integration.md) allows you, through the Recognize action, to analyze audio in real time to transcribe spoken word into text. Out of the box, Microsoft utilizes a Universal Language Model as a base model that is trained with Microsoft-owned data and reflects commonly used spoken language. This model is pretrained with dialects and phonetics representing various common domains.
For more information about supported languages, see [Languages and voice support for the Speech service](../../../../articles/cognitive-services/Speech-Service/language-support.md). + **DTMF**+ Dual-tone multifrequency (DTMF) recognition is the process of understanding tones/sounds generated by a telephone when a number is pressed. Equipment at the receiving end listens for the specific tones and converts them into commands. These commands generally signal user intent when navigating a menu in an IVR scenario or, in some cases, can be used to capture important information that the user needs to provide via their phone's keypad. **DTMF events and their associated tones** ## Common use cases -The recognize action can be used for many reasons, below are a few examples of how developers can use the recognize action in their application. +The recognize action can be used for many reasons; here are a few examples of how developers can use the recognize action in their application.
+## Sample architecture for gathering user input in a call with voice recognition ++[ ![Diagram showing sample architecture for Recognize AI Action.](./media/recognize-ai-flow.png) ](./media/recognize-ai-flow.png#lightbox) ## Sample architecture for gathering user input in a call |
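The DTMF description above (each key as a pair of tones that receiving equipment converts back into a command) can be illustrated with the standard keypad frequency grid. A small sketch, using the well-known DTMF row/column frequencies rather than anything ACS-specific:

```python
# Standard DTMF keypad: each key is the combination of one low-frequency
# (row) tone and one high-frequency (column) tone, in Hz.
LOW = (697, 770, 852, 941)       # row tones
HIGH = (1209, 1336, 1477, 1633)  # column tones
KEYS = ("123A", "456B", "789C", "*0#D")

def dtmf_key(low_hz: int, high_hz: int) -> str:
    """Map a (low, high) tone pair back to the pressed keypad key."""
    return KEYS[LOW.index(low_hz)][HIGH.index(high_hz)]

print(dtmf_key(697, 1209))  # "1"
print(dtmf_key(941, 1336))  # "0"
```

This is the decoding step the receiving equipment performs; the Recognize action surfaces the resulting digits (and, with speech-to-text, transcribed speech) to your application.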
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | Last updated 12/01/2021 + # Calling capabilities supported for Teams users in Calling SDK |
communication-services | Phone Number Management For Argentina | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-argentina.md | + + Title: Phone Number Management for Argentina ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Argentina. +++++ Last updated : 03/30/2023++++++# Phone number management for Argentina +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Argentina. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Argentina phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Australia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-australia.md | + + Title: Phone Number Management for Australia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Australia. +++++ Last updated : 03/30/2023++++++# Phone number management for Australia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Australia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Alphanumeric Sender ID\** | Public Preview | - | - | - | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Australia phone numbers are available +| Country/Region | +| :- | +| Australia | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Austria | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-austria.md | + + Title: Phone Number Management for Austria ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Austria. +++++ Last updated : 03/30/2023++++++# Phone number management for Austria +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Austria. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +| Alphanumeric Sender ID\** | Public Preview | - | - | - | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Austria phone numbers are available +| Country/Region | +| :- | +|Austria| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Belgium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-belgium.md | + + Title: Phone Number Management for Belgium ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Belgium. +++++ Last updated : 03/30/2023++++++# Phone number management for Belgium +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Belgium. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Belgium phone numbers are available +| Country/Region | +| :- | +|Belgium| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Brazil | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-brazil.md | + + Title: Phone Number Management for Brazil ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Brazil. +++++ Last updated : 03/30/2023++++++# Phone number management for Brazil +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Brazil. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Brazil phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Canada | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-canada.md | + + Title: Phone Number Management for Canada ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Canada. +++++ Last updated : 03/30/2023++++++# Phone number management for Canada +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Canada. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |General Availability | General Availability | General Availability | General Availability\* | +| Local | - | - | General Availability | General Availability\* | +| Alphanumeric Sender ID\** | Public Preview | - | - | - | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | +| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Canada phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Chile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-chile.md | + + Title: Phone Number Management for Chile ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Chile. +++++ Last updated : 03/30/2023++++++# Phone number management for Chile +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Chile. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Chile phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For China | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-china.md | + + Title: Phone Number Management for China ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in China. +++++ Last updated : 03/30/2023++++++# Phone number management for China +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in China. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where China phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Colombia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-colombia.md | + + Title: Phone Number Management for Colombia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Colombia. +++++ Last updated : 03/30/2023++++++# Phone number management for Colombia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Colombia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Colombia phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Denmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-denmark.md | + + Title: Phone Number Management for Denmark ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Denmark. +++++ Last updated : 03/30/2023++++++# Phone number management for Denmark +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Denmark. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +| Alphanumeric Sender ID\** | Public Preview | - | - | - | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Denmark phone numbers are available +| Country/region | +| :- | +|Denmark| +|Ireland| +|Italy| +|Sweden| +|United States| ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Estonia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-estonia.md | + + Title: Phone Number Management for Estonia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Estonia. +++++ Last updated : 03/30/2023++++++# Phone number management for Estonia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Estonia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Alphanumeric Sender ID\* | Public Preview | - | - | - | ++\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Alphanumeric Sender ID is available +| Country/Region | +| :- | +|Australia| +|Austria| +|Denmark| +|Estonia| +|France| +|Germany| +|Italy| +|Latvia| +|Lithuania| +|Netherlands| +|Poland| +|Portugal| +|Spain| +|Sweden| +|Switzerland| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Finland | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-finland.md | + + Title: Phone Number Management for Finland ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Finland. +++++ Last updated : 03/30/2023++++++# Phone number management for Finland +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Finland. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Finland phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For France | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-france.md | + + Title: Phone Number Management for France ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in France. +++++ Last updated : 03/30/2023++++++# Phone number management for France +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in France. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where France phone numbers are available +| Country/Region | +| :- | +|France| +|Italy| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Germany | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-germany.md | + + Title: Phone Number Management for Germany ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Germany. +++++ Last updated : 03/30/2023++++++# Phone number management for Germany +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Germany. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Germany phone numbers are available +| Country/Region | +| :- | +|Germany| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Hong Kong | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-hong-kong.md | + + Title: Phone Number Management for Hong Kong ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Hong Kong. +++++ Last updated : 03/30/2023++++++# Phone number management for Hong Kong +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Hong Kong. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Hong Kong phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Indonesia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-indonesia.md | + + Title: Phone Number Management for Indonesia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Indonesia. +++++ Last updated : 03/30/2023++++++# Phone number management for Indonesia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Indonesia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Indonesia phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Ireland | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-ireland.md | + + Title: Phone Number Management for Ireland ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Ireland. +++++ Last updated : 03/30/2023++++++# Phone number management for Ireland +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Ireland. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | General Availability | General Availability\* | +| Local | - | - | General Availability | General Availability\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Ireland phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Israel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-israel.md | + + Title: Phone Number Management for Israel ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Israel. +++++ Last updated : 03/30/2023++++++# Phone number management for Israel +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Israel. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Israel phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Italy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-italy.md | + + Title: Phone Number Management for Italy ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Italy. +++++ Last updated : 03/30/2023++++++# Phone number management for Italy +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Italy. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free*** |- | - | General Availability | General Availability\* | +| Local*** | - | - | General Availability | General Availability\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++\*** Phone numbers from Italy can only be purchased for own use. Reselling or suballocating to another party is not allowed. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. 
++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to other parties is not allowed. Due to this, purchases for CSP and LSP customers are not allowed. ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Italy phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Japan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-japan.md | + + Title: Phone Number Management for Japan ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Japan. +++++ Last updated : 03/30/2023++++++# Phone number management for Japan +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Japan. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| National | - | - | Public Preview | Public Preview\* | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Japan phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| ++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Latvia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-latvia.md | + + Title: Phone Number Management for Latvia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Latvia. +++++ Last updated : 03/30/2023++++++# Phone number management for Latvia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Latvia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Alphanumeric Sender ID\* | Public Preview | - | - | - | +++\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | ++\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Alphanumeric Sender ID is available +| Country/Region | +| :- | +|Australia| +|Austria| +|Denmark| +|Estonia| +|France| +|Germany| +|Italy| +|Latvia| +|Lithuania| +|Netherlands| +|Poland| +|Portugal| +|Spain| +|Sweden| +|Switzerland| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Lithuania | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-lithuania.md | + + Title: Phone Number Management for Lithuania ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Lithuania. +++++ Last updated : 03/30/2023++++++# Phone number management for Lithuania +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Lithuania. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Alphanumeric Sender ID\* | Public Preview | - | - | - | +++\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | ++\* Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Alphanumeric Sender ID is available +| Country/Region | +| :- | +|Australia| +|Austria| +|Denmark| +|Estonia| +|France| +|Germany| +|Italy| +|Latvia| +|Lithuania| +|Netherlands| +|Poland| +|Portugal| +|Spain| +|Sweden| +|Switzerland| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Luxembourg | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-luxembourg.md | + + Title: Phone Number Management for Luxembourg ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Luxembourg. +++++ Last updated : 03/30/2023++++++# Phone number management for Luxembourg +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Luxembourg. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. 
++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +++## Azure subscription billing locations where Luxembourg phone numbers are available +| Country/Region | +| :- | +|Luxembourg| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Malaysia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-malaysia.md | + + Title: Phone Number Management for Malaysia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Malaysia. +++++ Last updated : 03/30/2023++++++# Phone number management for Malaysia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Malaysia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Malaysia phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Mexico | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-mexico.md | + + Title: Phone Number Management for Mexico ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Mexico. +++++ Last updated : 03/30/2023++++++# Phone number management for Mexico +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Mexico. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Mexico phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Netherlands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-netherlands.md | + + Title: Phone Number Management for Netherlands ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Netherlands. +++++ Last updated : 03/30/2023++++++# Phone number management for Netherlands +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Netherlands. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Netherlands phone numbers are available +| Country/Region | +| :- | +|Netherlands| +|United States*| ++\*Alphanumeric Sender ID only ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For New Zealand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-new-zealand.md | + + Title: Phone Number Management for New Zealand ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in New Zealand. +++++ Last updated : 03/30/2023++++++# Phone number management for New Zealand +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in New Zealand. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where New Zealand phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Norway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-norway.md | + + Title: Phone Number Management for Norway ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Norway. +++++ Last updated : 03/30/2023++++++# Phone number management for Norway +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Norway. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | General Availability | General Availability\* | +| Local | - | - | General Availability | General Availability\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Norway phone numbers are available +| Country/Region | +| :- | +|Norway| +|France| +|Sweden| ++++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Philippines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-philippines.md | + + Title: Phone Number Management for Philippines ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Philippines. +++++ Last updated : 03/30/2023++++++# Phone number management for Philippines +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Philippines. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Philippines phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Poland | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-poland.md | + + Title: Phone Number Management for Poland ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Poland. +++++ Last updated : 03/30/2023++++++# Phone number management for Poland +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Poland. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +| Alphanumeric Sender ID\** | Public Preview | - | - | - | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Poland phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Portugal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-portugal.md | + + Title: Phone Number Management for Portugal ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Portugal. +++++ Last updated : 03/30/2023++++++# Phone number management for Portugal +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Portugal. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Portugal phone numbers are available +| Country/Region | +| :- | +|Portugal| +|United States*| ++\*Alphanumeric Sender ID only +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Saudi Arabia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-saudi-arabia.md | + + Title: Phone Number Management for Saudi Arabia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Saudi Arabia. +++++ Last updated : 03/30/2023++++++# Phone number management for Saudi Arabia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Saudi Arabia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. +More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Saudi Arabia phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Singapore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-singapore.md | + + Title: Phone Number Management for Singapore ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Singapore. +++++ Last updated : 03/30/2023++++++# Phone number management for Singapore +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Singapore. +++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Singapore phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Slovakia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-slovakia.md | + + Title: Phone Number Management for Slovakia ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Slovakia. +++++ Last updated : 03/30/2023++++++# Phone number management for Slovakia +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Slovakia. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. 
++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +++## Azure subscription billing locations where Slovakia phone numbers are available +| Country/Region | +| :- | +|Slovakia| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For South Africa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-south-africa.md | + + Title: Phone Number Management for South Africa ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in South Africa. +++++ Last updated : 03/30/2023++++++# Phone number management for South Africa +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in South Africa. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where South Africa phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For South Korea | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-south-korea.md | + + Title: Phone Number Management for South Korea ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in South Korea. +++++ Last updated : 03/30/2023++++++# Phone number management for South Korea +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in South Korea. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where South Korea phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Spain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-spain.md | + + Title: Phone Number Management for Spain ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Spain. +++++ Last updated : 03/30/2023++++++# Phone number management for Spain +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Spain. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Spain phone numbers are available +| Country/Region | +| :- | +|Spain| +|United States*| ++\* Alphanumeric Sender ID only ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Sweden | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-sweden.md | + + Title: Phone Number Management for Sweden ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Sweden. +++++ Last updated : 03/30/2023++++++# Phone number management for Sweden +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Sweden. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Sweden phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| ++++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Switzerland | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-switzerland.md | + + Title: Phone Number Management for Switzerland ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Switzerland. +++++ Last updated : 03/30/2023++++++# Phone number management for Switzerland +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Switzerland. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |- | - | Public Preview | Public Preview\* | +| Local | - | - | Public Preview | Public Preview\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. 
Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where Switzerland phone numbers are available +| Country/Region | +| :- | +|Switzerland| +|United States*| ++\* Alphanumeric Sender ID only +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Taiwan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-taiwan.md | + + Title: Phone Number Management for Taiwan ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Taiwan. +++++ Last updated : 03/30/2023++++++# Phone number management for Taiwan +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Taiwan. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Taiwan phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions ++++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For Thailand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-thailand.md | + + Title: Phone Number Management for Thailand ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in Thailand. +++++ Last updated : 03/30/2023++++++# Phone number management for Thailand +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in Thailand. +++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where Thailand phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For United Arab Emirates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-arab-emirates.md | + + Title: Phone Number Management for United Arab Emirates ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United Arab Emirates. +++++ Last updated : 03/30/2023++++++# Phone number management for United Arab Emirates +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in United Arab Emirates. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | - | Public Preview\* | +++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. +++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where United Arab Emirates phone numbers are available +| Country/Region | +| :- | +|Australia| +|Canada| +|France| +|Germany| +|Italy| +|Japan| +|Spain| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For United Kingdom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-kingdom.md | + + Title: Phone Number Management for United Kingdom ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United Kingdom. +++++ Last updated : 03/30/2023++++++# Phone number management for United Kingdom +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in United Kingdom. +++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free | - | - | General Availability | General Availability\* | +| Local | - | - | General Availability | General Availability\* | +|Alphanumeric Sender ID\**|Public Preview|-|-|-| ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. 
++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic/National) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement, Pay-As-You-Go | +| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. +++## Azure subscription billing locations where United Kingdom phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| +++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Phone Number Management For United States | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-states.md | + + Title: Phone Number Management for United States ++description: Learn about subscription Eligibility and Number Capabilities for PSTN and SMS Numbers in United States. +++++ Last updated : 03/30/2023++++++# Phone number management for United States +Use the below tables to find all the relevant information on number availability, eligibility and restrictions for phone numbers in United States. ++## Number types and capabilities availability ++| Number Type | Send SMS | Receive SMS | Make Calls | Receive Calls | +| :- | :- | :- | :- | : | +| Toll-Free |General Availability | General Availability | General Availability | General Availability\* | +| Local | - | - | General Availability | General Availability\* | ++\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. ++## Subscription eligibility ++To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired by Azure free credits. Also, due to regulatory reasons phone number availability is dependent on your Azure subscription billing location. ++More details on eligible subscription types are as follows: ++| Number Type | Eligible Azure Agreement Type | +| :- | :-- | +| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | +| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go | ++\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Reach out to acstns@microsoft.com for assistance with your application. 
+++## Azure subscription billing locations where United States phone numbers are available +| Country/Region | +| :- | +|Canada| +|Denmark| +|Ireland| +|Italy| +|Puerto Rico| +|Sweden| +|United Kingdom| +|United States| ++## Find information about other countries/regions +++## Next steps ++For more information about Azure Communication Services' telephony options, see the following pages: ++- [Learn more about Telephony](../telephony/telephony-concept.md) +- Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Sub Eligibility Number Capability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md | -Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them. +Numbers can be purchased on eligible Azure subscriptions and in geographies where Communication Services is legally eligible to provide them. The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements. +++**Use the drop-down to select the country/region where you're getting numbers. You'll find information about availability, restrictions, and other related information on the country-specific page.** +> [!div class="op_single_selector"] +> +> - [Australia](../numbers/phone-number-management-for-australia.md) +> - [Austria](../numbers/phone-number-management-for-austria.md) +> - [Belgium](../numbers/phone-number-management-for-belgium.md) +> - [Canada](../numbers/phone-number-management-for-canada.md) +> - [China](../numbers/phone-number-management-for-china.md) +> - [Denmark](../numbers/phone-number-management-for-denmark.md) +> - [Estonia](../numbers/phone-number-management-for-estonia.md) +> - [Finland](../numbers/phone-number-management-for-finland.md) +> - [France](../numbers/phone-number-management-for-france.md) +> - [Germany](../numbers/phone-number-management-for-germany.md) +> - [Hong Kong](../numbers/phone-number-management-for-hong-kong.md) +> - [Ireland](../numbers/phone-number-management-for-ireland.md) +> - [Israel](../numbers/phone-number-management-for-israel.md) +> - [Italy](../numbers/phone-number-management-for-italy.md) +> - [Latvia](../numbers/phone-number-management-for-latvia.md) +> - 
[Lithuania](../numbers/phone-number-management-for-lithuania.md) +> - [Luxembourg](../numbers/phone-number-management-for-luxembourg.md) +> - [Malaysia](../numbers/phone-number-management-for-malaysia.md) +> - [Netherlands](../numbers/phone-number-management-for-netherlands.md) +> - [New Zealand](../numbers/phone-number-management-for-new-zealand.md) +> - [Norway](../numbers/phone-number-management-for-norway.md) +> - [Philippines](../numbers/phone-number-management-for-philippines.md) +> - [Poland](../numbers/phone-number-management-for-poland.md) +> - [Portugal](../numbers/phone-number-management-for-portugal.md) +> - [Saudi Arabia](../numbers/phone-number-management-for-saudi-arabia.md) +> - [Singapore](../numbers/phone-number-management-for-singapore.md) +> - [Slovakia](../numbers/phone-number-management-for-slovakia.md) +> - [Spain](../numbers/phone-number-management-for-spain.md) +> - [Sweden](../numbers/phone-number-management-for-sweden.md) +> - [Switzerland](../numbers/phone-number-management-for-switzerland.md) +> - [Taiwan](../numbers/phone-number-management-for-taiwan.md) +> - [Thailand](../numbers/phone-number-management-for-thailand.md) +> - [United Kingdom](../numbers/phone-number-management-for-united-kingdom.md) +> - [United States](../numbers/phone-number-management-for-united-states.md) -## Subscription eligibility --To acquire a phone number, you need to be on a paid Azure subscription. Phone numbers can't be acquired on trial accounts or by Azure free credits. 
--More details on eligible subscription types are as follows: --| Number Type | Eligible Azure Agreement Type | -| :- | :-- | -| Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement*, Pay-As-You-Go | -| Short-Codes | Modern Customer Agreement (Field Led), Enterprise Agreement**, Pay-As-You-Go | -| Alphanumeric Sender ID | Modern Customer Agreement (Field Led and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement**, Pay-As-You-Go | --\* In some countries/regions, number purchases are only allowed for own use. Reselling or suballocating to other parties is not allowed. Due to this, purchases by CSP and LSP customers are not allowed. --\** Applications from all other subscription types are reviewed and approved on a case-by-case basis. Create a support ticket or reach out to acstns@microsoft.com for assistance with your application. --## Number capabilities and availability --The capabilities and numbers that are available to you depend on the country/region that you're operating within, your use case, and the phone number type that you've selected. These capabilities vary by country/region due to regulatory requirements. --The following tables summarize current availability: --## Customers with Australia Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Australia, Germany, Netherlands, United Kingdom, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - | --\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. 
--## Customers with Austria Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Austria | Toll-Free** | - | - | Public Preview | Public Preview\* | -| Austria | Local** | - | - | Public Preview | Public Preview\* | -| Austria, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Phone numbers in Austria can only be purchased for own use. Reselling or suballocating to another party is not allowed. --\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Belgium Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Belgium | Toll-Free | - | - | Public Preview | Public Preview\* | -| Belgium | Local | - | - | Public Preview | Public Preview\* | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. 
--## Customers with Canada Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| UK | Toll-Free | - | - | General Availability | General Availability\* | -| UK | Local | - | - | General Availability | General Availability\* | -| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. 
--## Customers with Denmark Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* | -| Denmark | Local | - | - | Public Preview | Public Preview\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| UK | Toll-Free | - | - | General Availability | General Availability\* | -| UK | Local | - | - | General Availability | General Availability\* | -| Italy | Toll-Free** | - | - | General Availability | General Availability\* | -| Italy | Local** | - | - | General Availability | General Availability\* | -| Sweden | Toll-Free | - | - | General Availability | General Availability\* | -| Sweden | Local | - | - | General Availability | General Availability\* | -| Ireland | Toll-Free | - | - | General Availability | General Availability\* | -| Ireland | Local | - | - | General Availability | General Availability\* | -| Denmark, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Estonia Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Estonia, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia | Alphanumeric Sender ID \* | Public Preview | - | - | - | --\* Please refer to [SMS Concepts page](../sms/concepts.md) for
supported destinations for this service. --## Customers with France Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| France | Local** | - | - | Public Preview | Public Preview\* | -| France | Toll-Free** | - | - | Public Preview | Public Preview\* | -| Norway | Local** | - | - | Public Preview | Public Preview\* | -| Norway | Toll-Free | - | - | Public Preview | Public Preview\* | -| France, Germany, Netherlands, United Kingdom, Australia, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Phone numbers in France can only be purchased for own use. Reselling or suballocating to another party is not allowed. --\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Germany Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Germany | Local | - | - | Public Preview | Public Preview\* | -| Germany | Toll-Free | - | - | Public Preview | Public Preview\* | -| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Alphanumeric sender ID in Netherlands can only be purchased for own use. Reselling or suballocating to another party is not allowed. Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. 
--## Customers with Ireland Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Ireland | Toll-Free | - | - | General Availability | General Availability\* | -| Ireland | Local | - | - | General Availability | General Availability\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| UK | Toll-Free | - | - | General Availability | General Availability\* | -| UK | Local | - | - | General Availability | General Availability\* | -| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* | -| Denmark | Local | - | - | Public Preview | Public Preview\* | -| Italy | Toll-Free** | - | - | General Availability | General Availability\* | -| Italy | Local** | - | - | General Availability | General Availability\* | -| Sweden | Toll-Free | - | - | General Availability | General Availability\* | -| Sweden | Local | - | - | General Availability | General Availability\* | -| Ireland, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. 
--## Customers with Italy Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| : | :-- | : | : | :- | : | -| Italy | Toll-Free** | - | - | General Availability | General Availability\* | -| Italy | Local** | - | - | General Availability | General Availability\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| UK | Toll-Free | - | - | General Availability | General Availability\* | -| UK | Local | - | - | General Availability | General Availability\* | -| Sweden | Toll-Free | - | - | General Availability | General Availability\* | -| Sweden | Local | - | - | General Availability | General Availability\* | -| Ireland | Toll-Free | - | - | General Availability | General Availability\* | -| Ireland | Local | - | - | General Availability | General Availability\* | -| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* | -| Denmark | Local | - | - | Public Preview | Public Preview\* | -| France | Local** | - | - | Public Preview | Public Preview\* | -| France | Toll-Free** | - | - | Public Preview | Public Preview\* | -| Italy, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Phone numbers from Italy and France can only be purchased for own use. Reselling or suballocating to another party is not allowed. 
--\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Latvia Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Latvia, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - | --\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Lithuania Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Lithuania, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - | --\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Luxembourg Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Luxembourg | Toll-Free | - | - | Public Preview | Public Preview\* | -| Luxembourg | Local | - | - | Public Preview | Public Preview\* | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. 
--## Customers with Netherlands Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Netherlands | Toll-Free | - | - | Public Preview | Public Preview\* | -| Netherlands | Local | - | - | Public Preview | Public Preview\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| Netherlands, Germany, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Alphanumeric sender ID in Netherlands can only be purchased for own use. Reselling or suballocating to another party is not allowed. Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Norway Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Norway | Local** | - | - | Public Preview | Public Preview\* | -| Norway | Toll-Free | - | - | Public Preview | Public Preview\* | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Phone numbers in Norway can only be purchased for own use. Reselling or suballocating to another party is not allowed. 
--## Customers with Poland Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Poland, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \* | Public Preview | - | - | - | --\* Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Portugal Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Portugal | Toll-Free** | - | - | Public Preview | Public Preview\* | -| Portugal | Local** | - | - | Public Preview | Public Preview\* | -| Portugal, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \*** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Phone numbers in Portugal can only be purchased for own use. Reselling or suballocating to another party is not allowed. --\*** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Slovakia Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Slovakia | Local | - | - | Public Preview | Public Preview\* | -| Slovakia | Toll-Free | - | - | Public Preview | Public Preview\* | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. 
--## Customers with Spain Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Spain | Toll-Free | - | - | Public Preview | Public Preview\* | -| Spain | Local | - | - | Public Preview | Public Preview\* | -| Spain, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Sweden Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Sweden | Toll-Free | - | - | General Availability | General Availability\* | -| Sweden | Local | - | - | General Availability | General Availability\* | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| Ireland | Toll-Free | - | - | General Availability | General Availability\* | -| Ireland | Local | - | - | General Availability | General Availability\* | -| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* | -| Denmark | Local | - | - | Public Preview | Public Preview\* | -| Italy | Toll-Free** | - | - | General Availability | General Availability\* | -| Italy | Local** | - | - | General Availability | General Availability\* | -| Norway | Local** | - | - | Public Preview | Public Preview\* | -| Norway | Toll-Free 
| - | - | Public Preview | Public Preview\* | -| Sweden, Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. --## Customers with Switzerland Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :-- | :- | :- | :- | : | -| Switzerland | Toll-Free | - | - | Public Preview | Public Preview\* | -| Switzerland | Local | - | - | Public Preview | Public Preview\* | -| Switzerland, Germany, Netherlands, United Kingdom, Australia, France, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. 
--## Customers with United Kingdom Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :-- | :- | :- | :- | : | : | -| UK | Toll-Free | - | - | General Availability | General Availability\* | -| UK | Local | - | - | General Availability | General Availability\* | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| United Kingdom, Germany, Netherlands, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID \** | Public Preview | - | - | - | ---\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. 
--## Customers with United States Azure billing addresses --| Number | Type | Send SMS | Receive SMS | Make Calls | Receive Calls | -| :- | :- | :- | :- | :- | : | -| USA & Puerto Rico | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| USA & Puerto Rico | Local | - | - | General Availability | General Availability\* | -| USA | Short-Codes\** | General Availability | General Availability | - | - | -| UK | Toll-Free | - | - | General Availability | General Availability\* | -| UK | Local | - | - | General Availability | General Availability\* | -| Canada | Toll-Free | General Availability | General Availability | General Availability | General Availability\* | -| Canada | Local | - | - | General Availability | General Availability\* | -| Denmark | Toll-Free | - | - | Public Preview | Public Preview\* | -| Denmark | Local | - | - | Public Preview | Public Preview\* | -| Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia | Alphanumeric Sender ID\** | Public Preview | - | - | - | --\* Please refer to [Inbound calling capabilities page](../telephony/inbound-calling-capabilities.md) for details. --\** Please refer to [SMS Concepts page](../sms/concepts.md) for supported destinations for this service. ## Next steps -For more information about Azure Communication Services' telephony options please see the following pages: +For more information about Azure Communication Services' telephony options, see the following pages: - [Learn more about Telephony](../telephony/telephony-concept.md) - Get a Telephony capable [phone number](../../quickstarts/telephony/get-phone-number.md) |
communication-services | Pstn Pricing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/pstn-pricing.md | All prices shown below are in USD. \* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv) +## Australia telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 16.00/mo | ++### Usage charges +|Number type |To make calls* |To receive calls| +|-||-| +|Geographic |Starting at USD 0.0240/min |USD 0.0100/min | +|Toll-free |Starting at USD 0.0240/min |USD 0.1750/min | ++\* For destination-specific pricing for making outbound calls, refer to details [here](https://github.com/Azure/Communication/blob/master/pricing/communication-services-pstn-rates.csv) ++## China telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 54.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |USD 0.3168/min | ++## Finland telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 40.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |Starting at USD 0.1888/min | ++## Hong Kong telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 25.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |USD 0.0672/min | ++## Israel telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 15.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |USD 0.1344/min | ++## New Zealand telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 
40.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |Starting at USD 0.0666/min | ++## Poland telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 22.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |Starting at USD 0.1125/min | ++## Singapore telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 22.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |USD 0.0650/min | ++## Taiwan telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 5.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |USD 0.2718/min | ++## Thailand telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 25.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |USD 0.2377/min | ++## Philippines telephony offers ++### Phone number leasing charges +|Number type |Monthly fee | +|--|--| +|Toll-Free |USD 25.00/mo | ++### Usage charges +|Number type |To make calls |To receive calls | +|-||--| +|Toll-free |N/A |Starting at USD 0.3345/min | +++ *** Note: Pricing for all countries/regions is subject to change as pricing is market-based and depends on third-party suppliers of telephony services. Additionally, pricing may include requisite taxes and fees. |
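As a worked example of how the leasing and usage tables combine, the sketch below estimates a monthly bill from a monthly fee plus per-minute usage. The rates are copied from the sample tables above purely for illustration, and the function name is hypothetical; real pricing is market-based and subject to change, as the note above says.

```javascript
// Illustrative cost estimator (USD). Rates are hard-coded examples taken from
// the tables above; real pricing is market-based and may change.
function estimateMonthlyCost({ leaseFee, outboundRate, inboundRate, outboundMinutes, inboundMinutes }) {
  const usage = outboundMinutes * outboundRate + inboundMinutes * inboundRate;
  // Round to cents for display.
  return Math.round((leaseFee + usage) * 100) / 100;
}

// Example: an Australia toll-free number with 500 outbound and 1,000 inbound minutes.
const cost = estimateMonthlyCost({
  leaseFee: 16.00,      // Australia toll-free monthly fee
  outboundRate: 0.0240, // "starting at" outbound rate per minute
  inboundRate: 0.1750,  // inbound rate per minute
  outboundMinutes: 500,
  inboundMinutes: 1000,
});
console.log(cost);
```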
communication-services | Room Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md | The tables below provide detailed capabilities mapped to the roles. At a high le | - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Set/Update video scaling mode | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Render remote video stream | ✔️ | ✔️ | ✔️ |+| **Captions (Public Preview)** | | | | +| - Start captions | ✔️ | ❌ | ❌ | +| - Toggle local captions | ✔️ | ✔️ | ✔️ | +| - Set spoken language of live captions | ✔️ | ✔️ | ✔️ | *) Only available on the web calling SDK. Not available on iOS and Android calling SDKs. |
communication-services | Troubleshooting Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md | To help you troubleshoot certain types of issues, you may be asked for any of th * **Short Code Program Brief ID**: This ID is used to identify a short code program brief application. * **Email message ID**: This ID is used to identify Send Email requests. * **Correlation ID**: This ID is used to identify requests made using Call Automation. -* **Call logs**: These logs contain detailed information that are used to troubleshoot calling and network issues. +* **Call logs**: These logs contain detailed information that can be used to troubleshoot calling and network issues. Also take a look at our [service limits](service-limits.md) documentation for more information on throttling and limitations. Console.WriteLine($"Email operation id = {emailSendOperation.Id}"); ``` +## Accessing Support Files in the Calling SDK +++The Calling SDK provides convenience methods to access log files. To collect them proactively, pair this functionality with your application's support tooling. ++[Log File Access Conceptual Document](../concepts/voice-video-calling/retrieve-support-files.md) +[Log File Access Tutorials](../tutorials/log-file-retrieval-tutorial.md) + ## Enable and access call logs # [JavaScript](#tab/javascript) const callClient = new CallClient({ logger }); ``` You can use AzureLogger to redirect the logging output from Azure SDKs by overriding the `AzureLogger.log` method:-You can log to the browser console, a file, buffer, send to our own service, etc... If you are going to send logs over +You can log to the browser console, a file, a buffer, send to your own service, etc. If you are going to send logs over the network to your own service, do not send a request per log line because this will affect browser performance. Instead, accumulate log lines and send them in batches. 
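The batching advice above can be sketched as follows. This is an illustrative helper, not part of the SDK: `sendBatch` is a placeholder for your own upload logic (for example, a `fetch` call to your logging endpoint), and the `AzureLogger` wiring is shown only as a comment.

```javascript
// Minimal log batcher: accumulate log lines and flush them in batches instead
// of sending one network request per line. `sendBatch` is a placeholder for
// your own upload logic.
class LogBatcher {
  constructor(sendBatch, maxLines = 50) {
    this.sendBatch = sendBatch;
    this.maxLines = maxLines;
    this.buffer = [];
  }
  add(line) {
    this.buffer.push(line);
    if (this.buffer.length >= this.maxLines) this.flush();
  }
  flush() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = [];
    this.sendBatch(batch);
  }
}

// Hypothetical wiring into the SDK logger:
// AzureLogger.log = (...args) => batcher.add(args.join(' '));
const sent = [];
const batcher = new LogBatcher((batch) => sent.push(batch), 3);
['a', 'b', 'c', 'd'].forEach((line) => batcher.add(line));
batcher.flush();
console.log(sent.length); // two batches: ['a','b','c'] and ['d']
```

In a browser you would typically also flush on a timer and on `visibilitychange`, so buffered lines are not lost when the page closes.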
```javascript // Redirect log output AzureLogger.log = (...args) => { # [iOS](#tab/ios) -When developing for iOS, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted. +In an iOS application, logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted. These can be accessed by opening Xcode. Go to Window > Devices and Simulators > Devices. Select your device. Under Installed Apps, select your application and click on "Download container". This process gives you an `xcappdata` file. Right-click on this file and select # [Android](#tab/android) -When developing for Android, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted. +In an Android application, logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted. On Android Studio, navigate to the Device File Explorer by selecting View > Tool Windows > Device File Explorer from both the simulator and the device. The `.blog` file is located within your application's directory, which should look something like `/data/data/[app_name_space:com.contoso.com.acsquickstartapp]/files/acs_sdk.blog`. You can attach this file to your support request. On Android Studio, navigate to the Device File Explorer by selecting View > Tool ## Enable and access call logs (Windows) -When developing for Windows, your logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted. +In a Windows application, logs are stored in `.blog` files. Note that you can't view the logs directly because they're encrypted. These are accessed by looking at where your app is keeping its local data. There are many ways to figure out where a UWP app keeps its local data; the following steps are one of them: 1. 
Open a Windows Command Prompt (press Windows Key + R, type `cmd`, and press Enter) These are accessed by looking at where your app is keeping its local data. There 5. Open the folder with the logs by typing `start ` followed by the path returned by step 3. For example: `start C:\Users\myuser\AppData\Local\Packages\e84000dd-df04-4bbc-bf22-64b8351a9cd9_k2q8b5fxpmbf6` 6. Please attach all the `*.blog` and `*.etl` files to your Azure support request. + ## Finding Azure Active Directory information * **Getting Directory ID** The Azure Communication Services SMS SDK uses the following error codes to help | 9999 | Message failed to deliver due to unknown error/failure| Try resending the message | + ## Related information - Access logs for [voice and video](./analytics/logs/voice-and-video-logs.md), [chat](./analytics/logs/chat-logs.md), [email](./analytics/logs/email-logs.md), [network traversal](./analytics/logs/network-traversal-logs.md), [recording](./analytics/logs/recording-logs.md), [SMS](./analytics/logs/sms-logs.md) and [call automation](./analytics/logs/call-automation-logs.md).+- Log Filename APIs for Calling SDK - [Metrics](metrics.md) - [Service limits](service-limits.md)++ |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | The Azure Communication Services Calling SDK supports the following streaming co | Limit | Web | Windows/Android/iOS | | - | | -- | | **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing |-| **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing | -| **Maximum # of incoming remote streams that can be rendered simultaneousl - Public preview WebSDK or greater [1.14.1](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1141-beta1-2023-06-01)** | 9 videos + 1 screen sharing | +| **Maximum # of incoming remote streams that can be rendered simultaneously** | 9 videos + 1 screen sharing WebSDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24) or greater | 6 videos + 1 screen sharing | -While the Calling SDK don't enforce these limits, your users may experience performance degradation if they're exceeded. +While the Calling SDK doesn't enforce these limits, your users may experience performance degradation if they're exceeded. Use the [Optimal Video Count](https://learn.microsoft.com/azure/communication-services/how-tos/calling-sdk/manage-video?branch=main&branchFallbackFrom=pr-en-us-249591&pivots=platform-web#remote-video-quality) API to determine how many concurrent incoming video streams your web environment can support. ## Calling SDK timeouts |
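As an illustration of how an application might react to the optimal video count, the sketch below caps the number of rendered remote streams to that value. The stream objects and the `activityScore` ranking are hypothetical; in a real app the streams would come from the call object and the count from the Optimal Video Count call feature.

```javascript
// Sketch: cap the number of remote video renderers to the SDK-reported
// optimal video count. `remoteStreams` and `activityScore` are illustrative
// stand-ins, not SDK types.
function selectStreamsToRender(remoteStreams, optimalVideoCount) {
  // Rank streams (e.g. dominant speakers first) and keep only the top N.
  const ranked = [...remoteStreams].sort((a, b) => b.activityScore - a.activityScore);
  return ranked.slice(0, optimalVideoCount);
}

const streams = [
  { id: 'alice', activityScore: 3 },
  { id: 'bob', activityScore: 9 },
  { id: 'carol', activityScore: 5 },
];
const toRender = selectStreamsToRender(streams, 2);
console.log(toRender.map((s) => s.id)); // ['bob', 'carol']
```

Re-running the selection whenever the reported count changes lets the UI shrink or grow its video grid instead of degrading quality for every participant.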
communication-services | Closed Captions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/closed-captions.md | Here are main scenarios where Closed Captions are useful: ## Availability-Closed Captions are supported in Private Preview only in ACS to ACS calls on all platforms. +Closed Captions are supported in Private Preview only in Azure Communication Services to Azure Communication Services calls on all platforms. - Android - iOS - Web |
communication-services | Media Quality Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/media-quality-sdk.md | Title: Azure Communication Services Media Quality metrics- -description: Provides an overview of the Azure Communication Services media quality statics SDK. + Title: Azure Communication Services media quality statistics ++description: Get an overview of the Azure Communication Services media quality statistics SDK. -# Media quality statistics -To help understand media quality in VoIP and Video calls using Azure Communication Services, we have a feature called "Media quality statistics" that you can use to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing call metrics. +# Media quality statistics ++To help you understand media quality in VoIP and video calls that use Azure Communication Services, there's a feature called *media quality statistics*. Use it to examine the low-level audio, video, and screen-sharing quality metrics for incoming and outgoing call metrics. + ::: zone pivot="platform-web" [!INCLUDE [Media Stats for Web](./includes/media-stats/media-stats-web.md)] |
communication-services | Retrieve Support Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/retrieve-support-files.md | + + Title: Retrieving support files from calling SDK applications +description: Understand how to access Log Files for the creation of effective support tools ++++ Last updated : 07/17/2023+++# Overview of Log File Access ++Modern sandboxed applications can sometimes face challenges that hinder user experience. Key among these challenges is the difficulty for an end-user to access the application's internal files such as log files. The Log File Access API offers a solution, helping facilitate access to support files, and eventual export from the device into the support process. ++### For Third-party Application Developers: +Log file access eliminates the process of manually finding logs or requiring a development environment to extract them. Instead, it paves the way for a direct method to hand off crucial information. This not only speeds up the troubleshooting process but also enhances the overall user experience, as issues can be diagnosed and rectified more efficiently. ++### From Microsoft's Perspective: +For Microsoft, the primary aim is to ensure that any issues arising from our platforms are addressed swiftly and effectively. Seamless log handoffs between your support team and ours enable our engineering teams to get a clear picture of the challenge at hand, diagnose it accurately, and set about resolving it. ++## Integrating Log Collection in Third-Party Applications ++**Developer Considerations:** +As a developer, it's crucial to understand how and when to capture logs. When issues arise, timely delivery of log files aids in faster diagnostics and resolutions. ++1. **Timeliness**: Always prioritize the immediate retrieval of logs. The closer to the time of the incident, the more relevant and insightful the data will be. +2. 
**User Interaction**: Determine the most intuitive way for your users to report problems. A seamless user experience can encourage more accurate and timely reporting. +3. **Support Integration**: Consider how your support teams access these logs. Integration should be straightforward, ensuring efficient troubleshooting. +4. **Collaboration with Azure**: Ensure easy accessibility for Azure's teams, perhaps through a direct link or a streamlined request mechanism. ++By addressing these elements, you can craft a system that not only serves the immediate needs of your users but also sets the stage for effective collaboration with Microsoft's support infrastructure. ++## Implementing Log Collection in Your Application ++When you are incorporating log collection strategies into your application, the responsibility to ensure the privacy and security of these logs lies with the developers. However, we're here to provide some suggestions to enhance your implementation process. ++### "Report an Issue" Dialog ++A simple yet effective method is the "Report an Issue" feature. Think of it as a direct line between the user and support. After a user encounters an issue, a prompt can ask users if they wish to report the problem. If they agree, logs can be automatically attached and sent to the relevant support channels. ++### Feedback After the Call ++Right after a call might be an opportune time to gather feedback. Using an end-of-call survey can be beneficial. Here, users can provide feedback on the call quality and, if needed, attach logs of any issues faced. This feedback ensures timely and relevant data collection. ++### Shake-to-Report Feature ++Taking inspiration from Microsoft Teams, consider integrating a shake-to-report feature. The user can shake their device and initiate the process to report an issue. It's a user-friendly method, but remember to inform users about this feature to ensure its effective use. 
++### Proactive Auto-Detection ++For a more advanced approach, consider having the system automatically detect potential call issues. Upon detection, users can be prompted to share logs. It's a proactive measure, ensuring issues are caught early, but it's crucial to strike a balance to avoid unnecessary prompts. ++## Choosing the Best Strategy ++User consent is paramount. Always inform and ensure users are aware of what they're sharing and why. Each application and its user base are unique. Reflect on past interactions, and consider the resources at hand. These considerations guide you to select the best strategy for your application, ensuring a smooth user experience and efficient troubleshooting. ++## Further Reading ++- [End of Call Survey Conceptual Document](../voice-video-calling/end-of-call-survey-concept.md) +- [Troubleshooting Info](../troubleshooting-info.md) +- [Log Sharing Tutorial](../../tutorials/log-file-retrieval-tutorial.md) |
communication-services | Play Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/play-action.md | This guide will help you get started with playing audio files to participants by |Status|Code|Subcode|Message| |-|--|--|--| |PlayCompleted|200|0|Action completed successfully.|+|PlayCanceled|400|8508|Action failed, the operation was canceled.| |PlayFailed|400|8535|Action failed, file format is invalid.| |PlayFailed|400|8536|Action failed, file could not be downloaded.|-|PlayCanceled|400|8508|Action failed, the operation was canceled.| +|PlayFailed|400|8565 | Action failed, bad request to Azure AI services. Check input parameters. | +|PlayFailed | 401 | 8565 | Action failed, Azure AI services authentication error. | +|PlayFailed | 403 | 8565 | Action failed, forbidden request to Azure AI services, free subscription used by the request ran out of quota. | +|PlayFailed | 429 | 8565 | Action failed, requests exceeded the number of allowed concurrent requests for the Azure AI services subscription. | +|PlayFailed | 408 | 8565 | Action failed, request to Azure AI services timed out. | +|PlayFailed | 500 | 9999 | Unknown internal server error. | +|PlayFailed | 500 | 8572 | Action failed due to play service shutdown. | + ## Clean up resources |
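The error-code table above lends itself to a small dispatch routine in calling code. The sketch below is illustrative only (the `is_retryable` helper and retry policy are hypothetical, not part of the Call Automation SDK); the event names, status codes, and subcodes are taken from the table.

```python
# Hypothetical helper: decide how to react to a play result event based on the
# documented status/subcode table. Throttling (429) and unknown server errors
# are treated as retryable; client-side 4xx errors and an explicit service
# shutdown (500/8572) are not.

PLAY_RESULTS = {
    ("PlayCompleted", 200, 0): "Action completed successfully.",
    ("PlayCanceled", 400, 8508): "Action failed, the operation was canceled.",
    ("PlayFailed", 400, 8535): "Action failed, file format is invalid.",
    ("PlayFailed", 400, 8536): "Action failed, file could not be downloaded.",
    ("PlayFailed", 401, 8565): "Action failed, Azure AI services authentication error.",
    ("PlayFailed", 429, 8565): "Action failed, too many concurrent requests.",
    ("PlayFailed", 500, 8572): "Action failed due to play service shutdown.",
}

def is_retryable(event: str, status: int, subcode: int) -> bool:
    """Return True when the failure is transient and worth retrying."""
    if event == "PlayCompleted":
        return False  # nothing to retry
    if status == 429:
        return True   # throttled: back off and retry
    # Unknown 5xx errors are retryable; a deliberate shutdown (8572) is not.
    return 500 <= status < 600 and subcode != 8572
```

A real application would plug this decision into whatever event-handling loop it uses to receive Call Automation callbacks.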
communication-services | Recognize Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md | This guide will help you get started with recognizing DTMF input provided by par |RecognizeCompleted|200|8514|Action completed as stop tone was detected.| |RecognizeCompleted|400|8508|Action failed, the operation was canceled.| |RecognizeCompleted|400|8532|Action failed, inter-digit silence timeout reached.|+|RecognizeCanceled|400|8508|Action failed, the operation was canceled.| |RecognizeFailed|400|8510|Action failed, initial silence timeout reached.| |RecognizeFailed|500|8511|Action failed, encountered failure while trying to play the prompt.| |RecognizeFailed|500|8512|Unknown internal server error.|-|RecognizeCanceled|400|8508|Action failed, the operation was canceled.| --+| RecognizeFailed | 400 | 8510 | Action failed, initial silence timeout reached. | +| RecognizeFailed | 400 | 8532 | Action failed, inter-digit silence timeout reached. | +| RecognizeFailed | 400 | 8565 | Action failed, bad request to Azure AI services. Check input parameters. | +| RecognizeFailed | 400 | 8565 | Action failed, bad request to Azure AI services. Unable to process payload provided, check the play source input. | +| RecognizeFailed | 401 | 8565 | Action failed, Azure AI services authentication error. | +| RecognizeFailed | 403 | 8565 | Action failed, forbidden request to Azure AI services, free subscription used by the request ran out of quota. | +| RecognizeFailed | 429 | 8565 | Action failed, requests exceeded the number of allowed concurrent requests for the Azure AI services subscription. | +| RecognizeFailed | 408 | 8565 | Action failed, request to Azure AI services timed out. | +| RecognizeFailed | 500 | 8511 | Action failed, encountered failure while trying to play the prompt. | +| RecognizeFailed | 500 | 8512 | Unknown internal server error. | ## Clean up resources |
communication-services | Teams Interop Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md | Title: Azure Communication Services Call Automation how-to for adding Microsoft Teams User into an existing call -description: ProvIDes a how-to for adding a Microsoft Teams user to a call with Call Automation. +description: Provides a how-to for adding a Microsoft Teams user to a call with Call Automation. -You need to be part of the Azure Communication Services TAP program. It's likely that you're already part of this program, and if you aren't, sign-up using https://aka.ms/acs-tap-invite. To access to the specific Teams Interop functionality for Call Automation, submit your Teams Tenant IDs and Azure Communication Services Resource IDs by filling this form – https://aka.ms/acs-ca-teams-tap. You need to fill the form every time you need a new tenant ID and new resource ID allow-listed. - ## Prerequisites - An Azure account with an active subscription. |
communication-services | Orientation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/ui-library-sdk/orientation.md | + + Title: Screen orientation over the UI Library ++description: Use Azure Communication Services UI library for Mobile native to set orientation for different library screen. ++++ Last updated : 05/24/2022++zone_pivot_groups: acs-plat-ios-android ++#Customer intent: As a developer, I want to set the orientation of my pages in my application +++# Orientation ++Azure Communication Services UI Library enables developers to set the orientation of the UI Library screens. Developers can now specify screen orientation mode in call setup screen and in call screen of the UI Library. ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md). +- A `User Access Token` to enable the call client. For more information on [how to get a `User Access Token`](../../quickstarts/identity/access-tokens.md) +- Optional: Complete the quickstart for [getting started with the UI Library composites](../../quickstarts/ui-library/get-started-composites.md) ++++## Next steps ++- [Learn more about UI Library](../../concepts/ui-library/ui-library-overview.md) |
communication-services | Manage Teams Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md | |
communication-services | Get Started Rooms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md | In this section you learned how to: You may also want to: - Learn about [rooms concept](../../concepts/rooms/room-concept.md) - Learn about [voice and video calling concepts](../../concepts/voice-video-calling/about-call-types.md)+ - Review Azure Communication Services [samples](../../samples/overview.md) |
communication-services | Join Rooms Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/join-rooms-call.md | Title: Quickstart - Join a room call description: In this quickstart, you'll learn how to join a room call using web or native mobile calling SDKs --++ - Previously updated : 07/27/2022+ Last updated : 07/20/2023 -zone_pivot_groups: acs-web-ios-android +zone_pivot_groups: acs-plat-web-ios-android-windows - # Quickstart: Join a room call + ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md). - Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)+- A created room and participant added to it. [Create and manage rooms](get-started-rooms.md) + ## Obtain user access token If you have already created users and have added them as participants in the room following the "Set up room participants" section in [this page](./get-started-rooms.md), then you can directly use those users to join the room. az communication identity token issue --scope voip --connection-string "yourConn For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli). ----## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. 
[Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md) -## Obtain user access token -You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. -```azurecli-interactive -az communication identity token issue --scope voip --connection-string "yourConnectionString" -``` -For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli). [!INCLUDE [Join a room call from iOS calling SDK](./includes/rooms-quickstart-call-ios.md)]+ ::: zone-end ::: zone pivot="platform-android" --## Prerequisites --- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An active Communication Services resource and connection string. [Create a Communication Services resource](../create-communication-resource.md).-- Two or more Communication User Identities. [Create and manage access tokens](../identity/access-tokens.md) or [Quick-create identities for testing](../identity/quick-create-identity.md).-- A room resource. [Create and manage rooms](get-started-rooms.md)--## Obtain user access token -You'll need to create a User Access Token for each call participant. [Learn how to create and manage user access tokens](../identity/access-tokens.md). You can also use the Azure CLI and run the command below with your connection string to create a user and an access token. 
-```azurecli-interactive -az communication identity token issue --scope voip --connection-string "yourConnectionString" -``` -For details, see [Use Azure CLI to Create and Manage Access Tokens](../identity/access-tokens.md?pivots=platform-azcli). ::: zone-end ## Next steps |
communication-services | Get Started Teams Auto Attendant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant.md | |
communication-services | Get Started Teams Call Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue.md | |
communication-services | Call Automation Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/call-automation-ai.md | + + Title: Call Automation AI sample ++description: Overview of Call Automation AI hero sample using Azure Communication Services to enable developers to learn how to incorporate AI into their workflows. ++++ Last updated : 08/11/2023++++zone_pivot_groups: acs-csharp-java +++# Get started with the Azure Communication Services Call Automation OpenAI sample ++The Azure Communication Services Call Automation OpenAI sample demonstrates how you can use the Call Automation SDK and the recently announced public preview integration with Azure AI services to build intelligent virtual assistants. +++In this sample, we'll cover what this sample does and what you need as prerequisites before we run this sample locally on your machine. +++ |
communication-services | Add Voip Push Notifications Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-voip-push-notifications-event-grid.md | This tutorial explains how to deliver VOIP push notifications to native applicat ## Current Limitations The current limitations of using the ACS Native Calling SDK are that - * There's a 24-hour limit after the register push notification API is called when the device token information is saved. After 24 hours, the device endpoint information is deleted. Any incoming calls on those devices will not be delivered to the devices if those devices don't call the register push notification API again. + * There's a 24-hour limit after the register push notification API is called when the device token information is saved. + After 24 hours, the device endpoint information is deleted. Any incoming calls on those devices will not be delivered to the devices if those devices don't call the register push notification API again. * Can't deliver push notifications using Baidu or any other notification types supported by Azure Notification Hub but not yet supported in the ACS SDK. ## Setup for listening the events from Event Grid Notification-To listen to the `Microsoft.Communication.IncomingCall` event from Event Grid notifications of the Azure Communication Calling resource in Azure. 1. Azure functions with APIs 1. Save device endpoint information. 2. Delete device endpoint information. Here are the steps to deliver the push notification: 6. VOIP push is successfully delivered to the device and `CallAgent.handlePush` API should be called. ## Sample-Code sample is provided [here](https://github.com/Azure-Samples/communication-services-ios-quickstarts/tree/main/add-calling-push-notifications-event-grid). +The sample provided below works for all native platforms (iOS, Android, Windows). 
+Code sample is provided [here](https://github.com/Azure-Samples/azure-communication-services-calling-event-grid/tree/main/add-calling-push-notifications-event-grid). |
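The 24-hour limit described above means an app has to track when it last called the register push notification API and call it again before the saved device endpoint expires. A minimal sketch of that bookkeeping (the helper and the one-hour refresh margin are illustrative, not part of any SDK):

```python
from datetime import datetime, timedelta, timezone

# The device endpoint saved by the register push notification API is deleted
# after 24 hours (see the limitation above), so re-register before then.
REGISTRATION_TTL = timedelta(hours=24)
REFRESH_MARGIN = timedelta(hours=1)  # re-register comfortably before expiry

def needs_reregistration(last_registered: datetime, now: datetime) -> bool:
    """True when the saved device token is expired or close to expiring."""
    return now >= last_registered + REGISTRATION_TTL - REFRESH_MARGIN
```

An app would typically run this check on launch and on a background schedule, calling the register push notification API whenever it returns `True`.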
communication-services | Events Playbook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/events-playbook.md | Title: Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services + Title: Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services description: Learn how to use Microsoft Teams, Graph and Azure Communication Services to build a custom event management platform. -# Build a custom event management platform with Microsoft Teams, Graph and Azure Communication Services +# Build a custom event management platform with Microsoft Teams, Microsoft Graph and Azure Communication Services -The goal of this document is to reduce the time it takes for Event Management Platforms to apply the power of Microsoft Teams Webinars through integration with Graph APIs and ACS UI Library. The target audience is developers and decision makers. To achieve the goal, this document provides the following two functions: 1) an aid to help event management platforms quickly decide what level of integration would be right for them, and 2) a step-by-step end-to-end QuickStart to speed up implementation. +The goal of this document is to reduce the time it takes for Event Management Platforms to apply the power of Microsoft Teams Webinars through integration with Microsoft Graph APIs and Azure Communication Services UI Library. The target audience is developers and decision makers. To achieve the goal, this document provides the following two functions: 1) an aid to help event management platforms quickly decide what level of integration would be right for them, and 2) a step-by-step end-to-end QuickStart to speed up implementation. ## What are virtual events and event management platforms? 
-Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios. +Microsoft empowers event platforms to integrate event capabilities using [Microsoft Teams](/microsoftteams/quick-start-meetings-live-events), [Microsoft Graph](/graph/api/application-post-onlinemeetings?tabs=http&view=graph-rest-beta&preserve-view=true) and [Azure Communication Services](../overview.md). Virtual Events are a communication modality where event organizers schedule and configure a virtual environment for event presenters and participants to engage with content through voice, video, and chat. Event management platforms enable users to configure events and for attendees to participate in those events, within their platform, applying in-platform capabilities and gamification. Learn more about [Teams Meetings, Webinars and Live Events](/microsoftteams/quick-start-meetings-live-events) that are used throughout this article to enable virtual event scenarios. ## What are the building blocks of an event management platform? 
Throughout the rest of this tutorial, we will focus on how using Azure Communica Microsoft Graph enables event management platforms to empower organizers to schedule and manage their events directly through the event management platform. For attendees, event management platforms can build custom registration flows right on their platform that registers the attendee for the event and generates unique credentials for them to join the Teams hosted event. >[!NOTE]->For each required Graph API has different required scopes, ensure that your application has the correct scopes to access the data. +>Each Microsoft Graph API has different required scopes. Ensure that your application has the correct scopes to access the data. ### Scheduling registration-enabled events with Microsoft Graph -1. Authorize application to use Graph APIs on behalf of service account. This authorization is required in order to have the application use credentials to interact with your tenant to schedule events and register attendees. +1. Authorize application to use Microsoft Graph APIs on behalf of service account. This authorization is required in order to have the application use credentials to interact with your tenant to schedule events and register attendees. 1. Create an account that will own the meetings and is branded appropriately. This is the account that will create the events and which will receive notifications for it. We recommend not using a personal production account given the overhead it might incur in the form of reminders. 2. As part of the application setup, the service account is used to log in to the solution once. With this permission the application can retrieve and store an access token on behalf of the service account that will own the meetings. Your application will need to store the tokens generated from the login and place them in a secure location such as a key vault. The application will need to store both the access token and the refresh token. 
Learn more about [auth tokens](../../active-directory/develop/access-tokens.md) and [refresh tokens](../../active-directory/develop/refresh-tokens.md). - 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Graph APIs require different scopes, learn more in the links detailed below as we introduce the required APIs. + 3. The application will require "on behalf of" permissions with the [offline scope](../../active-directory/develop/v2-permissions-and-consent.md#offline_access) to act on behalf of the service account for the purpose of creating meetings. Individual Microsoft Graph APIs require different scopes; learn more in the links detailed below as we introduce the required APIs. 4. Refresh tokens can be revoked in the event of a breach or account termination |
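The token storage and refresh described above relies on the standard OAuth 2.0 `refresh_token` grant against the Microsoft identity platform v2.0 token endpoint. The sketch below only builds that request; the tenant and client values are placeholders, and a production application would normally delegate all of this to MSAL rather than hand-roll it.

```python
# Sketch of the standard OAuth 2.0 refresh_token grant for the Microsoft
# identity platform v2.0 endpoint. All values passed in are placeholders;
# this is not a complete auth implementation (prefer MSAL in production).

def build_refresh_request(tenant_id: str, client_id: str,
                          client_secret: str, refresh_token: str):
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
        # offline_access preserves the ability to refresh again later
        "scope": "https://graph.microsoft.com/.default offline_access",
    }
    return url, data
```

Posting `data` to `url` with any HTTP client (for example `requests.post(url, data=data)`) returns a new access token and refresh token pair, which the application should store back in its key vault.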
communication-services | Log File Retrieval Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/log-file-retrieval-tutorial.md | + + Title: Log file retrieval ++description: Learn how to retrieve Log Files from the Calling SDK for enhanced supportability. +++++ Last updated : 06/30/2021++++zone_pivot_groups: acs-programming-languages-java-swift-csharp +++# Log File Access tutorial ++In this tutorial, you learn how to access the Log Files stored on the device. ++## Prerequisites ++- Access to a `CallClient` instance +++++## Next steps ++To add enhanced log collection capabilities to your app, consider the following points. ++1. Explore support features: + - "Report an Issue" prompts + - End-of-call surveys + - Shake-to-report + - Proactive autodetection +2. Always obtain user consent before submitting data. +3. Customize strategies based on your users. ++Refer to the [Conceptual Document](../concepts/voice-video-calling/retrieve-support-files.md) for more in-depth guidance. ++## You may also like ++- [Retrieve log files Conceptual Document](../concepts/voice-video-calling/retrieve-support-files.md) +- [End of call Survey](./end-of-call-survey-tutorial.md) +- [User Facing Diagnostics](../concepts/voice-video-calling/user-facing-diagnostics.md) |
communication-services | Virtual Visits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits.md | The rest of this tutorial focuses on Microsoft 365 and Azure hybrid solutions. T ## Building a virtual appointment sample In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile friendly browser experience, with code that you can use to explore and for production. -### Step 1 - Configure bookings +### Step 1: Configure bookings This sample takes advantage of the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar. And then make sure "Add online meeting" is enabled. ![Screenshot of Booking services online meeting configuration experience.](./media/virtual-visits/bookings-services-online-meeting.png) -### Step 2 – Sample Builder +### Step 2: Sample Builder Use the Sample Builder to customize the consumer experience. You can reach the Sample Builder using this [link](https://aka.ms/acs-sample-builder) or navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard: select Industry template, configure the call experience (Chat or Screen Sharing availability), change themes and text to match your application style and get valuable feedback through post-call survey options. You can preview your configuration live from the page in both Desktop and Mobile browser form-factors. 
[ ![Screenshot of Sample builder start page.](./media/virtual-visits/sample-builder-themes.png)](./media/virtual-visits/sample-builder-themes.png#lightbox) -### Step 3 - Deploy +### Step 3: Deploy At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js). [ ![Screenshot of Sample builder deployment page.](./media/virtual-visits/sample-builder-landing.png)](./media/virtual-visits/sample-builder-landing.png#lightbox) After walking through the ARM template, you can **Go to resource group**. ![Screenshot of a completed Azure Resource Manager Template.](./media/virtual-visits/azure-complete-deployment.png) -### Step 4 - Test +### Step 4: Test The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services. ![Screenshot of produced azure resources in azure portal.](./media/virtual-visits/azure-resources.png) Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` all ![Screenshot of final view of azure app service.](./media/virtual-visits/azure-resource-final.png) -### Step 5 - Set deployed app URL in Bookings +### Step 5: Set deployed app URL in Bookings Enter the application URL followed by "/visit" in the "Deployed App URL" field in https://outlook.office.com/bookings/businessinformation. |
communication-services | Sample Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/virtual-visits/sample-builder.md | This tutorial focuses on Microsoft 365 and Azure hybrid solutions. These hybrid ## Building a virtual appointment sample In this section, we're going to use a Sample Builder tool to deploy a Microsoft 365 + Azure hybrid virtual appointments application to an Azure subscription. This application is a desktop and mobile-friendly browser experience, with code that you can use to explore and make the final product. -### Step 1 - Configure bookings +### Step 1: Configure bookings This sample uses the Microsoft 365 Bookings app to power the consumer scheduling experience and create meetings for providers. Thus the first step is creating a Bookings calendar and getting the Booking page URL from https://outlook.office.com/bookings/calendar. And then, make sure "Add online meeting" is enabled. ![Screenshot of Booking services online meeting configuration experience.](../media/virtual-visits/bookings-services-online-meeting.png) -### Step 2 ΓÇô Sample Builder +### Step 2: Sample Builder Use the Sample Builder to customize the consumer experience. You can reach the Sampler Builder using this [link](https://aka.ms/acs-sample-builder) or navigating to the page within the Azure Communication Services resource in the Azure portal. Step through the Sample Builder wizard: 1. Select the Industry template. 1. Configure the call experience (Chat or Screen Sharing availability). You can preview your configuration live from the page in both Desktop and Mobile [ ![Screenshot of Sample builder start page.](../media/virtual-visits/sample-builder-themes.png)](../media/virtual-visits/sample-builder-themes.png#lightbox) -### Step 3 - Deploy +### Step 3: Deploy At the end of the Sample Builder wizard, you can **Deploy to Azure** or download the code as a zip. 
The sample builder code is publicly available on [GitHub](https://github.com/Azure-Samples/communication-services-virtual-visits-js). [ ![Screenshot of Sample builder deployment page.](../media/virtual-visits/sample-builder-landing.png)](../media/virtual-visits/sample-builder-landing.png#lightbox) After walking through the ARM template, you can **Go to resource group**. ![Screenshot of a completed Azure Resource Manager Template.](../media/virtual-visits/azure-complete-deployment.png) -### Step 4 - Test +### Step 4: Test The Sample Builder creates three resources in the selected Azure subscriptions. The **App Service** is the consumer front end, powered by Azure Communication Services. ![Screenshot of produced azure resources in azure portal.](../media/virtual-visits/azure-resources.png) Opening the App Service's URL and navigating to `https://<YOUR URL>/VISIT` allow ![Screenshot of final view of azure app service.](../media/virtual-visits/azure-resource-final.png) -### Step 5 - Set deployed app URL in Bookings +### Step 5: Set deployed app URL in Bookings Enter the application URL followed by "/visit" in the "Deployed App URL" field at https://outlook.office.com/bookings/businessinformation. |
communications-gateway | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md | description: This article guides you through how to deploy an Azure Communicatio + Last updated 05/05/2023 You now need to wait for your resource to be provisioned and connected to the Mi ## Next steps -- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md)+- [Prepare for live traffic with Azure Communications Gateway](prepare-for-live-traffic.md) |
communications-gateway | Prepare To Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md | description: Learn how to complete the prerequisite tasks required to deploy Azu + Last updated 05/05/2023 Wait for confirmation that Azure Communications Gateway is enabled before moving ## Next steps -- [Create an Azure Communications Gateway resource](deploy.md)+- [Create an Azure Communications Gateway resource](deploy.md) |
confidential-computing | Concept Skr Attestation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/concept-skr-attestation.md | Make sure to set the value of [--sku] to "premium". A Secure Key Release Policy is a json format release policy as defined [here](/rest/api/keyvault/keys/create-key/create-key?tabs=HTTP#keyreleasepolicy) that specifies a set of claims required in addition to authorization to release the key. The claims here are MAA based claims as referenced [here for SGX](/azure/attestation/attestation-token-examples#sample-jwt-generated-for-sgx-attestation) and here for [AMD SEV-SNP CVM](/azure/attestation/attestation-token-examples#sample-jwt-generated-for-sev-snp-attestation). -Visit the TEE specific [examples page for more details](skr-policy-examples.md) +Visit the TEE specific [examples page for more details](skr-policy-examples.md). For more information on the SKR policy grammar, see [Azure Key Vault secure key release policy grammar](../key-vault/keys/policy-grammar.md). Before you set an SKR policy make sure to run your TEE application through the remote attestation flow. Remote attestation isn't covered as part of this tutorial. No. Not at this time. [AKV REST API With SKR Details](/rest/api/keyvault/keys/create-key/create-key?tabs=HTTP) +[Azure Key Vault secure key release policy grammar](../key-vault/keys/policy-grammar.md) + [AKV SDKs](../key-vault/general/client-libraries.md) |
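The SKR entry above describes the release policy only in prose. The following sketch writes a minimal policy file of the documented shape; the MAA authority URL and the claim checked (`x-ms-sgx-is-debuggable`) are illustrative assumptions for an SGX scenario, not values taken from this changelog entry.

```shell
# Hypothetical sketch: author a minimal Secure Key Release (SKR) policy.
# The authority and claim below are placeholders; substitute your own
# attestation provider URL and the claims your TEE must present.
cat > skr-policy.json <<'EOF'
{
  "version": "1.0.0",
  "anyOf": [
    {
      "authority": "https://sharedeus.eus.attest.azure.net",
      "allOf": [
        { "claim": "x-ms-sgx-is-debuggable", "equals": "false" }
      ]
    }
  ]
}
EOF
echo "Wrote skr-policy.json"
```

A key governed by a policy like this could then be created on a premium vault, for example with `az keyvault key create --exportable true --policy @skr-policy.json ...`; treat that invocation as a sketch to adapt, not a complete command.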
confidential-computing | Harden A Linux Image To Remove Azure Guest Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/harden-a-linux-image-to-remove-azure-guest-agent.md | + + Title: Harden a Linux image to remove Azure guest agent +description: Learn how to use the Azure CLI to harden a Linux image to remove the Azure guest agent. +++ ++ Last updated : 8/03/2023+++++# Harden a Linux image to remove Azure guest agent ++**Applies to:** :heavy_check_mark: Linux Images ++Azure supports two provisioning agents: [cloud-init](https://github.com/canonical/cloud-init) and the [Azure Linux Agent](https://github.com/Azure/WALinuxAgent) (WALA), which form the prerequisites for creating the [generalized images](/azure/virtual-machines/generalize#linux) (Azure Compute Gallery or Managed Image). The Azure Linux Agent contains Provisioning Agent code and Extension Handling code in one package. ++It's crucial to understand which functionality the VM loses before deciding to remove the Azure Linux Agent. Removal of the guest agent removes the functionality enumerated at [Azure Linux Agent](/azure/virtual-machines/extensions/agent-linux?branch=pr-en-us-247336). ++This how-to guide shows you the steps to remove the guest agent from a Linux image. +## Prerequisites ++- If you don't have an Azure subscription, [create a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +- An Ubuntu image - you can choose one from the [Azure Marketplace](/azure/virtual-machines/linux/cli-ps-findimage). ++### Remove Azure Linux Agent and prepare a generalized Linux image ++Steps to create an image that removes the Azure Linux Agent are as follows: ++1. Download an Ubuntu image. ++ [How to download a Linux VHD from Azure](/azure/virtual-machines/linux/download-vhd?tabs=azure-portal) ++2. Mount the image.
++ Follow the instructions in step 2 of [remove sudo users from the Linux Image](/azure/confidential-computing/harden-the-linux-image-to-remove-sudo-users) to mount the image. ++3. Remove the Azure Linux Agent. ++ Run the following command as root to [remove the Azure Linux Agent](/azure/virtual-machines/linux/disable-provisioning). ++ For Ubuntu 18.04+ + ``` + sudo chroot /mnt/dev/$imagedevice/ apt -y remove walinuxagent + ``` +++> [!NOTE] +> If you know you will not reinstall the Azure Linux Agent, you can [remove the Azure Linux Agent artifacts](/azure/virtual-machines/linux/disable-provisioning#:~:text=Step%202%3A%20(Optional)%20Remove%20the%20Azure%20Linux%20Agent%20artifacts) by running the following steps. +++4. (Optional) Remove the Azure Linux Agent artifacts. ++ If you know you will not reinstall the Azure Linux Agent, run the following commands; otherwise, skip this step: ++ For Ubuntu 18.04+ + ``` + sudo chroot /mnt/dev/$imagedevice/ rm -rf /var/lib/walinuxagent + sudo chroot /mnt/dev/$imagedevice/ rm -rf /etc/walinuxagent.conf + sudo chroot /mnt/dev/$imagedevice/ rm -rf /var/log/walinuxagent.log + ``` ++5. Create a systemd service to provision the VM. ++ Because we are removing the Azure Linux Agent, we need to provide another mechanism to report ready. Copy the contents of the bash script or Python script located [here](/azure/virtual-machines/linux/no-agent?branch=pr-en-us-247336#add-required-code-to-the-vm) to the mounted image and make the file executable (that is, grant execute permission on the file with chmod).
+ ``` + sudo chmod +x /mnt/dev/$imagedevice/usr/local/azure-provisioning.sh + ``` ++ To provide the report-ready mechanism, create a [systemd service unit](/azure/virtual-machines/linux/no-agent#:~:text=Automating%20running%20the%20code%20at%20first%20boot) and add it to /etc/systemd/system (this example names the unit file azure-provisioning.service), then enable it in the mounted image: + ``` + sudo chroot /mnt/dev/$imagedevice/ systemctl enable azure-provisioning.service + ``` + Now the image is generalized and can be used to create a VM. ++6. Unmount the image. + ``` + umount /mnt/dev/$imagedevice + ``` ++ The prepared image no longer includes the Azure Linux Agent. ++7. Use the prepared image to deploy a confidential VM. ++ Follow the steps starting from step 4 in the [Create a custom image for Azure confidential VM](/azure/confidential-computing/how-to-create-custom-image-confidential-vm) document to deploy the agentless confidential VM. ++> [!NOTE] +> If you deploy confidential VM scale sets using the custom image, note that some autoscaling features are restricted. While manual scaling rules continue to work as expected, the autoscaling ability is limited due to the agentless custom image. More details on the restrictions can be found in the [provisioning agent](/azure/virtual-machines/linux/disable-provisioning) documentation. Alternatively, you can navigate to the metrics tab in the Azure portal and confirm the same. ++## Next Steps ++[Create a custom image for Azure confidential VM](/azure/confidential-computing/how-to-create-custom-image-confidential-vm) |
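Step 5 in the hardening article above names the unit file but doesn't show its contents. The following is a minimal sketch of what `azure-provisioning.service` might contain, assuming the provisioning script was copied to `/usr/local/azure-provisioning.sh` as in the earlier chmod step; treat the unit body as an illustrative assumption, not the exact file from the referenced article.

```shell
# Hypothetical sketch: write a one-shot systemd unit that runs the
# provisioning (report-ready) script once at first boot. Adjust the
# ExecStart path to match your image layout.
cat > azure-provisioning.service <<'EOF'
[Unit]
Description=Azure Provisioning (report ready without the Azure Linux Agent)
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/azure-provisioning.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
echo "Wrote azure-provisioning.service"
```

You would copy this file into the mounted image under `/etc/systemd/system` before running the `systemctl enable azure-provisioning.service` command shown in step 5.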
confidential-computing | How To Leverage Virtual Tpms In Azure Confidential Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/how-to-leverage-virtual-tpms-in-azure-confidential-vms.md | Title: How to leverage virtual TPMs in Azure confidential VMs + Title: Leverage virtual TPMs in Azure confidential VMs description: Learn how to use the vTPM benefits after building trust in a confidential VM. -# How to leverage virtual TPMs in Azure confidential VMs +# Leverage virtual TPMs in Azure confidential VMs **Applies to:** :heavy_check_mark: Linux VMs These steps list out which artifacts you need and how to get them: The AMD Versioned Chip Endorsement Key (VCEK) is used to sign the AMD SEV-SNP report. The VCEK certificate allows you to verify that the report was signed by a genuine AMD CPU key. There are two ways to retrieve the certificate: - a. Obtain the VCEK certificate by running the following command – it obtains the cert from a well-known IMDS endpoint: + a. Obtain the VCEK certificate by running the following command – it obtains the cert from a well-known [Azure Instance Metadata Service](/azure/virtual-machines/instance-metadata-service) (IMDS) endpoint: ```bash curl -H Metadata:true http://169.254.169.254/metadata/certification > vcek cat ./vcek | jq -r '.vcekCert , .certificateChain' > ./vcek.pem |
confidential-computing | Quick Create Confidential Vm Arm Amd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-arm-amd.md | |
confidential-computing | Quick Create Confidential Vm Portal Amd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal-amd.md | |
confidential-ledger | Quickstart Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-python.md | Microsoft Azure confidential ledger is a new and highly secure service for manag ## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Python 3.6+](/azure/developer/python/configure-local-development-environment)-- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell)+- Python versions that are [supported by the Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python#prerequisites). +- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-azure-powershell). ## Set up |
confidential-ledger | Verify Node Quotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/verify-node-quotes.md | + + Title: Establish trust on Azure confidential ledger +description: Learn how to establish trust on Azure confidential ledger by verifying the node quote ++ Last updated : 08/18/2023++++++# Establish trust on Azure confidential ledger ++An Azure confidential ledger node executes on top of a Trusted Execution Environment (TEE), such as Intel SGX, which guarantees the confidentiality of the data while in process. The trustworthiness of the platform and the binaries running inside it is guaranteed through a remote attestation process. An Azure confidential ledger requires a node to present a quote before joining the network. The quote report data contains the cryptographic hash of the node's identity public key and the MRENCLAVE value. The node is allowed to join the network if the quote is found to be valid and the MRENCLAVE value is one of the allowed values in the auditable governance. ++## Prerequisites ++- Install [CCF](https://microsoft.github.io/CCF/main/build_apps/install_bin.html) or the [CCF Python package](https://pypi.org/project/ccf/). +- An Azure confidential ledger instance. ++## Verify node quote ++The node quote can be downloaded from `https://<ledgername>.confidential-ledger.azure.com` and verified by using the `oeverify` tool that ships with the [Open Enclave SDK](https://github.com/openenclave/openenclave/blob/master/tools/oeverify/README.md) or with the `verify_quote.sh` script. It is installed with the CCF installation or the CCF Python package. For complete details about the script and the supported parameters, refer to [verify_quote.sh](https://microsoft.github.io/CCF/main/use_apps/verify_quote.html).
++```bash +verify_quote.sh https://<ledgername>.confidential-ledger.azure.com:443 +``` +The script checks that the cryptographic hash of the node's identity public key (DER encoded) matches the SGX report data and that the MRENCLAVE value present in the quote is trusted. A list of trusted MRENCLAVE values in the network can be downloaded from the `https://<ledgername>.confidential-ledger.azure.com/node/code` endpoint. An optional `mrenclave` parameter can be supplied to check if the node is running the trusted code. If supplied, the mrenclave value in the quote must match it exactly. ++## Next steps ++* [Overview of Microsoft Azure confidential ledger](overview.md) +* [Azure confidential ledger architecture](architecture.md) |
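The digest comparison that `verify_quote.sh` performs above (SHA-256 over the DER-encoded node identity public key, matched against the quote's report data) can be reproduced by hand. The sketch below uses a throwaway EC key as a stand-in for the real node identity key, which you would fetch from the ledger instead; the curve choice is an assumption for illustration.

```shell
# Hypothetical sketch: compute the SHA-256 digest of a DER-encoded public
# key - the value expected in the SGX quote's report data. A freshly
# generated key stands in for the node's real identity key.
openssl ecparam -name secp384r1 -genkey -noout -out node_priv.pem
openssl ec -in node_priv.pem -pubout -out node_pubkey.pem 2>/dev/null
# Hash the DER encoding of the public key, not the PEM text.
digest=$(openssl ec -pubin -in node_pubkey.pem -outform DER 2>/dev/null | sha256sum | cut -d ' ' -f 1)
echo "SHA-256(DER(node identity public key)) = $digest"
```

Comparing this digest against the report data field of the downloaded quote is the manual equivalent of the script's first check; the MRENCLAVE check still requires the `/node/code` allowlist.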
connectors | Connectors Create Api Mq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md | The MQ connector provides a wrapper around a Microsoft MQ client, which includes * MQ 7.5 * MQ 8.0-* MQ 9.0, 9.1, and 9.2 +* MQ 9.0, 9.1, 9.2, and 9.3 ## Connector technical reference These steps use the Azure portal, but with the appropriate Azure Logic Apps exte 1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. -1. On the designer, select **Choose an operation**, if not selected. +1. [Follow these general steps to add the MQ built-in trigger that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger). For more information, see [MQ built-in connector triggers](/azure/logic-apps/connectors/built-in/reference/mq/#triggers). -1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**. --1. From the triggers list, select the [MQ trigger](/azure/logic-apps/connectors/built-in/reference/mq/#triggers) that you want to use. --1. Provide the [information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**. +1. Provide the [required information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**. 1. When the trigger information box appears, provide the required [information for your trigger](/azure/logic-apps/connectors/built-in/reference/mq/#triggers). The following steps use the Azure portal, but with the appropriate Azure Logic A 1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer. -1. In your workflow where you want to add an MQ action, follow one of these steps: -- * To add an action under the last step, select **New step**. 
-- * To add an action between steps, move your pointer over the connecting arrow so that the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**. +1. [Follow these general steps to add the MQ action that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). For more information, see [MQ connector actions](/connectors/mq/#actions). -1. Under the **Choose an operation** search box, select **Enterprise**. In the search box, enter **mq**. --1. From the actions list, select the [MQ action](/connectors/mq/#actions) that you want to use. --1. Provide the [information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**. +1. Provide the [required information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**. 1. When the action information box appears, provide the required [information for your action](/connectors/mq/#actions). The steps to add and use an MQ action differ based on whether your workflow uses 1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer. -1. In your workflow where you want to add an MQ action, follow one of these steps: -- * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**. +1. [Follow these general steps to add the MQ built-in action that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action). For more information, see [MQ built-in connector actions](/azure/logic-apps/connectors/built-in/reference/mq/#actions). - * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**. --1. On the **Add an action** pane, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**. --1. 
From the actions list, select the [MQ action](/azure/logic-apps/connectors/built-in/reference/mq/#actions) that you want to use. --1. Provide the [information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**. +1. Provide the [required information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**. 1. When the action information box appears, provide the required [information for your action](/azure/logic-apps/connectors/built-in/reference/mq/#actions). The steps to add and use an MQ action differ based on whether your workflow uses 1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer. -1. In your workflow where you want to add an MQ action, follow one of these steps: -- * To add an action under the last step, select **New step**. -- * To add an action between steps, move your mouse over the connecting arrow between those steps, select the plus sign (**+**) that appears between those steps, and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **mq**. --1. From the actions list, select the [MQ action](/connectors/mq/#actions) that you want to use. +1. [Follow these general steps to add the MQ action that you want](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action). For more information, see [MQ connector actions](/connectors/mq/#actions). -1. Provide the [information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**. +1. Provide the [required information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**. 1. When the action information box appears, provide the required [information for your action](/connectors/mq/#actions). |
connectors | Connectors Create Api Office365 Outlook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md | Title: Connect to Office 365 Outlook -description: Automate tasks and workflows that manage email, contacts, and calendars in Office 365 Outlook by using Azure Logic Apps. +description: Connect to Office 365 Outlook from workflows in Azure Logic Apps. ms.suite: integration Previously updated : 08/11/2021 Last updated : 08/23/2023 tags: connectors -# Connect to Office 365 Outlook using Azure Logic Apps +# Connect to Office 365 Outlook from Azure Logic Apps -With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Office 365 Outlook connector](/connectors/office365connector/), you can create automated tasks and workflows that manage your work or school account by building logic apps. For example, you can automate these tasks: ++To automate tasks for your Office 365 Outlook account in workflows using Azure Logic Apps, you can add operations from the [Office 365 Outlook connector](/connectors/office365connector/) to your workflow. For example, your workflow can perform the following tasks: * Get, send, and reply to email. * Schedule meetings on your calendar. * Add and edit contacts. -You can use any trigger to start your workflow, for example, when a new email arrives, when a calendar item is updated, or when an event happens in a difference service, such as Salesforce. You can use actions that respond to the trigger event, for example, send an email or create a new calendar event. +This guide shows how to add an Office 365 Outlook trigger or action to your workflow in Azure Logic Apps. ++> [!NOTE] +> +> The Office 365 Outlook connector works only with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool), for example, @fabrikam.onmicrosoft.com. 
+> If you have an @outlook.com or @hotmail.com account, use the [Outlook.com connector](../connectors/connectors-create-api-outlook.md). +> To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). ++## Connector technical reference ++For information about this connector's operations and any limits, based on the connector's Swagger file, see the [connector's reference page](/connectors/office365/). ## Prerequisites -* Your Microsoft Office 365 account for Outlook where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool). +* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - You need these credentials so that you can authorize your workflow to access your Outlook account. +* Your Microsoft Office 365 account for Outlook where you sign in with a [work or school account](https://support.microsoft.com/office/what-account-to-use-with-office-and-you-need-one-914e6610-2763-47ac-ab36-602a81068235#bkmk_msavsworkschool). > [!NOTE]- > If you have an @outlook.com or @hotmail.com account, use the [Outlook.com connector](../connectors/connectors-create-api-outlook.md). - > To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). >- > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn), Azure Active Directory (Azure AD) authentication - > works only with an account for Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts. 
+ > If you're using [Microsoft Azure operated by 21Vianet](https://portal.azure.cn), + > Azure Active Directory (Azure AD) authentication works only with an account for + > Microsoft Office 365 operated by 21Vianet (.cn), not .com accounts. -* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +* The logic app workflow from where you want to access your Outlook account. To add an Office 365 Outlook trigger, you have to start with a blank workflow. To add an Office 365 Outlook action, your workflow can start with any trigger. ++## Add an Office 365 Outlook trigger ++Based on whether you have a Consumption or Standard logic app workflow, follow the corresponding steps: ++### [Consumption](#tab/consumption) -* The logic app where you want to access your Outlook account. To start your workflow with an Office 365 Outlook trigger, you need to have a blank logic app workflow. To add an Office 365 Outlook action to your workflow, your logic app workflow needs to already have a trigger. +1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. -## Connector reference +1. [Follow these general steps to add the Office 365 Outlook trigger](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger) that you want to your workflow. -For technical details about this connector, such as triggers, actions, and limits, as described by the connector's Swagger file, see the [connector's reference page](/connectors/office365/). + This example continues with the trigger named **When an upcoming event is starting soon**. This *polling* trigger regularly checks for any updated calendar event in your email account, based on the specified schedule. -## Add a trigger +1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. 
To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). -A [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an event that starts the workflow in your logic app. This example logic app uses a "polling" trigger that checks for any updated calendar event in your email account, based on the specified interval and frequency. + > [!NOTE] + > + > Your connection doesn't expire until revoked, even if you change your sign-in credentials. + > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md). ++1. In the trigger information box, provide the required information, for example: ++ | Parameter | Required | Value | Description | + |--|-|-|-| + | **Calendar Id** | Yes | **Calendar** | The calendar to check | + | **Interval** | Yes | **15** | The number of intervals | + | **Frequency** | Yes | **Minute** | The unit of time | ++ To add other available parameters, such as **Time zone**, open the **Add new parameter** list, and select the parameters that you want. -1. In the [Azure portal](https://portal.azure.com), open your blank logic app in the visual designer. + ![Screenshot shows Azure portal, Consumption workflow, and trigger parameters.](./media/connectors-create-api-office365-outlook/calendar-settings-consumption.png) -1. In the search box, enter `office 365 outlook` as your filter. This example selects **When an upcoming event is starting soon**. +1. Save your workflow. On the designer toolbar, select **Save**. - ![Select trigger to start your logic app](./media/connectors-create-api-office365-outlook/office365-trigger.png) +### [Standard](#tab/standard) -1. If you don't have an active connection to your Outlook account, you're prompted to sign in and create that connection. 
To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). Otherwise, provide the information for the trigger's properties. +1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. ++1. [Follow these general steps to add the Office 365 Outlook trigger](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger) that you want to your workflow. ++ This example continues with the trigger named **When an upcoming event is starting soon**. This *polling* trigger regularly checks for any updated calendar event in your email account, based on the specified schedule. ++1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). > [!NOTE]+ > > Your connection doesn't expire until revoked, even if you change your sign-in credentials. > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md). - This example selects the calendar that the trigger checks, for example: +1. In the trigger information box, provide the required information, for example: - ![Configure the trigger's properties](./media/connectors-create-api-office365-outlook/select-calendar.png) + | Parameter | Required | Value | Description | + |--|-|-|-| + | **Calendar Id** | Yes | **Calendar** | The calendar to check | + | **Interval** | Yes | **15** | The number of intervals | + | **Frequency** | Yes | **Minute** | The unit of time | -1. In the trigger, set the **Frequency** and **Interval** values. To add other available trigger properties, such as **Time zone**, select those properties from the **Add new parameter** list. 
+ To add other available parameters, such as **Time zone**, open the **Add new parameter** list, and select the parameters that you want. - For example, if you want the trigger to check the calendar every 15 minutes, set **Frequency** to **Minute**, and set **Interval** to `15`. + ![Screenshot shows Azure portal, Standard workflow, and trigger parameters.](./media/connectors-create-api-office365-outlook/calendar-settings-standard.png) - ![Set frequency and interval for the trigger](./media/connectors-create-api-office365-outlook/calendar-settings.png) +1. Save your workflow. On the designer toolbar, select **Save**. -1. On the designer toolbar, select **Save**. + -Now add an action that runs after the trigger fires. For example, you can add the Twilio **Send message** action, which sends a text when a calendar event starts in 15 minutes. +You can now add any other actions that your workflow requires. For example, you can add the Twilio **Send message** action, which sends a text when a calendar event starts in 15 minutes. -## Add an action +## Add an Office 365 Outlook action -An [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) is an operation that's run by the workflow in your logic app. This example logic app creates a new contact in Office 365 Outlook. You can use the output from another trigger or action to create the contact. For example, suppose your logic app uses the Salesforce trigger, **When a record is created**. You can add the Office 365 Outlook **Create contact** action and use the outputs from the trigger to create the new contact. +Based on whether you have a Consumption or Standard logic app workflow, follow the corresponding steps: -1. In the [Azure portal](https://portal.azure.com), open your logic app in the visual designer. +### [Consumption](#tab/consumption) -1. To add an action as the last step in your workflow, select **New step**. +1. 
In the [Azure portal](https://portal.azure.com), open your logic app and workflow in the designer. - To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**. + This example continues with the Office 365 Outlook trigger named **When a new email arrives**. -1. In the search box, enter `office 365 outlook` as your filter. This example selects **Create contact**. +1. [Follow these general steps to add the Office 365 Outlook action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action) that you want to your workflow. - ![Select the action to run in your logic app](./media/connectors-create-api-office365-outlook/office365-actions.png) + This example continues with the Office 365 Outlook action named **Create contact**. This operation creates a new contact in Office 365 Outlook. You can use the output from a previous operation in the workflow to create the contact. -1. If you don't have an active connection to your Outlook account, you're prompted to sign in and create that connection. To connect to Outlook with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). Otherwise, provide the information for the action's properties. +1. If prompted, sign in to your Office 365 Outlook account, which creates a connection. To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). > [!NOTE]+ > > Your connection doesn't expire until revoked, even if you change your sign-in credentials. > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md). - This example selects the contacts folder where the action creates the new contact, for example: +1. 
In the action information box, provide the required information, for example: ++ | Parameter | Required | Value | Description | + |--|-|-|-| + | **Folder Id** | Yes | **Contacts** | The folder where the action creates the new contact | + | **Given name** | Yes | <*contact-name*> | The name to give the contact | + | **Home phones** | Yes | <*home-phone-number*> | The home phone number for the contact | - ![Configure the action's properties](./media/connectors-create-api-office365-outlook/select-contacts-folder.png) + This example selects the **Contacts** folder where the action creates the new contact and uses trigger outputs for the remaining parameter values: - To add other available action properties, select those properties from the **Add new parameter** list. + ![Screenshot shows Azure portal, Consumption workflow, and action parameters.](./media/connectors-create-api-office365-outlook/create-contact-consumption.png) -1. On the designer toolbar, select **Save**. + To add other available parameters, open the **Add new parameter** list, and select the parameters that you want. ++1. Save your workflow. On the designer toolbar, select **Save**. ++### [Standard](#tab/standard) ++1. In the [Azure portal](https://portal.azure.com), open your logic app and workflow in the designer. ++ This example continues with the Office 365 Outlook trigger named **When a new email arrives**. ++1. [Follow these general steps to add the Office 365 Outlook action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action) that you want to your workflow. ++ This example continues with the Office 365 Outlook action named **Create contact**. This operation creates a new contact in Office 365 Outlook. You can use the output from a previous operation in the workflow to create the contact. ++1. If prompted, sign in to your Office 365 Outlook account, which creates a connection.
To connect with a different user account, such as a service account, see [Connect using other accounts](#connect-using-other-accounts). ++ > [!NOTE] + > + > Your connection doesn't expire until revoked, even if you change your sign-in credentials. + > For more information, see [Configurable token lifetimes in Azure Active Directory](../active-directory/develop/configurable-token-lifetimes.md). ++1. In the action information box, provide the required information, for example: ++ | Parameter | Required | Value | Description | + |--|-|-|-| + | **Folder Id** | Yes | **Contacts** | The folder where the action creates the new contact | + | **Given name** | Yes | <*contact-name*> | The name to give the contact | + | **Home phones** | Yes | <*home-phone-number*> | The home phone number for the contact | ++ This example selects the **Contacts** folder where the action creates the new contact and uses trigger outputs for the remaining parameter values: ++ ![Screenshot shows Azure portal, Standard workflow, and action parameters.](./media/connectors-create-api-office365-outlook/create-contact-standard.png) ++ To add other available parameters, open the **Add new parameter** list, and select the parameters that you want. ++1. Save your workflow. On the designer toolbar, select **Save**. ++ <a name="connect-using-other-accounts"></a> If you try connecting to Outlook by using a different account than the one curre * Set up the other account with the **Contributor** role in your logic app's resource group. - 1. On your logic app's resource group menu, select **Access control (IAM)**. Set up the other account with the **Contributor** role. + 1. In the Azure portal, open your logic app's resource group. ++ 1. On the resource group menu, select **Access control (IAM)**. ++ 1. Assign the **Contributor** role to the other account. For more information, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md). 
## Next steps * [Managed connectors for Azure Logic Apps](managed.md) -* [Built-in connectors for Azure Logic Apps](built-in.md) -* [What are connectors in Azure Logic Apps](introduction.md) +* [Built-in connectors for Azure Logic Apps](built-in.md) |
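For readers who work in code view rather than the designer, the **Create contact** step described above lands in the Consumption workflow definition as an `ApiConnection` action. The following is a minimal sketch only: the `path` value, the `Contacts` folder segment, and the body property names (`GivenName`, `HomePhones`) are illustrative assumptions, not the connector's documented contract, so check your own workflow's code view for the exact shape.

```json
"Create_contact": {
  "type": "ApiConnection",
  "runAfter": {},
  "inputs": {
    "host": {
      "connection": {
        "name": "@parameters('$connections')['office365']['connectionId']"
      }
    },
    "method": "post",
    "path": "/datasets/contacts/tables/@{encodeURIComponent('Contacts')}/items",
    "body": {
      "GivenName": "<contact-name>",
      "HomePhones": [ "<home-phone-number>" ]
    }
  }
}
```

The placeholder values correspond to the **Given name** and **Home phones** parameters in the designer; in practice you would replace them with trigger outputs, as the example steps do.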
connectors | Connectors Create Api Servicebus | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-servicebus.md | To increase the timeout for sending a message, [add the `ServiceProviders.Servic <a name="permissions-connection-string"></a> -## Step 1 - Check access to Service Bus namespace +## Step 1: Check access to Service Bus namespace To confirm that your logic app resource has permissions to access your Service Bus namespace, use the following steps: ![Screenshot showing the Azure portal, Service Bus namespace, and 'Shared access policies' selected.](./media/connectors-create-api-azure-service-bus/azure-service-bus-namespace.png) -## Step 2 - Get connection authentication requirements +## Step 2: Get connection authentication requirements Later, when you add a Service Bus trigger or action for the first time, you're prompted for connection information, including the connection authentication type. Based on your logic app workflow type, Service Bus connector version, and selected authentication type, you'll need the following items: |
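If you choose connection-string authentication, the value comes from the shared access policy that Step 1 checks. A namespace-level Service Bus connection string generally follows this shape (placeholder values shown; your policy name might differ from the default `RootManageSharedAccessKey`):

```text
Endpoint=sb://<namespace-name>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<shared-access-key>
```

Copy the string from the policy's **Primary Connection String** field rather than assembling it by hand, so the key and policy name stay in sync.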
connectors | File System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/file-system.md | + + Title: Connect to on-premises file systems +description: Connect to file systems on premises from workflows in Azure Logic Apps. ++ms.suite: integration ++ Last updated : 08/17/2023+++# Connect to on-premises file systems from workflows in Azure Logic Apps +++This how-to guide shows how to access an on-premises file share from a workflow in Azure Logic Apps by using the File System connector. You can then create automated workflows that run when triggered by events in your file share or in other systems and run actions to manage your files. The connector provides the following capabilities: ++- Create, get, append, update, and delete files. +- List files in folders or root folders. +- Get file content and metadata. ++In this how-to guide, the example scenarios demonstrate the following tasks: ++- Trigger a workflow when a file is created or added to a file share, and then send an email. +- Trigger a workflow when copying a file from a Dropbox account to a file share, and then send an email. ++## Limitations and known issues ++- The File System connector currently supports only Windows file systems on Windows operating systems. +- Mapped network drives aren't supported. ++## Connector technical reference ++The File System connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences). ++| Logic app | Environment | Connector version | +|--|-|-| +| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. 
For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | +| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | +| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector differs in the following ways: <br><br>- The built-in connector supports only Standard logic apps that run in an App Service Environment v3 with Windows plans only. <br><br>- The built-in version can connect directly to a file share and access Azure virtual networks by using a connection string without an on-premises data gateway. <br><br>For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [File System built-in connector reference](/azure/logic-apps/connectors/built-in/reference/filesystem/) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) | ++## Prerequisites ++* An Azure account and subscription. 
If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++* To connect to your file share, different requirements apply, based on your logic app and the hosting environment: ++ - Consumption logic app workflows ++ - In multi-tenant Azure Logic Apps, you need to meet the following requirements, if you haven't already: + + 1. [Install the on-premises data gateway on a local computer](../logic-apps/logic-apps-gateway-install.md). ++ The File System managed connector requires that your gateway installation and file system server must exist in the same Windows domain. ++ 1. [Create an on-premises data gateway resource in Azure](../logic-apps/logic-apps-gateway-connection.md). ++ 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system. ++ - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector. ++ - Standard logic app workflows ++ You can use the File System built-in connector or managed connector. ++ * To use the File System managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps. ++ * To use the File System built-in connector, your Standard logic app workflow must run in App Service Environment v3, but doesn't require the data gateway resource. ++* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer. ++* To follow the example scenario in this how-to guide, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. 
For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This example uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ. ++ > [!IMPORTANT] + > + > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps. + > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can + > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application). + > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md). ++* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free. ++* The logic app workflow where you want to access your file share. To start your workflow with a File System trigger, you have to start with a blank workflow. To add a File System action, start your workflow with any trigger. ++<a name="add-file-system-trigger"></a> ++## Add a File System trigger ++### [Consumption](#tab/consumption) ++1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. ++1. In the designer, [follow these general steps to add the **File System** trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger). ++ For more information, see [File System triggers](/connectors/filesystem/#triggers). This example continues with the trigger named **When a file is created**. ++1. 
In the connection information box, provide the following information as required: ++ | Property | Required | Value | Description | + |-|-|-|-| + | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | + | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | + | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Password** | Yes | <*password*> | The password for the computer where you have your file system | + | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | ++ The following example shows the connection information for the File System managed connector trigger: ++ ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/file-system-connection-consumption.png) ++ The following example shows the connection information for the File System ISE-based trigger: ++ ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/connect-file-systems/file-system-connection-ise.png) ++1. When you're done, select **Create**. ++ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. ++1. Continue building your workflow. ++ 1. Provide the required information for your trigger. ++ For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check. ++ ![Screenshot showing Consumption workflow designer and the trigger named When a file is created.](media/connect-file-systems/trigger-file-system-when-file-created-consumption.png) ++ 1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. 
Enter the email recipients, subject, and body. For testing, you can use your own email address. ++ ![Screenshot showing Consumption workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-file-system-send-email-consumption.png) ++ > [!TIP] + > + > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes. + > When the dynamic content list appears, select from the available outputs. ++1. When you're done, save your workflow. ++1. To test your workflow, upload a file, which triggers the workflow. ++If successful, your workflow sends an email about the new file. ++### [Standard](#tab/standard) ++#### Built-in connector trigger ++The following steps apply only to Standard logic app workflows in an App Service Environment v3 with Windows plans only. ++1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. ++1. In the designer, [follow these general steps to add the **File System** built-in trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger). ++ For more information, see [File System triggers](/azure/logic-apps/connectors/built-in/reference/filesystem/#triggers). This example continues with the trigger named **When a file is added**. ++1. In the connection information box, provide the following information as required: ++ | Property | Required | Value | Description | + |-|-|-|-| + | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | + | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. 
Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** | + | **Password** | Yes | <*password*> | The password for the computer where you have your file system | ++ The following example shows the connection information for the File System built-in connector trigger: ++ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/connect-file-systems/trigger-file-system-connection-built-in-standard.png) ++1. When you're done, select **Create**. ++ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. ++1. Continue building your workflow. ++ 1. Provide the required information for your trigger. ++ For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check. ++ ![Screenshot showing Standard workflow designer and information for the trigger named When a file is added.](media/connect-file-systems/trigger-when-file-added-built-in-standard.png) ++ 1. To test your workflow, add an Outlook action that sends you an email when a file is added to the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address. 
++ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is added, and action named Send an email.](media/connect-file-systems/trigger-send-email-built-in-standard.png) ++ > [!TIP] + > + > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes. + > After the dynamic content list and expression editor options appear, select the dynamic content + > list (lightning icon). When the dynamic content list appears, select from the available outputs. ++1. When you're done, save your workflow. ++1. To test your workflow, upload a file, which triggers the workflow. ++If successful, your workflow sends an email about the new file. ++#### Managed connector trigger ++1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. ++1. In the designer, [follow these general steps to add the **File System** managed trigger that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger). ++ For more information, see [File System triggers](/connectors/filesystem/#triggers). This example continues with the trigger named **When a file is created**. ++1. In the connection information box, provide the following information as required: ++ | Property | Required | Value | Description | + |-|-|-|-| + | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | + | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. 
| + | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Password** | Yes | <*password*> | The password for the computer where you have your file system | + | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | ++ The following example shows the connection information for the File System managed connector trigger: ++ ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/connect-file-systems/trigger-file-system-connection-managed-standard.png) ++1. When you're done, select **Create**. ++ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. ++1. Continue building your workflow. ++ 1. Provide the required information for your trigger. ++ For this example, select the folder path on your file system server to check for a newly created file. 
Specify the number of files to return and how often you want to check. ++ ![Screenshot showing Standard workflow designer and managed connector trigger named When a file is created.](media/connect-file-systems/trigger-when-file-created-managed-standard.png) ++ 1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address. ++ ![Screenshot showing Standard workflow designer, managed connector trigger named When a file is created, and action named Send an email.](media/connect-file-systems/trigger-send-email-managed-standard.png) ++ > [!TIP] + > + > To add outputs from previous steps in the workflow, select inside the trigger's edit boxes. + > After the dynamic content list and expression editor options appear, select the dynamic content + > list (lightning icon). When the dynamic content list appears, select from the available outputs. ++1. When you're done, save your workflow. ++1. To test your workflow, upload a file, which triggers the workflow. ++If successful, your workflow sends an email about the new file. ++++<a name="add-file-system-action"></a> ++## Add a File System action ++The example logic app workflow starts with the [Dropbox trigger](/connectors/dropbox/#triggers), but you can use any trigger that you want. ++### [Consumption](#tab/consumption) ++1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. ++1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). ++ For more information, see [File System actions](/connectors/filesystem/#actions). This example continues with the action named **Create file**. ++1. 
In the connection information box, provide the following information as required: ++ | Property | Required | Value | Description | + |-|-|-|-| + | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | + | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | + | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Password** | Yes | <*password*> | The password for the computer where you have your file system | + | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | ++ The following example shows the connection information for the File System managed connector action: ++ ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/file-system-connection-consumption.png) ++ The following example shows the connection information for the File System ISE-based connector action: ++ ![Screenshot showing connection information for File System ISE-based connector action.](media/connect-file-systems/file-system-connection-ise.png) ++1. When you're done, select **Create**. ++ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. ++1. Continue building your workflow. ++ 1. Provide the required information for your action. ++ For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox. ++ ![Screenshot showing Consumption workflow designer and the File System managed connector action named Create file.](media/connect-file-systems/action-file-system-create-file-consumption.png) ++ > [!TIP] + > + > To add outputs from previous steps in the workflow, select inside the action's edit boxes. + > When the dynamic content list appears, select from the available outputs. ++ 1. 
To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address. ++ ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-consumption.png) ++1. When you're done, save your workflow. ++1. To test your workflow, upload a file, which triggers the workflow. ++If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. ++### [Standard](#tab/standard) ++#### Built-in connector action ++These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans. ++1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. ++1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action). ++ For more information, see [File System actions](/azure/logic-apps/connectors/built-in/reference/filesystem/#actions). This example continues with the action named **Create file**. ++1. In the connection information box, provide the following information as required: ++ | Property | Required | Value | Description | + |-|-|-|-| + | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | + | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. 
Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** | + | **Password** | Yes | <*password*> | The password for the computer where you have your file system | ++ The following example shows the connection information for the File System built-in connector action: ++ ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/connect-file-systems/action-file-system-connection-built-in-standard.png) ++ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. ++1. Continue building your workflow. ++ 1. Provide the required information for your action. For this example, follow these steps: ++ 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder. ++ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**. ++ 1. After the **File content** parameter appears on the action information pane, select inside the parameter's edit box. ++ 1. After the dynamic content list and expression editor options appear, select the dynamic content list (lightning icon). From the list that appears, under the **When a file is created** trigger section, select **File Content**. 
++ When you're done, the **File Content** trigger output appears in the **File content** parameter: ++ ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-built-in-standard.png) ++ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address. ++ ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-built-in-standard.png) ++1. When you're done, save your workflow. ++1. To test your workflow, upload a file, which triggers the workflow. ++If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. ++#### Managed connector action ++1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. ++1. In the designer, [follow these general steps to add the **File System** action that you want to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action). ++ For more information, see [File System actions](/connectors/filesystem/#actions). This example continues with the action named **Create file**. ++1. In the connection information box, provide the following information as required: ++ | Property | Required | Value | Description | + |-|-|-|-| + | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | + | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. 
<br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | + | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | + | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. <br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | + | **Password** | Yes | <*password*> | The password for the computer where you have your file system | + | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | ++ The following example shows the connection information for the File System managed connector action: ++ ![Screenshot showing connection information for File System managed connector action.](media/connect-file-systems/action-file-system-connection-managed-standard.png) ++ Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. ++1. Continue building your workflow. ++ 1. 
Provide the required information for your action. For this example, follow these steps: ++ 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder. ++ 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**. ++ 1. After the **File content** parameter appears on the action information pane, select inside the parameter's edit box. ++ 1. After the dynamic content list and expression editor options appear, select the dynamic content list (lightning icon). From the list that appears, under the **When a file is created** trigger section, select **File Content**. ++ When you're done, the **File Content** trigger output appears in the **File content** parameter: ++ ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/connect-file-systems/action-file-system-create-file-managed-standard.png) ++ 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address. ++ ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/connect-file-systems/action-file-system-send-email-managed-standard.png) ++1. When you're done, save your workflow. ++1. To test your workflow, upload a file, which triggers the workflow. ++If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. ++++## Next steps ++* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) +* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md) |
container-apps | Application Lifecycle Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/application-lifecycle-management.md | When you deploy a container app, the first revision is automatically created. [M A container app flows through four phases: deployment, update, deactivation, and shut down. +> [!NOTE] +> [Azure Container Apps jobs](jobs.md) don't support revisions. Jobs are deployed and updated directly. + ## Deployment As a container app is deployed, the first revision is automatically created. |
container-apps | Azure Arc Enable Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-enable-cluster.md | A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro --query customerId \ --output tsv) LOG_ANALYTICS_WORKSPACE_ID_ENC=$(printf %s $LOG_ANALYTICS_WORKSPACE_ID | base64 -w0) # Needed for the next step- lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \ + LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys \ --resource-group $GROUP_NAME \ --workspace-name $WORKSPACE_NAME \ --query primarySharedKey \ --output tsv)- lOG_ANALYTICS_KEY_ENC=$(printf %s $lOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step + LOG_ANALYTICS_KEY_ENC=$(printf %s $LOG_ANALYTICS_KEY | base64 -w0) # Needed for the next step ``` # [PowerShell](#tab/azure-powershell) A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro --query customerId ` --output tsv) $LOG_ANALYTICS_WORKSPACE_ID_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_WORKSPACE_ID))# Needed for the next step- $lOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys ` + $LOG_ANALYTICS_KEY=$(az monitor log-analytics workspace get-shared-keys ` --resource-group $GROUP_NAME ` --workspace-name $WORKSPACE_NAME ` --query primarySharedKey ` --output tsv)- $lOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($lOG_ANALYTICS_KEY)) + $LOG_ANALYTICS_KEY_ENC=[Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes($LOG_ANALYTICS_KEY)) ``` A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro --configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" \ --configuration-settings "logProcessor.appLogs.destination=log-analytics" \ --configuration-protected-settings 
"logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" \- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}" + --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}" ``` # [PowerShell](#tab/azure-powershell) A [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) pro --configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${AKS_CLUSTER_GROUP_NAME}" ` --configuration-settings "logProcessor.appLogs.destination=log-analytics" ` --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${LOG_ANALYTICS_WORKSPACE_ID_ENC}" `- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${lOG_ANALYTICS_KEY_ENC}" + --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${LOG_ANALYTICS_KEY_ENC}" ``` |
container-apps | Azure Arc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-arc-overview.md | Optionally, you can choose to have the extension install [KEDA](https://keda.sh/ The following table describes the role of each revision created for you: -| Pod | Description | Number of Instances | CPU | Memory | -|-|-|-|-|-| -| `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB | -| `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB | -| `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 1 GB | -| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB | -| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 200 millicpu | 500 MB | -| `<extensionName>-k8se-event-processor` | An alternative routing destination to help with apps that have scaled to zero while the system gets the first instance available. | 2 | 100 millicpu | 500 MB | -| `<extensionName>-k8se-http-scaler` | Monitors inbound request volume in order to provide scaling information to [KEDA](https://keda.sh). | 1 | 100 millicpu | 500 MB | -| `<extensionName>-k8se-keda-cosmosdb-scaler` | Keda Cosmos DB Scaler | 1 | 10 m | 128 MB | -| `<extensionName>-k8se-keda-metrics-apiserver` | Keda Metrics Server | 1 | 1 Core | 1000 MB | -| `<extensionName>-k8se-keda-operator` | Manages component updated and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | -| `<extensionName>-k8se-local-envoy` | A front-end proxy layer for all data-plane tcp requests. It routes the inbound traffic to the correct apps. 
| 3 | 1 Core | 1536 MB | -| `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | -| `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB | -| dapr-metrics | Dapr metrics pod | 1 | 100 millicpu | 500 MB | -| dapr-operator | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | -| dapr-placement-server | Used for Actors only - creates mapping tables that map actor instances to pods | 1 | 100 millicpu | 500 MB | -| dapr-sentry | Manages mTLS between services and acts as a CA | 2 | 800 millicpu | 200 MB | +| Pod | Description | Number of Instances | CPU | Memory | Type | +|-|-|-|-|-|-| +| `<extensionName>-k8se-activator` | Used as part of the scaling pipeline | 2 | 100 millicpu | 500 MB | ReplicaSet | +| `<extensionName>-k8se-billing` | Billing record generation - Azure Container Apps on Azure Arc enabled Kubernetes is Free of Charge during preview | 3 | 100 millicpu | 100 MB | ReplicaSet | +| `<extensionName>-k8se-containerapp-controller` | The core operator pod that creates resources on the cluster and maintains the state of components. | 2 | 100 millicpu | 1 GB | ReplicaSet | +| `<extensionName>-k8se-envoy` | A front-end proxy layer for all data-plane http requests. It routes the inbound traffic to the correct apps. | 3 | 1 Core | 1536 MB | ReplicaSet | +| `<extensionName>-k8se-envoy-controller` | Operator, which generates Envoy configuration | 2 | 200 millicpu | 500 MB | ReplicaSet | +| `<extensionName>-k8se-event-processor` | An alternative routing destination to help with apps that have scaled to zero while the system gets the first instance available. | 2 | 100 millicpu | 500 MB | ReplicaSet | +| `<extensionName>-k8se-http-scaler` | Monitors inbound request volume in order to provide scaling information to [KEDA](https://keda.sh). 
| 1 | 100 millicpu | 500 MB | ReplicaSet | +| `<extensionName>-k8se-keda-cosmosdb-scaler` | Keda Cosmos DB Scaler | 1 | 10 m | 128 MB | ReplicaSet | +| `<extensionName>-k8se-keda-metrics-apiserver` | Keda Metrics Server | 1 | 1 Core | 1000 MB | ReplicaSet | +| `<extensionName>-k8se-keda-operator` | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | ReplicaSet | +| `<extensionName>-k8se-log-processor` | Gathers logs from apps and other components and sends them to Log Analytics. | 2 | 200 millicpu | 500 MB | DaemonSet | +| `<extensionName>-k8se-mdm` | Metrics and Logs Agent | 2 | 500 millicpu | 500 MB | ReplicaSet | +| dapr-metrics | Dapr metrics pod | 1 | 100 millicpu | 500 MB | ReplicaSet | +| dapr-operator | Manages component updates and service endpoints for Dapr | 1 | 100 millicpu | 500 MB | ReplicaSet | +| dapr-placement-server | Used for Actors only - creates mapping tables that map actor instances to pods | 1 | 100 millicpu | 500 MB | StatefulSet | +| dapr-sentry | Manages mTLS between services and acts as a CA | 2 | 800 millicpu | 200 MB | ReplicaSet | ## FAQ for Azure Container Apps on Azure Arc (Preview) ARM64 based clusters aren't supported at this time. 
### Container Apps extension v1.12.8 (June 2023) + - Update OSS Fluent Bit to 2.1.2 - Upgrade of Dapr to 1.10.6 - Support for container registries exposed on custom port - Enable activate/deactivate revision when a container app is stopped - Fix Revisions List not returning init containers - Default allow headers added for cors policy +### Container Apps extension v1.12.9 (July 2023) ++ - Minor updates to EasyAuth sidecar containers + - Update of Extension Monitoring Agents ++### Container Apps extension v1.17.8 (August 2023) ++ - Update EasyAuth to 1.6.16 + - Update of Dapr to 1.10.8 + - Update Envoy to 1.25.6 + - Add volume mount support for Azure Container App jobs + - Added IP Restrictions for applications with TCP Ingress type + - Added support for Container Apps with multiple exposed ports + ## Next steps -[Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md) +[Create a Container Apps connected environment (Preview)](azure-arc-enable-cluster.md) |
container-apps | Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/billing.md | Billing in Azure Container Apps is based on your [plan type](plans.md). - Your plan selection determines billing calculations. - Different applications in an environment can use different plans. -For more information, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). +This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). ## Consumption plan The following resources are free during each calendar month, per subscription: - The first 360,000 GiB-seconds - The first 2 million HTTP requests -This article describes how to calculate the cost of running your container app. For pricing details in your account's currency, see [Azure Container Apps Pricing](https://azure.microsoft.com/pricing/details/container-apps/). -+Free usage doesn't appear on your bill. You'll only be charged when your resource usage exceeds the monthly free grants. > [!NOTE] > If you use Container Apps with [your own virtual network](networking.md#managed-resources) or your apps utilize other Azure resources, additional charges may apply. ### Resource consumption charges -Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure for each revision. You're charged for the amount of resources allocated to each replica while it's running. +Azure Container Apps runs replicas of your application based on the [scaling rules and replica count limits](scale-app.md) you configure for each revision. [Azure Container Apps jobs](jobs.md) run replicas when job executions are triggered. You're charged for the amount of resources allocated to each replica while it's running. 
There are 2 meters for resource consumption: There are 2 meters for resource consumption: The first 180,000 vCPU-seconds and 360,000 GiB-seconds in each subscription per calendar month are free. +#### Container apps + The rate you pay for resource consumption depends on the state of your container app's revisions and replicas. By default, replicas are charged at an *active* rate. However, in certain conditions, a replica can enter an *idle* state. While in an *idle* state, resources are billed at a reduced rate. -#### No replicas are running +##### No replicas are running When a revision is scaled to zero replicas, no resource consumption charges are incurred. -#### Minimum number of replicas are running +##### Minimum number of replicas are running -Idle usage charges may apply when a revision is running under a specific set of circumstances. To be eligible for idle charges, a revision must be: +Idle usage charges may apply when a container app's revision is running under a specific set of circumstances. To be eligible for idle charges, a revision must be: - Configured with a [minimum replica count](scale-app.md) greater than zero - Scaled to the minimum replica count Usage charges are calculated individually for each replica. A replica is conside When a replica is idle, resource consumption charges are calculated at the reduced idle rates. When a replica isn't idle, the active rates apply. -#### More than the minimum number of replicas are running +##### More than the minimum number of replicas are running When a revision is scaled above the [minimum replica count](scale-app.md), all of its running replicas are charged for resource consumption at the active rate. +#### Jobs ++In the Consumption plan, resources consumed by Azure Container Apps jobs are charged the active rate. Idle charges don't apply to jobs because executions stop consuming resources once the job completes. 
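The two consumption meters and the monthly free grants described above can be sketched as a back-of-the-envelope calculation. This is a hypothetical workload (2 replicas of 0.5 vCPU running for a full 30-day month), and it covers only the vCPU-seconds meter; actual rates are on the pricing page:

```shell
# Billable vCPU-seconds under the Consumption plan, active rate only.
# Shell arithmetic is integer-only, so vCPU is expressed in millicores
# and seconds are divided by 1000 first to stay within integer range.
REPLICAS=2
MILLI_VCPU=500            # 0.5 vCPU per replica
SECONDS_RUNNING=2592000   # 30 days
FREE_VCPU_SECONDS=180000  # monthly free grant per subscription

TOTAL=$(( REPLICAS * MILLI_VCPU * (SECONDS_RUNNING / 1000) ))
BILLABLE=$(( TOTAL - FREE_VCPU_SECONDS ))
if [ "$BILLABLE" -lt 0 ]; then BILLABLE=0; fi

echo "total=${TOTAL} billable=${BILLABLE}"
```

The GiB-seconds meter follows the same shape with memory in place of vCPU and its own 360,000 free grant; idle-rate replicas would be metered separately at the reduced rate.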
+ ### Request charges In addition to resource consumption, Azure Container Apps also charges based on the number of HTTP requests received by your container app. Only requests that come from outside a Container Apps environment are billable. In addition to resource consumption, Azure Container Apps also charges based on - The first 2 million requests in each subscription per calendar month are free. - [Health probe](./health-probes.md) requests aren't billable. +Request charges don't apply to Azure Container Apps jobs because they don't support ingress. + <a id="consumption-dedicated"></a> ## Dedicated plan You're billed based on workload profile instances, not by individual applications. -Billing for apps running in the Dedicated plan is based on workload profile instances, not by individual applications. The charges are as follows: +Billing for apps and jobs running in the Dedicated plan is based on workload profile instances, not by individual applications. The charges are as follows: | Fixed management costs | Variable costs | ||| |
container-apps | Blue Green Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/blue-green-deployment.md | After you test and verify the new revision, you can then point production traffi This article shows you how to implement blue-green deployment in a container app. To run the following examples, you need a container app environment where you can create a new app. > [!NOTE]-> Refer to [containerapps-blue-green repository](https://github.com/Azure-Samples/containerapps-blue-green) for a complete example of a github workflow that implements blue-green deployment for Container Apps. +> Refer to [containerapps-blue-green repository](https://github.com/Azure-Samples/containerapps-blue-green) for a complete example of a GitHub workflow that implements blue-green deployment for Container Apps. ## Create a container app with multiple active revisions enabled |
container-apps | Compare Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md | There's no perfect solution for every use case and every team. The following exp ## Container option comparisons ### Azure Container Apps-Azure Container Apps enables you to build serverless microservices based on containers. Distinctive features of Container Apps include: +Azure Container Apps enables you to build serverless microservices and jobs based on containers. Distinctive features of Container Apps include: * Optimized for running general purpose containers, especially for applications that span many microservices deployed in containers. * Powered by Kubernetes and open-source technologies like [Dapr](https://dapr.io/), [KEDA](https://keda.sh/), and [envoy](https://www.envoyproxy.io/). * Supports Kubernetes-style apps and microservices with features like [service discovery](connect-apps.md) and [traffic splitting](revisions.md). * Enables event-driven application architectures by supporting scale based on traffic and pulling from [event sources like queues](scale-app.md), including [scale to zero](scale-app.md).-* Support of long running processes and can run [background tasks](background-processing.md). +* Supports running on demand, scheduled, and event-driven [jobs](jobs.md). Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use [Azure Kubernetes Service](../aks/intro-kubernetes.md). However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best-practices. For these reasons, many teams may prefer to start building container microservices with Azure Container Apps. |
container-apps | Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md | Azure Container Apps manages the details of Kubernetes and container orchestrati Azure Container Apps supports: -- Any Linux-based x86-64 (`linux/amd64`) container image+- Any Linux-based x86-64 (`linux/amd64`) container image with no required base image - Containers from any public or private container registry+- [Sidecar](#sidecar-containers) and [init](#init-containers) containers -Features include: +Container apps features include: -- There's no required base container image. - Changes to the `template` configuration section trigger a new [container app revision](application-lifecycle-management.md). - If a container crashes, it automatically restarts. +Jobs features include: ++- Job executions use the `template` configuration section to define the container image and other settings when each execution starts. +- If a container exits with a non-zero exit code, the job execution is marked as failed. You can configure a job to retry failed executions. + ## Configuration The following code is an example of the `containers` array in the [`properties.template`](azure-resource-manager-api-spec.md#propertiestemplate) section of a container app resource template. The excerpt shows the available configuration options when setting up a container. You can define one or more [init containers](https://kubernetes.io/docs/concepts Init containers are defined in the `initContainers` array of the container app template. The containers run in the order they are defined in the array and must complete successfully before the primary app container starts. +> [!NOTE] +> Init containers support [image pulls using managed identities](#managed-identity-with-azure-container-registry), but processes running in init containers don't have access to managed identities. 
+ ## Container registries You can deploy images hosted on private registries by providing credentials in the Container Apps configuration. |
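The job-execution rule above — a non-zero container exit code marks the execution as failed — can be illustrated with plain shell exit-status handling. The `run_job` function here is a hypothetical stand-in for a job's container process, not a Container Apps API:

```shell
# Sketch: a job execution succeeds only when its container exits with code 0.
run_job() { return "$1"; }   # hypothetical stand-in for the job's container

run_job 0 && RESULT_A="succeeded"
run_job 1 || RESULT_B="failed"   # non-zero exit: marked failed, retry-eligible

echo "first execution: ${RESULT_A}"
echo "second execution: ${RESULT_B}"
```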
container-apps | Dapr Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md | This guide provides insight into core Dapr concepts and details regarding the Da | Dapr API | Description | | -- | |-| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. | +| [**Service-to-service invocation**][dapr-serviceinvo] | Discover services and perform reliable, direct service-to-service calls with automatic mTLS authentication and encryption. [See known limitations for Dapr service invocation in Azure Container Apps.](#unsupported-dapr-capabilities) | | [**State management**][dapr-statemgmt] | Provides state management capabilities for transactions and CRUD operations. | | [**Pub/sub**][dapr-pubsub] | Allows publisher and subscriber container apps to intercommunicate via an intermediary message broker. | | [**Bindings**][dapr-bindings] | Trigger your applications based on events | | [**Actors**][dapr-actors] | Dapr actors are message-driven, single-threaded, units of work designed to quickly scale. For example, in burst-heavy workload situations. | | [**Observability**](./observability.md) | Send tracing information to an Application Insights backend. | | [**Secrets**][dapr-secrets] | Access secrets from your application code or reference secure values in your Dapr components. |+| [**Configuration**][dapr-config] | Retrieve and subscribe to application configuration items for supported configuration stores. | + > [!NOTE] > The above table covers stable Dapr APIs. To learn more about using alpha APIs and features, [see the Dapr FAQ][dapr-faq]. This resource defines a Dapr component called `dapr-pubsub` via ARM. - **Custom configuration for Dapr Observability**: Instrument your environment with Application Insights to visualize distributed tracing. 
- **Dapr Configuration spec**: Any capabilities that require use of the Dapr configuration spec.+- **Invoking non-Dapr services from Dapr as if they were Dapr-enabled**: Dapr's Service Invocation with Azure Container Apps is supported only between Dapr-enabled services. - **Declarative pub/sub subscriptions** - **Any Dapr sidecar annotations not listed above** - **Alpha APIs and components**: Azure Container Apps doesn't guarantee the availability of Dapr alpha APIs and features. For more information, refer to the [Dapr FAQ][dapr-faq]. This resource defines a Dapr component called `dapr-pubsub` via ARM. ### Known limitations - **Actor reminders**: Require a minReplicas of 1+ to ensure reminders are always active and fire correctly.+- **Jobs**: Dapr isn't supported for jobs. ## Next Steps Now that you've learned about Dapr and some of the challenges it solves: [dapr-bindings]: https://docs.dapr.io/developing-applications/building-blocks/bindings/bindings-overview/ [dapr-actors]: https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/ [dapr-secrets]: https://docs.dapr.io/developing-applications/building-blocks/secrets/secrets-overview/+[dapr-config]: https://docs.dapr.io/developing-applications/building-blocks/configuration/ [dapr-cncf]: https://www.cncf.io/projects/dapr/ [dapr-args]: https://docs.dapr.io/reference/arguments-annotations-overview/ [dapr-component]: https://docs.dapr.io/concepts/components-concept/ |
container-apps | Environment Custom Dns Suffix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment-custom-dns-suffix.md | +> > To configure a custom domain for individual container apps, see [Custom domain names and certificates in Azure Container Apps](custom-domains-certificates.md).+> +> If you configure a custom DNS suffix for your environment, traffic to FQDNs that use this suffix will resolve to the environment. FQDNs that use this suffix outside the environment will be unreachable from the environment. ## Add a custom DNS suffix and certificate |
container-apps | Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/environment.md | -A Container Apps environment is a secure boundary around groups of container apps that share the same virtual network and write logs to the same logging destination. +A Container Apps environment is a secure boundary around groups of container apps and jobs that share the same virtual network and write logs to the same logging destination. Container Apps environments are fully managed where Azure handles OS upgrades, scale operations, failover procedures, and resource balancing. :::image type="content" source="media/environments/azure-container-apps-environments.png" alt-text="Azure Container Apps environments."::: -Reasons to deploy container apps to the same environment include situations when you need to: +Reasons to deploy container apps and jobs to the same environment include situations when you need to: - Manage related services - Deploy different applications to the same virtual network |
container-apps | Ingress How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md | You can configure ingress for your container app using the Azure CLI, an ARM tem ::: zone pivot="azure-cli" -# [Azure CLI](#tab/azure-cli) - This `az containerapp ingress enable` command enables ingress for your container app. You must specify the target port, and you can optionally set the exposed port if your transport type is `tcp`. ```azurecli az containerapp ingress enable \ ::: zone pivot="azure-portal" -# [Portal](#tab/portal) - Enable ingress for your container app by using the portal. You can enable ingress when you create your container app, or you can enable ingress for an existing container app. You can enable ingress when you create your container app, or you can enable ing You can configure ingress when you create your container app by using the Azure portal. - 1. Set **Ingress** to **Enabled**. 1. Configure the ingress settings for your container app. 1. Select **Limited to Container Apps Environment** for internal ingress or **Accepting traffic from anywhere** for external ingress. The **Ingress** settings page for your container app also allows you to configur ::: zone pivot="azure-resource-manager" -# [ARM template](#tab/arm-template) - Enable ingress for your container app by using the `ingress` configuration property. Set the `external` property to `true`, and set your `transport` and `targetPort` properties. -`external` property can be set to *true* for external or *false* for internal ingress. - Set the `transport` to `auto` to detect HTTP/1 or HTTP/2, `http` for HTTP/1, `http2` for HTTP/2, or `tcp` for TCP. Enable ingress for your container app by using the `ingress` configuration prope } ``` -- ::: zone-end ::: zone pivot="azure-cli" ## Disable ingress -# [Azure CLI](#tab/azure-cli) - Disable ingress for your container app by using the `az containerapp ingress` command. 
```azurecli az containerapp ingress disable \ ::: zone pivot="azure-portal" -# [Portal](#tab/portal) - You can disable ingress for your container app using the portal. 1. Select **Ingress** from the **Settings** menu of the container app page. You can disable ingress for your container app using the portal. ::: zone pivot="azure-resource-manager" -# [ARM template](#tab/arm-template) - Disable ingress for your container app by omitting the `ingress` configuration property from `properties.configuration` entirely. -++## <a name="use-additional-tcp-ports"></a>Use additional TCP ports (preview) ++You can expose additional TCP ports from your application. To learn more, see the [ingress concept article](ingress-overview.md#additional-tcp-ports). ++++Adding additional TCP ports can be done through the CLI by referencing a YAML file with your TCP port configurations. ++```azurecli +az containerapp create + --name <app-name> \ + --resource-group <resource-group> \ + --yaml <your-yaml-file> +``` ++The following is an example YAML file you can reference in the above CLI command. The configuration for the additional TCP ports is under `additionalPortMappings`. ++```yml +location: northcentralus +name: multiport-example +properties: + configuration: + activeRevisionsMode: Single + ingress: + additionalPortMappings: + - exposedPort: 21025 + external: false + targetPort: 1025 + allowInsecure: false + external: true + targetPort: 1080 + traffic: + - latestRevision: true + weight: 100 + transport: http + managedEnvironmentId: <env id> + template: + containers: + - image: maildev/maildev + name: maildev + resources: + cpu: 0.25 + memory: 0.5Gi + scale: + maxReplicas: 1 + minReplicas: 1 + workloadProfileName: Consumption +type: Microsoft.App/containerApps +``` ++++This feature is not supported in the Azure portal. ++++The following ARM template provides an example of how you can add additional ports to your container apps. 
Add each additional port under `additionalPortMappings` in the `ingress` section of the container app's `properties.configuration`. The following is an example: ++```json +{ + ... + "properties": { + ... + "configuration": { + "ingress": { + ... + "additionalPortMappings": [ + { + "external": false, + "targetPort": 80, + "exposedPort": 12000 + } + ] + } + } + } + ... +} +``` ::: zone-end |
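As a quick audit aid, the exposed ports in a fragment like the ARM example above can be listed with standard tools. This is an illustrative sketch only (the file name and port values are made up for the example), not part of the Container Apps tooling:

```sh
# Illustration only: list the exposedPort values in an additionalPortMappings
# fragment, e.g. to audit for collisions across apps in an environment.
cat > portmap.json <<'EOF'
{
  "additionalPortMappings": [
    { "external": false, "targetPort": 80, "exposedPort": 12000 },
    { "external": true,  "targetPort": 25, "exposedPort": 2525 }
  ]
}
EOF
# Pull out each exposedPort number (POSIX grep/awk; no jq required).
grep -o '"exposedPort": *[0-9]*' portmap.json | awk '{print $2}'
```

For anything beyond a quick check, a real JSON parser such as `jq` is the sturdier choice.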
container-apps | Ingress Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md | HTTP ingress adds headers to pass metadata about the client request to your cont | `X-Forwarded-Proto` | Protocol used by the client to connect with the Container Apps service. | `http` or `https` | | `X-Forwarded-For` | The IP address of the client that sent the request. | | | `X-Forwarded-Host` | The host name the client used to connect with the Container Apps service. | |-| `X-Forwarded-Client-Cert` | The client certificate if `clientCertificateMode` is set. | Semicolon seperated list of Hash, Cert, and Chain. For example: `Hash=....;Cert="...";Chain="...";` | +| `X-Forwarded-Client-Cert` | The client certificate if `clientCertificateMode` is set. | Semicolon separated list of Hash, Cert, and Chain. For example: `Hash=....;Cert="...";Chain="...";` | ### <a name="tcp"></a>TCP With TCP ingress enabled, your container app: - Is accessible to other container apps in the same environment via its name (defined by the `name` property in the Container Apps resource) and exposed port number. - Is accessible externally via its fully qualified domain name (FQDN) and exposed port number if the ingress is set to "external". +## <a name="additional-tcp-ports"></a>Additional TCP ports (preview) ++In addition to the main HTTP/TCP port for your container apps, you may expose additional TCP ports to enable applications that accept TCP connections on multiple ports. This feature is in preview. ++The following apply to additional TCP ports: +- Additional TCP ports can only be external if the app itself is set as external and the container app is using a custom VNet. +- Any externally exposed additional TCP ports must be unique across the entire Container Apps environment. This includes all external additional TCP ports, external main TCP ports, and 80/443 ports used by built-in HTTP ingress. 
If the additional ports are internal, the same port can be shared by multiple apps. +- If an exposed port isn't provided, it defaults to the target port. +- Each target port must be unique, and the same target port cannot be exposed on different exposed ports. +- There is a maximum of five additional ports per app. If you need more, open a support request. +- Only the main ingress port supports built-in HTTP features such as CORS and session affinity. When running HTTP on top of the additional TCP ports, these built-in features are not supported. ++Visit the [how-to article on ingress](ingress-how-to.md#use-additional-tcp-ports) for more information on how to enable additional ports for your container apps. + ## Domain names You can access your app in the following ways: -- The default fully-qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md).+- The default fully qualified domain name (FQDN): Each app in a Container Apps environment is automatically assigned an FQDN based on the environment's DNS suffix. To customize an environment's DNS suffix, see [Custom environment DNS Suffix](environment-custom-dns-suffix.md). - A custom domain name: You can configure a custom DNS domain for your Container Apps environment. For more information, see [Custom domain names and certificates](./custom-domains-certificates.md). - The app name: You can use the app name for communication between apps in the same environment. |
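The uniqueness rules above can be sketched as a local pre-flight check before deploying. The following POSIX shell function is my own illustration of the rules, not Container Apps tooling; the `port:external` pair format is an assumption made for the example:

```sh
# Illustration only: validate a list of "exposedPort:external" pairs against
# the documented rules -- external exposed ports must be unique and must not
# collide with 80/443 (used by built-in HTTP ingress); internal ports may be
# shared by multiple apps.
check_ports() {
  reserved="80 443"
  seen=""
  for entry in "$@"; do
    port="${entry%%:*}"
    external="${entry##*:}"
    [ "$external" = "true" ] || continue   # internal ports may be shared
    for r in $reserved; do
      if [ "$port" = "$r" ]; then
        echo "collision: $port is reserved"
        return 1
      fi
    done
    case " $seen " in
      *" $port "*) echo "collision: $port already used"; return 1 ;;
    esac
    seen="$seen $port"
  done
  echo "ok"
}

check_ports "21025:true" "21026:true" "1025:false" "1025:false"   # prints "ok"
```

A real environment-wide check would also need the main ingress ports of every other app in the environment, which this local sketch doesn't see.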
container-apps | Jobs Get Started Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-cli.md | Title: Create a job with Azure Container Apps (preview) + Title: Create a job with Azure Container Apps description: Learn to create an on-demand or scheduled job in Azure Container Apps Previously updated : 05/08/2023 Last updated : 08/17/2023 zone_pivot_groups: container-apps-job-types -# Create a job with Azure Container Apps (preview) +# Create a job with Azure Container Apps Azure Container Apps [jobs](jobs.md) allow you to run containerized tasks that execute for a finite duration and exit. You can trigger a job manually, schedule its execution, or trigger its execution based on events. |
container-apps | Jobs Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs-get-started-portal.md | + + Title: Create a job with Azure Container Apps using the Azure portal +description: Learn to create an on-demand or scheduled job in Azure Container Apps using the Azure portal +++++ Last updated : 08/21/2023++++# Create a job with Azure Container Apps using the Azure portal ++Azure Container Apps [jobs](jobs.md) allow you to run containerized tasks that execute for a finite duration and exit. You can trigger a job manually, schedule its execution, or trigger its execution based on events. ++Jobs are best suited for tasks such as data processing, machine learning, or any scenario that requires on-demand processing. ++In this quickstart, you create a scheduled job. To learn how to create an event-driven job, see [Deploy an event-driven job with Azure Container Apps](tutorial-event-driven-jobs.md). ++## Prerequisites ++An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Also, make sure the `Microsoft.App` resource provider is registered. ++## Setup ++Begin by signing in to the [Azure portal](https://portal.azure.com). ++## Create a Container Apps job ++To create your Container Apps job, start at the Azure portal home page. ++1. Search for **Container App Jobs** in the top search bar. +1. Select **Container App Jobs** in the search results. +1. Select the **Create** button. ++### Basics tab ++In the *Basics* tab, do the following actions. ++1. Enter the following values in the *Project details* section. ++ | Setting | Action | + ||| + | Subscription | Select your Azure subscription. | + | Resource group | Select **Create new** and enter **jobs-quickstart**. | + | Container job name | Enter **my-job**. 
| ++#### Create an environment ++Next, create an environment for your container app. ++1. Select the appropriate region. ++ | Setting | Value | + |--|--| + | Region | Select **Central US**. | ++1. In the *Create Container Apps environment* field, select the **Create new** link. +1. In the *Create Container Apps Environment* page, on the *Basics* tab, enter the following values: ++ | Setting | Value | + |--|--| + | Environment name | Enter **my-environment**. | + | Type | Enter **Workload Profile**. | + | Zone redundancy | Select **Disabled** | ++1. Select the **Create** button at the bottom of the *Create Container App Environment* page. ++### Deploy the job ++1. In *Job details*, select **Scheduled** for the *Trigger type*. ++ In the *Cron expression* field, enter `*/1 * * * *`. + + This expression starts the job every minute. ++1. Select the **Next: Container** button at the bottom of the page. ++1. In the *Container* tab, enter the following values: ++ | Setting | Value | + |--|--| + | Name | Enter **main-container**. | + | Image source | Select **Docker Hub or other registries**. | + | Image type | Select **Public**. | + | Registry login server | Enter **mcr.microsoft.com**. | + | Image and tag | Enter **k8se/quickstart-jobs:latest**. | + | Workload profile | Select **Consumption**. | + | CPU and memory | Select **0.25** and **0.5Gi**. | ++1. Select the **Review and create** button at the bottom of the page. ++ As the settings in the job are verified, if no errors are found, the *Create* button is enabled. ++ Any errors appear on a tab marked with a red dot. If you encounter errors, navigate to the appropriate tab and you'll find fields containing errors highlighted in red. Once all errors are fixed, select **Review and create** again. ++1. Select **Create**. ++ A page with the message *Deployment is in progress* is displayed. Once the deployment is successfully completed, you'll see the message: *Your deployment is complete*. ++### Verify deployment ++1. 
Select **Go to resource** to view your new Container Apps job. ++1. Select the **Execution history** tab. ++ The *Execution history* tab displays the status of each job execution. Select the **Refresh** button to update the list. Wait up to a minute for the scheduled job execution to start. Its status changes from *Pending* to *Running* to *Succeeded*. ++1. Select **View logs**. ++ The logs show the output of the job execution. It may take a few minutes for the logs to appear. ++## Clean up resources ++If you're not going to continue to use this application, you can delete the Azure Container Apps instance and all the associated services by removing the resource group. ++1. Select the **jobs-quickstart** resource group from the *Overview* section. +1. Select the **Delete resource group** button at the top of the resource group *Overview*. +1. Enter the resource group name **jobs-quickstart** in the *Are you sure you want to delete "jobs-quickstart"* confirmation dialog. +1. Select **Delete**. + The process to delete the resource group may take a few minutes to complete. ++> [!TIP] +> Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps). ++## Next steps ++> [!div class="nextstepaction"] +> [Container Apps jobs](jobs.md) |
container-apps | Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/jobs.md | Title: Jobs in Azure Container Apps (preview) -description: Learn about jobs in Azure Container Apps (preview) + Title: Jobs in Azure Container Apps +description: Learn about jobs in Azure Container Apps Previously updated : 05/08/2023 Last updated : 08/17/2023 -# Jobs in Azure Container Apps (preview) +# Jobs in Azure Container Apps Azure Container Apps jobs enable you to run containerized tasks that execute for a finite duration and exit. You can use jobs to perform tasks such as data processing, machine learning, or any scenario where on-demand processing is required. Apps are services that run continuously. If a container in an app fails, it's re Jobs are tasks that start, run for a finite duration, and exit when finished. Each execution of a job typically performs a single unit of work. Job executions start manually, on a schedule, or in response to events. Examples of jobs include batch processes that run on demand and scheduled tasks. +### Example scenarios ++The following table compares common scenarios for apps and jobs: ++| Container | Compute resource | Notes | +|||| +| An HTTP server that serves web content and API requests | App | Configure an [HTTP scale rule](scale-app.md#http). | +| A process that generates financial reports nightly | Job | Use the [*Schedule* job type](#scheduled-jobs) and configure a cron expression. | +| A continuously running service that processes messages from an Azure Service Bus queue | App | Configure a [custom scale rule](scale-app.md#custom). | +| A job that processes a single message or a small batch of messages from an Azure queue and exits | Job | Use the *Event* job type and [configure a custom scale rule](tutorial-event-driven-jobs.md) to trigger job executions. 
| +| A background task that's triggered on-demand and exits when finished | Job | Use the *Manual* job type and [start executions](#start-a-job-execution-on-demand) manually or programmatically using an API. | +| A self-hosted GitHub Actions runner or Azure Pipelines agent | Job | Use the *Event* job type and configure a [GitHub Actions](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-github-actions) or [Azure Pipelines](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-azure-pipelines) scale rule. | +| An Azure Functions app | App | [Deploy Azure Functions to Container Apps](../azure-functions/functions-container-apps-hosting.md). | +| An event-driven app using the Azure WebJobs SDK | App | [Configure a scale rule](scale-app.md#custom) for each event source. | + ## Job trigger types A job's trigger type determines how the job is started. The following trigger types are available: To create a manual job using the Azure CLI, use the `az containerapp job create` az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Manual" \- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \ + --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \ --image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" ``` The following example Azure Resource Manager template creates a manual job named "parallelism": 1, "replicaCompletionCount": 1 },- "replicaRetryLimit": 1, + "replicaRetryLimit": 0, "replicaTimeout": 1800, "triggerType": "Manual" }, The following example Azure Resource Manager template creates a manual job named } ``` +# [Azure portal](#tab/azure-portal) ++To create a manual job using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Specify *Manual* as the trigger type. 
++Enter the following values in the *Containers* tab to use a sample container image. ++| Setting | Value | +||| +| Name | *main* | +| Image source | *Docker Hub or other registries* | +| Image type | *Public* | +| Registry login server | *mcr.microsoft.com* | +| Image and tag | *k8se/quickstart-jobs:latest* | +| CPU and memory | *0.25 CPU cores, 0.5 Gi memory*, or higher | + -The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits. +The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a public sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits. To authenticate and use a private container image, see [Containers](containers.md#container-registries). The above command only creates the job. To start a job execution, see [Start a job execution on demand](#start-a-job-execution-on-demand). Container Apps jobs use cron expressions to define schedules. It supports the st | Expression | Description | |||+| `*/5 * * * *` | Runs every 5 minutes. | | `0 */2 * * *` | Runs every two hours. | | `0 0 * * *` | Runs every day at midnight. | | `0 0 * * 0` | Runs every Sunday at midnight. 
| To create a scheduled job using the Azure CLI, use the `az containerapp job crea az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Schedule" \- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \ + --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \ --image "mcr.microsoft.com/k8se/quickstart-jobs:latest" \ --cpu "0.25" --memory "0.5Gi" \- --cron-expression "0 0 * * *" + --cron-expression "*/1 * * * *" ``` # [Azure Resource Manager](#tab/azure-resource-manager) The following example Azure Resource Manager template creates a manual job named "properties": { "configuration": { "scheduleTriggerConfig": {- "cronExpression": "0 0 * * *", + "cronExpression": "*/1 * * * *", "parallelism": 1, "replicaCompletionCount": 1 },- "replicaRetryLimit": 1, + "replicaRetryLimit": 0, "replicaTimeout": 1800, "triggerType": "Schedule" }, The following example Azure Resource Manager template creates a manual job named } ``` +# [Azure portal](#tab/azure-portal) ++To create a scheduled job using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Specify *Schedule* as the trigger type and define the schedule with a cron expression, such as `*/1 * * * *` to run every minute. ++Enter the following values in the *Containers* tab to use a sample container image. ++| Setting | Value | +||| +| Name | *main* | +| Image source | *Docker Hub or other registries* | +| Image type | *Public* | +| Registry login server | *mcr.microsoft.com* | +| Image and tag | *k8se/quickstart-jobs:latest* | +| CPU and memory | *0.25 CPU cores, 0.5 Gi memory*, or higher | + -The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits. 
+The `mcr.microsoft.com/k8se/quickstart-jobs:latest` image is a public sample container image that runs a job that waits a few seconds, prints a message to the console, and then exits. To authenticate and use a private container image, see [Containers](containers.md#container-registries). -The cron expression `0 0 * * *` runs the job every day at midnight UTC. +The cron expression `*/1 * * * *` runs the job every minute. ### Event-driven jobs Event-driven jobs are triggered by events from supported [custom scalers](scale-app.md#custom). Examples of event-driven jobs include: - A job that runs when a new message is added to a queue such as Azure Service Bus, Kafka, or RabbitMQ.-- A self-hosted GitHub Actions runner or Azure DevOps agent that runs when a new job is queued in a workflow or pipeline.+- A self-hosted [GitHub Actions runner](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-github-actions) or [Azure DevOps agent](tutorial-ci-cd-runners-jobs.md?pivots=container-apps-jobs-self-hosted-ci-cd-azure-pipelines) that runs when a new job is queued in a workflow or pipeline. Container apps and event-driven jobs use [KEDA](https://keda.sh/) scalers. They both evaluate scaling rules on a polling interval to measure the volume of events for an event source, but the way they use the results is different. 
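Scheduled jobs rely on standard cron expressions like the examples above. As a rough illustration of how a single minute field is interpreted, here's a small sketch of my own (supporting only the `*`, `*/n`, and literal-number forms; this is not Container Apps code):

```sh
# Illustration only: does a cron minute field match a given minute value?
matches_minute() {
  field="$1"; minute="$2"
  case "$field" in
    '*')   return 0 ;;                          # "*": every minute
    '*/'*) step="${field#\*/}"
           [ $((minute % step)) -eq 0 ] ;;      # "*/n": every n minutes
    *)     [ "$field" -eq "$minute" ] ;;        # literal: that exact minute
  esac
}

matches_minute '*/5' 10 && echo "*/5 fires at minute 10"
matches_minute '0' 30   || echo "field 0 does not fire at minute 30"
```

Real cron fields also allow lists and ranges (`1,15`, `0-29`), which this sketch omits.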
To create an event-driven job using the Azure CLI, use the `az containerapp job az containerapp job create \ --name "my-job" --resource-group "my-resource-group" --environment "my-environment" \ --trigger-type "Event" \- --replica-timeout 1800 --replica-retry-limit 1 --replica-completion-count 1 --parallelism 1 \ + --replica-timeout 1800 --replica-retry-limit 0 --replica-completion-count 1 --parallelism 1 \ --image "docker.io/myuser/my-event-driven-job:latest" \ --cpu "0.25" --memory "0.5Gi" \ --min-executions "0" \ az containerapp job create \ --secrets "connection-string-secret=<QUEUE_CONNECTION_STRING>" ``` +The example configures an Azure Storage queue scale rule. + # [Azure Resource Manager](#tab/azure-resource-manager) The following example Azure Resource Manager template creates an event-driven job named `my-job` in a resource group named `my-resource-group` and a Container Apps environment named `my-environment`: The following example Azure Resource Manager template creates an event-driven jo ], } },- "replicaRetryLimit": 1, + "replicaRetryLimit": 0, "replicaTimeout": 1800, "triggerType": "Event", "secrets": [ The following example Azure Resource Manager template creates an event-driven jo } ``` +The example configures an Azure Storage queue scale rule. ++# [Azure portal](#tab/azure-portal) ++To create an event-driven job using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Specify *Event* as the trigger type and configure the scaling rule. + -The example configures an Azure Storage queue scale rule. For a complete tutorial, see [Deploy an event-driven job](tutorial-event-driven-jobs.md). +For a complete tutorial, see [Deploy an event-driven job](tutorial-event-driven-jobs.md). ## Start a job execution on demand Replace `<SUBSCRIPTION_ID>` with your subscription ID. To authenticate the request, replace `<TOKEN>` in the `Authorization` header with a valid bearer token. 
For more information, see [Azure REST API reference](/rest/api/azure). +# [Azure portal](#tab/azure-portal) ++Starting a job execution using the Azure portal isn't supported. + -When you start a job execution, you can choose to override the job's configuration. For example, you can override an environment variable or the startup command to pass specific data to the job. +When you start a job execution, you can choose to override the job's configuration. For example, you can override an environment variable or the startup command to run the same job with different inputs. The overridden configuration is only used for the current execution and doesn't change the job's configuration. # [Azure CLI](#tab/azure-cli) -Azure CLI doesn't support overriding a job's configuration when starting a job execution. +To override the job's configuration while starting an execution, use the `az containerapp job start` command and pass a YAML file containing the template to use for the execution. The following example starts an execution of a job named `my-job` in a resource group named `my-resource-group`. ++Retrieve the job's current configuration with the `az containerapp job show` command and save the template to a file named `my-job-template.yaml`: ++```azurecli +az containerapp job show --name "my-job" --resource-group "my-resource-group" --query "properties.template" --output yaml > my-job-template.yaml +``` ++Edit the `my-job-template.yaml` file to override the job's configuration. For example, to override the environment variables, modify the `env` section: ++```yaml +containers: +- name: print-hello + image: ubuntu + resources: + cpu: 1 + memory: 2Gi + env: + - name: MY_NAME + value: Azure Container Apps jobs + args: + - /bin/bash + - -c + - echo "Hello, $MY_NAME!" 
+``` ++Start the job using the template: ++```azurecli +az containerapp job start --name "my-job" --resource-group "my-resource-group" \ + --yaml my-job-template.yaml +``` # [Azure Resource Manager](#tab/azure-resource-manager) Authorization: Bearer <TOKEN> Replace `<SUBSCRIPTION_ID>` with your subscription ID and `<TOKEN>` in the `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure). +# [Azure portal](#tab/azure-portal) ++Starting a job execution using the Azure portal isn't supported. + ## Get job execution history Replace `<SUBSCRIPTION_ID>` with your subscription ID. To authenticate the request, add an `Authorization` header with a valid bearer token. For more information, see [Azure REST API reference](/rest/api/azure). +# [Azure portal](#tab/azure-portal) ++To view the status of job executions using the Azure portal, search for *Container App Jobs* in the Azure portal and select the job. The *Execution history* tab displays the status of recent executions. + -The execution history for scheduled & event-based jobs is limited to the most recent `100` successful and failed job executions. +The execution history for scheduled and event-based jobs is limited to the most recent 100 successful and failed job executions. To list all executions of a job or to get detailed output from a job, query the logs provider configured for your Container Apps environment. The following table includes the job settings that you can configure: | Setting | Azure Resource Manager property | CLI parameter| Description | ||||| | Job type | `triggerType` | `--trigger-type` | The type of job. (`Manual`, `Schedule`, or `Event`) |-| Parallelism | `parallelism` | `--parallelism` | The number of replicas to run per execution. | -| Replica completion count | `replicaCompletionCount` | `--replica-completion-count` | The number of replicas to complete successfully for the execution to succeed. 
| +| Parallelism | `parallelism` | `--parallelism` | The number of replicas to run per execution. For most jobs, set the value to `1`. | +| Replica completion count | `replicaCompletionCount` | `--replica-completion-count` | The number of replicas to complete successfully for the execution to succeed. For most jobs, set the value to `1`. | | Replica timeout | `replicaTimeout` | `--replica-timeout` | The maximum time in seconds to wait for a replica to complete. |-| Replica retry limit | `replicaRetryLimit` | `--replica-retry-limit` | The maximum number of times to retry a failed replica. | +| Replica retry limit | `replicaRetryLimit` | `--replica-retry-limit` | The maximum number of times to retry a failed replica. To fail a replica without retrying, set the value to `0`. | ### Example The following example Azure Resource Manager template creates a job with advance } ``` +# [Azure portal](#tab/azure-portal) ++To configure advanced settings using the Azure portal, search for *Container App Jobs* in the Azure portal and select *Create*. Select *Configuration* to configure the settings. + -## Jobs preview restrictions +## Jobs restrictions -The following features are not supported: +The following features aren't supported: -- Volume mounts-- Init containers - Dapr-- Azure Key Vault references in secrets - Ingress and related features such as custom domains and SSL certificates ## Next steps |
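The `replicaRetryLimit` semantics in the settings table above can be illustrated with a small local simulation. This sketch is not Container Apps code; it only shows that a retry limit of N allows at most N + 1 runs of a replica that always fails:

```sh
# Illustration only: count how many times a permanently failing replica runs
# under a given retry limit (the initial run plus up to N retries).
attempts_for_failing_replica() {
  retry_limit="$1"; attempts=0
  while :; do
    attempts=$((attempts + 1))
    false && break                    # stand-in for a replica that never succeeds
    [ "$attempts" -gt "$retry_limit" ] && break
  done
  echo "$attempts"
}

attempts_for_failing_replica 0   # prints 1: fail once, no retries
attempts_for_failing_replica 2   # prints 3: initial run plus two retries
```

This matches the documented guidance that setting `--replica-retry-limit 0` fails a replica without retrying it.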
container-apps | Manage Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/manage-secrets.md | Secrets are defined at the application level in the `resources.properties.config { "name": "queue-connection-string", "keyVaultUrl": "<KEY-VAULT-SECRET-URI>",- "identity": "System" + "identity": "system" }], } } } ``` -Here, a connection string to a queue storage account is declared in the `secrets` array. Its value is automatically retrieved from Key Vault using the specified identity. To use a user managed identity, replace `System` with the identity's resource ID. +Here, a connection string to a queue storage account is declared in the `secrets` array. Its value is automatically retrieved from Key Vault using the specified identity. To use a user managed identity, replace `system` with the identity's resource ID. Replace `<KEY-VAULT-SECRET-URI>` with the URI of your secret in Key Vault. az containerapp create \ --secrets "queue-connection-string=keyvaultref:<KEY_VAULT_SECRET_URI>,identityref:<USER_ASSIGNED_IDENTITY_ID>" ``` -Here, a connection string to a queue storage account is declared in the `--secrets` parameter. Replace `<KEY_VAULT_SECRET_URI>` with the URI of your secret in Key Vault. Replace `<USER_ASSIGNED_IDENTITY_ID>` with the resource ID of the user assigned identity. For system assigned identity, use `System` instead of the resource ID. +Here, a connection string to a queue storage account is declared in the `--secrets` parameter. Replace `<KEY_VAULT_SECRET_URI>` with the URI of your secret in Key Vault. Replace `<USER_ASSIGNED_IDENTITY_ID>` with the resource ID of the user assigned identity. For system assigned identity, use `system` instead of the resource ID. > [!NOTE] > The user assigned identity must have access to read the secret in Key Vault. System assigned identity can't be used with the create command because it's not available until after the container app is created. |
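The `keyvaultref:...,identityref:...` value passed to `--secrets` above can be assembled from its parts before calling the CLI. A minimal sketch; the vault name, secret name, and identity value are placeholders for illustration:

```sh
# Illustration only: build the --secrets value in the
# "name=keyvaultref:<secret-uri>,identityref:<identity>" format.
KV_URI="https://my-vault.vault.azure.net/secrets/queue-connection-string"
IDENTITY="system"   # or the resource ID of a user-assigned identity
SECRET_ARG="queue-connection-string=keyvaultref:${KV_URI},identityref:${IDENTITY}"
echo "$SECRET_ARG"
```

The assembled string would then be passed as `--secrets "$SECRET_ARG"` to `az containerapp create`, per the example above.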
container-apps | Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md | In addition, Container Apps on the workload profiles environment reserve the fol ## Routes User Defined Routes (UDR) and controlled egress through NAT Gateway are supported in the workload profiles environment, which is in preview. In the Consumption only environment, these features aren't supported.-+<a name="udr"></a> ### User defined routes (UDR) - preview > [!NOTE] Application rules allow or deny traffic based on the application layer. The foll | All scenarios | *mcr.microsoft.com*, **.data.mcr.microsoft.com* | These FQDNs for Microsoft Container Registry (MCR) are used by Azure Container Apps and either these application rules or the network rules for MCR must be added to the allowlist when using Azure Container Apps with Azure Firewall. | | Azure Container Registry (ACR) | *Your-ACR-address*, **.blob.windows.net* | These FQDNs are required when using Azure Container Apps with ACR and Azure Firewall. | | Azure Key Vault | *Your-Azure-Key-Vault-address*, *login.microsoft.com* | These FQDNs are required in addition to the service tag required for the network rule for Azure Key Vault. | +| Managed Identities | **.identity.azure.net*, *login.microsoftonline.com*, **.login.microsoftonline.com*, **.login.microsoft.com* | These FQDNs are required when using managed identities with Azure Firewall in Azure Container Apps. | Docker Hub Registry | *hub.docker.com*, *registry-1.docker.io*, *production.cloudflare.docker.com* | If you're using [Docker Hub registry](https://docs.docker.com/desktop/allow-list/) and want to access it through the firewall, you need to add these FQDNs to the firewall. | ##### Azure Firewall - Network Rules Network rules allow or deny traffic based on the network and transport layer. 
Th > [!Note] > For Azure resources you are using with Azure Firewall not listed in this article, please refer to the [service tags documentation](../virtual-network/service-tags-overview.md#available-service-tags).-+<a name="nat"></a> ### NAT gateway integration - preview You can use NAT Gateway to simplify outbound connectivity for your outbound internet traffic in your virtual network on the workload profiles environment. NAT Gateway is used to provide a static public IP address, so when you configure NAT Gateway on your Container Apps subnet, all outbound traffic from your container app is routed through the NAT Gateway's static public IP address. You can enable mTLS in the ARM template for Container Apps environments using th 1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record. The A record contains the name `*<DNS Suffix>` and the static IP address of the Container Apps environment. - 1. **Custom domains**: If you plan to use custom domains, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment. + 1. 
**Custom domains**: If you plan to use custom domains and are using an external Container Apps environment, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. If you are using an internal Container Apps environment, there is no validation for the DNS binding, as the cluster can only be accessed from within the virtual network. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment. The static IP address of the Container Apps environment can be found in the Azure portal in **Custom DNS suffix** of the container app page or using the Azure CLI `az containerapp env list` command. The name of the resource group created in the Azure subscription where your envi In addition to the [Azure Container Apps billing](./billing.md), you're billed for: -- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/).+- One standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for egress if using an internal or external environment, plus one standard static [public IP](https://azure.microsoft.com/pricing/details/ip-addresses/) for ingress if using an external environment. If you need more public IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/). 
- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations. |
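The private DNS guidance in the networking row above (a zone named after the environment's default domain, with a wildcard `A` record pointing to the static IP) can be sketched with the Azure CLI. This is a placeholder dry run, not an authoritative recipe: the resource group, domain, and IP are assumed example values, and the commands are only echoed rather than executed.

```shell
# Placeholder values - substitute your environment's actual default domain,
# resource group, and static IP (shown in the Azure portal or via
# "az containerapp env show").
RG="my-resource-group"
DEFAULT_DOMAIN="myenv-example.eastus2.azurecontainerapps.io"
STATIC_IP="10.0.0.4"

# Zone named after the environment's default domain, plus a wildcard A record
# so every app hostname under the domain resolves to the static IP.
ZONE_CMD="az network private-dns zone create -g $RG -n $DEFAULT_DOMAIN"
RECORD_CMD="az network private-dns record-set a add-record -g $RG -z $DEFAULT_DOMAIN -n '*' -a $STATIC_IP"

# Echoed here as a dry run; remove the echoes to execute against your subscription.
echo "$ZONE_CMD"
echo "$RECORD_CMD"
```

After creating the zone, remember to link it to the virtual network so the records are resolvable from the Container Apps subnet.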
container-apps | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md | +- Hosting background processing jobs - Handling event-driven processing - Running microservices With Azure Container Apps, you can: - [**Build microservices with Dapr**](microservices.md) and [access its rich set of APIs](./dapr-overview.md). +- [**Run jobs**](jobs.md) on-demand, on a schedule, or based on events. + - Add [**Azure Functions**](https://aka.ms/functionsonaca) and [**Azure Spring Apps**](https://aka.ms/asaonaca) to your Azure Container Apps environment. - [**Use specialized hardware**](plans.md) for access to increased compute resources. |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
container-apps | Revisions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/revisions.md | Azure Container Apps implements container app versioning by creating revisions. :::image type="content" source="media/revisions/azure-container-apps-revisions.png" alt-text="Azure Container Apps: Containers"::: +> [!NOTE] +> [Azure Container Apps jobs](jobs.md) don't have revisions. Each job execution uses the latest configuration of the job. + ## Use cases Container Apps revisions help you manage the release of updates to your container app by creating a new revision each time you make a *revision-scope* change to your app. You can control which revisions are active, and the external traffic that is routed to each active revision. |
container-apps | Tutorial Ci Cd Runners Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md | In this tutorial, you learn how to run Azure Pipelines agents as an [event-drive - **Azure DevOps organization**: If you don't have a DevOps organization with an active subscription, you [can create one for free](https://azure.microsoft.com/services/devops/). ::: zone-end -Refer to [jobs preview limitations](jobs.md#jobs-preview-restrictions) for a list of limitations. +Refer to [jobs restrictions](jobs.md#jobs-restrictions) for a list of limitations. ## Setup |
container-instances | Container Instances Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-application-gateway.md | az network application-gateway create \ --public-ip-address myAGPublicIPAddress \ --vnet-name myVNet \ --subnet myAGSubnet \- --servers "$ACI_IP" + --servers "$ACI_IP" \ --priority 100 ``` |
container-instances | Container Instances Gpu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-gpu.md | To run certain compute-intensive workloads on Azure Container Instances, deploy This article shows how to add GPU resources when you deploy a container group by using a [YAML file](container-instances-multi-container-yaml.md) or [Resource Manager template](container-instances-multi-container-group.md). You can also specify GPU resources when you deploy a container instance using the Azure portal. > [!IMPORTANT]-> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](https://learn.microsoft.com/azure/virtual-machines/nc-series-retirement) and [NCv2 Series](https://learn.microsoft.com/azure/virtual-machines/ncv2-series-retirement) Although V100 SKUs will be available, it is receommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](https://learn.microsoft.com/azure/aks/aks-migration). +> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will be available, it is recommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md). > [!IMPORTANT] > This feature is currently in preview, and some [limitations apply](#preview-limitations). Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use].
Some aspects of this feature may change prior to general availability (GA). |
container-instances | Container Instances Reference Yaml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-reference-yaml.md | The following tables describe the values you need to set in the schema. | name | string | No | Name of the header. | | value | string | No | Value of the header. | +> [!IMPORTANT] +> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will be available, it is recommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md). + ### GpuResource object | Name | Type | Required | Value | | - | - | - | - | | count | integer | Yes | The count of the GPU resource. |-| sku | enum | Yes | The SKU of the GPU resource. - K80, P100, V100 | +| sku | enum | Yes | The SKU of the GPU resource. - V100 | ## Next steps
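A minimal, illustrative YAML fragment using the `GpuResource` object above might look like the following. The CPU and memory values are assumptions drawn from the V100 limits elsewhere in the docs, and only the V100 SKU remains valid after the K80/P100 retirement:

```yaml
# Illustrative "resources" section of a container group YAML.
# The gpu block is the GpuResource object: count plus the (now V100-only) sku.
resources:
  requests:
    cpu: 6.0
    memoryInGB: 112
    gpu:
      count: 1
      sku: V100
```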
container-instances | Container Instances Resource And Quota Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md | The following limits are default limits that can't be increased through a quot | Resource | Actual Limit | | | : | | Standard sku container groups per region per subscription | 100 | -| Standard sku cores (CPUs) per region per subscription | 100 | -| Standard sku cores (CPUs) for K80 GPU per region per subscription | 0 | +| Standard sku cores (CPUs) per region per subscription | 100 | | Standard sku cores (CPUs) for V100 GPU per region per subscription | 0 | | Container group creates per hour |300<sup>1</sup> | | Container group creates per 5 minutes | 100<sup>1</sup> | | Container group deletes per hour | 300<sup>1</sup> | | Container group deletes per 5 minutes | 100<sup>1</sup> | -## Standard Core Resources +## Standard Container Resources ### Linux Container Groups The following resources are available in all Azure Regions supported by Azure Co | :-: | :--: | :-: | | 4 | 16 | 20 | Y | -## GPU Resources (Preview) +## Spot Container Resources (Preview) ++The following maximum resources are available to a container group deployed using [Spot Containers](container-instances-spot-containers-overview.md) (preview). ++> [!NOTE] +> Spot Containers are only available in the following regions at this time: East US 2, West Europe, and West US. ++| Max CPU | Max Memory (GB) | VNET Max CPU | VNET Max Memory (GB) | Storage (GB) | +| :: | :: | :-: | :--: | :-: | +| 4 | 16 | N/A | N/A | 50 | ++## Confidential Container Resources (Preview) ++The following maximum resources are available to a container group deployed using [Confidential Containers](container-instances-confidential-overview.md) (preview). ++> [!NOTE] +> Confidential Containers are only available in the following regions at this time: East US, North Europe, West Europe, and West US.
++| Max CPU | Max Memory (GB) | VNET Max CPU | VNET Max Memory (GB) | Storage (GB) | +| :: | :: | :-: | :--: | :-: | +| 4 | 16 | 4 | 16 | 50 | ++## GPU Container Resources (Preview) > [!IMPORTANT] > K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will be available, it is recommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md). The following maximum resources are available to a container group deployed with | V100 | 1 | 6 | 112 | 50 | | V100 | 2 | 12 | 224 | 50 | | V100 | 4 | 24 | 448 | 50 | -<! -| K80 | 1 | 6 | 56 | 50 | -| K80 | 2 | 12 | 112 | 50 | -| K80 | 4 | 24 | 224 | 50 | -| P100, V100 | 1 | 6 | 112 | 50 | -| P100, V100 | 2 | 12 | 224 | 50 | -| P100, V100 | 4 | 24 | 448 | 50 | -> ## Next steps
container-instances | Container Instances Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-vnet.md | Examples in this article are formatted for the Bash shell. If you prefer another ## Deploy to new virtual network > [!NOTE]-> If you are using port 29 to have only 3 IP addresses, we recommend always to go one range above or below. For example, use port 28 so you can have at least 1 or more IP buffer per container group. By doing this, you can avoid containers in stuck, not able start or not able to stop states. +> If you are using a /29 subnet IP range, which provides only 3 usable IP addresses, we recommend always going one size larger (never smaller). For example, use a /28 subnet IP range so that each container group has at least one spare IP address. This helps you avoid container groups getting stuck in states where they can't start, restart, or stop. To deploy to a new virtual network and have Azure create the network resources for you automatically, specify the following when you execute [az container create][az-container-create]:
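The sizing advice in the note above follows from simple subnet arithmetic. Azure reserves five IP addresses in every subnet, so a /29 (8 addresses) leaves only three usable addresses, while a /28 (16 addresses) leaves eleven. A quick sketch:

```shell
# Usable addresses per subnet prefix: Azure reserves 5 IP addresses in every
# subnet (network, broadcast, gateway, and two DNS addresses).
usable_ips() {
  prefix=$1
  total=$((1 << (32 - prefix)))
  echo $((total - 5))
}

echo "/29 usable: $(usable_ips 29)"   # only 3 - no headroom per container group
echo "/28 usable: $(usable_ips 28)"   # 11 - leaves buffer for restarts
```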
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Container Registry Tutorial Sign Build Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-build-push.md | Otherwise create an x509 self-signed certificate storing it in AKV for remote si notation verify $IMAGE ``` Upon successful verification of the image using the trust policy, the sha256 digest of the verified image is returned in a successful output message.++## Next steps ++See [Ratify on Azure: Allow only signed images to be deployed on AKS with Notation and Ratify](https://github.com/deislabs/ratify/blob/main/docs/quickstarts/ratify-on-azure.md). |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
container-registry | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md | description: Lists Azure Policy Regulatory Compliance controls available for Azu Previously updated : 08/03/2023 Last updated : 08/25/2023 |
cosmos-db | Secondary Indexing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/secondary-indexing.md | Title: Indexing in Azure Cosmos DB for Apache Cassandra account -description: Learn how secondary indexing works in Azure Azure Cosmos DB for Apache Cassandra account. +description: Learn how secondary indexing works in Azure Cosmos DB for Apache Cassandra account. |
cosmos-db | Cmk Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cmk-troubleshooting-guide.md | A troubleshooting solution, for example, would be to create a new identity with After updating the account's default identity, you need to wait up to one hour for the account to stop being in revoke state. If the issue isn't resolved after more than two hours, contact customer service. -## Customer Managed Key does not exist +## Azure Key Vault Resource not found ### Reason for error? -You see this error when the customer managed key isn't found on the specified Azure Key Vault. +You see this error when the Azure Key Vault or the specified key isn't found. ### Troubleshooting |
cosmos-db | Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/data-residency.md | In Azure Cosmos DB, you must explicitly configure the cross-region data replicat **Periodic mode Backups**: By default, periodic mode account backups will be stored in geo-redundant storage. For periodic backup modes, you can configure data redundancy at the account level. There are three redundancy options for the backup storage. They are local redundancy, zone redundancy, or geo redundancy. For more information, see [periodic backup/restore](periodic-backup-restore-introduction.md). +## Residency requirements for analytical store ++Analytical store is resident by default as it is stored in either locally redundant or zone redundant storage. To learn more, see the [analytical store](analytical-store-introduction.md) article. ++ ## Use Azure Policy to enforce the residency requirements If you have data residency requirements that require you to keep all your data in a single Azure region, you can enforce zone-redundant or locally redundant backups for your account by using an Azure Policy. You can also enforce a policy that the Azure Cosmos DB accounts are not geo-replicated to other regions. |
cosmos-db | How To Setup Customer Managed Keys Existing Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md | + + Title: Configure customer-managed keys on existing accounts ++description: Store customer-managed keys in Azure Key Vault to use for encryption in your existing Azure Cosmos DB account with access control. +++ Last updated : 08/17/2023+++ms.devlang: azurecli +++# Configure customer-managed keys for your existing Azure Cosmos DB account with Azure Key Vault (Preview) +++Enabling a second layer of encryption for data at rest using [Customer Managed Keys](./how-to-setup-customer-managed-keys.md) while creating a new Azure Cosmos DB account has been Generally available for some time now. As a natural next step, we now have the capability to enable CMK on existing Azure Cosmos DB accounts. ++This feature eliminates the need for data migration to a new account to enable CMK. It helps to improve customers’ security and compliance posture. ++> [!NOTE] +> Currently, enabling customer-managed keys on existing Azure Cosmos DB accounts is in preview. This preview is provided without a service-level agreement. Certain features of this preview may not be supported or may have constrained capabilities. For more information, see [supplemental terms of use for Microsoft Azure previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++Enabling CMK kicks off a background, asynchronous process to encrypt all the existing data in the account, while new incoming data are encrypted before persisting. There's no need to wait for the asynchronous operation to succeed. The enablement process consumes unused/spare RUs so that it doesn't affect your read/write workloads. 
You can refer to this [link](./how-to-setup-customer-managed-keys.md?tabs=azure-powershell#how-do-customer-managed-keys-influence-capacity-planning) for capacity planning once your account is encrypted. ++## Get started by enabling CMK on your existing accounts ++### Prerequisites ++All the prerequisite steps needed while configuring customer-managed keys for new accounts are applicable to enabling CMK on your existing account. Refer to the steps [here](./how-to-setup-customer-managed-keys.md?tabs=azure-portal#prerequisites). ++### Steps to enable CMK on your existing account ++To enable CMK on an existing account, update the account with an ARM template setting a Key Vault key identifier in the keyVaultKeyUri property – just like you would when enabling CMK on a new account. This step can be done by issuing a PATCH call with the following payload: ++``` + { + "properties": { + "keyVaultKeyUri": "<key-vault-key-uri>" + } + } +``` ++The following CLI command enables CMK and waits for the completion of data encryption: ++```azurecli + az cosmosdb update --name "testaccount" --resource-group "testrg" --key-uri "https://keyvaultname.vault.azure.net/keys/key1" +``` ++### Steps to enable CMK on your existing Azure Cosmos DB account with PITR or analytical store ++To enable CMK on an existing account that has continuous backup and point-in-time restore enabled, you need to follow some extra steps. Complete steps 1 through 5, and then follow the instructions to enable CMK on an existing account. ++> [!NOTE] +> System-assigned identity and continuous backup mode are currently in public preview and may change in the future. Currently, only user-assigned managed identity is supported for enabling CMK on continuous backup accounts. ++++1. Configure a managed identity for your Azure Cosmos DB account. See [Configure managed identities with Azure AD for your Azure Cosmos DB account](./how-to-setup-managed-identity.md) ++1.
Update the Azure Cosmos DB account to set the default identity to point to the managed identity added in the previous step ++ **For system-assigned managed identity:** + ``` + az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/systemAssignedIdentities/MyID" + ``` ++ **For user-assigned managed identity:** ++ + ``` + az cosmosdb update -n $sourceAccountName -g $resourceGroupName --default-identity "UserAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/MyID" + ``` ++1. Configure Azure Key Vault as described in the documentation [here](./how-to-setup-customer-managed-keys.md?tabs=azure-cli#configure-your-azure-key-vault-instance) ++1. Add an [access policy](./how-to-setup-customer-managed-keys.md?tabs=azure-cli#using-a-managed-identity-in-the-azure-key-vault-access-policy) in the key vault for the default identity that was set in the previous step ++1. Update the Azure Cosmos DB account to set the Key Vault URI; this update triggers enabling CMK on the account + ``` + az cosmosdb update --name $accountName --resource-group $resourceGroupName --key-uri $keyVaultKeyURI + ``` +## Known limitations ++- Enabling CMK is available only at the Azure Cosmos DB account level, not at the collection level. +- We don't support enabling CMK on existing Azure Cosmos DB for Apache Cassandra accounts. +- We also don't support enabling CMK on existing accounts that are enabled for materialized views or full fidelity change feed (FFCF). +- Ensure the account has no documents with IDs larger than 990 bytes before enabling CMK. Otherwise, you'll get an error because the maximum supported ID size after encryption is 1024 bytes. +- During encryption of existing data, [control plane](./audit-control-plane-logs.md) actions such as "add region" are blocked.
These actions are unblocked and can be used right after the encryption is complete. ++## Monitor the progress of the resulting encryption ++Enabling CMK on an existing account is an asynchronous operation that kicks off a background task that encrypts all existing data. As such, the REST API request to enable CMK provides in its response an "Azure-AsyncOperation" URL. Polling this URL with GET requests returns the status of the overall operation, which eventually reports `Succeeded`. This mechanism is fully described in [this](https://learn.microsoft.com/azure/azure-resource-manager/management/async-operations) article. ++The Azure Cosmos DB account can continue to be used and data can continue to be written without waiting for the asynchronous operation to succeed. The CLI command for enabling CMK waits for the completion of data encryption. ++If you have further questions, reach out to Microsoft Support. ++## FAQs ++**What are the factors on which the encryption time depends?** ++Enabling CMK is an asynchronous operation and depends on sufficient unused RUs being available. We suggest enabling CMK during off-peak hours; if applicable, you can increase RUs beforehand to speed up the encryption. It's also a direct function of data size. ++**Do we need to brace ourselves for downtime?** ++Enabling CMK kicks off a background, asynchronous process to encrypt all the data. There's no need to wait for the asynchronous operation to succeed. The Azure Cosmos DB account is available for reads and writes, and there's no need for downtime. ++**Can you bump up the RUs once CMK has been triggered?** ++It's suggested to bump up the RUs before you trigger CMK. Once CMK is triggered, some control plane operations are blocked until the encryption is complete. This block may prevent you from increasing the RUs once CMK is triggered.
++**Is there a way to reverse the encryption or disable encryption after triggering CMK?** ++Once the data encryption process using CMK is triggered, it can't be reverted. ++**Will enabling encryption using CMK on an existing account have an impact on data size and read/writes?** ++As you would expect, by enabling CMK there's a slight increase in data size and RUs to accommodate extra encryption/decryption processing. ++**Should you back up the data before enabling CMK?** ++Enabling CMK doesn't pose any threat of data loss. In general, we suggest you back up the data regularly. ++**Are old backups taken as a part of periodic backup encrypted?** ++No. Old periodic backups aren't encrypted. Backups generated after CMK is enabled are encrypted. ++**What is the behavior on existing accounts that are enabled for continuous backup (PITR)?** ++When CMK is turned on, the encryption is turned on for continuous backups as well. All restores going forward are encrypted. ++**What is the behavior if CMK is enabled on a PITR-enabled account and we restore the account to the time CMK was disabled?** ++In this case, CMK is explicitly enabled on the restored target account for the following reasons: +- Once CMK is enabled on the account, there's no option to disable CMK. +- This behavior is in line with the current design of restoring a CMK-enabled account with periodic backup ++**What happens when a user revokes the key while CMK migration is in progress?** ++The state of the key is checked when CMK encryption is triggered. If the key in Azure Key Vault is in good standing, the encryption is started and the process completes without further checks. Even if the key is revoked, or the Azure Key Vault is deleted or unavailable, the encryption process succeeds. ++**Can we enable CMK encryption on our existing production account?** ++Yes.
Since the capability is currently in preview, we recommend testing all scenarios first on nonproduction accounts, and once you're comfortable, you can consider production accounts. ++## Next steps ++* Learn more about [data encryption in Azure Cosmos DB](database-encryption-at-rest.md). +* You can choose to add a second layer of encryption with your own keys; to learn more, see the [customer-managed keys](how-to-setup-cmk.md) article. +* For an overview of Azure Cosmos DB security and the latest improvements, see [Azure Cosmos DB database security](database-security.md). +* For more information about Microsoft certifications, see the [Azure Trust Center](https://azure.microsoft.com/support/trust-center/). |
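The "Monitor the progress" guidance above boils down to a GET-poll loop against the `Azure-AsyncOperation` URL until the status leaves `InProgress`. A sketch of that loop follows; the real `curl` call is stubbed out, and the URL, token, and three-poll completion are assumptions for illustration only:

```shell
# Poll an Azure-AsyncOperation-style endpoint until the operation completes.
# fetch_status is a stub standing in for the real call, which would be e.g.:
#   curl -s -H "Authorization: Bearer $TOKEN" "$ASYNC_OPERATION_URL" | jq -r '.status'
attempt=0
fetch_status() {
  # stub: pretend the encryption finishes on the third poll
  if [ "$1" -lt 3 ]; then echo "InProgress"; else echo "Succeeded"; fi
}

status="InProgress"
while [ "$status" = "InProgress" ]; do
  attempt=$((attempt + 1))
  status=$(fetch_status "$attempt")
  echo "poll $attempt: $status"
  # in a real script: sleep 30
done
```

A real status payload can also report `Failed` or `Canceled`, so a production loop should treat any status other than `InProgress` as terminal, as this one does.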
cosmos-db | How To Setup Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md | Data stored in your Azure Cosmos DB account is automatically and seamlessly encr You must store customer-managed keys in [Azure Key Vault](../key-vault/general/overview.md) and provide a key for each Azure Cosmos DB account that is enabled with customer-managed keys. This key is used to encrypt all the data stored in that account. > [!NOTE]-> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation. +> Currently, customer-managed keys are available only for new Azure Cosmos DB accounts. You should configure them during account creation. Enabling customer-managed keys on your existing accounts is available in preview. You can refer to the link [here](how-to-setup-customer-managed-keys-existing-accounts.md) for more details. > [!WARNING] > The following field names are reserved on Cassandra API tables in accounts using Customer-managed Keys: Here, create a new key using Azure Key Vault and retrieve the unique identifier. :::image type="content" source="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" lightbox="media/how-to-setup-customer-managed-keys/new-customer-managed-key.png" alt-text="Screenshot of the dialog to create a new key."::: - > [!TIP] - > Alternatively, you can use the Azure CLI to generate a key with: - > - > ```azurecli - > az keyvault key create \ - > --vault-name <name-of-key-vault> \ - > --name <name-of-key> - > ``` - > - > For more information on managing a key vault with the Azure CLI, see [manage Azure Key Vault with the Azure CLI](../key-vault/general/manage-with-cli2.md).
+ > [!TIP] + > Alternatively, you can use the Azure CLI to generate a key with: + > + > ```azurecli + > az keyvault key create \ + > --vault-name <name-of-key-vault> \ + > --name <name-of-key> + > ``` + > + > For more information on managing a key vault with the Azure CLI, see [manage Azure Key Vault with the Azure CLI](../key-vault/general/manage-with-cli2.md). 1. After the key is created, select the newly created key and then its current version. |
cosmos-db | Index Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md | Azure Cosmos DB supports two indexing modes: - **None**: Indexing is disabled on the container. This mode is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete. > [!NOTE]-> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmosdblazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing). +> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmosdbindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing). By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index items as they're written. 
When removing indexed paths, you should group all your changes into one indexing When you drop an indexed path, the query engine will immediately stop using it, and will do a full scan instead. > [!NOTE]-> Where possible, you should always try to group multiple indexing changes into one single indexing policy modification +> Where possible, you should always try to group multiple index removals into one single indexing policy modification. ++> [!IMPORTANT] +> Removing an index takes affect immediately, whereas adding a new index takes some time as it requires an indexing transformation. When replacing one index with another (for example, replacing a single property index with a composite-index) make sure to add the new index first and then wait for the index transformation to complete **before** you remove the previous index from the indexing policy. Otherwise this will negatively affect your ability to query the previous index and may break any active workloads that reference the previous index. ## Indexing policies and TTL |
cosmos-db | Intra Account Container Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/intra-account-container-copy.md | To get started with intra-account offline container copy for NoSQL and Cassandra ### API for MongoDB -To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline container copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription. +To get started with intra-account offline container copy for Azure Cosmos DB for MongoDB accounts, register for the **Intra-account offline collection copy (MongoDB)** preview feature flag in [Preview Features](access-previews.md) in the Azure portal. Once the registration is complete, the preview is effective for all API for MongoDB accounts in the subscription. <a name="how-to-do-container-copy"></a> |
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md | Today's applications are required to be highly responsive and always online. To Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security. +Use Retrieval Augmented Generation (RAG) to bring the most semantically relevant data to enrich your AI-powered applications built with Azure OpenAI models like GPT-3.5 and GPT-4. For more information, see [Retrieval Augmented Generation (RAG) with Azure Cosmos DB](rag-data-openai.md). + App development is faster and more productive thanks to: - Turnkey multi region data distribution anywhere in the world - Open source APIs - SDKs for popular languages.+- Retrieval Augmented Generation that brings your data to Azure OpenAI to As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand. End-to-end database management, with serverless and automatic scaling matching y ## Solutions that benefit from Azure Cosmos DB -[Web, mobile, gaming, and IoT application](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times for various data will benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications). 
+[Web, mobile, gaming, and IoT applications](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications). ## Next steps |
cosmos-db | Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md | Title: Compatibility and feature support description: Review Azure Cosmos DB for MongoDB vCore supported features and syntax including; commands, query support, datatypes, aggregation, and operators.---+++ - Previously updated : 04/11/2023 Last updated : 08/28/2023 # MongoDB compatibility and feature support with Azure Cosmos DB for MongoDB vCore + Azure Cosmos DB is Microsoft's fully managed NoSQL and relational database, offering [multiple database APIs](../../choose-api.md). You can communicate with Azure Cosmos DB for MongoDB using the MongoDB drivers, SDKs and tools you're already familiar with. Azure Cosmos DB for MongoDB enables the use of existing client drivers by adhering to the MongoDB wire protocol. By using the Azure Cosmos DB for MongoDB, you can enjoy the benefits of the MongoDB you're used to, with all of the enterprise capabilities that Azure Cosmos DB provides. |
cosmos-db | Connect Using Robomongo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/connect-using-robomongo.md | - Title: Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore -description: Learn how to connect to Azure Cosmos DB for MongoDB vCore using Studio 3T --- Previously updated : 07/07/2023----# Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore --Studio 3T (also known as Robomongo or Robo 3T) is a professional GUI that offers IDE & client tools for MongoDB. It's a great tool to speed up MongoDB development with a friendly user interface. In order to connect to your Azure Cosmos DB for MongoDB vCore cluster using Studio 3T, you must: --* Download and install [Studio 3T](https://robomongo.org/) -* Have your Azure Cosmos DB for MongoDB vCore [connection string](quickstart-portal.md#get-cluster-credentials) information --## Connect using Studio 3T --To add your Azure Cosmos DB cluster to the Studio 3T connection manager, perform the following steps: --1. Retrieve the connection information for your Azure Cosmos DB for MongoDB vCore using the instructions [here](quickstart-portal.md#get-cluster-credentials). -- :::image type="content" source="./media/connect-using-robomongo/connection-string.png" alt-text="Screenshot of the connection string page."::: -2. Run the **Studio 3T** application. --3. Click the connection button under **File** to manage your connections. Then, click **New Connection** in the **Connection Manager** window, which will open up another window where you can paste the connection credentials. --4. In the connection credentials window, choose the first option and paste your connection string. Click **Next** to move forward. -- :::image type="content" source="./media/connect-using-robomongo/new-connection.png" alt-text="Screenshot of the Studio 3T connection credentials window."::: -5. Choose a **Connection name** and double check your connection credentials. 
-- :::image type="content" source="./media/connect-using-robomongo/connection-configuration.png" alt-text="Screenshot of the Studio 3T connection details window."::: -6. On the **SSL** tab, check **Use SSL protocol to connect**. -- :::image type="content" source="./media/connect-using-robomongo/connection-ssl.png" alt-text="Screenshot of the Studio 3T new connection SSL Tab."::: -7. Finally, click **Test Connection** in the bottom left to verify that you are able to connect, then click **Save**. --## Next steps --- Learn [how to use Bicep templates](quickstart-bicep.md) to deploy your Azure Cosmos DB for MongoDB vCore cluster.-- Learn [how to connect your Nodejs web application](tutorial-nodejs-web-app.md) to a MongoDB vCore cluster.-- Check the [migration options](migration-options.md) to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore. |
cosmos-db | High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/high-availability.md | Title: High availability and replication description: Review replication and high availability concepts in the context of Azure Cosmos DB for MongoDB vCore.+++ --- Previously updated : 02/07/2023 Last updated : 08/28/2023 # High availability in Azure Cosmos DB for MongoDB vCore + High availability (HA) avoids database downtime by maintaining standby replicas of every node in a cluster. If a node goes down, Azure Cosmos DB for MongoDB vCore switches incoming connections from the failed node to its standby replica. ## How it works |
cosmos-db | How To Connect Studio 3T | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-connect-studio-3t.md | + + Title: Use Studio 3T to connect ++description: Connect to an Azure Cosmos DB for MongoDB vCore account using the Studio 3T community tool to query data. ++++++ Last updated : 08/28/2023+# CustomerIntent: As a database owner, I want to use Studio 3T so that I can connect to and query my collections. +++# Use Studio 3T to connect to Azure Cosmos DB for MongoDB vCore +++Studio 3T (also known as Robomongo or Robo 3T) is a professional GUI that offers IDE & client tools for MongoDB. It's a popular community tool to speed up MongoDB development with a straightforward user interface. ++## Prerequisites ++- An existing Azure Cosmos DB for MongoDB vCore cluster. + - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free). + - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md). +- [Studio 3T](https://robomongo.org/) community tool ++## Connect using Studio 3T ++To add your Azure Cosmos DB cluster to the Studio 3T connection manager, perform the following steps: ++1. Retrieve the connection information for your Azure Cosmos DB for MongoDB vCore using the instructions [here](quickstart-portal.md#get-cluster-credentials). ++ :::image type="content" source="./media/connect-using-robomongo/connection-string.png" alt-text="Screenshot of the connection string page."::: ++1. Run the **Studio 3T** application. ++1. Select the connection button under **File** to manage your connections. Then, select **New Connection** in the **Connection Manager** window, which opens another window where you can paste the connection credentials. ++1. In the connection credentials window, choose the first option and paste your connection string. Select **Next** to move forward. 
++ :::image type="content" source="./media/connect-using-robomongo/new-connection.png" alt-text="Screenshot of the Studio 3T connection credentials window."::: ++1. Choose a **Connection name** and double check your connection credentials. ++ :::image type="content" source="./media/connect-using-robomongo/connection-configuration.png" alt-text="Screenshot of the Studio 3T connection details window."::: ++1. On the **SSL** tab, check **Use SSL protocol to connect**. ++ :::image type="content" source="./media/connect-using-robomongo/connection-ssl.png" alt-text="Screenshot of the Studio 3T new connection TLS/SSL Tab."::: ++1. Finally, select **Test Connection** in the bottom left to verify that you're able to connect, then select **Save**. ++## Next step ++> [!div class="nextstepaction"] +> [Migration options](migration-options.md) |
cosmos-db | How To Create Text Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-create-text-index.md | + + Title: Search and query with text indexes ++description: Configure and use text indexes to perform and fine tune text searches in Azure Cosmos DB for MongoDB vCore. ++++++ Last updated : 08/28/2023+# CustomerIntent: As a database query developer, I want to create a text index so that I can perform full-text searches. +++# Search and query with text indexes in Azure Cosmos DB for MongoDB vCore +++One of the key features that Azure Cosmos DB for MongoDB vCore provides is text indexing, which allows for efficient searching and querying of text-based data. The service implements **version 2** text indexes. Version 2 supports case sensitivity but not diacritic sensitivity. ++Text indexes in Azure Cosmos DB for MongoDB are special data structures that optimize text-based queries, making them faster and more efficient. They're designed to handle textual content like documents, articles, comments, or any other text-heavy data. Text indexes use techniques such as tokenization, stemming, and stop words to create an index that improves the performance of text-based searches. ++## Prerequisites ++- An existing Azure Cosmos DB for MongoDB vCore cluster. + - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free). + - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md). 
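The tokenization and stop-word techniques mentioned in the article above can be sketched in a few lines of standalone JavaScript. This is a simplified illustration only; the actual tokenizer, stemmer, and stop-word list used by Azure Cosmos DB for MongoDB vCore are internal to the service.

```javascript
// Simplified sketch of how a text index might preprocess a string:
// lowercase the text, split it into tokens, and drop common stop words.
// Illustrative only -- not the service's real implementation.
const STOP_WORDS = new Set(["a", "an", "the", "is", "and", "or", "of", "to"]);

function tokenize(text) {
  return text
    .toLowerCase()
    .split(/\W+/) // split on runs of non-word characters
    .filter((t) => t.length > 0 && !STOP_WORDS.has(t));
}

// Terms that would be candidates for the index entry of this document.
const tokens = tokenize("Azure Cosmos DB is a globally distributed database.");
console.log(tokens);
```

The surviving tokens are what a text index would store and match against at query time, which is why searches ignore common words like "the" or "is".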
++## Define a text index ++For simplicity, let us consider an example of a blog application with the following setup: ++- **Database name**: `cosmicworks` +- **Collection name**: `products` ++This example application stores articles as documents with the following structure: ++```json +{ + "_id": ObjectId("617a34e7a867530bff1b2346"), + "title": "Azure Cosmos DB - A Game Changer", + "content": "Azure Cosmos DB is a globally distributed, multi-model database service.", + "author": "John Doe", + "category": "Technology", + "published": true +} +``` ++1. Use the `createIndex` method with the `text` option to create a text index on the `title` field. ++ ```javascript + use cosmicworks; ++ db.products.createIndex({ title: "text" }) + ``` ++ > [!NOTE] + > While you can define only one text index per collection, Azure Cosmos DB for MongoDB vCore allows that single text index to cover multiple fields, enabling you to perform text searches across different fields in your documents. ++1. Optionally, create an index to support search on both the `title` and `content` fields. ++ ```javascript + db.products.createIndex({ title: "text", content: "text" }) + ``` ++## Configure text index options ++Text indexes in Azure Cosmos DB for MongoDB come with several options to customize their behavior. For example, you can specify the language for text analysis, set weights to prioritize certain fields, and configure case-insensitive searches. Here's an example of creating a text index with options: ++1. Create an index to support search on both the `title` and `content` fields with English language support. Also, assign higher weights to the `title` field to prioritize it in search results.
++ ```javascript + db.products.createIndex( + { title: "text", content: "text" }, + { default_language: "english", weights: { title: 10, content: 5 }, caseSensitive: false } + ) + ``` ++### Weights in text indexes ++When creating a text index, you can assign different weights to individual fields in the index. These weights represent the importance or relevance of each field in the search. When executing a text search query, Azure Cosmos DB for MongoDB vCore calculates a score for each document based on the search terms and the assigned weights of the indexed fields. The score represents the relevance of the document to the search query. ++1. Create an index to support search on both the `title` and `content` fields. Assign a weight of 2 to the "title" field and a weight of 1 to the "content" field. ++ ```javascript + db.products.createIndex( + { title: "text", content: "text" }, + { weights: { title: 2, content: 1 } } + ) + ``` ++ > [!NOTE] + > When a client performs a text search query with the term "Cosmos DB," the score for each document in the collection will be calculated based on the presence and frequency of the term in both the "title" and "content" fields, with higher importance given to the "title" field due to its higher weight. ++## Perform a text search using a text index ++Once the text index is created, you can perform text searches using the "text" operator in your queries. The text operator takes a search string and matches it against the text index to find relevant documents. ++1. Perform a text search for the phrase `Cosmos DB`. ++ ```javascript + db.products.find( + { $text: { $search: "Cosmos DB" } } + ) + ``` ++1. 
Optionally, use the `$meta` projection operator along with the `textScore` field in a query to see the weighted score of each matching document. ++ ```javascript + db.products.find( + { $text: { $search: "Cosmos DB" } }, + { score: { $meta: "textScore" } } + ) + ``` ++## Dropping a text index ++To drop a text index in MongoDB, you can use the `dropIndex()` method on the collection and specify the index key or name for the text index you want to remove. ++1. Drop a text index by explicitly specifying the key. ++ ```javascript + db.products.dropIndex({ title: "text" }) + ``` ++1. Optionally, drop a text index by specifying the autogenerated unique name. ++ ```javascript + db.products.dropIndex("title_text") + ``` ++## Text index limitations ++- Only one text index can be defined on a collection. +- Text indexes support simple text searches and don't provide advanced search capabilities like regular expression searches. +- Hint() isn't supported in combination with a query using $text expression. +- Sort operations can't use the ordering of the text index in MongoDB. +- Text indexes can be relatively large, consuming significant storage space compared to other index types. ++## Next step ++> [!div class="nextstepaction"] +> [Build a Node.js web application](tutorial-nodejs-web-app.md) |
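To build intuition for how field weights shape the relevance score described in the weights section above, here is a rough standalone sketch of a weighted term-frequency score. It illustrates the concept only; the service's actual `textScore` formula is internal and more sophisticated.

```javascript
// Rough sketch of weighted text scoring: each field's term-frequency
// contribution is multiplied by that field's index weight.
// Illustrative only -- not the service's actual textScore formula.
const weights = { title: 2, content: 1 };

function termFrequency(text, term) {
  const tokens = text.toLowerCase().split(/\W+/);
  return tokens.filter((t) => t === term.toLowerCase()).length;
}

function score(doc, searchTerm) {
  let total = 0;
  for (const [field, weight] of Object.entries(weights)) {
    total += weight * termFrequency(doc[field] ?? "", searchTerm);
  }
  return total;
}

const doc = {
  title: "Azure Cosmos DB - A Game Changer",
  content: "Azure Cosmos DB is a globally distributed, multi-model database service.",
};

// "cosmos" appears once in the title (weight 2) and once in the content
// (weight 1), so this document scores 3 for that term.
console.log(score(doc, "cosmos")); // 3
```

With equal term frequency, a match in `title` contributes twice as much as a match in `content`, which is exactly the prioritization effect the weights option is meant to achieve.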
cosmos-db | How To Migrate Native Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-migrate-native-tools.md | + + Title: Migrate MongoDB using MongoDB native tools ++description: Use MongoDB native tools to migrate small datasets from existing MongoDB instances to Azure Cosmos DB for MongoDB vCore offline. ++++++ Last updated : 08/28/2023+# CustomerIntent: As a database owner, I want to use the native tools in MongoDB Core so that I can migrate an existing dataset to Azure Cosmos DB for MongoDB vCore. +++# Migrate MongoDB to Azure Cosmos DB for MongoDB vCore offline using MongoDB native tools +++In this tutorial, you use MongoDB native tools to perform an offline (one-time) migration of a database from an on-premises or cloud instance of MongoDB to Azure Cosmos DB for MongoDB vCore. The MongoDB native tools are a set of binaries that facilitate data manipulation on an existing MongoDB instance. The focus of this doc is on migrating data out of a MongoDB instance using *mongoexport/mongoimport* or *mongodump/mongorestore*. Since the native tools connect to MongoDB using connection strings, you can run the tools anywhere. The native tools can be the simplest solution for small datasets where total migration time isn't a concern. ++## Prerequisites ++- An existing Azure Cosmos DB for MongoDB vCore cluster. + - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free). + - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md). +- [MongoDB native tools](https://www.mongodb.com/try/download/database-tools) installed on your machine. ++## Prepare ++Prior to starting the migration, make sure you have prepared your Azure Cosmos DB for MongoDB vCore account and your existing MongoDB instance for migration. 
++- MongoDB instance (source) + - Complete the [premigration assessment](../pre-migration-steps.md#pre-migration-assessment) to determine whether there are any incompatibilities and warnings between your source instance and target account. + - Ensure that the version of your MongoDB native tools matches the version of the existing (source) MongoDB instance. + - If your MongoDB instance has a different version than Azure Cosmos DB for MongoDB vCore, then install both MongoDB native tool versions and use the appropriate tool version for MongoDB and Azure Cosmos DB for MongoDB vCore, respectively. + - Add a user with `readWrite` permissions, unless one already exists. You eventually use this credential with the *mongoexport* and *mongodump* tools. +- Azure Cosmos DB for MongoDB vCore (target) + - Gather the Azure Cosmos DB for MongoDB vCore [account's credentials](./quickstart-portal.md#get-cluster-credentials). + - [Configure Firewall Settings](./security.md#network-security-options) on Azure Cosmos DB for MongoDB vCore. ++> [!TIP] +> We recommend running these tools within the same network as the MongoDB instance to avoid further firewall issues. ++## Choose the proper MongoDB native tool ++There are some high-level considerations when choosing the right MongoDB native tool for your offline migration. ++## Perform the migration ++Migrate a collection from the source MongoDB instance to the target Azure Cosmos DB for MongoDB vCore account using your preferred native tool. For more information on selecting a tool, see [native MongoDB tools](migration-options.md#native-mongodb-tools-offline). ++> [!TIP] +> If you simply have a small JSON file that you want to import into Azure Cosmos DB for MongoDB vCore, the *mongoimport* tool is a quick solution for ingesting the data. ++### [mongoexport/mongoimport](#tab/export-import) ++1. 
To export the data from the source MongoDB instance, open a terminal and use the ``--host``, ``--username``, and ``--password`` arguments to connect to and export JSON records. ++ ```bash + mongoexport \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --collection <collection-name> \ + --out <filename>.json + ``` ++1. Optionally, export a subset of the MongoDB data by adding a ``--query`` argument. This argument ensures that the tool only exports documents that match the filter. ++ ```bash + mongoexport \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --collection <collection-name> \ + --query '{ "quantity": { "$gte": 15 } }' \ + --out <filename>.json + ``` ++1. Import the previously exported file into the target Azure Cosmos DB for MongoDB vCore account. ++ ```bash + mongoimport \ + --file <filename>.json \ + --type json \ + --writeConcern="{w:0}" \ + --db <database-name> \ + --collection <collection-name> \ + --ssl \ + <target-connection-string> + ``` ++1. Monitor the terminal output from *mongoimport*. The output prints lines of text to the terminal with updates on the import operation's status. ++### [mongodump/mongorestore](#tab/dump-restore) ++1. To create a data dump of all data in your MongoDB instance, open a terminal and use the ``--host``, ``--username``, and ``--password`` arguments to dump the data as native BSON. ++ ```bash + mongodump \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --out <dump-directory> + ``` ++1. 
Optionally, you can specify the ``--db`` and ``--collection`` arguments to narrow the scope of the data you wish to dump: ++ ```bash + mongodump \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --out <dump-directory> + ``` ++ ```bash + mongodump \ + --host <hostname><:port> \ + --username <username> \ + --password <password> \ + --db <database-name> \ + --collection <collection-name> \ + --out <dump-directory> + ``` ++1. Observe that the tool created a directory with the native BSON data dumped. The files and folders are organized into a resource hierarchy based on the database and collection names. Each database is a folder and each collection is a `.bson` file. ++1. Restore the contents of any specific collection into an Azure Cosmos DB for MongoDB vCore account by specifying the collection's specific BSON file. The filename is constructed using this syntax: `<dump-directory>/<database-name>/<collection-name>.bson`. ++ ```bash + mongorestore \ + --writeConcern="{w:0}" \ + --db <database-name> \ + --collection <collection-name> \ + --ssl \ + <dump-directory>/<database-name>/<collection-name>.bson + ``` ++1. Monitor the terminal output from *mongorestore*. The output prints lines of text to the terminal with updates on the restore operation's status. ++++## Next step ++> [!div class="nextstepaction"] +> [Build a Node.js web application](tutorial-nodejs-web-app.md) |
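The `--query` filter shown in the *mongoexport* step above uses standard MongoDB query operators such as `$gte`. As a quick mental model, here is a minimal JavaScript sketch of that filter's matching semantics; it is illustrative only, since mongoexport sends the filter to the server, which evaluates it before exporting.

```javascript
// Minimal sketch of how a filter like { "quantity": { "$gte": 15 } }
// selects documents. Illustrative only -- the real evaluation happens
// server-side before mongoexport writes the JSON output.
function matchesGte(doc, field, minimum) {
  return typeof doc[field] === "number" && doc[field] >= minimum;
}

const docs = [
  { _id: 1, quantity: 10 },
  { _id: 2, quantity: 15 },
  { _id: 3, quantity: 42 },
];

// Only documents with quantity >= 15 would be exported.
const exported = docs.filter((d) => matchesGte(d, "quantity", 15));
console.log(exported.map((d) => d._id)); // [ 2, 3 ]
```

Exporting a filtered subset like this keeps the output file small, which matters because the native tools are best suited to small datasets.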
cosmos-db | How To Restore Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-restore-cluster.md | Title: Restore a cluster backup description: Restore an Azure Cosmos DB for MongoDB vCore cluster from a point in time encrypted backup snapshot.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # Restore a cluster in Azure Cosmos DB for MongoDB vCore |
cosmos-db | How To Scale Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-scale-cluster.md | Title: Scale or configure a cluster description: Scale an Azure Cosmos DB for MongoDB vCore cluster by changing the tier and disk size or change the configuration by enabling high availability.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # Scaling and configuring your Azure Cosmos DB for MongoDB vCore cluster |
cosmos-db | How To Text Indexes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-text-indexes.md | - Title: Text indexes in Azure Cosmos DB for MongoDB vCore- -description: How to configure and use text indexes in Azure Cosmos DB for MongoDB vCore ------- Previously updated : 07/26/2023---# Text indexes in Azure Cosmos DB for MongoDB vCore ---One of the key features that Azure Cosmos DB for MongoDB vCore provides is text indexing, which allows for efficient searching and querying of text-based data. The service implements version 2 text indexes which support case sensitivity but not diacritic sensitivity. In this article, we will explore the usage of text indexes in Azure Cosmos DB for MongoDB, along with practical examples and syntax to help you leverage this feature effectively. --## What are Text Indexes? --Text indexes in Azure Cosmos DB for MongoDB are special data structures that optimize text-based queries, making them faster and more efficient. They are designed to handle textual content like documents, articles, comments, or any other text-heavy data. Text indexes use techniques such as tokenization, stemming, and stop words to create an index that improves the performance of text-based searches. --## Defining a Text Index --For simplicity let us consider an example of a blog application that stores articles with the following document structure: --```json -{ - "_id": ObjectId("617a34e7a867530bff1b2346"), - "title": "Azure Cosmos DB - A Game Changer", - "content": "Azure Cosmos DB is a globally distributed, multi-model database service.", - "author": "John Doe", - "category": "Technology", - "published": true -} -``` --To create a text index in Azure Cosmos DB for MongoDB, you can use the "createIndex" method with the "text" option. 
Here's an example of how to create a text index for a "title" field in a collection named "articles": --``` -db.articles.createIndex({ Title: "text" }) -``` --While we can define only one text index per collection, Azure Cosmos DB for MongoDB allows you to create text indexes on multiple fields to enable you to perform text searches across different fields in your documents. --For example, if we want to perform search on both the "title" and "content" fields, then the text index can be defined as: --``` -db.articles.createIndex({ Title: "text", content: "text" }) -``` --## Text Index Options --Text indexes in Azure Cosmos DB for MongoDB come with several options to customize their behavior. For example, you can specify the language for text analysis, set weights to prioritize certain fields, and configure case-insensitive searches. Here's an example of creating a text index with options: --``` -db.articles.createIndex( - { content: "text", Title: "text" }, - { default_language: "english", weights: { Title: 10, content: 5 }, caseSensitive: false } -) -``` -In this example, we have defined a text index on both the "content" and "title" fields with English language support. We have also assigned higher weights to the "title" field to prioritize it in search results. --## Significance of weights in text indexes --When creating a text index, you have the option to assign different weights to individual fields in the index. These weights represent the importance or relevance of each field in the search. --When executing a text search query, Cosmos DB will calculate a score for each document based on the search terms and the assigned weights of the indexed fields. The score represents the relevance of the document to the search query. ---``` -db.articles.createIndex( - { Title: "text", content: "text" }, - { weights: { Title: 2, content: 1 } } -) -``` --For example, let's say we have a text index with two fields: "title" and "content." 
We assign a weight of 2 to the "title" field and a weight of 1 to the "content" field. When a user performs a text search query with the term "Cosmos DB," the score for each document in the collection will be calculated based on the presence and frequency of the term in both the "title" and "content" fields, with higher importance given to the "title" field due to its higher weight. --To look at the score of documents in the query result, you can use the $meta projection operator along with the textScore field in your query projection. ---``` -db.articles.find( - { $text: { $search: "Cosmos DB" } }, - { score: { $meta: "textScore" } } -) -``` --## Performing a Text Search --Once the text index is created, you can perform text searches using the "text" operator in your queries. The text operator takes a search string and matches it against the text index to find relevant documents. Here's an example of a text search query: --``` -db.articles.find({ $text: { $search: "Azure Cosmos DB" } }) -``` --This query will return all documents in the "articles" collection that contain the terms "Azure" and "Cosmos DB" in any order. --## Limitations --* Only one text index can be defined on a collection. -* Text indexes support simple text searches and do not provide advanced search capabilities like regular expression searches. -* Hint() is not supported in combination with a query using $text expression. -* Sort operations cannot leverage the ordering of the text index in MongoDB. -* Text indexes can be relatively large, consuming significant storage space compared to other index types. ----## Dropping a text index --To drop a text index in MongoDB, you can use the dropIndex() method on the collection and specify the index key or name for the text index you want to remove. --``` -db.articles.dropIndex({ Title: "text" }) -``` -or -``` -db.articles.dropIndex("title_text") -``` |
cosmos-db | How To Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-transactions.md | + + Title: Group multiple operations in transactions ++description: Support atomicity, consistency, isolation, and durability with transactions in Azure Cosmos DB for MongoDB vCore. ++++++ Last updated : 08/28/2023+# CustomerIntent: As a developer, I want to use transactions so that I can group multiple operations together. +++# Group multiple operations in transactions in Azure Cosmos DB for MongoDB vCore +++It's common to want to group multiple operations into a single transaction to either commit or roll back together. In database principles, transactions typically implement four key **ACID** principles. ACID stands for: ++- **Atomicity**: Transactions complete entirely or not at all. +- **Consistency**: Databases transition from one consistent state to another. +- **Isolation**: Individual transactions are shielded from simultaneous ones. +- **Durability**: Finished transactions are permanent, ensuring data remains consistent, even during system failures. ++The ACID principles in database management ensure transactions are processed reliably. Azure Cosmos DB for MongoDB vCore implements these principles, enabling you to create transactions for multiple operations. ++## Prerequisites ++- An existing Azure Cosmos DB for MongoDB vCore cluster. + - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free). + - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md). ++## Create a transaction ++Create a new transaction using the appropriate methods from the developer language of your choice. These methods typically include some wrapping mechanism to group multiple operations together, and a method to commit the transaction.
++### [JavaScript](#tab/javascript) ++> [!NOTE] +> The samples in this section assume you have a collection variable named `collection`. ++1. Use `startSession()` to create a client session for the transaction operation. ++ ```javascript + const transactionSession = client.startSession(); + ``` ++1. Create a transaction using `withTransaction()` and place all relevant transaction operations within the callback. ++ ```javascript + await transactionSession.withTransaction(async () => { + await collection.insertOne({ name: "Coolarn shirt", price: 38.00 }, { session: transactionSession }); + await collection.insertOne({ name: "Coolarn shirt button", price: 1.50 }, { session: transactionSession }); + }); + ``` ++1. Commit the transaction using `commitTransaction()`. ++ ```javascript + transactionSession.commitTransaction(); + ``` ++1. Use `endSession()` to end the transaction session. ++ ```javascript + transactionSession.endSession(); + ``` ++### [Java](#tab/java) ++> [!NOTE] +> The samples in this section assume you have a collection variable named `collection`. ++1. Use `startSession()` to create a client session for the transaction operation within a `try` block. ++ ```java + try (ClientSession session = client.startSession()) { + } + ``` ++1. Create a transaction using `startTransaction()`. ++ ```java + session.startTransaction(); + ``` ++1. Include all relevant transaction operations. ++ ```java + collection.insertOne(session, new Document().append("name", "Coolarn shirt").append("price", 38.00)); + collection.insertOne(session, new Document().append("name", "Coolarn shirt button").append("price", 1.50)); + ``` ++1. Commit the transaction using `commitTransaction()`. ++ ```java + session.commitTransaction(); + ``` ++### [Python](#tab/python) ++> [!NOTE] +> The samples in this section assume you have a collection variable named `coll`. ++1. Use `start_session()` to create a client session for the transaction operation.
++ ```python + with client.start_session() as ts: + ``` ++1. Within the session block, create a transaction using `start_transaction()`. ++ ```python + ts.start_transaction() + ``` ++1. Include all relevant transaction operations. ++ ```python + coll.insert_one({ 'name': 'Coolarn shirt', 'price': 38.00 }, session=ts) + coll.insert_one({ 'name': 'Coolarn shirt button', 'price': 1.50 }, session=ts) + ``` ++1. Commit the transaction using `commit_transaction()`. ++ ```python + ts.commit_transaction() + ``` ++++## Roll back a transaction ++Occasionally, you may be required to undo a transaction before it's committed. ++### [JavaScript](#tab/javascript) ++1. Using an existing transaction session, abort the transaction with `abortTransaction()`. ++ ```javascript + transactionSession.abortTransaction(); + ``` ++1. End the transaction session. ++ ```javascript + transactionSession.endSession(); + ``` ++### [Java](#tab/java) ++1. Using an existing transaction session, abort the transaction with `abortTransaction()`. ++ ```java + session.abortTransaction(); + ``` ++### [Python](#tab/python) ++1. Using an existing transaction session, abort the transaction with `abort_transaction()`. ++ ```python + ts.abort_transaction() + ``` ++++## Next step ++> [!div class="nextstepaction"] +> [Build a Node.js web application](tutorial-nodejs-web-app.md) |
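The atomicity guarantee described in the transactions article above (all staged operations become visible together on commit, or none of them on abort) can be illustrated with a small in-memory sketch. This is a conceptual toy, not the MongoDB driver API.

```javascript
// Toy in-memory illustration of atomic commit/rollback -- a conceptual
// sketch of the ACID atomicity described above, not the MongoDB driver API.
class ToyTransaction {
  constructor(store) {
    this.store = store;
    this.pending = [];
  }
  insert(doc) {
    this.pending.push(doc); // staged, not yet visible to readers
  }
  commit() {
    this.store.push(...this.pending); // all staged writes apply together
    this.pending = [];
  }
  abort() {
    this.pending = []; // none of the staged writes apply
  }
}

const store = [];

// Committed transaction: both inserts become visible together.
const tx1 = new ToyTransaction(store);
tx1.insert({ name: "Coolarn shirt", price: 38.0 });
tx1.insert({ name: "Coolarn shirt button", price: 1.5 });
tx1.commit();

// Aborted transaction: its insert never becomes visible.
const tx2 = new ToyTransaction(store);
tx2.insert({ name: "Mislabeled item", price: -1 });
tx2.abort();

console.log(store.length); // 2
```

The same shape appears in the driver samples: operations are grouped under a session, then `commitTransaction()`/`commit_transaction()` makes them durable as a unit, while `abortTransaction()`/`abort_transaction()` discards them as a unit.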
cosmos-db | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/introduction.md | Title: Introduction/Overview description: Learn about Azure Cosmos DB for MongoDB vCore, a fully managed MongoDB-compatible database for building modern applications with a familiar architecture.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # What is Azure Cosmos DB for MongoDB vCore? (Preview) |
cosmos-db | Migration Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/migration-options.md | Title: Migrate data from MongoDB + Title: Options to migrate data from MongoDB -description: Learn about the various options to migrate your data from other MongoDB sources to Azure Cosmos DB for MongoDB vCore. ---+description: Review various options to migrate your data from other MongoDB sources to Azure Cosmos DB for MongoDB vCore. +++ Previously updated : 03/09/2023 Last updated : 08/28/2023 +# Options to migrate data from MongoDB to Azure Cosmos DB for MongoDB vCore -# Options for migrating data to Azure Cosmos DB for MongoDB vCore --## Pre-migration assessment -+This document describes the various options to lift and shift your MongoDB workloads to the Azure Cosmos DB for MongoDB vCore-based offering. +## Premigration assessment Assessment involves finding out whether you're using the [features and syntax that are supported](./compatibility.md). The aim of this stage is to create a list of incompatibilities and warnings, if any. After you have the assessment results, you can try to address the findings during the rest of the migration planning. The [Azure Cosmos DB Migration for MongoDB extension](/sql/azure-data-studio/extensions/database-migration-for-mongo-extension) in Azure Data Studio helps you assess a MongoDB workload for migrating to Azure Cosmos DB for MongoDB. You can use this extension to run an end-to-end assessment on your workload and find out the actions that you may need to take to seamlessly migrate your workloads to Azure Cosmos DB. During the assessment of a MongoDB endpoint, the extension reports all the discovered resources. --> [!NOTE] +> [!TIP] > We recommend that you go through [the supported features and syntax](./compatibility.md) in detail and perform a proof of concept before the actual migration.
+## Native MongoDB tools (Offline) -This document describes the various options to lift and shift your MongoDB workloads to Azure Cosmos DB for MongoDB vCore-based offering. +You can use the native MongoDB tools such as *mongodump/mongorestore* and *mongoexport/mongoimport* to migrate datasets offline (without replicating live changes) to the Azure Cosmos DB for MongoDB vCore-based offering. -## Native MongoDB tools (Offline) +| Scenario | MongoDB native tool | +| | | +| Move subset of database data (JSON/CSV-based) | *mongoexport/mongoimport* | +| Move whole database (BSON-based) | *mongodump/mongorestore* | -- You can use the native MongoDB tools such as *mongodump/mongorestore*, *mongoexport/mongoimport* to migrate datasets offline (without replicating live changes) to Azure Cosmos DB for MongoDB vCore-based offering.-- *mongodump/mongorestore* works well for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into Azure Cosmos DB.+- *mongoexport/mongoimport* is the best pair of migration tools for migrating a subset of your MongoDB database. + - *mongoexport* exports your existing data to a human-readable JSON or CSV file. *mongoexport* takes an argument specifying the subset of your existing data to export. + - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB for MongoDB vCore in this case). + - JSON and CSV aren't compact formats; you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB for MongoDB vCore. +- *mongodump/mongorestore* is the best pair of migration tools for migrating your entire MongoDB database. The compact BSON format makes more efficient use of network resources as the data is inserted into Azure Cosmos DB for MongoDB vCore. - *mongodump* exports your existing data as a BSON file.-
-- *mongoexport/mongoimport* works well for migrating a subset of your MongoDB database.- - *mongoexport* exports your existing data to a human-readable JSON or CSV file. - - *mongoexport* takes an argument specifying the subset of your existing data to export. - - *mongoimport* opens a JSON or CSV file and inserts the content into the target database instance (Azure Cosmos DB in this case.). - - Since JSON and CSV aren't compact formats, you may incur excess network charges as *mongoimport* sends data to Azure Cosmos DB. -- Here's how you can [migrate data to Azure Cosmos DB for MongoDB vCore using the native MongoDB tools](../tutorial-mongotools-cosmos-db.md#perform-the-migration).+ - *mongorestore* imports your BSON file dump into Azure Cosmos DB for MongoDB vCore. ++> [!NOTE] +> The MongoDB native tools can move data only as fast as the host hardware allows. ## Data migration using Azure Databricks (Offline/Online) This document describes the various options to lift and shift your MongoDB workl ## Next steps -- Migrate data to Azure Cosmos DB for MongoDB vCore [using native MongoDB tools](../tutorial-mongotools-cosmos-db.md).+- Migrate data to Azure Cosmos DB for MongoDB vCore [using native MongoDB tools](how-to-migrate-native-tools.md). - Migrate data to Azure Cosmos DB for MongoDB vCore [using Azure Databricks](../migrate-databricks.md). |
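To make the native-tool pairing described in this entry concrete, here's a small sketch that assembles the corresponding command lines in Python. The URIs, database, and collection names are hypothetical placeholders; the commands are printed for review rather than executed, and in practice you'd run them in a shell (or via `subprocess.run`) against your real source and target:

```python
# Hypothetical placeholders -- replace with your real connection strings.
source_uri = "mongodb://localhost:27017"
target_uri = "mongodb+srv://<user>:<password>@<clusterName>.mongocluster.cosmos.azure.com"

# Whole-database move (BSON-based): dump from the source, restore to the target.
dump_cmd = ["mongodump", f"--uri={source_uri}", "--out=./dump"]
restore_cmd = ["mongorestore", f"--uri={target_uri}", "./dump"]

# Subset move (JSON-based): export one collection, then import it.
export_cmd = ["mongoexport", f"--uri={source_uri}", "--db=store", "--collection=products", "--out=products.json"]
import_cmd = ["mongoimport", f"--uri={target_uri}", "--db=store", "--collection=products", "--file=products.json"]

for cmd in (dump_cmd, restore_cmd, export_cmd, import_cmd):
    print(" ".join(cmd))
```

The compact BSON pair moves everything; the JSON pair targets one collection at a time, which is why it suits subset migrations.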
cosmos-db | Quickstart Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/quickstart-bicep.md | Title: | Quickstart: Deploy a cluster by using a Bicep template description: In this quickstart, create a new Azure Cosmos DB for MongoDB vCore cluster to store databases, collections, and documents by using a Bicep template.+++ --- Previously updated : 03/07/2023- Last updated : 08/28/2023 # Quickstart: Create an Azure Cosmos DB for MongoDB vCore cluster by using a Bicep template When you're done with your Azure Cosmos DB for MongoDB vCore cluster, you can de -## Next steps +## Next step In this guide, you learned how to create an Azure Cosmos DB for MongoDB vCore cluster. You can now migrate data to your cluster. |
cosmos-db | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/quickstart-portal.md | Title: | Quickstart: Create a new cluster description: In this quickstart, create a new Azure Cosmos DB for MongoDB vCore cluster to store databases, collections, and documents by using the Azure portal.+++ --- Previously updated : 03/07/2023 Last updated : 08/28/2023 # Quickstart: Create an Azure Cosmos DB for MongoDB vCore cluster by using the Azure portal When you're done with Azure Cosmos DB for MongoDB vCore cluster, you can delete :::image type="content" source="media/quickstart-portal/delete-resource-group-dialog.png" alt-text="Screenshot of the delete resource group confirmation dialog with the name of the group filled out."::: -## Next steps +## Next step In this guide, you learned how to create an Azure Cosmos DB for MongoDB vCore cluster. You can now migrate data to your cluster. |
cosmos-db | Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/security.md | Title: Security options and features description: Review security options and built-in security mechanisms for Azure Cosmos DB for MongoDB vCore accounts.--- Previously updated : 02/07/2023++++ Last updated : 08/28/2023 # Security in Azure Cosmos DB for MongoDB vCore + This page outlines the multiple layers of security available to protect the data in your cluster. ## In transit |
cosmos-db | Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/transactions.md | - Title: ACID Transactions in Azure Cosmos DB for MongoDB vCore -description: Delve deep into the importance and functionality of ACID transactions in Azure Cosmos DB's MongoDB vCore. --- Previously updated : 08/08/2023----# ACID Transactions in Azure Cosmos DB for MongoDB vCore ---ACID stands for Atomicity, Consistency, Isolation, and Durability. These principles in database management ensure transactions are processed reliably: --- **Atomicity**: Transactions complete entirely or not at all.-- **Consistency**: Databases transition from one consistent state to another.-- **Isolation**: Individual transactions are shielded from simultaneous ones.-- **Durability**: Finished transactions are permanent, ensuring data remains consistent, even during system failures.--Azure Cosmos DB for MongoDB vCore builds off these principles, enabling developers to harness the advantages of ACID properties while benefiting from the innate flexibility and performance of Cosmos DB. This native feature is pivotal for a range of applications, from basic ones to comprehensive enterprise-grade solutions, especially when it comes to preserving transactional data integrity across distributed sharded clusters. --## Why Use Azure Cosmos DB for MongoDB vCore? 
-- **Native Vector Search**: Power your AI apps directly within Azure Cosmos DB, leveraging native high-dimensional data search and bypassing pricier external solutions.-- **Fully-Managed Azure Service**: Rely on a unified, dedicated support team for seamless database operations.-- **Effortless Azure Integrations**: Easily connect with a wide range of Azure services without the typical maintenance hassles.--## Next Steps --- Begin your journey with ACID transactions in Azure Cosmos DB for MongoDB vCore by accessing our [guide and tutorials](quickstart-portal.md).-- Explore further capabilities and benefits of Azure Cosmos DB for MongoDB vCore in our [documentation](introduction.md).- |
cosmos-db | Troubleshoot Common Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/troubleshoot-common-issues.md | + + Title: Troubleshoot common errors in Azure Cosmos DB for MongoDB vCore +description: This article discusses ways to troubleshoot common issues encountered in Azure Cosmos DB for MongoDB vCore. +++ Last updated : 08/11/2023+++++# Troubleshoot common issues in Azure Cosmos DB for MongoDB vCore ++This guide helps you resolve issues you may encounter when using Azure Cosmos DB for MongoDB vCore. It provides solutions for connectivity problems, error scenarios, and optimization challenges, offering practical insights to improve your experience. ++> [!NOTE] +> These solutions are general guidelines and may require specific configuration based on individual situations. Always refer to official documentation and support resources for the most accurate and up-to-date information. ++## Common errors and solutions ++### Unable to connect to Azure Cosmos DB for MongoDB vCore - Timeout error +This issue might occur when the cluster doesn't have the correct firewall rules enabled. If you're trying to access the cluster from a non-Azure IP range, you need to add extra firewall rules. Refer to [Security options and features - Azure Cosmos DB for MongoDB vCore](./security.md#network-security-options) for detailed steps. Firewall rules can be configured in the portal's Networking setting for the cluster. Options include adding a known IP address/range or enabling public IP access. ++++### Unable to connect with DNSClient.DnsResponseException error +#### Debugging connectivity issues +Windows users: <br> +`psping` doesn't work against the cluster endpoint. Use `nslookup` instead; if it confirms that the cluster is reachable and discoverable, network issues are unlikely.
++Unix users: <br> +For socket or network-related exceptions, network connectivity issues might be preventing the application from establishing a connection with the Azure Cosmos DB for MongoDB endpoint. ++To check connectivity, run: +```shell +nc -v <accountName>.documents.azure.com 10250 +``` +If the TCP connection to port 10250 or 10255 fails, an environment firewall may be blocking the Azure Cosmos DB connection. In that case, submit a support ticket. ++++#### Verify your connection string +Only use the connection string provided in the portal. Avoid using any variations. In particular, connection strings that use the `mongodb+srv://` protocol or the `c.` prefix aren't recommended. If the issue persists, share application/client-side driver logs for debugging connectivity issues with the team by submitting a support ticket. +++## Next steps +If you've completed all the troubleshooting steps and haven't found a solution for your issue, consider submitting a [Support Ticket](https://azure.microsoft.com/support/create-ticket/). + |
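The `nc` check above can also be scripted when `nc` isn't available. A minimal sketch using only Python's standard library; the helper name is illustrative, and the demo connects to a locally bound listener so the snippet is self-contained (in practice you'd point it at your cluster endpoint on port 10250 or 10255):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Self-contained demo: bind a local listener so the check has something to hit.
# In practice: tcp_reachable("<accountName>.documents.azure.com", 10250)
listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # the OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]

print(tcp_reachable("127.0.0.1", open_port))  # listener is up, connection succeeds
listener.close()
print(tcp_reachable("127.0.0.1", open_port))  # port is now closed, check fails
```

If the check fails for your cluster endpoint while DNS resolution succeeds, a firewall between your environment and Azure is the likely cause.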
cosmos-db | Tutorial Nodejs Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/tutorial-nodejs-web-app.md | Title: | Tutorial: Build a Node.js web application description: In this tutorial, create a Node.js web application that connects to an Azure Cosmos DB for MongoDB vCore cluster and manages documents within a collection.+++ --- Previously updated : 03/09/2023- Last updated : 08/28/2023+# CustomerIntent: As a developer, I want to connect to Azure Cosmos DB for MongoDB vCore from my Node.js application, so I can build MERN stack applications. # Tutorial: Connect a Node.js web app with Azure Cosmos DB for MongoDB vCore -The [MERN (MongoDB, Express, React.js, Node.js) stack](https://www.mongodb.com/mern-stack) is a popular collection of technologies used to build many modern web applications. With Azure Cosmos DB for MongoDB vCore, you can build a new web application or migrate an existing application using MongoDB drivers that you're already familiar with. In this tutorial, you: ++In this tutorial, you build a Node.js web application that connects to Azure Cosmos DB for MongoDB vCore. The MongoDB, Express, React.js, Node.js (MERN) stack is a popular collection of technologies used to build many modern web applications. With Azure Cosmos DB for MongoDB vCore, you can build a new web application or migrate an existing application using MongoDB drivers that you're already familiar with. In this tutorial, you: > [!div class="checklist"]-> > - Set up your environment > - Test the MERN application with a local MongoDB container > - Test the MERN application with the Azure Cosmos DB for MongoDB vCore cluster > - Deploy the MERN application to Azure App Service-> ## Prerequisites To complete this tutorial, you need the following resources: - An existing Azure Cosmos DB for MongoDB vCore cluster.- - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free). 
- - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md?tabs=azure-cli). -- A [GitHub account](https://github.com/join).- - GitHub comes with free Codespaces hours for all users. For more information, see [GitHub Codespaces free utilization](https://github.com/features/codespaces#pricing). +- A GitHub account. + - GitHub comes with free Codespaces hours for all users. -## 1 - Configure dev environment +## Configure development environment -A [development container](https://containers.dev/) environment is available with all dependencies required to complete every exercise in this project. You can run the development container in GitHub Codespaces or locally using Visual Studio Code. +A development container environment is available with all dependencies required to complete every exercise in this project. You can run the development container in GitHub Codespaces or locally using Visual Studio Code. ### [GitHub Codespaces](#tab/github-codespaces) -[GitHub Codespaces](https://docs.github.com/codespaces) runs a development container managed by GitHub with [Visual Studio Code for the Web](https://code.visualstudio.com/docs/editor/vscode-web) as the user interface. For the most straightforward development environment, use GitHub Codespaces so that you have the correct developer tools and dependencies preinstalled to complete this training module. +GitHub Codespaces runs a development container managed by GitHub with Visual Studio Code for the Web as the user interface. For the most straightforward development environment, use GitHub Codespaces so that you have the correct developer tools and dependencies preinstalled to complete this training module. > [!IMPORTANT]-> All GitHub accounts can use Codespaces for up to 60 hours free each month with 2 core instances. 
For more information, see [GitHub Codespaces monthly included storage and core hours](https://docs.github.com/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts). +> All GitHub accounts can use Codespaces for up to 60 hours free each month with 2 core instances. -1. Start the process to create a new GitHub Codespace on the `main` branch of the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository. +1. Start the process to create a new GitHub Codespace on the `main` branch of the `azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app` GitHub repository. - > [!div class="nextstepaction"] - > [Open this project in GitHub Codespaces](https://github.com/codespaces/new?hide_repo_select=true&ref=main&repo=611024069) + [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/Azure-Samples/msdocs-azure-cosmos-db-mongodb-mern-web-app?quickstart=1) 1. On the **Create codespace** page, review the codespace configuration settings and then select **Create new codespace** A [development container](https://containers.dev/) environment is available with ### [Visual Studio Code](#tab/visual-studio-code) -The [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) for Visual Studio Code requires [Docker](https://docs.docker.com/) to be installed on your local machine. The extension hosts the development container locally using the Docker host with the correct developer tools and dependencies preinstalled to complete this training module. +The **Dev Containers extension** for Visual Studio Code requires **Docker** to be installed on your local machine. 
The extension hosts the development container locally using the Docker host with the correct developer tools and dependencies preinstalled to complete this training module. 1. Open **Visual Studio Code** in the context of an empty directory. -1. Ensure that you have the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) installed in Visual Studio Code. +1. Ensure that you have the **Dev Containers extension** installed in Visual Studio Code. 1. Open a new terminal in the editor. The [Dev Containers extension](https://marketplace.visualstudio.com/items?itemNa > > :::image type="content" source="media/tutorial-nodejs-web-app/open-terminal-option.png" lightbox="media/tutorial-nodejs-web-app/open-terminal-option.png" alt-text="Screenshot of the menu option to open a new terminal."::: -1. Clone the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository into the current directory. +1. Clone the `azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app` GitHub repository into the current directory. ```bash git clone https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app.git . The [Dev Containers extension](https://marketplace.visualstudio.com/items?itemNa -## 2 - Test the MERN application's API with the MongoDB container +## Test the MERN application's API with the MongoDB container Start by running the sample application's API with the local MongoDB container to validate that the application works. Start by running the sample application's API with the local MongoDB container t 1. Close the terminal. -## 3 - Test the MERN application with the Azure Cosmos DB for MongoDB vCore cluster +## Test the MERN application with the Azure Cosmos DB for MongoDB vCore cluster Now, let's validate that the application works seamlessly with Azure Cosmos DB for MongoDB vCore. 
For this task, populate the pre-existing cluster with seed data using the MongoDB shell and then update the API's connection string. -1. Sign in to the [Azure portal](https://portal.azure.com). +1. Sign in to the Azure portal (<https://portal.azure.com>). 1. Navigate to the existing Azure Cosmos DB for MongoDB vCore cluster page. Now, let's validate that the application works seamlessly with Azure Cosmos DB f 1. Close the extra browser tab/window. Then, close the terminal. -## 4 - Deploy the MERN application to Azure App Service +## Deploy the MERN application to Azure App Service Deploy the service and client to Azure App Service to prove that the application works end-to-end. Use secrets in the web apps to store environment variables with credentials and API endpoints. Deploy the service and client to Azure App Service to prove that the application clientAppName="client-app-$suffix" ``` -1. If you haven't already, sign in to the Azure CLI using the [`az login --use-device-code`](/cli/azure/reference-index#az-login) command. +1. If you haven't already, sign in to the Azure CLI using the `az login --use-device-code` command. 1. Change the current working directory to the **server/** path. Deploy the service and client to Azure App Service to prove that the application cd server ``` -1. Create a new web app for the server component of the MERN application with [`az webapp up`](/cli/azure/webapp#az-webapp-up). +1. Create a new web app for the server component of the MERN application with `az webapp up`. ```shell az webapp up \ Deploy the service and client to Azure App Service to prove that the application --runtime "NODE|18-lts" ``` -1. Create a new connection string setting for the server web app named `CONNECTION_STRING` with [`az webapp config connection-string set`](/cli/azure/webapp/config/connection-string#az-webapp-config-connection-string-set). 
Use the same value for the connection string you used with the MongoDB shell and **.env** file earlier in this tutorial. +1. Create a new connection string setting for the server web app named `CONNECTION_STRING` with `az webapp config connection-string set`. Use the same value for the connection string you used with the MongoDB shell and **.env** file earlier in this tutorial. ```shell az webapp config connection-string set \ Deploy the service and client to Azure App Service to prove that the application --settings "CONNECTION_STRING=<mongodb-connection-string>" ``` -1. Get the URI for the server web app with [`az webapp show`](/cli/azure/webapp#az-webapp-show) and store it in a shell variable named **serverUri**. +1. Get the URI for the server web app with `az webapp show` and store it in a shell variable named **serverUri**. ```azurecli serverUri=$(az webapp show \ Deploy the service and client to Azure App Service to prove that the application --output tsv) ``` -1. Use the [`open-cli`](https://www.npmjs.com/package/open-cli) package and command from npm with `npx` to open a browser window using the URI for the server web app. Validate that the server app is returning your JSON array data from the MongoDB vCore cluster. +1. Use the `open-cli` package and command from npm with `npx` to open a browser window using the URI for the server web app. Validate that the server app is returning your JSON array data from the MongoDB vCore cluster. ```shell npx open-cli "https://$serverUri/products" --yes Deploy the service and client to Azure App Service to prove that the application cd ../client ``` -1. Create a new web app for the client component of the MERN application with [`az webapp up`](/cli/azure/webapp#az-webapp-up). +1. Create a new web app for the client component of the MERN application with `az webapp up`. ```shell az webapp up \ Deploy the service and client to Azure App Service to prove that the application --runtime "NODE|18-lts" ``` -1.
Create a new app setting for the client web app named `REACT_APP_API_ENDPOINT` with [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az-webapp-config-appsettings-set). Use the server API endpoint stored in the **serverUri** shell variable. +1. Create a new app setting for the client web app named `REACT_APP_API_ENDPOINT` with `az webapp config appsettings set`. Use the server API endpoint stored in the **serverUri** shell variable. ```shell az webapp config appsettings set \ Deploy the service and client to Azure App Service to prove that the application --settings "REACT_APP_API_ENDPOINT=https://$serverUri" ``` -1. Get the URI for the client web app with [`az webapp show`](/cli/azure/webapp#az-webapp-show) and store it in a shell variable named **clientUri**. +1. Get the URI for the client web app with `az webapp show` and store it in a shell variable named **clientUri**. ```azurecli clientUri=$(az webapp show \ Deploy the service and client to Azure App Service to prove that the application --output tsv) ``` -1. Use the [`open-cli`](https://www.npmjs.com/package/open-cli) package and command from npm with `npx` to open a browser window using the URI for the client web app. Validate that the client app is rendering data from the server app's API. +1. Use the `open-cli` package and command from npm with `npx` to open a browser window using the URI for the client web app. Validate that the client app is rendering data from the server app's API. ```shell npx open-cli "https://$clientUri" --yes Deploy the service and client to Azure App Service to prove that the application When you're working in your own subscription, at the end of a project, it's a good idea to remove the resources that you no longer need. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources. -1.
To delete the entire resource group, use [`az group delete`](/cli/azure/group#az-group-delete). +1. To delete the entire resource group, use `az group delete`. ```azurecli az group delete \ When you're working in your own subscription, at the end of a project, it's a go --yes ``` -1. Validate that the resource group is deleted using [`az group list`](/cli/azure/group#az-group-list). +1. Validate that the resource group is deleted using `az group list`. ```azurecli az group list You may also wish to clean up your development environment or return it to its t Deleting the GitHub Codespaces environment ensures that you can maximize the amount of free per-core hours entitlement you get for your account. -> [!IMPORTANT] -> For more information about your GitHub account's entitlements, see [GitHub Codespaces monthly included storage and core hours](https://docs.github.com/billing/managing-billing-for-github-codespaces/about-billing-for-github-codespaces#monthly-included-storage-and-core-hours-for-personal-accounts). - 1. Sign into the GitHub Codespaces dashboard (<https://github.com/codespaces>). -1. Locate your currently running codespaces sourced from the [`azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app`](https://github.com/azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app) GitHub repository. +1. Locate your currently running codespaces sourced from the `azure-samples/msdocs-azure-cosmos-db-mongodb-mern-web-app` GitHub repository. :::image type="content" source="media/tutorial-nodejs-web-app/codespace-dashboard.png" alt-text="Screenshot of all the running codespaces including their status and templates."::: You aren't necessarily required to clean up your local environment, but you can -## Next steps +## Next step Now that you have built your first application for the MongoDB vCore cluster, learn how to migrate your data to Azure Cosmos DB. |
cosmos-db | Vector Search Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search-ai.md | + + Title: Build AI apps with vector search ++description: Enhance AI-powered applications with Retrieval Augmented Generation (RAG) using Azure Cosmos DB for MongoDB vCore Vector Search. ++++++ Last updated : 08/28/2023+++# AI Apps with Azure Cosmos DB for MongoDB vCore vector search +++## Introduction ++Large Language Models (LLMs) available in Azure OpenAI are potent tools that can elevate the capabilities of your AI-driven applications. To fully unleash the potential of LLMs, giving them access to timely and relevant data from your application's data store is crucial. This process, known as Retrieval Augmented Generation (RAG), can be seamlessly accomplished using Azure Cosmos DB. In this tutorial, we delve into the core concepts of RAG and provide links to tutorials and sample code that exemplify powerful RAG strategies using Azure Cosmos DB for MongoDB vCore vector search. ++Retrieval Augmented Generation (RAG) elevates AI-powered applications by incorporating external knowledge and data into model inputs. With Azure Cosmos DB for MongoDB vCore's vector search, this process becomes seamless, ensuring that the most pertinent information is effortlessly integrated into your AI models. By applying the power of [embeddings](../../../ai-services/openai/tutorials/embeddings.md) and vector search, you can provide your AI applications with the context they need to excel. Through the provided tutorials and code samples, you can become proficient in harnessing RAG to create smarter and more context-aware AI solutions. ++## Understanding Retrieval Augmented Generation (RAG) ++Retrieval Augmented Generation harnesses external knowledge and models to efficiently manage custom data or domain-specific expertise. 
This process involves extracting pertinent information from an external data source and seamlessly integrating it into the model's input through prompt engineering. A robust approach is essential to identify the most pertinent data from the external source within the [token limitations of a request](../../../ai-services/openai/quotas-limits.md). This limitation is elegantly addressed by using embeddings, which convert data into vectors, capturing the semantic essence of the text and enabling context comprehension beyond simple keywords. ++## What is vector search? ++[Vector search](./vector-search.md) is an approach that enables the discovery of analogous items based on shared data characteristics, deviating from the necessity for precise matches within a property field. This method proves invaluable in various applications like text similarity searches, image association, recommendation systems, and anomaly detection. Its functionality revolves around the utilization of vector representations (sequences of numerical values) generated from your data via machine learning models or embeddings APIs. Examples of such APIs encompass [Azure OpenAI Embeddings](/azure/ai-services/openai/how-to/embeddings) or [Hugging Face on Azure](https://azure.microsoft.com/solutions/hugging-face-on-azure/). The technique gauges the disparity between your query vector and the data vectors. The data vectors that exhibit the closest proximity to your query vector are identified as semantically akin. ++## Utilizing Vector Search with Azure Cosmos DB for MongoDB vCore ++RAG's power is truly harnessed through the native vector search capability within Azure Cosmos DB for MongoDB vCore. This capability enables a seamless fusion of AI-focused applications with stored data in Azure Cosmos DB. Vector search optimally stores, indexes, and searches high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore alongside other application data. 
This functionality eliminates the need to migrate data to costlier alternatives for vector search. ++## Code samples and tutorials ++- [**.NET Retail Chatbot Demo**](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/mongovcorev2): Learn how to build a chatbot using .NET that demonstrates RAG's potential in a retail context. +- [**.NET Tutorial - Recipe Chatbot**](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore): Walk through creating a recipe chatbot using .NET, showcasing RAG's application in a culinary scenario. +- [**Python Notebook Tutorial**](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore) - Azure Product Chatbot: Explore a Python notebook tutorial that guides you through constructing an Azure product chatbot, highlighting RAG's benefits. ++## Next steps ++> [!div class="nextstepaction"] +> [Vector search](vector-search.md) |
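The ranking idea behind vector search described in this entry — compare a query vector against stored data vectors and return the closest matches — can be sketched in plain Python. The documents, vector values, and `COS`-style cosine metric below are illustrative only; nothing here is output of, or API from, any Azure service:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); higher means more alike.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional embeddings for stored documents.
docs = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.9, 0.1],
    "doc3": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]

# Rank stored vectors by proximity to the query vector; the nearest
# vectors are the ones treated as semantically most similar.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked[0])  # doc1 is closest to the query
```

In a real deployment, the vectors would come from an embeddings API and the ranking would be done server-side by the vector index rather than in application code.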
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/vector-search.md | To create a vector index, use the following `createIndexes` template: | `index_name` | string | Unique name of the index. | | `path_to_property` | string | Path to the property that contains the vector. This path can be a top-level property or a dot notation path to the property. If a dot notation path is used, then all the nonleaf elements can't be arrays. | | `kind` | string | Type of vector index to create. Currently, `vector-ivf` is the only supported index option. |-| `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` is set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows. | +| `numLists` | integer | This integer is the number of clusters that the inverted file (IVF) index uses to group the vector data. We recommend that `numLists` is set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows. Using a `numLists` value of `1` is akin to performing brute-force search. | | `similarity` | string | Similarity metric to use with the IVF index. Possible options are `COS` (cosine distance), `L2` (Euclidean distance), and `IP` (inner product). | | `dimensions` | integer | Number of dimensions for vector similarity. The maximum number of supported dimensions is `2000`. | +> [!IMPORTANT] +> Setting the _numLists_ parameter correctly is important for achieving good accuracy and performance. We recommend that `numLists` is set to `rowCount()/1000` for up to 1 million rows and to `sqrt(rowCount)` for more than 1 million rows. +> +> As the number of items in your database grows, you should tune _numLists_ to be larger in order to achieve good latency performance for vector search. 
+> +> If you're experimenting with a new scenario or creating a small demo, you can start with `numLists` set to `1` to perform a brute-force search across all vectors. This should provide you with the most accurate results from the vector search. After your initial setup, you should go ahead and tune the `numLists` parameter using the above guidance. + ## Examples The following examples show you how to index vectors, add documents that have vector properties, perform a vector search, and retrieve the index configuration. db.runCommand({ }, cosmosSearchOptions: { kind: 'vector-ivf',- numLists: 100, + numLists: 3, similarity: 'COS', dimensions: 3 } In this example, `vectorIndex` is returned with all the `cosmosSearch` parameter name: 'vectorSearchIndex', cosmosSearch: { kind: 'vector-ivf',- numLists: 100, + numLists: 3, similarity: 'COS', dimensions: 3 }, In this example, `vectorIndex` is returned with all the `cosmosSearch` parameter This guide demonstrates how to create a vector index, add documents that have vector data, perform a similarity search, and retrieve the index definition. By using vector search, you can efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. Vector search enables you to unlock the full potential of your data via vector embeddings, and it empowers you to build more accurate, efficient, and powerful applications. > [!div class="nextstepaction"]-> [Introduction to Azure Cosmos DB for MongoDB vCore](introduction.md) +> [Build AI apps with Azure Cosmos DB for MongoDB vCore vector search](vector-search-ai.md) |
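The `numLists` sizing guidance in this entry (`rowCount()/1000` for up to 1 million rows, `sqrt(rowCount)` beyond that, and `1` for brute-force experimentation) can be captured in a small helper. The function name is made up for illustration and is not part of any SDK:

```python
import math

def recommended_num_lists(row_count: int) -> int:
    # Per the guidance above: rowCount/1000 for up to 1 million rows,
    # sqrt(rowCount) for larger collections; never below 1.
    if row_count <= 1_000_000:
        return max(1, row_count // 1000)
    return int(math.sqrt(row_count))

print(recommended_num_lists(500))        # 1 - brute-force, suits small demos
print(recommended_num_lists(100_000))    # 100
print(recommended_num_lists(4_000_000))  # 2000
```

As the entry notes, `numLists` is set at index-creation time, so re-running this calculation as the collection grows implies rebuilding the vector index with the new value.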
cosmos-db | Monitor Logs Basic Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-logs-basic-queries.md | Here's a list of common troubleshooting queries. Find operations that have a duration greater than 3 milliseconds. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto AzureDiagnostics AzureDiagnostics | summarize count() by clientIpAddress_s, TimeGenerated ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBDataPlaneRequests Find user agents associated with each operation. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto AzureDiagnostics AzureDiagnostics | summarize count() by OperationName, userAgent_s ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBDataPlaneRequests Find operations that ran for a long time by binning their runtime into five-second intervals. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto AzureDiagnostics AzureDiagnostics | render timechart ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBDataPlaneRequests Measure skew by getting common statistics for physical partitions. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto AzureDiagnostics AzureDiagnostics | project SubscriptionId, regionName_s, databaseName_s, collectionName_s, partitionKey_s, sizeKb_d, ResourceId ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBPartitionKeyStatistics CDBPartitionKeyStatistics Measure the request charge (in RUs) for the largest queries. 
-#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```Kusto AzureDiagnostics AzureDiagnostics | limit 100 ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBQueryRuntimeStatistics Sort operations by the amount of RU/s they're using. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | summarize max(responseLength_s), max(requestLength_s), max(requestCharge_s), count = count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h) ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBDataPlaneRequests Find queries that consume more RU/s than a baseline amount. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) This query joins with data from ``DataPlaneRequests`` and ``QueryRunTimeStatistics``. AzureDiagnostics | limit 100 ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBQueryRuntimeStatistics Get statistics in both request charge and duration for a specific query. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | project databasename_s, collectionname_s, OperationName1 , querytext_s,requestCharge_s1, duration_s1, bin(TimeGenerated, 1min) ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBQueryRuntimeStatistics CDBDataPlaneRequests Group operations by the resource distribution. 
-#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | summarize count = count() by OperationName, requestResourceType_s, bin(TimeGenerated, 1h) ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBDataPlaneRequests Get the maximum throughput for a physical partition. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | summarize max(requestCharge_s) by bin(TimeGenerated, 1h), partitionId_g ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests CDBDataPlaneRequests Measure RU/s consumption on a per-second basis per partition key. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | order by TimeGenerated asc ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBPartitionKeyRUConsumption CDBPartitionKeyRUConsumption Measure request charge per partition key. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | where parse_json(partitionKey_s)[0] == "2" ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBPartitionKeyRUConsumption CDBPartitionKeyRUConsumption Sort partition keys based on request unit consumption within a time window. 
-#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | order by total desc ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBPartitionKeyRUConsumption CDBPartitionKeyRUConsumption Find logs for partition keys filtered by the size of storage per partition key. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | where todouble(sizeKb_d) > 800000 ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBPartitionKeyStatistics CDBPartitionKeyStatistics Measure performance for operation latency, RU/s usage, and response length. -#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | summarize percentile(todouble(responseLength_s), 50), percentile(todouble(responseLength_s), 99), max(responseLength_s), percentile(todouble(requestCharge_s), 50), percentile(todouble(requestCharge_s), 99), max(requestCharge_s), percentile(todouble(duration_s), 50), percentile(todouble(duration_s), 99), max(duration_s), count() by OperationName, requestResourceType_s, userAgent_s, collectionRid_s, bin(TimeGenerated, 1h) ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBDataPlaneRequests Get control plane logs using ``ControlPlaneRequests``. > [!TIP] > Remember to switch on the flag described in [Disable key-based metadata write access](audit-control-plane-logs.md#disable-key-based-metadata-write-access), and execute the operations by using Azure PowerShell, the Azure CLI, or Azure Resource Manager. 
-#### [Resource-specific](#tab/resource-specific) +#### [Azure Diagnostics](#tab/azure-diagnostics) ```kusto AzureDiagnostics AzureDiagnostics | summarize by OperationName ``` -#### [Azure Diagnostics](#tab/azure-diagnostics) +#### [Resource-specific](#tab/resource-specific) ```kusto CDBControlPlaneRequests |
cosmos-db | Certificate Based Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/certificate-based-authentication.md | |
cosmos-db | How To Delete By Partition Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-delete-by-partition-key.md | For certain scenarios, the effects of a delete by partition key operation aren't - Aggregate queries that use the index - for example, COUNT queries - that are issued during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete. - Queries issued against the [analytical store](../analytical-store-introduction.md) during an ongoing delete by partition key operation may contain the results of the documents to be deleted. This may occur until the delete operation is fully complete.-- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup. +- [Continuous backup (point in time restore)](../continuous-backup-restore-introduction.md) - a restore that is triggered during an ongoing delete by partition key operation may contain the results of the documents to be deleted in the restored collection. It isn't recommended to use this preview feature if you have a scenario that requires continuous backup. ++### Limitations +- Deletion by a partial [hierarchical partition key](../hierarchical-partition-keys.md) path is not supported. This feature deletes items only when the complete partition key, down to the last level, is specified. For example, consider a scenario where a partition key consists of three hierarchical levels: country, state, and city. In this context, the delete by partition keys functionality can be employed effectively by specifying the complete partition key, encompassing all levels, namely country/state/city. 
Attempting to delete using intermediate partition keys, such as country/state or solely country, will result in an error. ## How to give feedback or report an issue/bug * Email cosmosPkDeleteFeedbk@microsoft.com with questions or feedback. |
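The hierarchical-key limitation in this entry can be illustrated with a minimal validation sketch. The country/state/city levels come from the entry's own example; the helper function is hypothetical and is not part of any Cosmos DB SDK:

```python
# Levels defined for a hypothetical container with hierarchical partition keys.
KEY_LEVELS = ["country", "state", "city"]

def can_delete_by_partition_key(provided_levels):
    # Delete by partition key requires the complete key path: every level,
    # down to the last one. Intermediate prefixes (country, or country/state)
    # are rejected with an error by the service.
    return len(provided_levels) == len(KEY_LEVELS)

print(can_delete_by_partition_key(["US", "WA", "Seattle"]))  # True: full path
print(can_delete_by_partition_key(["US", "WA"]))             # False: intermediate prefix
print(can_delete_by_partition_key(["US"]))                   # False: top level only
```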
cosmos-db | How To Dotnet Read Item | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-dotnet-read-item.md | The following example point reads a single item asynchronously and returns a des :::code language="csharp" source="~/cosmos-db-nosql-dotnet-samples/275-read-item/Program.cs" id="read_item" ::: -The [``Database.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads and item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators). +The [``Database.ReadItemAsync<>``](/dotnet/api/microsoft.azure.cosmos.container.readitemasync) method reads an item and returns an object of type [``ItemResponse<>``](/dotnet/api/microsoft.azure.cosmos.itemresponse-1). The **ItemResponse<>** type inherits from the [``Response<>``](/dotnet/api/microsoft.azure.cosmos.response-1) type, which contains an implicit conversion operator to convert the object into the generic type. To learn more about implicit operators, see [user-defined conversion operators](/dotnet/csharp/language-reference/operators/user-defined-conversion-operators). Alternatively, you can return the **ItemResponse<>** generic type and explicitly get the resource. The more general **ItemResponse<>** type also contains useful metadata about the underlying API operation. In this example, metadata about the request unit charge for this operation is gathered using the **RequestCharge** property. |
cosmos-db | How To Manage Indexing Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-manage-indexing-policy.md | An [indexing policy update](../index-policy.md#modifying-the-indexing-policy) tr > [!NOTE] > When you update indexing policy, writes to Azure Cosmos DB are uninterrupted. Learn more about [indexing transformations](../index-policy.md#modifying-the-indexing-policy)+ +> [!IMPORTANT] +> Removing an index takes effect immediately, whereas adding a new index takes some time as it requires an indexing transformation. When replacing one index with another (for example, replacing a single-property index with a composite index), make sure to add the new index first and then wait for the index transformation to complete **before** you remove the previous index from the indexing policy. Otherwise, this will negatively affect your ability to query the previous index and may break any active workloads that reference the previous index. + ### Use the Azure portal |
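The add-first, remove-later sequence recommended in this entry can be sketched as two successive indexing-policy documents. The property paths and the composite index below are hypothetical, and actually applying each policy to a container (for example, via an SDK's replace-container call) is out of scope here:

```python
# Step 1: current policy with a single-property included path (hypothetical paths).
policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/name/?"}, {"path": "/*"}],
    "compositeIndexes": [],
}

# Step 2: add the new composite index first and apply this policy; the old
# path stays queryable while the indexing transformation runs in the background.
policy["compositeIndexes"].append(
    [
        {"path": "/name", "order": "ascending"},
        {"path": "/age", "order": "descending"},
    ]
)

# Step 3: only after the transformation completes, remove the old path
# in a second, separate policy update.
policy["includedPaths"] = [p for p in policy["includedPaths"] if p["path"] != "/name/?"]

print(len(policy["compositeIndexes"]), [p["path"] for p in policy["includedPaths"]])
```

Collapsing steps 2 and 3 into a single update is what the entry warns against: the old index disappears immediately while the new one is still being built.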
cosmos-db | Modeling Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/modeling-data.md | What to do? ## Takeaways -The biggest takeaways from this article are to understand that data modeling in a schema-free world is as important as ever. +The biggest takeaway from this article is to understand that data modeling in a world that's schema-free is as important as ever. Just as there's no single way to represent a piece of data on a screen, there's no single way to model your data. You need to understand your application and how it will produce, consume, and process the data. Then, by applying some of the guidelines presented here you can set about creating a model that addresses the immediate needs of your application. When your applications need to change, you can use the flexibility of a schema-free database to embrace that change and evolve your data model easily. |
cosmos-db | Performance Tips Query Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/performance-tips-query-sdk.md | To learn more about performance using the Java SDK: * [Performance tips for Azure Cosmos DB Java V4 SDK](performance-tips-java-sdk-v4.md) ::: zone-end++## Reduce Query Plan calls ++To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. There is a way to remove this request and reduce the latency of the single-partition query operation. For single-partition queries, specify the partition key value for the item and pass it as the [partition_key](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) argument: ++```python +items = container.query_items( + query="SELECT * FROM r where r.city = 'Seattle'", + partition_key="Washington" + ) +``` ++## Tune the page size ++When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. The [max_item_count](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) parameter allows you to set the maximum number of items to be returned in the enumeration operation. ++```python +items = container.query_items( + query="SELECT * FROM r where r.city = 'Seattle'", + partition_key="Washington", + max_item_count=1000 + ) +``` ++## Next steps ++To learn more about using the Python SDK for API for NoSQL: ++* [Azure Cosmos DB Python SDK for API for NoSQL](sdk-python.md) +* [Quickstart: Azure Cosmos DB for NoSQL client library for Python](quickstart-python.md) +++## Reduce Query Plan calls ++To execute a query, a query plan needs to be built. This in general represents a network request to the Azure Cosmos DB Gateway, which adds to the latency of the query operation. 
There is a way to remove this request and reduce the latency of the single-partition query operation. Scoping a query to a single partition can be accomplished in two ways. ++The first is to use a parameterized query expression and specify the partition key in the query statement. The query is programmatically composed to `select * from products p where p.categoryId = 'Bikes, Touring Bikes'`: ++```javascript +// find all items with same categoryId (partitionKey) +const querySpec = { + query: "select * from products p where p.categoryId=@categoryId", + parameters: [ + { + name: "@categoryId", + value: "Bikes, Touring Bikes" + } + ] +}; ++// Get items +const { resources } = await container.items.query(querySpec).fetchAll(); ++for (const item of resources) { + console.log(`${item.id}: ${item.name}, ${item.sku}`); +} +``` ++The second is to specify [partitionKey](/javascript/api/@azure/cosmos/feedoptions#@azure-cosmos-feedoptions-partitionkey) in `FeedOptions` and pass it as an argument: ++```javascript +const querySpec = { + query: "select * from products p" +}; ++const { resources } = await container.items.query(querySpec, { partitionKey: "Bikes, Touring Bikes" }).fetchAll(); ++for (const item of resources) { + console.log(`${item.id}: ${item.name}, ${item.sku}`); +} +``` ++## Tune the page size ++When you issue a SQL query, the results are returned in a segmented fashion if the result set is too large. The [maxItemCount](/javascript/api/@azure/cosmos/feedoptions#@azure-cosmos-feedoptions-maxitemcount) allows you to set the maximum number of items to be returned in the enumeration operation. 
++```javascript +const querySpec = { + query: "select * from products p where p.categoryId=@categoryId", + parameters: [ + { + name: "@categoryId", + value: items[2].categoryId + } + ] +}; ++const { resources } = await container.items.query(querySpec, { maxItemCount: 1000 }).fetchAll(); ++for (const item of resources) { + console.log(`${item.id}: ${item.name}, ${item.sku}`); +} +``` ++## Next steps ++To learn more about using the Node.js SDK for API for NoSQL: ++* [Azure Cosmos DB Node.js SDK for API for NoSQL](sdk-nodejs.md) +* [Quickstart - Azure Cosmos DB for NoSQL client library for Node.js](quickstart-nodejs.md) + |
cosmos-db | Sdk Connection Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-connection-modes.md | The following table shows a summary of the connectivity modes available for vari |Connection mode |Supported protocol |Supported SDKs |API/Service port | |||||-|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10250, 10255, 10256), Table (443), Cassandra (10350), Graph (443) <br> The port 10250 maps to a default Azure Cosmos DB for MongoDB instance without geo-replication. Whereas the ports 10255 and 10256 map to the instance that has geo-replication. | +|Gateway | HTTPS | All SDKs | SQL (443), MongoDB (10255), Table (443), Cassandra (10350), Graph (443) <br> | |Direct | TCP (Encrypted via TLS) | .NET SDK Java SDK | When using public/service endpoints: ports in the 10000 through 20000 range<br>When using private endpoints: ports in the 0 through 65535 range | ## <a id="direct-mode"></a> Direct mode connection architecture |
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
cosmos-db | Howto Table Size | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-table-size.md | provides helper functions to query this information. <td>citus_table_size(relation_name)</td> <td><ul> <li><p>citus_relation_size plus:</p>-<blockquote> + <ul> <li>size of <a href="https://www.postgresql.org/docs/current/static/storage-fsm.html">free space map</a></li> <li>size of <a href="https://www.postgresql.org/docs/current/static/storage-vm.html">visibility map</a></li> </ul>-</blockquote></li> +</li> </ul></td> </tr> <tr class="odd"> <td>citus_total_relation_size(relation_name)</td> <td><ul> <li><p>citus_table_size plus:</p>-<blockquote> + <ul> <li>size of indices</li> </ul>-</blockquote></li> +</li> </ul></td> </tr> </tbody> |
cosmos-db | Product Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md | Updates that don't directly affect the internals of a cluster are rolled out g Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### August 2023+* General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.21, 12.16, 13.12, 14.9, and 15.4) are now available in all supported regions. +* General availability: [PgBouncer](http://www.pgbouncer.org/) version 1.20 is now supported for all [PostgreSQL versions](reference-versions.md#postgresql-versions) in all [supported regions](./resources-regions.md). + * See [Connection pooling and managed PgBouncer in Azure Cosmos DB for PostgreSQL](./concepts-connection-pool.md). * General availability: Citus 12 is now available in [all supported regions](./resources-regions.md) with PostgreSQL 14 and PostgreSQL 15. * Check [what's new in Citus 12](https://www.citusdata.com/updates/v12-0/). * See [Postgres and Citus version in-place upgrade](./concepts-upgrade.md). |
cosmos-db | Reference Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md | The versions of each extension installed in a cluster sometimes differ based on > | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | > | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | > | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 |-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.2.0 | 1.2.0 | 1.2.0 | 1.4.0 | 1.4.0 | -> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.4.0 | 2.4.0 | 2.4.0 | 2.4.0 | 2.5.0 | +> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | +> | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 | 2.5.0 | ### Full-text search extensions The versions of each extension installed in a cluster sometimes differ based on > | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | > | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 | > | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. 
| 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.6.0 | 4.6.0 | 4.6.0 | 4.6.2 | 4.7.0 | +> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 | 4.7.3 | > | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 | > | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 | > | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | The versions of each extension installed in a cluster sometimes differ based on > | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | > | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |-> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.0 | 1.0 | 1.0 | +> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.2 | 1.2 | 1.2 | > | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. 
| 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 | 1.4 | +> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | > | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 | The versions of each extension installed in a cluster sometimes differ based on > [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||-> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.4.0 | 0.4.0 | 0.4.0 | 0.4.0 | 0.4.0 | +> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 | 0.4.4 | ### PostGIS extensions > [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 | -> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 | -> | postgis\_sfcgal | PostGIS SFCGAL functions. 
| 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 | -> | postgis\_topology | PostGIS topology spatial types and functions. | 2.5.5 | 3.0.5 | 3.0.5 | 3.1.5 | 3.3.1 | +> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | +> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | +> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | +> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | ## pg_stat_statements There's a tradeoff between the query execution information pg_stat_statements pr You can use dblink and postgres\_fdw to connect from one PostgreSQL server to another, or to another database in the same server. The receiving server needs to allow connections from the sending server through its firewall. To use-these extensions to connect between Azure Cosmos DB for PostgreSQL servers or -clusters, set **Allow Azure services and resources to access this cluster (or +these extensions to connect between Azure Cosmos DB for PostgreSQL clusters with [public access](./concepts-firewall-rules.md), set **Allow Azure services and resources to access this cluster (or server)** to ON. You also need to turn this setting ON if you want to use the extensions to loop back to the same server. The **Allow Azure services and resources to access this cluster** setting can be found in the Azure portal |
cosmos-db | Reference Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md | versions](https://www.postgresql.org/docs/release/): ### PostgreSQL version 15 -The current minor release is 15.3. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/15.3/) to +The current minor release is 15.4. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/15.4/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 14 -The current minor release is 14.8. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/14.8/) to +The current minor release is 14.9. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/14.9/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 13 -The current minor release is 13.11. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/13.11/) to +The current minor release is 13.12. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/13.12/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 12 -The current minor release is 12.15. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/12.15/) to +The current minor release is 12.16. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/12.16/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 11 -The current minor release is 11.20. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/11.20/) to +The current minor release is 11.21. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/11.21/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 10 and older |
cosmos-db | Rag Data Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/rag-data-openai.md | + + Title: Use data with Azure OpenAI ++description: Use Retrieval Augmented Generation (RAG) and vector search to ground your Azure OpenAI models with data stored in Azure Cosmos DB. ++++ Last updated : 08/16/2023+++# Use Azure Cosmos DB data with Azure OpenAI +++The Large Language Models (LLMs) in Azure OpenAI are incredibly powerful tools that can take your AI-powered applications to the next level. The utility of LLMs can increase significantly when the models have access to the right data, at the right time, from your application's data store. This process is known as Retrieval Augmented Generation (RAG) and there are many ways to do this today with Azure Cosmos DB. ++In this article, we review key concepts for RAG and then provide links to tutorials and sample code that demonstrate some of the most powerful RAG patterns using *vector search* to bring the most semantically relevant data to your LLMs. These tutorials can help you become comfortable with using your Azure Cosmos DB data in Azure OpenAI models. ++To jump right into tutorials and sample code for RAG patterns with Azure Cosmos DB, use the following links: ++| | Description | +| | | +| **[Azure Cosmos DB for NoSQL with Azure Cognitive Search](#azure-cosmos-db-for-nosql-and-azure-cognitive-search)**. | Augment your Azure Cosmos DB data with the semantic and vector search capabilities of Azure Cognitive Search. | +| **[Azure Cosmos DB for MongoDB vCore](#azure-cosmos-db-for-mongodb-vcore)**. | Featuring native support for vector search, store your application data and vector embeddings together in a single MongoDB-compatible service. | +| **[Azure Cosmos DB for PostgreSQL](#azure-cosmos-db-for-postgresql)**. | Offering native support for vector search, you can store your data and vectors together in a scalable PostgreSQL offering.
| ++## Key concepts ++This section includes key concepts that are critical to implementing RAG with Azure Cosmos DB and Azure OpenAI. ++### Retrieval Augmented Generation (RAG) ++RAG involves the process of retrieving supplementary data to provide the LLM with the ability to use this data when it generates responses. When presented with a user's question or prompt, RAG aims to select the most pertinent and current domain-specific knowledge from external sources, such as articles or documents. This retrieved information serves as a valuable reference for the model when generating its response. For example, a simple RAG pattern using Azure Cosmos DB for NoSQL could be: ++1. Insert data into an Azure Cosmos DB for NoSQL database and collection. +2. Create embeddings from a data property using an Azure OpenAI Embeddings model. +3. Link the Azure Cosmos DB for NoSQL to Azure Cognitive Search (for vector indexing/search). +4. Create a vector index over the embeddings properties. +5. Create a function to perform vector similarity search based on a user prompt. +6. Perform question answering over the data using an Azure OpenAI Completions model. ++The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857). ++### Prompts and prompt engineering ++A prompt refers to a specific text or information that can serve as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet.
Prompts can serve as: ++- **Instructions**: provide directives to the LLM +- **Primary content**: gives information to the LLM for processing +- **Examples**: help condition the model to a particular task or process +- **Cues**: direct the LLM's output in the right direction +- **Supporting content**: represents supplemental information the LLM can use to generate output ++The process of creating good prompts for a scenario is called *prompt engineering*. For more information about prompts and best practices for prompt engineering, see [Azure OpenAI Service - Azure OpenAI | Microsoft Learn](../ai-services/openai/concepts/prompt-engineering.md). ++### Tokens ++Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word `hamburger` would be divided into tokens such as `ham`, `bur`, and `ger`, while a short and common word like `pear` would be considered a single token. ++In Azure OpenAI, input text provided to the API is turned into tokens (tokenized). The number of tokens processed in each API request depends on factors such as the length of the input, output, and request parameters. The quantity of tokens being processed also impacts the response time and throughput of the models. There are limits to the number of tokens each model can take in a single request/response from Azure OpenAI. [Learn more about Azure OpenAI Service quotas and limits here](../ai-services/openai/quotas-limits.md). ++### Vectors ++Vectors are ordered arrays of numbers (typically floats) that can represent information about some data. For example, an image can be represented as a vector of pixel values, or a string of text can be represented as a vector of ASCII values. The process for turning data into a vector is called *vectorization*. ++### Embeddings ++Embeddings are vectors that represent important features of data.
Embeddings are often learned by using a deep learning model, and machine learning and AI models use them as features. Embeddings can also capture semantic similarity between similar concepts. For example, in generating an embedding for the words `person` and `human`, we would expect their embeddings (vector representations) to be similar in value since the words are also semantically similar. ++ Azure OpenAI features models for creating embeddings from text data. The service breaks text out into tokens and generates embeddings using models pretrained by OpenAI. [Learn more about creating embeddings with Azure OpenAI here.](../ai-services/openai/concepts/understand-embeddings.md) ++### Vector search ++Vector search refers to the process of finding all vectors in a dataset that are semantically similar to a specific query vector. For example, if I take a query vector for the word `human` and search the entire dictionary for semantically similar words, I would expect to find the word `person` as a close match. This closeness, or distance, is measured using a similarity metric such as cosine similarity. The more similar the vectors are, the smaller the distance between them. ++Consider a scenario where you have a query over millions of documents and you want to find the most similar document in your data. You can create embeddings for your data and the query document using Azure OpenAI. Then, you can perform a vector search to find the most similar documents from your dataset. While performing a vector search across a few examples is trivial, performing the same search across thousands or millions of data points becomes challenging. There are also trade-offs between exhaustive search and approximate nearest neighbor (ANN) search methods, including latency, throughput, accuracy, and cost, all of which can depend on the requirements of your application.
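The closeness measure described above can be made concrete with a small brute-force search. This is a minimal sketch only: the three-dimensional vectors and the word labels are made up for illustration (a real Azure OpenAI embedding has far more dimensions and would come from the embeddings API):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|); values closer to 1.0 mean more similar direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings"; a real application would store vectors produced
# by an embeddings model alongside the documents they describe.
embeddings = {
    "person": [0.90, 0.10, 0.20],
    "human":  [0.88, 0.12, 0.21],
    "pear":   [0.10, 0.90, 0.30],
}

def vector_search(query_vector, store):
    # Exhaustive search: score every vector and return the best match.
    # Fine for a handful of vectors; ANN indexes are needed at scale.
    return max(store, key=lambda key: cosine_similarity(query_vector, store[key]))

query = embeddings["human"]
candidates = {k: v for k, v in embeddings.items() if k != "human"}
print(vector_search(query, candidates))  # expect "person" rather than "pear"
```

This exhaustive scan is exactly the trade-off the paragraph above mentions: it's correct but linear in the dataset size, which is why large datasets use ANN indexes instead.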
++Adding Azure Cosmos DB vector search capabilities to Azure OpenAI Service enables you to store long-term memory and chat history to improve your Large Language Model (LLM) solution. Vector search allows you to efficiently query back the most relevant context to personalize Azure OpenAI prompts in a token-efficient manner. Storing vector embeddings alongside the data in an integrated solution minimizes the need to manage data synchronization and accelerates your time-to-market for AI app development. ++## Using Azure Cosmos DB data with Azure OpenAI ++The RAG pattern harnesses external knowledge and models to effectively handle custom data or domain-specific knowledge. It involves extracting pertinent information from an external data source and integrating it into the model request through prompt engineering. ++A robust mechanism is necessary to identify the most relevant data from the external source that can be passed to the model, considering the limitation of a restricted number of tokens per request. This limitation is where embeddings play a crucial role. By converting the data in our database into embeddings and storing them as vectors for future use, we gain the advantage of capturing the semantic meaning of the text, going beyond mere keywords to comprehend the context. ++Prior to sending a request to Azure OpenAI, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the model request using prompt engineering. ++## Azure Cosmos DB for NoSQL and Azure Cognitive Search ++Implement RAG patterns with Azure Cosmos DB for NoSQL and Azure Cognitive Search. This approach enables powerful integration of your data residing in Azure Cosmos DB for NoSQL into your AI-oriented applications.
Azure Cognitive Search empowers you to efficiently index and query high-dimensional vector data, which is stored in Azure Cosmos DB for NoSQL. ++### Code samples ++- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/cognitive-search-vector-v2) +- [.NET samples - Hackathon project](https://github.com/AzureCosmosDB/OpenAIHackathon) +- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch) +- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel) +- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch) ++## Azure Cosmos DB for MongoDB vCore ++RAG can be applied using the native vector search feature in Azure Cosmos DB for MongoDB vCore, facilitating smooth integration of your AI-centric applications with your stored data in Azure Cosmos DB. The use of vector search offers an efficient way to store, index, and search high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore alongside other application data. This approach removes the necessity of migrating your data to costlier alternatives for vector search.
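As a rough sketch of what this looks like in practice, the snippet below builds a vCore vector index definition and a query pipeline as plain dictionaries, following the `cosmosSearch` index kind and `$search` aggregation stage shape from the vCore vector search docs. The collection and field names (`products`, `contentVector`) are hypothetical, and the commented pymongo calls are an assumption about how you would submit them against a live cluster:

```python
def vector_index_definition(collection, path, dimensions):
    # Index definition for an IVF vector index; follows the shape shown in the
    # vCore vector search docs. All names here are illustrative, not required.
    return {
        "createIndexes": collection,
        "indexes": [{
            "name": "vectorSearchIndex",
            "key": {path: "cosmosSearch"},
            "cosmosSearchOptions": {
                "kind": "vector-ivf",
                "numLists": 1,
                "similarity": "COS",   # cosine similarity
                "dimensions": dimensions,
            },
        }],
    }

def vector_search_pipeline(query_vector, path, k):
    # Aggregation pipeline that returns the k nearest documents to query_vector.
    return [{"$search": {"cosmosSearch": {"vector": query_vector, "path": path, "k": k}}}]

# Assumed usage with pymongo against a vCore cluster (not run here):
#   db.command(vector_index_definition("products", "contentVector", 1536))
#   db.products.aggregate(vector_search_pipeline(embedding, "contentVector", 5))
```

Keeping the index definition and pipeline as plain data makes them easy to test and reuse; the only cluster-specific parts are the connection and the embedding values themselves.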
++### Code samples ++- [.NET retail chatbot demo](https://github.com/AzureCosmosDB/VectorSearchAiAssistant/tree/mongovcorev2) +- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore) +- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore) ++## Azure Cosmos DB for PostgreSQL ++You can employ RAG by utilizing native vector search within Azure Cosmos DB for PostgreSQL. This strategy provides a seamless integration of your AI-driven applications, including the ones developed using Azure OpenAI embeddings, with your data housed in Azure Cosmos DB. By taking advantage of vector search, you can effectively store, index, and execute queries on high-dimensional vector data directly within Azure Cosmos DB for PostgreSQL along with the rest of your data. ++### Code samples ++- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch) ++## Next steps ++- [Vector search with Azure Cognitive Search](../search/vector-search-overview.md) +- [Vector search with Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md) +- [Vector search with Azure Cosmos DB for PostgreSQL](postgresql/howto-use-pgvector.md) |
cosmos-db | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
cost-management-billing | Get Started Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/get-started-partners.md | To view costs for a subscription, open **Cost Management** in the customer's Azu Cost analysis, budgets, and alerts are available for the subscription and resource group Azure RBAC scopes at pay-as-you-go rate-based costs. -Amortized views and actual costs for reserved instances in the Azure RBAC scopes show zero charges. Purchase costs for entitlements such as Reserved instances and Marketplace fees are only shown in billing scopes in the partner's tenant where the purchases were made. +Amortized views and actual costs for reserved instances in the Azure RBAC scopes show zero charges. Purchase costs for entitlements such as Reserved instances, Savings Plan purchases, and Marketplace fees are only shown in billing scopes in the partner's tenant where the purchases were made. The retail rates used to compute costs shown in the view are the same prices shown in the Azure Pricing Calculator for all customers. Costs shown don't include any discounts or credits that the partner may have like Partner Earned Credits, Tier Discounts, and Global Service discounts. |
cost-management-billing | Pricing Calculator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/pricing-calculator.md | + + Title: Estimate costs with the Azure pricing calculator +description: This article explains how to use the Azure pricing calculator to turn anticipated usage into an estimated cost, which makes it easier to plan and budget for your Azure usage. ++ Last updated : 08/23/2023+++++++# Estimate costs with the Azure pricing calculator ++The Azure pricing calculator helps you turn anticipated usage into an estimated cost, which makes it easier to plan and budget for your Azure usage. Whether you're a small business owner or an enterprise-level organization, the web-based tool helps you make informed decisions about your cloud spending. When you log in, the calculator also provides a cost estimate for your Azure consumption with your negotiated or discounted prices. This article explains how to use the Azure pricing calculator. ++>[!NOTE] +> Prices shown in this article are examples to help you understand how the calculator works. They are not actual prices. ++## Access the Azure pricing calculator ++There are two ways to navigate to the calculator: ++- Go to [https://azure.microsoft.com/pricing/calculator/](https://azure.microsoft.com/pricing/calculator/) ++-Or- ++- Go to the [Azure website](https://azure.microsoft.com/) and select the pricing calculator link under **Pricing** in the navigation menu. ++## Understand the Azure pricing calculator ++Let's look at the three main sections of the pricing calculator page: ++**The product picker** - It shows all Azure services that the calculator can estimate costs for. In this section, there's a search box, Azure service categories, and product cards. +++There are other tabs next to the **Products** tab that we cover later. There's also a **Log in** link to authenticate for various functions and features that we cover later. 
++**Estimate and product configuration** - The pricing calculator helps you build _estimates_, which are collections of Azure products, similar to a shopping cart. ++Until you add products to your estimate, it appears blank. Here's an example. +++When you add a product to your estimate, the following sections get added to your estimate: ++- The estimation tools are at the top of the estimate. +- The product configuration is under the estimation tools. ++**Estimation summary** - The estimation summary is shown below the product configuration. +++As you continue to add more services to your estimate, more product configuration sections get added, one per service. ++Below your estimate are some links for next steps. There's also a feedback link to help improve the Azure pricing calculator experience. +++## Build an estimate ++Since it's your first time, you start with an empty estimate. ++1. Use the product picker to find a product. You can browse the catalog or search for the Azure service name. +2. Select the product tile to add it to the estimate. It adds the product with a default configuration. +3. The top of the configuration shows high-level filters like region, product type, tiers, and so on. Use the filters to narrow your product selection. The configurations offered change to reflect the features offered by the selected subproduct. +4. Update the default configurations to show your expected monthly consumption. Estimates automatically update for the new configuration. For example, a virtual machine configuration defaults to run for one month (730 hours). Changing the configuration to 200 hours automatically updates the estimate. +5. Some products offer special pricing plans, like reserved instances or savings plans. You can choose these options, if available, to lower your costs. +6. Depending on the selected product or pricing plan, the estimate is split into upfront and monthly costs. + - Upfront costs are incurred before the product is consumed.
+ - Monthly costs are incurred after the product is consumed. +7. Although optional, we recommend that you give the configuration a unique name. Finding a particular configuration in a large estimate is easier when it has a unique name. +8. Repeat the steps to add more products to an estimate. +9. Finally, don't forget to add a support plan. Choose from Basic, Developer, Standard, or Professional Direct. For more information, see [Compare support plans](https://azure.microsoft.com/support/plans/). ++Here's an example of a virtual machine configuration: +++## Use advanced calculator features ++Here's an example with detailed descriptions of all the elements and options in an estimate. +++| Item number | Name | Description | +| | | | +| 1 | Your estimate | Creates multiple estimates to build multiple what-if scenarios, or to segregate estimates for different teams or applications. | +| 2 | Expand all | Expands all configurations to view the details of each product configuration. | +| 3 | Collapse all | Collapses all configurations to show a high-level view of the products in the estimate. | +| 4 | Rearrange the services | Rearranges products in the estimate to form a group. For example, group by product type or application. | +| 5 | Delete all | Deletes all products in the estimate to start with an empty estimate. The action can't be undone. | +| 6 | More info | Select to learn more about the product or pricing, or to browse product documentation. | +| 7 | Clone | Clones the current product with its configuration to quickly create a similar product. | +| 8 | Delete | Deletes the selected product and its configuration. The action can't be undone. | +| 9 | Export | Exports the current estimate to an Excel file. You can share the file with others. | +| 10 | Save | Saves your progress with the estimate. | +| 11 | Save as | Renames a saved estimate. | +| 12 | Share | Creates a unique link for the estimate. You can share the link with others.
However, only you can make changes to the estimate. | +| 13 | Currency | Changes the estimated costs to another currency. | ++## Understand calculator data ++This section provides more information about where the pricing comes from, how calculations work, and alternative sources for the prices in the calculator. ++The per-unit pricing information displayed in the Azure Pricing Calculator originates from data provided by the [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices). ++The Azure Pricing Calculator considers various factors to provide a cost estimate. Here's how it works: ++- **Product Configuration -** The calculator pulls the per-unit pricing for each product from the Azure Retail Pricing API based on the different product parameters selected by the user such as: region, size, operating system, tier, and other specific features. +- **Consumption Estimation -** The calculator then goes further and uses the usage quantities that you input, such as hours, units, and others to estimate consumption and calculate estimated costs. +- **Pricing plans -** You can select from different pricing plans and savings options for each product. They include pay-as-you-go, one or three-year reserved instances, and savings plans for discounted rates. Selecting a different pricing plan results in different pricing. ++If you need to access pricing information programmatically or require more detailed pricing data, you can use the Azure Retail Pricing API. The API provides comprehensive retail price information for all Azure services across different regions and currencies. For more information, see [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices). ++## View an estimate with your agreement prices ++The calculator helps you understand the retail costs of your Azure services, but it can also show any negotiated rates specific to your Azure Billing Account. 
Showing your negotiated prices helps you to get a more accurate representation of your expected costs. ++At the bottom of your calculator estimate, notice the list item titled **Licensing Program**. +++After you log in (not the sign-in at the top of the page, which takes you to the Azure portal), select the **Licensing Program** list item for the following options: ++- Microsoft Customer Agreement (MCA) +- Enterprise Agreement (EA) +- New Commerce Cloud Solution Provider (CSP) +- Microsoft Online Service Agreement (MOSA) ++If you have negotiated pricing associated with an MCA Billing Account: ++1. Select the **Microsoft Customer Agreement (MCA)** option in the licensing program. +2. Select **None selected (change)**. + :::image type="content" source="./media/pricing-calculator/none-selected-change.png" alt-text="Screenshot showing the None selected (change) option." lightbox="./media/pricing-calculator/none-selected-change.png" ::: +3. Select a billing account and select **Apply**. + :::image type="content" source="./media/pricing-calculator/choose-billing-account.png" alt-text="Screenshot showing the Choose Billing Account area." lightbox="./media/pricing-calculator/choose-billing-account.png" ::: +4. Next, select a billing profile and select **Apply**. + :::image type="content" source="./media/pricing-calculator/choose-billing-profile.png" alt-text="Screenshot showing the Choose Billing Profile area." lightbox="./media/pricing-calculator/choose-billing-profile.png" ::: ++Your calculator estimate updates with your MCA price sheet information. ++If you have negotiated pricing associated with an EA billing account: ++1. Select the **Enterprise Agreement (EA)** option in the licensing program list. + :::image type="content" source="./media/pricing-calculator/select-program-offer-enterprise-agreement.png" alt-text="Screenshot showing the Enterprise Agreement (EA) list item." lightbox="./media/pricing-calculator/select-program-offer-enterprise-agreement.png" ::: +2.
In the Choose Agreement area, select your enrollment or your billing account ID and then select **Apply**. + :::image type="content" source="./media/pricing-calculator/select-choose-agreement-enterprise-agreement.png" alt-text="Screenshot showing the Choose Agreement area." lightbox="./media/pricing-calculator/select-choose-agreement-enterprise-agreement.png" ::: ++Your calculator estimate refreshes with your EA price sheet information. ++If you want to change your selected enrollment, select the **Selected agreement** link to the right of the licensing program list item. Here's an example. +++If you're a Cloud Solution Provider (CSP) partner who has transitioned to the new commerce experience, you can view your estimate by selecting the Microsoft Customer Agreement (MCA) option in the licensing program. ++>[!NOTE] +> Partner Earned Credit (PEC) estimation isn't available in the calculator, so you need to manually apply your anticipated PEC to the monthly estimate. ++If you don't have access to log in to the calculator to see negotiated prices, contact your administrator or Azure Account Manager. ++## Help us improve the calculator ++If you want to provide feedback about the Pricing Calculator, there's a link at the bottom of the page. We welcome your feedback. +++## Next steps ++- Estimate prices with the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/calculator/). +- Learn more about the [Azure Retail Prices API](/rest/api/cost-management/retail-prices/azure-retail-prices). |
cost-management-billing | Quick Acm Cost Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md | Depending on the view and scope you're using, you may also see cost insights bel :::image type="content" source="./media/quick-acm-cost-analysis/see-insights.png" alt-text="Screenshot showing insights." lightbox="./media/quick-acm-cost-analysis/see-insights.png" ::: -Lastly, use the table to find your top cost contributors and expand each row to understand how costs are broken down to the next level. Examples include resources with their product meters and services with a breakdown of products. +Lastly, use the table to identify and review your top cost contributors and drill in for more details. :::image type="content" source="./media/quick-acm-cost-analysis/table-show-cost-contributors.png" alt-text="Screenshot showing a table view of subscription costs with their nested resources." lightbox="./media/quick-acm-cost-analysis/table-show-cost-contributors.png" ::: This view is where you spend most of your time in Cost analysis. To explore further: -1. Open other smart views to get different perspectives on your cost. -2. If you want to drill into data further, you might need to [Change scope](understand-work-scopes.md#switch-between-scopes-in-cost-management) to a lower level. For example, you can't view the Subscriptions smart view if your current scope is a subscription. -3. Open a custom view and apply other filters or group the data to explore. +1. Expand rows to take a quick peek and see how costs are broken down to the next level. Examples include resources with their product meters and services with a breakdown of products. +2. Select the name to drill down and see the next level details in a full view. From there, you can drill down again and again, to get down to the finest level of detail, based on what you're interested in. 
Examples include selecting a subscription, then a resource group, and then a resource to view the specific product meters for that resource. +3. Select the shortcut menu (⋯) to see related costs. Examples include filtering the list of resource groups to a subscription or filtering resources to a specific location or tag. +4. Select the shortcut menu (⋯) to open the management screen for that resource, resource group, or subscription. From this screen, you can stop or delete resources to avoid future charges. +5. Open other smart views to get different perspectives on your costs. +6. Open a customizable view and apply other filters or group the data to explore further. > [!NOTE] > If you want to visualize and monitor daily trends within the period, enable the [chart preview feature](enable-preview-features-cost-management-labs.md#chartsfeature) in Cost Management Labs, available from the **Try preview** command. |
cost-management-billing | Understand Cost Mgt Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/understand-cost-mgt-data.md | The following tables show data that's included or isn't in Cost Management. All | **Included** | **Not included** | | | |-| Azure service usage⁵ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | -| Marketplace offering usage⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | -| Marketplace purchases⁶ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | -| Reservation purchases⁷ | | +| Azure service usage (including deleted resources)⁵ | Unbilled services (for example, free tier resources) | +| Marketplace offering usage⁶ | Support charges - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | +| Marketplace purchases⁶ | Taxes - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | +| Reservation purchases⁷ | Credits - For more information, see [Invoice terms explained](../understand/understand-invoice.md). | | Amortization of reservation purchases⁷ | | | New Commerce non-Azure products (Microsoft 365 and Dynamics 365) ⁸ | | _⁷ Reservation purchases are only available for Enterprise Agreement (EA) and _⁸ Only available for specific offers._ +Cost Management data only includes usage and purchases from services and resources that are actively running. Cost data is historical: it includes resources, resource groups, and subscriptions that have been stopped, deleted, or canceled, so it may not match the resources, resource groups, and subscriptions you see in other tools, like Azure Resource Manager or Azure Resource Graph, which only show the resources currently deployed in your subscriptions.
Not all resources emit usage, so they may not be represented in the cost data. Similarly, some resources aren't tracked by Azure Resource Manager, so they may not be represented in subscription resources. + ## How tags are used in cost and usage data Cost Management receives tags as part of each usage record submitted by the individual services. The following constraints apply to these tags: |
cost-management-billing | Exchange And Refund Azure Reservations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/exchange-and-refund-azure-reservations.md | You can exchange your reservation from the [Azure portal](https://portal.azure.c 1. Review and complete the transaction. [![Example image showing the VM product to purchase with an exchange, completing the return](./media/exchange-and-refund-azure-reservations/exchange-refund-confirm-exchange.png)](./media/exchange-and-refund-azure-reservations/exchange-refund-confirm-exchange.png#lightbox) -To refund a reservation, go to **Reservation Details** and select **Refund**. +To refund a reservation, go into the reservation that you want to cancel and select **Return**. ## Exchange multiple reservations |
cost-management-billing | Prepare Buy Reservation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/prepare-buy-reservation.md | Azure Reservations help you save money by committing to one-year or three-years ## Who can buy a reservation -To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement. Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. You can't buy a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in Owner or built-in Reservation Purchaser role. +To buy a reservation, you must have owner role or reservation purchaser role on an Azure subscription that's of type Enterprise (MS-AZR-0017P or MS-AZR-0148P) or Pay-As-You-Go (MS-AZR-0003P or MS-AZR-0023P) or Microsoft Customer Agreement. ++Cloud solution providers can use the Azure portal or [Partner Center](/partner-center/azure-reservations) to purchase Azure Reservations. CSP partners can buy reservations on their customers' behalf in Partner Center when authorized to do so. For more information, see [Buy Microsoft Azure reservations on behalf of your customers](/partner-center/azure-reservations-buying). Or, once the partner has granted permission and the end customer has the reservation purchaser role, the customer can purchase reservations in the Azure portal. ++You can't buy a reservation if you have a custom role that mimics owner role or reservation purchaser role on an Azure subscription. You must use the built-in Owner or built-in Reservation Purchaser role. Enterprise Agreement (EA) customers can limit purchases to EA admins by disabling the **Add Reserved Instances** option in the EA Portal. 
Direct EA customers can now disable the Reserved Instance setting in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/BillingAccounts). Navigate to the Policies menu to change settings. |
cost-management-billing | Understand Azure Data Explorer Reservation Charges | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-azure-data-explorer-reservation-charges.md | If you have questions or need help, [create a support request](https://go.micros To learn more about Azure reservations, see the following articles: -* [Prepay for Azure Data Explorer compute resources with Azure Azure Data Explorer reserved capacity](/azure/data-explorer/pricing-reserved-capacity) +* [Prepay for Azure Data Explorer compute resources with Azure Data Explorer reserved capacity](/azure/data-explorer/pricing-reserved-capacity) * [What are reservations for Azure?](save-compute-costs-reservations.md) * [Manage Azure reservations](manage-reserved-vm-instance.md) * [Understand reservation usage for your pay-as-you-go subscription](understand-reserved-instance-usage.md) |
cost-management-billing | View Purchase Refunds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-purchase-refunds.md | ms.reviewer: nitinarora Previously updated : 07/28/2023 Last updated : 08/21/2023 Enterprise Agreement and Microsoft Customer Agreement billing readers can view a ## View reservation transactions in the Azure portal -An Enterprise enrollment or Microsoft Customer Agreement billing administrator can view reservation transactions in Cost Management and Billing. +A Microsoft Customer Agreement billing administrator can view reservation transactions in Cost Management and Billing. For EA enrollments, EA Admins, Indirect Admins, and Partner Admins can view reservation transactions in Cost Management and Billing. To view the corresponding refunds for reservation transactions, select a **Timespan** that includes the purchase refund dates. You might have to select **Custom** under the **Timespan** list option. 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Search for **Cost Management + Billing**. -1. Select **Reservation transactions**. +1. Search for **Cost Management + Billing** and select it. +1. Select a billing scope. +1. Select **Reservation transactions**. + The **Reservation transactions** left menu item appears only if you have a billing scope selected. 1. To filter the results, select **Timespan**, **Type**, or **Description**. 1. Select **Apply**. |
cost-management-billing | Scope Savings Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/scope-savings-plan.md | You have the following options to scope a savings plan, depending on your needs: - **Single resource group scope** - Applies the savings plan benefit to the eligible resources in the selected resource group only. - **Single subscription scope** - Applies the savings plan benefit to the eligible resources in the selected subscription. - **Shared scope** - Applies the savings plan benefit to eligible resources within subscriptions that are in the billing context. If a subscription is moved to a different billing context, the benefit no longer applies to that subscription but continues to apply to other subscriptions in the billing context.- - For Enterprise Agreement customers, the billing context is the enrollment. + - For Enterprise Agreement customers, the billing context is the enrollment. The savings plan shared scope would include multiple Active Directory tenants in an enrollment. - For Microsoft Customer Agreement customers, the billing context is the billing profile. - **Management group** - Applies the savings plan benefit to eligible resources in the list of subscriptions that are a part of both the management group and billing scope. To buy a savings plan for a management group, you must have at least read permission on the management group and be a savings plan owner on the billing subscription. |
cost-management-billing | Pay Bill | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md | If your default payment method is wire transfer, check your invoice for payment > - [Bulgaria](/legal/pay/bulgaria) > - [Cameroon](/legal/pay/cameroon) > - [Canada](/legal/pay/canada)-> - [Cape Verde](/legal/pay/cape-verde) +> - [Cabo Verde](/legal/pay/cape-verde) > - [Cayman Islands](/legal/pay/cayman-islands) > - [Chile](/legal/pay/chile) > - [China (PRC)](/legal/pay/china-prc) If your default payment method is wire transfer, check your invoice for payment > - [Lithuania](/legal/pay/lithuania) > - [Luxembourg](/legal/pay/luxembourg) > - [Macao Special Administrative Region](/legal/pay/macao)-> - [Macedonia, Former Yugoslav Republic of](/legal/pay/macedonia) > - [Malaysia](/legal/pay/malaysia) > - [Malta](/legal/pay/malta) > - [Mauritius](/legal/pay/mauritius) If your default payment method is wire transfer, check your invoice for payment > - [New Zealand](/legal/pay/new-zealand) > - [Nicaragua](/legal/pay/nicaragua) > - [Nigeria](/legal/pay/nigeria)+> - [North Macedonia, Republic of](/legal/pay/macedonia) > - [Norway](/legal/pay/norway) > - [Oman](/legal/pay/oman) > - [Pakistan](/legal/pay/pakistan) |
data-factory | Compute Linked Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/compute-linked-services.md | You can create **Azure Databricks linked service** to register Databricks worksp | newClusterNumOfWorker| Number of worker nodes that this cluster should have. A cluster has one Spark Driver and num_workers Executors for a total of num_workers + 1 Spark nodes. A string formatted Int32, like "1" means numOfWorker is 1 or "1:10" means autoscale from 1 as min and 10 as max. | No | | newClusterNodeType | This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster. For example, the Spark nodes can be provisioned and optimized for memory or compute intensive workloads. This field is required for new cluster | No | | newClusterSparkConf | a set of optional, user-specified Spark configuration key-value pairs. Users can also pass in a string of extra JVM options to the driver and the executors via spark.driver.extraJavaOptions and spark.executor.extraJavaOptions respectively. | No |-| newClusterInitScripts| a set of optional, user-defined initialization scripts for the new cluster. Specifying the DBFS path to the init scripts. | No | +| newClusterInitScripts| a set of optional, user-defined initialization scripts for the new cluster. You can specify the init scripts in workspace files (recommended) or via the DBFS path (legacy). | No | ## Azure SQL Database linked service |
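To make the init-scripts option above concrete, here's a minimal, hedged sketch of an Azure Databricks linked service definition that points init scripts at a workspace file; the domain, token placeholder, cluster sizing, and the workspace path `/Shared/init-scripts/setup.sh` are illustrative assumptions, not values from this article:

```json
{
    "name": "AzureDatabricksLinkedService",
    "properties": {
        "type": "AzureDatabricks",
        "typeProperties": {
            "domain": "https://adb-<workspace-id>.azuredatabricks.net",
            "accessToken": {
                "type": "SecureString",
                "value": "<access token>"
            },
            "newClusterVersion": "13.3.x-scala2.12",
            "newClusterNumOfWorker": "1:10",
            "newClusterNodeType": "Standard_D3_v2",
            "newClusterSparkConf": {
                "spark.speculation": "true"
            },
            "newClusterInitScripts": [
                "/Shared/init-scripts/setup.sh"
            ]
        }
    }
}
```

With the `1:10` worker setting, the job cluster autoscales from 1 to 10 workers. A DBFS path (for example, `dbfs:/init-scripts/setup.sh`) would also be accepted, but per the table above it's the legacy option.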
data-factory | Concepts Change Data Capture Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture-resource.md | The new Change Data Capture resource in ADF allows for full fidelity change data * JSON * ORC * Parquet+* Azure Synapse Analytics ## Known limitations * Currently, when creating source/target mappings, each source and target is only allowed to be used once. The new Change Data Capture resource in ADF allows for full fidelity change data For more information on known limitations and troubleshooting assistance, please reference [this troubleshooting guide](change-data-capture-troubleshoot.md). +## Azure Synapse Analytics as a target +When using Azure Synapse Analytics as a target, the **Staging Settings** option is available on the main table canvas. Enabling staging is mandatory when selecting Azure Synapse Analytics as the target. Staging significantly enhances write performance by using bulk-loading capabilities such as the COPY INTO command. **Staging Settings** can be configured in two ways: by using **Factory settings** or **Custom settings**. **Factory settings** apply at the factory level. If these settings aren't yet configured, you'll be directed to the global staging settings section the first time. Once set, all CDC top-level resources adopt this configuration. **Custom settings** are scoped to the CDC resource for which they're configured and override the **Factory settings**. ++> [!NOTE] +> Because the COPY INTO command is used to transfer data from the staging location to Azure Synapse Analytics, it's advisable to ensure that all required permissions are preconfigured within Azure Synapse Analytics. ++ > [!NOTE] > We always use the last published configuration when starting a CDC. For running CDCs, while your data is being processed, you will be billed 4 v-cores of General Purpose Data Flows. 
## Next steps - [Learn how to set up a change data capture resource](how-to-change-data-capture-resource.md).+- [Learn how to set up a change data capture resource with schema evolution](how-to-change-data-capture-resource-with-schema-evolution.md). |
data-factory | Control Flow Set Variable Activity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-set-variable-activity.md | To use a Set Variable activity in a pipeline, complete the following steps: ## Setting a pipeline return value with UI -We have expanded Set Variable activity to include a special system variable, named _Pipeline Return Value_. This allows communication from the child pipeline to the calling pipeline, in the following scenario. +We have expanded the Set Variable activity to include a special system variable, named _Pipeline Return Value_, allowing communication from the child pipeline to the calling pipeline in the following scenario. You don't need to define the variable before using it. For more information, see [Pipeline Return Value](tutorial-pipeline-return-value.md) value | String literal or expression object value that the variable is assigned ## Incrementing a variable -A common scenario involving variable is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable. +A common scenario involving variables is to use a variable as an iterator within an **Until** or **ForEach** activity. In a **Set variable** activity, you can't reference the variable being set in the `value` field; that is, no self-referencing. To work around this limitation, set a temporary variable and then create a second **Set variable** activity. The second **Set variable** activity sets the value of the iterator to the temporary variable. Here's an example of this pattern: -Below is an example of this pattern: +* First you define two variables: one for the iterator, and one for temporary storage. 
+++* Then you use two activities to increment values :::image type="content" source="media/control-flow-set-variable-activity/increment-variable.png" alt-text="Screenshot shows increment variable."::: ``` json {- "name": "pipeline3", + "name": "pipeline1", "properties": { "activities": [ {- "name": "Set I", + "name": "Increment J", "type": "SetVariable",- "dependsOn": [ - { - "activity": "Increment J", - "dependencyConditions": [ - "Succeeded" - ] - } - ], + "dependsOn": [], + "policy": { + "secureOutput": false, + "secureInput": false + }, "userProperties": [], "typeProperties": {- "variableName": "i", + "variableName": "temp_j", "value": {- "value": "@variables('j')", + "value": "@add(variables('counter_i'),1)", "type": "Expression" } } }, {- "name": "Increment J", + "name": "Set I", "type": "SetVariable",- "dependsOn": [], + "dependsOn": [ + { + "activity": "Increment J", + "dependencyConditions": [ + "Succeeded" + ] + } + ], + "policy": { + "secureOutput": false, + "secureInput": false + }, "userProperties": [], "typeProperties": {- "variableName": "j", + "variableName": "counter_i", "value": {- "value": "@string(add(int(variables('i')), 1))", + "value": "@variables('temp_j')", "type": "Expression" } } } ], "variables": {- "i": { - "type": "String", - "defaultValue": "0" + "counter_i": { + "type": "Integer", + "defaultValue": 0 },- "j": { - "type": "String", - "defaultValue": "0" + "temp_j": { + "type": "Integer", + "defaultValue": 0 } }, "annotations": [] Below is an example of this pattern: } ``` -Variables are currently scoped at the pipeline level. This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity. +Variables are scoped at the pipeline level. 
This means that they're not thread safe and can cause unexpected and undesired behavior if they're accessed from within a parallel iteration activity such as a ForEach loop, especially when the value is also being modified within that foreach activity. ## Next steps |
data-factory | Data Factory Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/data-factory-private-link.md | If you want to restrict access for Data Factory resources in your subscriptions You're unable to access each PaaS resource when both sides are exposed to Private Link and a private endpoint. This issue is a known limitation of Private Link and private endpoints. -For example, A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, B can access the portal of data factory A in virtual network B via public. But when customer B creates a private endpoint against data factory B in virtual network B, then customer B can't access data factory A via public in virtual network B anymore. +For example, customer A is using a private link to access the portal of data factory A in virtual network A. When data factory A doesn't block public access, customer B can access the portal of data factory A in virtual network B via public. But when customer B creates a private endpoint against data factory B in virtual network B, then customer B can't access data factory A via public in virtual network B anymore. ## Next steps |
data-factory | Deploy Linked Arm Templates With Vsts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deploy-linked-arm-templates-with-vsts.md | Title: Deploy linked ARM templates with VSTS -description: Learn how to deploy linked ARM templates with Visual Studio Team Services (VSTS). +description: Learn how to deploy linked ARM templates with Azure DevOps Services (formerly Visual Studio Team Services, or VSTS). -This article describes how to deploy linked Azure Resource Manager (ARM) templates with Visual Studio Team Services (VSTS). +This article describes how to deploy linked Azure Resource Manager (ARM) templates with Azure DevOps Services (formerly Visual Studio Team Services, or VSTS). ## Overview |
data-factory | Enable Aad Authentication Azure Ssis Ir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md | ms.devlang: powershell -+ Last updated 07/17/2023 |
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-factory | Solution Template Extract Data From Pdf | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/solution-template-extract-data-from-pdf.md | Last updated 08/10/2023 [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article describes a solution template that you can use to extract data from a PDF source using Azure Data Factory and Form Recognizer. +This article describes a solution template that you can use to extract data from a PDF source using Azure Data Factory and Azure AI Document Intelligence. ## About this solution template -This template analyzes data from a PDF URL source using two Azure Form Recognizer calls. Then, it transforms the output to readable tables in a dataflow and outputs the data to a storage sink. +This template analyzes data from a PDF URL source using two Azure AI Document Intelligence calls. Then, it transforms the output to readable tables in a dataflow and outputs the data to a storage sink. This template contains two activities: -- **Web Activity** to call Azure Form Recognizer's layout model API+- **Web Activity** to call Azure AI Document Intelligence's layout model API - **Data flow** to transform extracted data from PDF This template defines 4 parameters: -- *FormRecognizerURL* is the Form recognizer URL ("https://{endpoint}/formrecognizer/v2.1/layout/analyze"). Replace {endpoint} with the endpoint that you obtained with your Form Recognizer subscription. You need to replace the default value with your own URL.-- *FormRecognizerKey* is the Form Recognizer subscription key. You need to replace the default value with your own subscription key.+- *FormRecognizerURL* is the Azure AI Document Intelligence URL ("https://{endpoint}/formrecognizer/v2.1/layout/analyze"). Replace {endpoint} with the endpoint that you obtained with your Azure AI Document Intelligence subscription. You need to replace the default value with your own URL. 
+- *FormRecognizerKey* is the Azure AI Document Intelligence subscription key. You need to replace the default value with your own subscription key. - *PDF_SourceURL* is the URL of your PDF source. You need to replace the default value with your own URL. - *outputFolder* is the name of the folder path where you want your files to be in your destination store. You need to replace the default value with your own folder path. ## Prerequisites -* Azure Form Recognizer Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer)) +* Azure AI Document Intelligence Resource Endpoint URL and Key (create a new resource [here](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer)) ## How to use this solution template -1. Go to template **Extract data from PDF**. Create a **New** connection to your Form Recognizer resource or choose an existing connection. +1. Go to template **Extract data from PDF**. Create a **New** connection to your Azure AI Document Intelligence resource or choose an existing connection. - :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to Form Recognizer in template set up."::: + :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-1.png" alt-text="Screenshot of how to create a new connection or select an existing connection from a drop down menu to Azure AI Document Intelligence in template set up."::: - In your connection to Form Recognizer, make sure to add a **Linked service Parameter**. You will need to use this parameter as your dynamic **Base URL**. + In your connection to Azure AI Document Intelligence, make sure to add a **Linked service Parameter**. You will need to use this parameter as your dynamic **Base URL**. 
- :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-9.png" alt-text="Screenshot of where to add your Form Recognizer linked service parameter."::: + :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-9.png" alt-text="Screenshot of where to add your Azure AI Document Intelligence linked service parameter."::: :::image type="content" source="media/solution-template-extract-data-from-pdf/extract-data-from-pdf-8.png" alt-text="Screenshot of the linked service base URL that references the linked service parameter."::: This template defines 4 parameters: ## Next steps - [What's New in Azure Data Factory](whats-new.md) - [Introduction to Azure Data Factory](introduction.md)- |
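As a rough sketch of what the template's first Web activity sends, the v2.1 layout endpoint accepts a POST with the document URL in the body and the subscription key in a header; the activity name and the exact parameter wiring below are illustrative assumptions, not the template's literal definition:

```json
{
    "name": "CallLayoutModel",
    "type": "WebActivity",
    "typeProperties": {
        "url": "@pipeline().parameters.FormRecognizerURL",
        "method": "POST",
        "headers": {
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": "@pipeline().parameters.FormRecognizerKey"
        },
        "body": {
            "source": "@pipeline().parameters.PDF_SourceURL"
        }
    }
}
```

The analyze call is asynchronous: the response carries an `Operation-Location` header, which the template's second call polls with a GET (again passing the subscription key) to retrieve the extracted layout results.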
data-factory | Tutorial Managed Virtual Network On Premise Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md | the page. **sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433**<br/> <FQDN/IP> is your target SQL Server IP.<br/> - > [!Note] + > [!NOTE] > FQDN doesn't work for on-premises SQL Server unless you add a record in Azure DNS zone. 3. Run the following command and check the iptables in your backend server VMs. You can see one record in your iptables with your target IP.<br/> the page. :::image type="content" source="./media/tutorial-managed-virtual-network/command-record-1.png" alt-text="Screenshot that shows the command record."::: - >[!Note] + > [!NOTE] > If you have more than one SQL Server or data source, you need to define multiple load balancer rules and IP table records with different ports. Otherwise, there will be some conflict. For example,<br/> >- >| |Port in load balancer rule|Backend port in load balance rule|Command run in backend server VM| - >|||--|| - >|**SQL Server 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433| - >|**SQL Server 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433| - + > | |Port in load balancer rule|Backend port in load balance rule|Command run in backend server VM| + > |||--|| + > |**SQL Server 1**|1433 |1433 |sudo ./ip_fwd.sh -i eth0 -f 1433 -a <FQDN/IP> -b 1433| + > |**SQL Server 2**|1434 |1434 |sudo ./ip_fwd.sh -i eth0 -f 1434 -a <FQDN/IP> -b 1433| ++ > [!NOTE] + > The configuration within the virtual machine (VM) isn't permanent; each time the VM restarts, it requires reconfiguration. + ## Create a Private Endpoint to Private Link Service 1. Select All services in the left-hand menu, select All resources, and then select your data factory from the resources list. 
:::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-6.png" alt-text="Screenshot that shows the private endpoint settings."::: -> [!Note] -> When deploying your SQL Server on a virtual machine within a virtual network, it is essential to enhance your FQDN by appending **privatelink**. Otherwise, it will be conflicted with other records in the DNS setting. For example, you can simply modify the SQL Server's FQDN from **sqlserver.westus.cloudapp.azure.net** to **sqlserver.privatelink.westus.cloudapp.azure.net**. + > [!NOTE] + > When deploying your SQL Server on a virtual machine within a virtual network, it is essential to enhance your FQDN by appending **privatelink**. Otherwise, it will be conflicted with other records in the DNS setting. For example, you can simply modify the SQL Server's FQDN from **sqlserver.westus.cloudapp.azure.net** to **sqlserver.privatelink.westus.cloudapp.azure.net**. 8. Create private endpoint. data factory from the resources list. :::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-3.png" alt-text="Screenshot that shows the SQL server linked service creation page."::: - > [!Note] + > [!NOTE] > If you have more than one SQL Server and need to define multiple load balancer rules and IP table records with different ports, make sure you explicitly add the port name after the FQDN when you edit Linked Service. The NAT VM will handle the port translation. If it's not explicitly specified, the connection will always time-out. ## Troubleshooting |
data-factory | Data Factory Api Change Log | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-api-change-log.md | The following classes have been renamed. The new names were the original names o * **List** pipeline API returns only the summary of a pipeline instead of full details. For instance, activities in a pipeline summary only contain name and type. ### Feature additions-* The [SqlDWSink](/dotnet/api/microsoft.azure.management.datafactories.models.sqldwsink) class supports two new properties, **SliceIdentifierColumnName** and **SqlWriterCleanupScript**, to support idempotent copy to Azure Azure Synapse Analytics. See the [Azure Synapse Analytics](data-factory-azure-sql-data-warehouse-connector.md) article for details about these properties. +* The [SqlDWSink](/dotnet/api/microsoft.azure.management.datafactories.models.sqldwsink) class supports two new properties, **SliceIdentifierColumnName** and **SqlWriterCleanupScript**, to support idempotent copy to Azure Synapse Analytics. See the [Azure Synapse Analytics](data-factory-azure-sql-data-warehouse-connector.md) article for details about these properties. * We now support running stored procedure against Azure SQL Database and Azure Synapse Analytics sources as part of the Copy Activity. The [SqlSource](/dotnet/api/microsoft.azure.management.datafactories.models.sqlsource) and [SqlDWSource](/dotnet/api/microsoft.azure.management.datafactories.models.sqldwsource) classes have the following properties: **SqlReaderStoredProcedureName** and **StoredProcedureParameters**. See the [Azure SQL Database](data-factory-azure-sql-connector.md#sqlsource) and [Azure Synapse Analytics](data-factory-azure-sql-data-warehouse-connector.md#sqldwsource) articles on Azure.com for details about these properties. |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
data-lake-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
data-lake-analytics | Understand Spark Code Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-code-concepts.md | The other types of U-SQL UDOs will need to be rewritten using user-defined funct ### Transform U-SQL's optional libraries -U-SQL provides a set of optional and demo libraries that offer [Python](data-lake-analytics-u-sql-python-extensions.md), [R](data-lake-analytics-u-sql-r-extensions.md), [JSON, XML, AVRO support](https://github.com/Azure/usql/tree/master/Examples/DataFormats), and some [cognitive services capabilities](data-lake-analytics-u-sql-cognitive.md). +U-SQL provides a set of optional and demo libraries that offer [Python](data-lake-analytics-u-sql-python-extensions.md), [R](data-lake-analytics-u-sql-r-extensions.md), [JSON, XML, AVRO support](https://github.com/Azure/usql/tree/master/Examples/DataFormats), and some [Azure AI services capabilities](data-lake-analytics-u-sql-cognitive.md). Spark offers its own Python and R integration, pySpark and SparkR respectively, and provides connectors to read and write JSON, XML, and AVRO. -If you need to transform a script referencing the cognitive services libraries, we recommend contacting us via your Microsoft Account representative. +If you need to transform a script referencing the Azure AI services libraries, we recommend contacting us via your Microsoft Account representative. ## Transform typed values |
data-lake-store | Data Lake Store Secure Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-secure-data.md | description: Learn how to secure data in Azure Data Lake Storage Gen1 using grou + Last updated 03/26/2018 - # Securing data stored in Azure Data Lake Storage Gen1 Securing data in Azure Data Lake Storage Gen1 is a three-step approach. Both Azure role-based access control (Azure RBAC) and access control lists (ACLs) must be set to fully enable access to data for users and security groups. |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
data-lake-store | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
data-manager-for-agri | Concepts Farm Operations Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-farm-operations-data.md | + + Title: Working with Farm Activities data in Azure Data Manager for Agriculture +description: Learn how to integrate with Farm Activities data providers and ingest data into ADMA ++++ Last updated : 08/14/2023+++# Working with Farm Activities data in Azure Data Manager for Agriculture +Farm Activities data is one of the most important ground truth datasets in precision agriculture. These machine-generated reports preserve the record of what exactly happened, where, and when; that record is used both to improve in-field practice and for downstream value chain analytics. ++The Data Manager for Agriculture supports both +* summary data - entered as properties in the operation data item directly +* precision data - (for example, a .shp, .dat, .isoxml) uploaded as an attachment file and reference linked to the operation data item. ++New operation data can be pushed into the service via the APIs for operation and attachment creation. Or, if the desired source is in the supported list of OEM connectors, data can be synced automatically from providers like Climate FieldView with a farm operation ingestion job. +* Azure Data Manager for Agriculture supports a range of Farm Activities data that can be found [here](/rest/api/data-manager-for-agri/#farm-activities) ++## Integration with farm equipment manufacturers +Azure Data Manager for Agriculture fetches the associated Farm Activities data (planting, application, tillage & harvest) from the data provider (Ex: Climate FieldView) by creating a Farm Activities data ingestion job. Look [here](./how-to-ingest-and-egress-farm-operations-data.md) for more details. ++## Next steps ++* Test our APIs [here](/rest/api/data-manager-for-agri). |
data-manager-for-agri | Concepts Hierarchy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-hierarchy-model.md | To generate actionable insights data related to growers, farms, and fields shoul ### Farm * Farms are logical entities. A farm is a collection of fields. -* Farms don't have any geometry associated with them. Farm entity helps you organize your growing operations. For example Contoso Inc is the Party that has farms in Oregon and Idaho. +* Farms don't have any geometry associated with them. Farm entity helps you organize your growing operations. For example, Contoso Inc is the Party that has farms in Oregon and Idaho. ### Field-* Fields denote a stable boundary that is in general agnostic to seasons and other temporal constructs. For example, field could be the boundary denoted in government records. +* Fields denote a stable geometry that is in general agnostic to seasons and other temporal constructs. For example, field could be the geometry denoted in government records. * Fields are multi-polygon. For example, a road might divide the farm in two or more parts.-* Fields are multi-boundary. ### Seasonal field-* This is the most important construct in the farming world. A seasonal field's definition includes the following things - * Boundary +* Seasonal field is the most important construct in the farming world. A seasonal field's definition includes the following things + * geometry * Season * Crop * A seasonal field is associated with a field or a farm To generate actionable insights data related to growers, farms, and fields shoul * A seasonal field is associated with one season. If a farmer cultivates across multiple seasons, they have to create one seasonal field per season. * It's multi-polygon. Same crop can be planted in different areas within the farm. --### Boundary -* Boundary represents the geometry of a field or a seasonal field. 
-* It's represented as a multi-polygon GeoJSON consisting of vertices (lat/long). - ### Season-* Season represents the temporal aspect of farming. It is a function of local agronomic practices, procedures and weather. +* Season represents the temporal aspect of farming. It's a function of local agronomic practices, procedures and weather. ### Crop * Crop entity provides the phenotypic details of the planted crop. |
data-manager-for-agri | Concepts Ingest Satellite Imagery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md | Satellite imagery makes up a foundational pillar of agriculture data. To support * Read the Sinergise Sentinel Hub terms of service and privacy policy: https://www.sentinel-hub.com/tos/ * Have your providerClientId and providerClientSecret ready -## Ingesting boundary-clipped imagery +## Ingesting geometry-clipped imagery Using satellite data in Data Manager for Agriculture involves following steps: :::image type="content" source="./media/satellite-flow.png" alt-text="Diagram showing satellite data ingestion flow."::: |
data-manager-for-agri | Concepts Ingest Sensor Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md | In addition to the above approach, IOT devices (sensors/nodes/gateway) can direc ## Sensor topology -The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each boundary under a party has a set of devices placed within it. A device can be either a node or a gateway, and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates helping in creating a geospatial time series for all measured data. +The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each geometry under a party has a set of devices placed within it. A device can be either a node or a gateway, and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates helping in creating a geospatial time series for all measured data. :::image type="content" source="./media/sensor-topology-new.png" alt-text="Screenshot showing sensor topology."::: |
data-manager-for-agri | Concepts Isv Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md | The solution framework is built on top of Data Manager for Agriculture that prov Following are some of the examples of use cases on how an ISV partner could use the solution framework to build an industry specific solution. -* Yield Prediction Model: An ISV partner can build a yield model using historical data for a specific boundary and track periodic progress. The ISV can then enable forecast of estimated yield for the upcoming season. +* Yield Prediction Model: An ISV partner can build a yield model using historical data for a specific geometry and track periodic progress. The ISV can then enable forecast of estimated yield for the upcoming season. * Carbon Emission Model: An ISV partner can estimate the amount of carbon emitted from the field based upon the imagery, sensors data for a particular farm. * Crop Identification: Use imagery data to identify crop growing in an area of interest. |
data-manager-for-agri | How To Ingest And Egress Farm Operations Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-ingest-and-egress-farm-operations-data.md | + + Title: Working with Farm Activities and in-field activity data in Azure Data Manager for Agriculture +description: Learn how to manage Farm Activities data with manual and auto sync data ingestion jobs ++++ Last updated : 08/14/2023+++# Working with Farm Activities and activity data in Azure Data Manager for Agriculture ++Users can create a farm operation data ingestion job to **pull associated Farm Activities data** from a specified data provider into your Azure Data Manager for Agriculture instance, associated with a specific party. The job handles any required auth refresh, and by default detects and syncs any changes daily. In some cases, the job will also **pull farm and field** information associated with the given account into the party. ++> [!NOTE] +> +>Before creating a Farm Activities job, it's mandatory to successfully [**integrate with the Farm Activities data provider oAuth flow**](./how-to-integrate-with-farm-ops-data-provider.md) +> ++## Create FarmOperations Job ++Create a farm-operations job to ingest Farm Activities data with an ID of your choice. This job ID is used to monitor the status of the job using GET Farm Operations job. ++API documentation: [FarmOperations_CreateDataIngestionJob](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/farm-operations/create-data-ingestion-job) ++> [!NOTE] +>`shapeType` and `shapeResolution` are provider-specific attributes. If they aren't applicable to your provider, set the value to "None". ++Based on the `startYear` and `operations` list provided, Azure Data Manager for Agriculture fetches the data from the start year to the current date. 
++Along with specific data (geometry), the Farm Activities data provider also gives us the DAT file for the activity performed on your farm or field. The DAT file, shape file, etc. contain a geometry that reflects where the activity was performed. ++Job status and details can be retrieved with: [FarmOperations_GetDataIngestionJobDetails](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/farm-operations/get-data-ingestion-job-details) +++## Finding and retrieving Farm Activities data ++Now that the data is ingested into Azure Data Manager for Agriculture, it can be queried or listed with the following methods: ++### Method 1: List data by type ++Retrieved data is sorted by type under the party. These can be listed, with standard filters applied. ++[PlantingData_Search](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/planting-data/search) ++[HarvestData_Search](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/harvest-data/search) ++[ApplicationData_Search](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/application-data/search) ++Individual data items may be retrieved to view the properties and metadata, including the `sourceActivityId`, `providerFieldId` and `Geometry`. +++[PlantingData_Get](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/planting-data/get) ++[HarvestData_Get](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/harvest-data/get) ++[ApplicationData_Get](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/application-data/get) + ++### Method 2: Search Farm Activities data using geometry intersect +To account for the high degree of change found in field definitions, Azure Data Manager for Agriculture supports a search by intersect feature that allows you to organize data by space and time, without needing to first know the farm/field hierarchy or association. 
++++[PlantingData_Search](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/planting-data/search) +++[HarvestData_Search](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/harvest-data/search) ++[ApplicationData_Search](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/application-data/search) ++You can also use an ID like `plantingId` to fetch the above data in the same API. If you remove the ID, you're able to see any other data that intersects with the same geometry, that is, data for the same geometry across different parties. ++## List and Download Attachments ++The message attribute in the response of the `FarmOperations_GetDataIngestionJobDetails` API shows how much data was processed and how many attachments were created. To check the attachments associated with the partyId, use the attachments API. The response gives you all the attachments created under the partyId. ++API documentation: [Attachments](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/attachments) ++## Next steps ++* Understand our APIs [here](/rest/api/data-manager-for-agri). |
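The ingestion-job attributes called out above (`startYear`, `operations`, and the provider-specific `shapeType`/`shapeResolution` set to "None" when not applicable) can be sketched as a request body. This is a minimal illustration, not the authoritative schema; the helper name and the `authProviderId`/`name` properties are assumptions to be checked against the FarmOperations_CreateDataIngestionJob reference.

```python
def build_ingestion_job(party_id, auth_provider_id, start_year, operations):
    """Assemble an illustrative body for a farm-operations data ingestion job."""
    return {
        "partyId": party_id,
        "authProviderId": auth_provider_id,   # assumed property name, e.g. "FIELDVIEW"
        "startYear": start_year,              # data is fetched from this year to today
        "operations": operations,             # list of operation types to sync
        "shapeType": "None",                  # provider-specific; "None" when not applicable
        "shapeResolution": "None",
        "name": "my-fieldview-sync",          # assumed optional display name
    }

# The job ID is chosen by you and reused later to poll job status.
job_id = "fieldview-job-001"
body = build_ingestion_job("party-123", "FIELDVIEW", 2021, ["AllOperations"])
```

The same `job_id` is then passed to the GET job-details call to monitor progress.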
data-manager-for-agri | How To Integrate With Farm Ops Data Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-integrate-with-farm-ops-data-provider.md | + + Title: How to integrate with Farm Activities data provider +description: Learn how to integrate with Farm Activities data provider ++++ Last updated : 08/14/2023++++# Integrate with Farm Activities Data Provider +Azure Data Manager for Agriculture supports connectors to conveniently sync your end-users' data from a range of farm machinery data sources. The setup involves **Configuring oAuth flow as a pre-requisite for integrating with any Farm Activities data provider**, along with a per-account, transparent consent step that handles initial and incremental data sync to keep the ADMA data estate up to date. ++> [!NOTE] +> +> Steps 1 to 3 are part of the one-time-per-provider initial configuration. Once integrated, you will be able to enable all your end users to use the existing oAuth workflow and call the config API (Step 4) per user (PartyID) to retrieve the access token. ++## Provider setup +The example flow here uses Climate FieldView +### Step 1: App Creation ++If your application isn't already registered with Climate Fieldview, go to [FieldView portal](https://dev.fieldview.com/join-us/) and submit the form. Once FieldView processes your request, they send your `client_id` and `client_secret` which you'll use once per ADMA instance for FieldView. ++### Step 2: Provider Configuration ++Use the `oAuthProvider` API to create or update the oAuth provider (Ex: FIELDVIEW) with appropriate credentials of the newly created App. 
++API documentation: [oAuthProviders - Create Or Update](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-providers/create-or-update) +++**Optional Step:** Once the operation is done, you can run the [oAuthProviders_Get](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-providers/get) API to verify whether the application is registered. +Now, all the parties created in your Azure Data Manager for Agriculture instance can use FieldView as a provider to fetch Farm Activities data. ++### Step 3: Endpoint Configuration ++**User redirect endpoint**: This endpoint is where you want your users to be redirected once the oAuth flow is completed. This endpoint will be generated by you and provided to ADMA as `userRedirectLink` in the oauth/tokens/:connect API. +**Register the oAuth callback endpoint with your App on Climate FieldView portal.** +## End-user account setup +### Step 4: Party (End-user) Integration ++When a party (end-user) lands on your webpage where the user action is expected (Ex: Connect to FieldView button), call the `oauth/tokens/:connect` API to get back the oAuth provider's (Ex: Climate FieldView) sign-in URI and start the end-user oAuth flow. ++API documentation: [oAuthTokens - Get OAuth Connection Link](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-tokens/get-o-auth-connection-link) ++Once the `oauth/tokens/:connect` API successfully returns the `oauthAuthorizationLink`, **the end-user clicks this link to complete the oAuth flow** (Ex: For Climate FieldView, the user is served a FieldView access consent and sign-in page). Once the sign-in is completed, ADMA will redirect the user to the endpoint provided by the customer (`userRedirectLink`) with the following query parameters in the URL ++1. **status** (success/failure) +2. **state** (optional string to uniquely identify the user at customer end) +3. **message** (optional string) +4. 
**errorCode** (optional string sent for failure/error) ++> [!NOTE] +> +> If the API returns 404, then it implies the oAuth flow failed and ADMA could not acquire the access token. ++### Step 5: Check Access Token Info (Optional) ++This step is optional; use it to confirm whether the required valid access token has been acquired for a given user or list of users. To do so, call the `oauth/tokens` API and **check for the entry `isValid: true` in the response body**. ++API documentation: [oAuthTokens - List](/rest/api/data-manager-for-agri/dataplane-version2023-07-01-preview/o-auth-tokens/list) ++**This step marks the successful completion of the oAuth flow for a user**. Now, the user is all set to trigger a new [FarmOperationsDataJob](./how-to-ingest-and-egress-farm-operations-data.md) to start pulling the Farm Activities data from Climate FieldView. |
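The oAuth flow above has two programmatic checkpoints: reading the status/state/message/errorCode query parameters ADMA appends to the `userRedirectLink` redirect, and listing `oauth/tokens` to look for `isValid: true`. A minimal sketch, assuming illustrative response shapes (the `value` array wrapper and helper names are assumptions, not confirmed by the API reference):

```python
from urllib.parse import urlencode, parse_qs, urlparse

def parse_redirect(url):
    """Read the status/state/message/errorCode query parameters from the redirect URL."""
    query = parse_qs(urlparse(url).query)
    return {key: values[0] for key, values in query.items()}

def has_valid_token(token_list_response, party_id):
    """Step 5: look for an isValid entry for the party in the oauth/tokens list
    response. The 'value' array wrapper is an assumed response shape."""
    return any(token.get("partyId") == party_id and token.get("isValid")
               for token in token_list_response.get("value", []))

# Simulate the redirect ADMA issues after a successful end-user sign-in.
redirect = "https://contoso.example/callback?" + urlencode(
    {"status": "success", "state": "user-42"})
params = parse_redirect(redirect)
```

On a `status` of `failure`, the optional `errorCode` parameter carries the reason; a 404 from the connect API means no access token was acquired.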
data-manager-for-agri | How To Set Up Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-audit-logs.md | -This article provides you with the steps to setup logging for Azure Data Manager for Agriculture. +This article provides you with the steps to set up logging for Azure Data Manager for Agriculture. ## Enable collection of logs The `categories` field for Data Manager for Agriculture can have values that are ### Categories table | category| Description | | | |-|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Boundary, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses. +|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses. |FarmOperationsLogs|Logs for CRUD operations for FarmOperations data ingestion job, ApplicationData, PlantingData, HarvestingData, TillageData |SatelliteLogs| Logs for create and get operations for Satellite data ingestion job |WeatherLogs|Logs for create, delete and get operations for weather data ingestion job All the `categories` of resource logs are mapped as a table in log analytics. To ### List of tables in log analytics and their mapping to categories in resource logs | Table name in log analytics| Categories in resource logs |Description | | | |-|AgriFoodFarmManagementLogs|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Boundary, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses. 
+|AgriFoodFarmManagementLogs|FarmManagementLogs| Logs for CRUD operations for party, Farm, Field, Seasonal Field, Crop, CropVariety, Season, Attachment, prescription maps, prescriptions, management zones, zones, plant tissue analysis and nutrient analyses. |AgriFoodFarmOperationsLogs|FarmOperationsLogs| Logs for CRUD operations for FarmOperations data ingestion job, ApplicationData, PlantingData, HarvestingData, TillageData. |AgriFoodSatelliteLogs|SatelliteLogs| Logs for create and get operations for satellite data ingestion job. |AgriFoodWeatherLogs|WeatherLogs|Logs for create, delete and get operations for weather data ingestion job. All the `categories` of resource logs are mapped as a table in log analytics. To |**partyId**| ID of the party associated with the operation. | |**Properties** | Available only in`AgriFoodJobProcessesLogs` table, it contains: `farmOperationEntityId` (ID of the entity that failed to be created by the farmOperation job), `farmOperationEntityType`(Type of the entity that failed to be created, can be ApplicationData, PeriodicJob, etc.), `errorCode`(Code for failure of the job at Data Manager for Agriculture end),`errorMessage`(Description of failure at the Data Manager for Agriculture end),`internalErrorCode`(Code of failure of the job provide by the provider),`internalErrorMessage`(Description of the failure provided by the provider),`providerId`(ID of the provider such as JOHN-DEERE). | -Each of these tables can be queried by creating a log analytics workspace. Reference for query language is [here](https://learn.microsoft.com/azure/data-explorer/kql-quick-reference). +Each of these tables can be queried by creating a log analytics workspace. Reference for query language is [here](/azure/data-explorer/kql-quick-reference). ### List of sample queries in the log analytics workspace | Query name | Description | |
data-manager-for-agri | How To Set Up Sensors Customer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-set-up-sensors-customer.md | API Endpoint: PATCH /sensor-partners/{sensorPartnerId}/integrations/{integration This step marks the completion of the sensor partner on-boarding from a customer perspective. Next, get all the required information to call your API endpoints to create Sensor model, Device model, Sensors & Devices. The partners are now able to push sensor events using the connection string generated for each sensor ID. -The final step is to start consuming sensor events. Before consuming the events, you need to create a mapping of every sensor ID to a specific Party ID & Boundary ID. +The final step is to start consuming sensor events. Before consuming the events, you need to create a mapping of every sensor ID to a specific Party ID and resource (Field, Seasonal Field). ## Step 6: Create sensor mapping -Using the `SensorMappings` collection, call into the `SensorMappings_CreateOrUpdate` API to create a mapping for each sensor. Mapping is nothing but associating a sensor ID with a specific PartyID and BoundaryID. PartyID and BoundaryID are already present in the Data Manager for Agriculture system. This association ensures that as a platform you get to build data science models around a common boundary and party dimension. Every data source (satellite, weather, farm operations) is tied to a party & boundary. As you establish this mapping object on a per sensor level you power all the agronomic use cases to benefit from sensor data. +Using the `SensorMappings` collection, call into the `SensorMappings_CreateOrUpdate` API to create a mapping for each sensor. Mapping is nothing but associating a sensor ID with a specific PartyID and a resource (field, seasonal field, etc.). PartyID and resources are already present in the Data Manager for Agriculture system. 
This association ensures that as a platform you get to build data science models around a common geometry of the resource and party dimension. Every data source (satellite, weather, farm operations) is tied to a party & resource. As you establish this mapping object at a per-sensor level, you enable all the agronomic use cases to benefit from sensor data. API Endpoint: PATCH /sensor-mappings/{sensorMappingId} |
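A mapping created with `SensorMappings_CreateOrUpdate` is essentially a small document tying a sensor ID to a party and a resource. This sketch assembles one such PATCH body: `sensorId`, `partyId`, and `sensorPartnerId` mirror the mapping fields described for sensor mapping events elsewhere in these docs, while the `resourceType`/`resourceId` names are illustrative assumptions for the field or seasonal-field association.

```python
def build_sensor_mapping(mapping_id, sensor_id, sensor_partner_id,
                         party_id, resource_type, resource_id):
    """Return the request path and an illustrative PATCH body for one mapping."""
    path = f"/sensor-mappings/{mapping_id}"
    body = {
        "sensorId": sensor_id,
        "sensorPartnerId": sensor_partner_id,
        "partyId": party_id,
        "resourceType": resource_type,   # assumed name: e.g. "Field" or "SeasonalField"
        "resourceId": resource_id,       # assumed name: ID of that resource
    }
    return path, body

path, body = build_sensor_mapping("map-001", "sensor-7", "partner-1",
                                  "party-123", "Field", "field-9")
```

One such mapping per sensor lets every downstream data science model join sensor readings against the same party and resource dimensions as satellite, weather, and farm-operations data.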
data-manager-for-agri | How To Use Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-events.md | -This article provides the properties and schema for Azure Data Manager for Agriculture events. For an introduction to event schemas, see [Azure Event Grid](https://learn.microsoft.com/azure/event-grid/event-schema) event schema. +This article provides the properties and schema for Azure Data Manager for Agriculture events. For an introduction to event schemas, see [Azure Event Grid](/azure/event-grid/event-schema) event schema. ## Prerequisites Here are example scenarios for consuming events in our service: 2. If there are modifications to data-plane resources such as party, fields, farms and other similar elements, you can react to changes and you can trigger workflows. ## Filtering events-You can filter Data Manager for Agriculture <a href="https://docs.microsoft.com/cli/azure/eventgrid/event-subscription" target="_blank"> events </a> by event type, subject, or fields in the data object. Filters in Event Grid match the beginning or end of the subject so that events that match can go to the subscriber. +You can filter Data Manager for Agriculture <a href="/cli/azure/eventgrid/event-subscription" target="_blank"> events </a> by event type, subject, or fields in the data object. Filters in Event Grid match the beginning or end of the subject so that events that match can go to the subscriber. For instance, for the PartyChanged event, to receive notifications for changes for a particular party with ID Party1234, you may use the subject filter "EndsWith" as shown: Subjects in an event schema provide 'starts with' and 'exact match' filters as w Similarly, to filter the same event for a group of party IDs, use the Advanced filter on partyId field in the event data object. In a single subscription, you may add five advanced filters with a limit of 25 values for each key filtered. 
-To learn more about how to apply filters, see <a href = "https://docs.microsoft.com/azure/event-grid/how-to-filter-events" target = "_blank"> filter events for Event Grid. </a> +To learn more about how to apply filters, see <a href = "/azure/event-grid/how-to-filter-events" target = "_blank"> filter events for Event Grid. </a> ## Subscribing to events You can subscribe to Data Manager for Agriculture events by using Azure portal or Azure Resource Manager client. Each of these provide the user with a set of functionalities. Refer to following resources to know more about each method. -<a href = "https://docs.microsoft.com/azure/event-grid/subscribe-through-portal#:~:text=Create%20event%20subscriptions%201%20Select%20All%20services.%202,event%20types%20option%20checked.%20...%20More%20items..." target = "_blank"> Subscribe to events using portal </a> +<a href = "/azure/event-grid/subscribe-through-portal" target = "_blank"> Subscribe to events using portal </a> -<a href = "https://docs.microsoft.com/azure/event-grid/sdk-overview" target = "_blank"> Subscribe to events using the ARM template client </a> +<a href = "/azure/event-grid/sdk-overview" target = "_blank"> Subscribe to events using the ARM template client </a> ## Practices for consuming events Applications that handle Data Manager for Agriculture events should follow a few * Check that the eventType is one you're prepared to process, and don't assume that all events you receive are the types you expect. * As messages can arrive out of order, use the modifiedTime and etag fields to understand the order of events for any particular object.-* Data Manager for Agriculture events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries or availability of subscriptions, duplicate messages may occasionally occur. 
To learn more about message delivery and retry, see <a href = "https://docs.microsoft.com/azure/event-grid/delivery-and-retry" target = "_blank">Event Grid message delivery and retry </a> +* Data Manager for Agriculture events guarantees at-least-once delivery to subscribers, which ensures that all messages are outputted. However due to retries or availability of subscriptions, duplicate messages may occasionally occur. To learn more about message delivery and retry, see <a href = "/azure/event-grid/delivery-and-retry" target = "_blank">Event Grid message delivery and retry </a> * Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future. Applications that handle Data Manager for Agriculture events should follow a few |Microsoft.AgFoodPlatform.FarmChangedV2| Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource |Microsoft.AgFoodPlatform.FieldChangedV2|Published when a Field is created /updated/deleted in an Azure Data Manager for Agriculture resource |Microsoft.AgFoodPlatform.SeasonalFieldChangedV2|Published when a Seasonal Field is created /updated/deleted in an Azure Data Manager for Agriculture resource-|Microsoft.AgFoodPlatform.BoundaryChangedV2|Published when a farm is created /updated/deleted in an Azure Data Manager for Agriculture resource |Microsoft.AgFoodPlatform.CropChanged|Published when a Crop is created /updated/deleted in an Azure Data Manager for Agriculture resource |Microsoft.AgFoodPlatform.CropProductChanged|Published when a Crop Product is created /updated/deleted in an Azure Data Manager for Agriculture resource |Microsoft.AgFoodPlatform.SeasonChanged|Published when a Season is created /updated/deleted in an Azure Data Manager for Agriculture resource For sensor mapping events, the data object contains following properties: |:--| :-| :-| sensorId| string| ID associated with the sensor. 
partyId| string| ID associated with the party.-boundaryId| string| ID associated with the boundary. sensorPartnerId| string| ID associated with the sensorPartner. | ID | string| Unique ID of resource. actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted eTag| string| Implements optimistic concurrency. description| string| Textual description of the resource. name| string| Name to identify resource. -Boundary events have the following data object: --|Property |Type |Description | -|:|:|:| -| ID | string | User defined ID of boundary | -|actionType | string | Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. | -|modifiedDateTime | string | Indicates the time at which the event was last modified. | -|createdDateTime | string | Indicates the time at which the resource was created. | -|status | string | Contains the user defined status of the object. | -|eTag | string | Implements optimistic concurrency. | -|partyId | string | ID of the party it belongs to. | -|parentId | string | ID of the parent boundary belongs. | -|parentType | string | Type of the parent boundary belongs to. Applicable values are Field, SeasonalField, Zone, Prescription, PlantTissueAnalysis, ApplicationData, PlantingData, TillageData, HarvestData etc. | -|description | string | Textual description of the resource. | -|properties | string | It contains user defined key ΓÇô value pair. | - Seasonal field events have the following data object: Property| Type| Description |:--| :-| :-| ID | string| User defined ID of the seasonal field farmId| string| User defined ID of the farm that seasonal field is associated with.-partyId| string| Id of the party it belongs to. +partyId| string| ID of the party it belongs to. seasonId| string| User defined ID of the season that seasonal field is associated with. 
fieldId| string| User defined ID of the field that seasonal field is associated with. name| string| User defined name of the seasonal field. Insight events have the following data object: Property| Type| Description |:--| :-| :-| modelId| string| ID of the associated model.|-resourceId| string| User-defined ID of the resource such as farm, field, boundary etc.| -resourceType| string | Name of the resource type. Applicable values are Party, Farm, Field, SeasonalField, Boundary etc.| +resourceId| string| User-defined ID of the resource such as farm, field etc.| +resourceType| string | Name of the resource type. Applicable values are Party, Farm, Field, SeasonalField etc.| partyId| string| ID of the party it belongs to.| modelVersion| string| Version of the associated model.| ID | string| User defined ID of the resource.| InsightAttachment events have the following data object: Property| Type| Description |:--| :-| :-| modelId| string| ID of the associated model.-resourceId| string| User-defined ID of the resource such as farm, field, boundary etc. +resourceId| string| User-defined ID of the resource such as farm, field etc. resourceType| string | Name of the resource type. partyId| string| ID of the party it belongs to. insightId| string| ID associated with the insight resource. Property| Type| Description |:--| :-| :-| | ID | string| User defined ID of the field. farmId| string| User defined ID of the farm that field is associated with.-partyId| string| Id of the party it belongs to. +partyId| string| ID of the party it belongs to. name| string| User defined name of the field. actionType| string| Indicates the change that triggered publishing of the event. Applicable values are created, updated, deleted. properties| Object| It contains user defined key-value pairs. AttachmentChanged event has the following data object Property| Type| Description |:--| :-| :-|-resourceId| string| User-defined ID of the resource such as farm, field, boundary etc. 
+resourceId| string| User-defined ID of the resource such as farm, field etc. resourceType| string | Name of the resource type. partyId| string| ID of the party it belongs to. | ID | string| User defined ID of the resource. PrescriptionChanged event has the following data object |Property | Type| Description| |:--| :-| :-| prescriptionMapId|string| User-defined ID of the associated prescription map.-partyId| string|Id of the party it belongs to. +partyId| string|ID of the party it belongs to. | ID | string| User-defined ID of the prescription. actionType| string| Indicates the change triggered during publishing of the event. Applicable values are Created, Updated, Deleted status| string| Contains the user-defined status of the prescription. NutrientAnalysisChanged event has the following data object: |:--| :-| :-| parentId| string| ID of the parent nutrient analysis belongs to. parentType| string| Type of the parent nutrient analysis belongs to. Applicable value(s) are PlantTissueAnalysis.-partyId| string|Id of the party it belongs to. +partyId| string|ID of the party it belongs to. | ID | string| User-defined ID of nutrient analysis. actionType| string| Indicates the change that is triggered during publishing of the event. Applicable values are Created, Updated, Deleted. properties| object| It contains user-defined key-value pairs. |
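As a consumer-side illustration of the data objects above, the following sketch validates a PrescriptionChanged event payload against the documented properties. The field names and `actionType` values come from the tables; the helper name and the choice of which fields to treat as required are illustrative assumptions, not part of the service contract.

```python
# Minimal sketch: checking a PrescriptionChanged data object against the
# properties documented above. Required-field selection is an assumption.

REQUIRED_FIELDS = {"id", "partyId", "prescriptionMapId", "actionType"}
ALLOWED_ACTIONS = {"Created", "Updated", "Deleted"}

def validate_prescription_event(data: dict) -> list[str]:
    """Return a list of problems found in the event's data object."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - data.keys())]
    action = data.get("actionType")
    if action is not None and action not in ALLOWED_ACTIONS:
        problems.append(f"unexpected actionType: {action!r}")
    return problems

sample = {
    "prescriptionMapId": "map-01",
    "partyId": "party-01",
    "id": "rx-01",
    "actionType": "Updated",
    "status": "Active",
}
print(validate_prescription_event(sample))  # → []
```

A handler subscribed through Event Grid could run a check like this before acting on the payload, since `properties` and `status` are free-form and only the documented fields are predictable.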
data-manager-for-agri | How To Use Nutrient Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-use-nutrient-apis.md | Here's how we have modeled tissue analysis in Azure Data Manager for Agriculture * Step 1: Create a **plant tissue analysis** resource for every sample you get tested. * Step 2: For each nutrient that is being tested, create a nutrient analysis resource with plant tissue analysis as parent created in step 1. * Step 3: Upload analysis report from the lab (for example: pdf, xlsx files) as attachment and associate with the 'plant tissue analysis' resource created in step 1. -* Step 4: If you have location (longitude, latitude) data, then create a point boundary with 'plant tissue analysis' as parent created in step 1. +* Step 4: If you have location (longitude, latitude) data, then create a point geometry with 'plant tissue analysis' as parent created in step 1. > [!Note]-> One plant tissue analysis resource is created per sample. One point boundary can be associated with it. +> One plant tissue analysis resource is created per sample. One point geometry can be associated with it. ## Next steps |
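The four modeling steps above can be sketched as plain payload construction, shown without live API calls. The `parentId`/`parentType` wiring for nutrient analyses and the `resourceId`/`resourceType` pairing for the attachment follow the data objects documented elsewhere in this set; the function name and the ID scheme are illustrative assumptions.

```python
# Sketch of the tissue-analysis modeling steps: one plant tissue analysis
# per sample, one nutrient analysis per tested nutrient (parented to the
# sample), and one attachment for the lab report. IDs are illustrative.

def tissue_analysis_payloads(party_id: str, sample_id: str, nutrients: list[str]) -> dict:
    tissue = {"id": sample_id, "partyId": party_id}                    # step 1
    analyses = [                                                       # step 2
        {"id": f"{sample_id}-{n.lower()}",
         "partyId": party_id,
         "parentId": sample_id,
         "parentType": "PlantTissueAnalysis"}
        for n in nutrients
    ]
    attachment = {"resourceId": sample_id,                             # step 3
                  "resourceType": "PlantTissueAnalysis"}
    return {"tissue": tissue, "analyses": analyses, "attachment": attachment}

payloads = tissue_analysis_payloads("party1", "sample1", ["N", "P"])
```

Each dict would then be sent to the corresponding create endpoint, with the point geometry (step 4) added the same way when coordinates are available.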
data-manager-for-agri | How To Write Weather Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-write-weather-extension.md | Hence the extension needs to provide a [**HandleBars template**](https://handleb This section is dedicated for the functionalities/capabilities built by Data Manager for Agriculture. In the case of weather extension, centroid calculation is one such functionality. -When users don't provide the latitude/longitude coordinates, Data Manager for Agriculture will be using the primary boundary of the field (ID passed by user) to compute the centroid. The computed centroid coordinates will be passed as the latitude and longitude to the extension (data provider). Hence for Data Manager for Agriculture to be able to understand the usage of location coordinates the functional parameters section is used. +When users don't provide the latitude/longitude coordinates, Data Manager for Agriculture will be using the primary geometry of the field (ID passed by user) to compute the centroid. The computed centroid coordinates will be passed as the latitude and longitude to the extension (data provider). Hence for Data Manager for Agriculture to be able to understand the usage of location coordinates the functional parameters section is used. For Data Manager for Agriculture to understand the usage of latitude and longitude in the `apiName` input parameters, the extension is expected to provide the `name` of key used for collecting location information followed by a **handlebar template** to imply how the latitude and longitude values need to be passed. |
data-manager-for-agri | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md | Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st [!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)] +## July 2023 ++### Weather API update: +We deprecated the old weather APIs from API version 2023-07-01. The old weather APIs have been replaced with new simple yet powerful provider agnostic weather APIs. Have a look at the API documentation [here](/rest/api/data-manager-for-agri/#weather). ++### New farm operations connector: +We've added support for Climate FieldView as a built-in data source. You can now auto sync planting, application and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md). ++### Common Data Model now with geo-spatial support: +We've updated our data model to improve flexibility. The boundary object has been deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that may not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md). + ## June 2023 ### Use your license keys via key vault |
data-manager-for-agri | Sample Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/sample-events.md | The event samples given on this page represent an event notification. } ```` - 6. **Event type: Microsoft.AgFoodPlatform.BoundaryChangedV2** --````json - { - "data": { - "parentType": "Field", - "partyId": "amparty", - "actionType": "Created", - "modifiedDateTime": "2022-11-01T10:48:14Z", - "eTag": "af005dfc-0000-0700-0000-6360f96e0000", - "id": "amb", - "name": "string", - "description": "string", - "createdDateTime": "2022-11-01T10:48:14Z" - }, - "id": "v2-25fd01cf-72d4-401d-92ee-146de348e815", - "topic": "/subscriptions/{SUBSCRIPTION-ID}/resourceGroups/{RESOURCE-GROUP-NAME}/providers/Microsoft.AgFoodPlatform/farmBeats/{YOUR-RESOURCE-NAME}", - "subject": "/parties/amparty/boundaries/amb", - "eventType": "Microsoft.AgFoodPlatform.BoundaryChangedV2", - "dataVersion": "1.0", - "metadataVersion": "1", - "eventTime": "2022-11-01T10:48:14.2385557Z" - } - ```` - 7. **Event type: Microsoft.AgFoodPlatform.SeasonChanged** ````json { The event samples given on this page represent an event notification. { "data": { "partyId": "contoso-partyId",- "message": "Created job 'sat-ingestion-job-1' to fetch satellite data for boundary 'contoso-boundary' from startDate '08/07/2022' to endDate '10/07/2022' (both inclusive).", + "message": "Created job 'sat-ingestion-job-1' to fetch satellite data for resource 'contoso-field' from startDate '08/07/2022' to endDate '10/07/2022' (both inclusive).", "status": "Running", "lastActionDateTime": "2022-11-07T09:35:23.3141004Z", "isCancellationRequested": false, The event samples given on this page represent an event notification. 
{ "data": { "partyId": "party1",- "message": "Created job 'job-biomass-13sdqwd' to calculate biomass values for boundary 'boundary1' from plantingStartDate '05/03/2020' to inferenceEndDate '10/11/2020' (both inclusive).", + "message": "Created job 'job-biomass-13sdqwd' to calculate biomass values for resource 'field1' from plantingStartDate '05/03/2020' to inferenceEndDate '10/11/2020' (both inclusive).", "status": "Waiting", "lastActionDateTime": "0001-01-01T00:00:00Z", "isCancellationRequested": false, The event samples given on this page represent an event notification. { "data": { "partyId": "party",- "message": "Created job 'job-soilmoisture-sf332q' to calculate soil moisture values for boundary 'boundary' from inferenceStartDate '05/01/2022' to inferenceEndDate '05/20/2022' (both inclusive).", + "message": "Created job 'job-soilmoisture-sf332q' to calculate soil moisture values for resource 'field1' from inferenceStartDate '05/01/2022' to inferenceEndDate '05/20/2022' (both inclusive).", "status": "Waiting", "lastActionDateTime": "0001-01-01T00:00:00Z", "isCancellationRequested": false, The event samples given on this page represent an event notification. { "data": { "modelId": "Microsoft.SoilMoisture",- "resourceType": "Boundary", - "resourceId": "boundary", + "resourceType": "Field", + "resourceId": "fieldId", "modelVersion": "1.0", "partyId": "party", "actionType": "Updated", The event samples given on this page represent an event notification. "data": { "insightId": "f5c2071c-c7ce-05f3-be4d-952a26f2490a", "modelId": "Microsoft.SoilMoisture",- "resourceType": "Boundary", - "resourceId": "boundary", + "resourceType": "Field", + "resourceId": "fieldId", "partyId": "party", "actionType": "Updated", "modifiedDateTime": "2022-11-03T18:21:26Z", The event samples given on this page represent an event notification. 
"data": { "sensorId": "sensor", "partyId": "ContosopartyId",- "boundaryId": "ContosoBoundary", "sensorPartnerId": "sensorpartner", "actionType": "Created", "status": "string", |
databox-online | Azure Stack Edge Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-alerts.md | This article describes how to view alerts and interpret alert severity for event ## Overview -The Alerts blade for an Azure Stack Edge device lets you review Azure Stack Edge device-related alerts in real-time. From this blade, you can centrally monitor the health issues of your Azure Stack Edge devices and the overall Microsoft Azure Azure Stack Edge solution. +The Alerts blade for an Azure Stack Edge device lets you review Azure Stack Edge device-related alerts in real-time. From this blade, you can centrally monitor the health issues of your Azure Stack Edge devices and the overall Microsoft Azure Stack Edge solution. The initial display is a high-level summary of alerts at each severity level. You can drill down to see individual alerts at each severity level. |
databox-online | Azure Stack Edge Gpu 2304 Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2304-release-notes.md | + + Title: Azure Stack Edge 2304 release notes +description: Describes critical open issues and resolutions for the Azure Stack Edge running 2304 release. +++ +++ Last updated : 08/21/2023++++# Azure Stack Edge 2304 release notes +++The following release notes identify the critical open issues and the resolved issues for the 2304 release for your Azure Stack Edge devices. Features and issues that correspond to a specific model of Azure Stack Edge are called out wherever applicable. ++The release notes are continuously updated, and as critical issues requiring a workaround are discovered, they're added. Before you deploy your device, carefully review the information contained in the release notes. ++This article applies to the **Azure Stack Edge 2304** release, which maps to software version **2.2.2257.1193**. ++## Supported update paths ++This software can be applied to your device if you're running **Azure Stack Edge 2207 or later** (2.2.2026.5318). ++You can update to the latest version using the following update paths: ++| Current version | Update to | Then apply | +| --| --| --| +|2205 and earlier |2207 |2304 +|2207 and later |2304 | ++## What's new ++The 2304 release has the following new features and enhancements: ++- **Fix for the Arc connectivity issue** - In the 2303 release, there was an issue with Arc agent where it couldn't connect to the Azure Stack Edge Kubernetes cluster. Owing to this issue, you weren't able to manage the Kubernetes cluster via Arc. ++ The 2304 release fixes the connectivity issue. To manage your Azure Stack Edge Kubernetes cluster via Arc, update to this release. +- Starting March 2023, Azure Stack Edge devices are required to be on the 2301 release or later to create a Kubernetes cluster. 
In preparation for this requirement, it is highly recommended that you update to the latest version as soon as possible. +- You can deploy Azure Kubernetes service (AKS) on an Azure Stack Edge cluster. This feature is supported only for SAP and PMEC customers. For more information, see [Deploy AKS on Azure Stack Edge](azure-stack-edge-deploy-aks-on-azure-stack-edge.md). ++## Issues fixed in this release ++| No. | Feature | Issue | +| | | | +|**1.**|Fix for the Arc connectivity issue |In the 2303 release, there was an issue with Arc agent where it couldn't connect to the Azure Stack Edge Kubernetes cluster. Owing to this issue, you weren't able to manage the Kubernetes cluster via Arc. <BR> The 2304 release fixes the connectivity issue. To manage your Azure Stack Edge Kubernetes cluster via Arc, update to this release. | ++<!--## Known issues in this release ++| No. | Feature | Issue | Workaround/comments | +| | | | | +|**1.**|Need known issues in 2303 |--> ++## Known issues from previous releases ++The following table provides a summary of known issues carried over from the previous releases. ++| No. | Feature | Issue | Workaround/comments | +| | | | | +| **1.** |Azure Stack Edge Pro + Azure SQL | Creating SQL database requires Administrator access. |Do the following steps instead of Steps 1-2 in [Create-the-sql-database](../iot-edge/tutorial-store-data-sql-server.md#create-the-sql-database). <br> 1. In the local UI of your device, enable compute interface. Select **Compute > Port # > Enable for compute > Apply.**<br> 2. Download `sqlcmd` on your client machine from [SQL command utility](/sql/tools/sqlcmd-utility). <br> 3. Connect to your compute interface IP address (the port that was enabled), adding a ",1401" to the end of the address.<br> 4. Final command will look like this: sqlcmd -S {Interface IP},1401 -U SA -P "Strong!Passw0rd". After this, steps 3-4 from the current documentation should be identical. 
| +| **2.** |Refresh| Incremental changes to blobs restored via **Refresh** are NOT supported |For Blob endpoints, partial updates of blobs after a Refresh may result in the updates not getting uploaded to the cloud. For example, sequence of actions such as:<br> 1. Create blob in cloud. Or delete a previously uploaded blob from the device.<br> 2. Refresh blob from the cloud into the appliance using the refresh functionality.<br> 3. Update only a portion of the blob using Azure SDK REST APIs. These actions can result in the updated sections of the blob to not get updated in the cloud. <br>**Workaround**: Use tools such as robocopy, or regular file copy through Explorer or command line, to replace entire blobs.| +|**3.**|Throttling|During throttling, if new writes to the device aren't allowed, writes by the NFS client fail with a "Permission Denied" error.| The error will show as below:<br>`hcsuser@ubuntu-vm:~/nfstest$ mkdir test`<br>mkdir: can't create directory 'test': Permission denied| +|**4.**|Blob Storage ingestion|When using AzCopy version 10 for Blob storage ingestion, run AzCopy with the following argument: `Azcopy <other arguments> --cap-mbps 2000`| If these limits aren't provided for AzCopy, it could potentially send a large number of requests to the device, resulting in issues with the service.| +|**5.**|Tiered storage accounts|The following apply when using tiered storage accounts:<br> - Only block blobs are supported. Page blobs aren't supported.<br> - There's no snapshot or copy API support.<br> - Hadoop workload ingestion through `distcp` isn't supported as it uses the copy operation heavily.|| +|**6.**|NFS share connection|If multiple processes are copying to the same share, and the `nolock` attribute isn't used, you may see errors during the copy.|The `nolock` attribute must be passed to the mount command to copy files to the NFS share. 
For example: `C:\Users\aseuser mount -o anon \\10.1.1.211\mnt\vms Z:`.| +|**7.**|Kubernetes cluster|When applying an update on your device that is running a Kubernetes cluster, the Kubernetes virtual machines will restart and reboot. In this instance, only pods that are deployed with replicas specified are automatically restored after an update. |If you have created individual pods outside a replication controller without specifying a replica set, these pods won't be restored automatically after the device update. You'll need to restore these pods.<br>A replica set replaces pods that are deleted or terminated for any reason, such as node failure or disruptive node upgrade. For this reason, we recommend that you use a replica set even if your application requires only a single pod.| +|**8.**|Kubernetes cluster|Kubernetes on Azure Stack Edge Pro is supported only with Helm v3 or later. For more information, go to [Frequently asked questions: Removal of Tiller](https://v3.helm.sh/docs/faq/).| +|**9.**|Kubernetes |Port 31000 is reserved for Kubernetes Dashboard. Port 31001 is reserved for Edge container registry. Similarly, in the default configuration, the IP addresses 172.28.0.1 and 172.28.0.10, are reserved for Kubernetes service and Core DNS service respectively.|Don't use reserved IPs.| +|**10.**|Kubernetes |Kubernetes doesn't currently allow multi-protocol LoadBalancer services. For example, a DNS service that would have to listen on both TCP and UDP. |To work around this limitation of Kubernetes with MetalLB, two services (one for TCP, one for UDP) can be created on the same pod selector. These services use the same sharing key and spec.loadBalancerIP to share the same IP address. IPs can also be shared if you have more services than available IP addresses. 
<br> For more information, see [IP address sharing](https://metallb.universe.tf/usage/#ip-address-sharing).| +|**11.**|Kubernetes cluster|Existing Azure IoT Edge marketplace modules may require modifications to run on IoT Edge on Azure Stack Edge device.|For more information, see [Run existing IoT Edge modules from Azure Stack Edge Pro FPGA devices on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-modify-fpga-modules-gpu.md).| +|**12.**|Kubernetes |File-based bind mounts aren't supported with Azure IoT Edge on Kubernetes on Azure Stack Edge device.|IoT Edge uses a translation layer to translate `ContainerCreate` options to Kubernetes constructs. Creating `Binds` maps to `hostpath` directory and thus file-based bind mounts can't be bound to paths in IoT Edge containers. If possible, map the parent directory.| +|**13.**|Kubernetes |If you bring your own certificates for IoT Edge and add those certificates on your Azure Stack Edge device after the compute is configured on the device, the new certificates aren't picked up.|To work around this problem, you should upload the certificates before you configure compute on the device. If the compute is already configured, [Connect to the PowerShell interface of the device and run IoT Edge commands](azure-stack-edge-gpu-connect-powershell-interface.md#use-iotedge-commands). Restart `iotedged` and `edgehub` pods.| +|**14.**|Certificates |In certain instances, certificate state in the local UI may take several seconds to update. |The following scenarios in the local UI may be affected. <br> - **Status** column in **Certificates** page. <br> - **Security** tile in **Get started** page. <br> - **Configuration** tile in **Overview** page.<br> | +|**15.**|Certificates|Alerts related to signing chain certificates aren't removed from the portal even after uploading new signing chain certificates.| | +|**16.**|Web proxy |NTLM authentication-based web proxy isn't supported. 
|| +|**17.**|Internet Explorer|If enhanced security features are enabled, you may not be able to access local web UI pages. | Disable enhanced security, and restart your browser.| +|**18.**|Kubernetes |Kubernetes doesn't support ":" in environment variable names that are used by .NET applications. This is also required for Event Grid IoT Edge module to function on Azure Stack Edge device and other applications. For more information, see [ASP.NET core documentation](/aspnet/core/fundamentals/configuration/?tabs=basicconfiguration#environment-variables).|Replace ":" with a double underscore. For more information, see [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/53201)| +|**19.** |Azure Arc + Kubernetes cluster |By default, when resource `yamls` are deleted from the Git repository, the corresponding resources aren't deleted from the Kubernetes cluster. |To allow the deletion of resources when they're deleted from the git repository, set `--sync-garbage-collection` in Arc OperatorParams. For more information, see [Delete a configuration](../azure-arc/kubernetes/tutorial-use-gitops-connected-cluster.md#additional-parameters). | +|**20.**|NFS |Applications that use NFS share mounts on your device to write data should use Exclusive write. That ensures the writes are written to the disk.| | +|**21.**|Compute configuration |Compute configuration fails in network configurations where gateways or switches or routers respond to Address Resolution Protocol (ARP) requests for systems that don't exist on the network.| | +|**22.**|Compute and Kubernetes |If Kubernetes is set up first on your device, it claims all the available GPUs. Hence, it isn't possible to create Azure Resource Manager VMs using GPUs after setting up the Kubernetes. |If your device has 2 GPUs, then you can create one VM that uses the GPU and then configure Kubernetes. In this case, Kubernetes will use the one remaining GPU. 
| +|**23.**|Custom script VM extension |There's a known issue in the Windows VMs that were created in an earlier release and the device was updated to 2103. <br> If you add a custom script extension on these VMs, the Windows VM Guest Agent (Version 2.7.41491.901 only) gets stuck in the update causing the extension deployment to time out. | To work around this issue: <br> 1. Connect to the Windows VM using remote desktop protocol (RDP). <br> 2. Make sure that the `waappagent.exe` is running on the machine: `Get-Process WaAppAgent`. <br> 3. If the `waappagent.exe` isn't running, restart the `rdagent` service: `Get-Service RdAgent` \| `Restart-Service`. Wait for 5 minutes.<br> 4. While the `waappagent.exe` is running, kill the `WindowsAzureGuest.exe` process. <br> 5. After you kill the process, the process starts running again with the newer version. <br> 6. Verify that the Windows VM Guest Agent version is 2.7.41491.971 using this command: `Get-Process WindowsAzureGuestAgent` \| `fl ProductVersion`.<br> 7. [Set up custom script extension on Windows VM](azure-stack-edge-gpu-deploy-virtual-machine-custom-script-extension.md). | +|**24.**|Multi-Process Service (MPS) |When the device software and the Kubernetes cluster are updated, the MPS setting isn't retained for the workloads. |[Re-enable MPS](azure-stack-edge-gpu-connect-powershell-interface.md#connect-to-the-powershell-interface) and redeploy the workloads that were using MPS. | +|**25.**|Wi-Fi |Wi-Fi doesn't work on Azure Stack Edge Pro 2 in this release. | +|**26.**|Azure IoT Edge |The managed Azure IoT Edge solution on Azure Stack Edge is running on an older, obsolete IoT Edge runtime that is at end of life. For more information, see [IoT Edge v1.1 EoL: What does that mean for me?](https://techcommunity.microsoft.com/t5/internet-of-things-blog/iot-edge-v1-1-eol-what-does-that-mean-for-me/ba-p/3662137). Although the solution does not stop working past end of life, there are no plans to update it. 
|To run the latest version of Azure IoT Edge [LTSs](../iot-edge/version-history.md#version-history) with the latest updates and features on their Azure Stack Edge, we **recommend** that you deploy a [customer self-managed IoT Edge solution](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md) that runs on a Linux VM. For more information, see [Move workloads from managed IoT Edge on Azure Stack Edge to an IoT Edge solution on a Linux VM](azure-stack-edge-move-to-self-service-iot-edge.md). | +|**27.**|AKS on Azure Stack Edge |When you update your AKS on Azure Stack Edge deployment from a previous preview version to 2303 release, there is an additional nodepool rollout. |The update may take longer. | +|**28.**|Azure portal |When the Arc deployment fails in this release, you will see a generic *NO PARAM* error code, as all the errors are not propagated in the portal. |There is no workaround for this behavior in this release. | +|**29.**|AKS on Azure Stack Edge |In this release, you can't modify the virtual networks once the AKS cluster is deployed on your Azure Stack Edge cluster.| To modify the virtual network, you will need to delete the AKS cluster, then modify virtual networks, and then recreate AKS cluster on your Azure Stack Edge. | +|**30.**|AKS on Azure Stack Edge |In this release, attaching the PVC takes a long time. As a result, some pods that use persistent volumes (PVs) come up slowly after the host reboots. |A workaround is to restart the nodepool VM by connecting via the Windows PowerShell interface of the device. | ++## Next steps ++- [Update your device](azure-stack-edge-gpu-install-update.md) |
databox-online | Azure Stack Edge Gpu Disconnected Scenario | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-disconnected-scenario.md | Before you disconnect your Azure Stack Edge device from the network that allows For Kubernetes deployment guidance, see [Choose the deployment type](azure-stack-edge-gpu-kubernetes-workload-management.md#choose-the-deployment-type). For IoT Edge deployment guidance, see [Run a compute workload with IoT Edge module on Azure Stack Edge](azure-stack-edge-gpu-deploy-compute-module-simple.md). > [!NOTE]- > Some workloads running in VMs, Kerberos, and IoT Edge may require connectivity to Azure. For example, some cognitive services require connectivity for billing. + > Some workloads running in VMs, Kerberos, and IoT Edge may require connectivity to Azure. For example, some Azure AI services require connectivity for billing. ## Key differences for disconnected use |
databox-online | Azure Stack Edge Gpu Install Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md | The procedure described in this article was performed using a different version ## About latest updates -The current update is Update 2303. This update installs two updates, the device update followed by Kubernetes updates. +The current update is Update 2304. This update installs two updates, the device update followed by Kubernetes updates. The associated versions for this update are: -- Device software version: Azure Stack Edge 2303 (2.2.2257.1113)-- Device Kubernetes version: Azure Stack Kubernetes Edge 2303 (2.2.2257.1113)+- Device software version: Azure Stack Edge 2304 (2.2.2257.1193) +- Device Kubernetes version: Azure Stack Kubernetes Edge 2304 (2.2.2257.1193) - Kubernetes server version: v1.24.6 - IoT Edge version: 0.1.0-beta15-- Azure Arc version: 1.8.14+- Azure Arc version: 1.10.6 - GPU driver version: 515.65.01 - CUDA version: 11.7 -For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2303-release-notes.md). +For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2304-release-notes.md). -**To apply 2303 update, your device must be running version 2207 or later.** +**To apply 2304 update, your device must be running version 2207 or later.** - If you are not running the minimum required version, you'll see this error: *Update package cannot be installed as its dependencies are not met.* -- You can update to 2207 from 2106 or later, and then install 2303.+- You can update to 2207 from 2106 or later, and then install 2304. ### Update Azure Kubernetes service on Azure Stack Edge > [!IMPORTANT] > Use the following procedure only if you are an SAP or a PMEC customer. 
-If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2303. +If you have Azure Kubernetes service deployed and your Azure Stack Edge device and Kubernetes versions are either 2207 or 2209, you must update in multiple steps to apply 2304. -Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2303: +Use the following steps to update your Azure Stack Edge version and Kubernetes version to 2304: -1. Update your device version to 2303. +1. Update your device version to 2304. 1. Update your Kubernetes version to 2210.-1. Update your Kubernetes version to 2303. +1. Update your Kubernetes version to 2304. -If you are running 2210, you can update both your device version and Kubernetes version directly to 2303. +If you are running 2210, you can update both your device version and Kubernetes version directly to 2304. -In Azure portal, the process will require two clicks, the first update gets your device version to 2303 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2303. +In Azure portal, the process will require two clicks, the first update gets your device version to 2304 and your Kubernetes version to 2210, and the second update gets your Kubernetes version upgraded to 2304. -From the local UI, you will have to run each update separately: update the device version to 2303, then update Kubernetes version to 2210, and then update Kubernetes version to 2303. +From the local UI, you will have to run each update separately: update the device version to 2304, then update Kubernetes version to 2210, and then update Kubernetes version to 2304. ### Updates for a single-node vs two-node |
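The multi-step sequencing described above can be sketched as a small helper. The version strings and ordering come straight from the article; the function name and step labels are illustrative.

```python
# Sketch of the 2304 update sequencing for AKS-on-Azure Stack Edge
# deployments (SAP/PMEC only), per the guidance above.

def aks_update_steps(current: str) -> list[str]:
    """Return the ordered updates needed to reach 2304."""
    if current in ("2207", "2209"):
        # Device first, then Kubernetes in two hops.
        return ["device -> 2304", "kubernetes -> 2210", "kubernetes -> 2304"]
    if current == "2210":
        # Both device and Kubernetes can go directly to 2304.
        return ["device -> 2304", "kubernetes -> 2304"]
    raise ValueError(f"unsupported starting version: {current}")

print(aks_update_steps("2207"))
# → ['device -> 2304', 'kubernetes -> 2210', 'kubernetes -> 2304']
```

In the Azure portal the same sequence collapses into two clicks, while the local UI requires running each step separately.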
databox-online | Azure Stack Edge Move To Self Service Iot Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-move-to-self-service-iot-edge.md | The high-level workflow is as follows: 1. Optional: If you have leaf IoT devices communicating with IoT Edge on Kubernetes, this step documents how to make changes to communicate with the IoT Edge on a VM. -## Step 1. Create an IoT Edge device on Linux using symmetric keys +## Step 1: Create an IoT Edge device on Linux using symmetric keys Create and provision an IoT Edge device on Linux using symmetric keys. For detailed steps, see [Create and provision an IoT Edge device on Linux using symmetric keys](../iot-edge/how-to-provision-single-device-linux-symmetric.md). -## Step 2. Install and provision an IoT Edge on a Linux VM +## Step 2: Install and provision an IoT Edge on a Linux VM Follow the steps at [Deploy IoT Edge on an Ubuntu VM on Azure Stack Edge](azure-stack-edge-gpu-deploy-iot-edge-linux-vm.md). For other supported Linux distributions, see [Linux containers](../iot-edge/support.md). -## Step 3. Deploy Azure IoT Edge modules from the Azure portal +## Step 3: Deploy Azure IoT Edge modules from the Azure portal Deploy Azure IoT modules to the new IoT Edge. For detailed steps, see [Deploy Azure IoT Edge modules from the Azure portal](../iot-edge/how-to-deploy-modules-portal.md). With the latest IoT Edge version, you can deploy your IoT Edge modules at scale. For more information, see [Deploy IoT Edge modules at scale using the Azure portal](../iot-edge/how-to-deploy-at-scale.md). -## Step 4. Remove Azure IoT Edge modules +## Step 4: Remove Azure IoT Edge modules Once your modules are successfully running on the new IoT Edge instance running on a VM, you can delete the whole IoT Edge device associated with that IoT Edge instance. From IoT Hub on the Azure portal, delete the IoT Edge device connected to the IoT Edge, as shown below. 
![Screenshot showing delete IoT Edge device from IoT Edge instance in Azure portal UI.](media/azure-stack-edge-move-to-self-service-iot-edge/azure-stack-edge-delete-iot-edge-device.png) -## Step 5. Optional: Remove the IoT Edge service +## Step 5: Optional: Remove the IoT Edge service If you aren't using the Kubernetes cluster on Azure Stack Edge, use the following steps to [remove the IoT Edge service](azure-stack-edge-gpu-manage-compute.md#remove-iot-edge-service). This action will remove modules running on the IoT Edge device, the IoT Edge runtime, and the Kubernetes cluster that hosts the IoT Edge runtime. From the Azure Stack Edge resource on Azure portal, under the Azure IoT Edge ser > [!IMPORTANT] > Once the Kubernetes cluster is removed, there is no way to recover information from the Kubernetes cluster, whether it's IoT Edge-related or not. -## Step 6. Optional: Configure an IoT Edge device as a transparent gateway +## Step 6: Optional: Configure an IoT Edge device as a transparent gateway If your IoT Edge device on Azure Stack Edge was configured as a gateway for downstream IoT devices, you must configure the IoT Edge running on the Linux VM as a transparent gateway. For more information, see [Configure and IoT Edge device as a transparent gateway](../iot-edge/how-to-create-transparent-gateway.md). |
databox-online | Azure Stack Edge Technical Specifications Power Cords Regional | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-technical-specifications-power-cords-regional.md | Use the following table to find the correct cord specifications for your region: |China|250|10|RVV300/500 3X0.75|GB 2099.1|C13|2000| |Colombia|125|10|SVE 18/3|NEMA 5-15P|C13|1830| |Costa Rica|125|10|SVE 18/3|NEMA 5-15P|C13|1830|-|Côte D'Ivoire (Ivory Coast)|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| +|Côte D'Ivoire|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Croatia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Cyprus|250|5|H05VV-F 3x0.75|BS1363 SS145/A|C13|1800| |Czech Republic|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| Use the following table to find the correct cord specifications for your region: |Lithuania|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Luxembourg|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Macao Special Administrative Region|2250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|-|Macedonia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Malaysia|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Malta|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Mauritius|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| Use the following table to find the correct cord specifications for your region: |New Zealand|250|10|H05VV-F 3x1.00|AS/NZS 3112|C13|2438| |Nicaragua|125|10|SVE 18/3|NEMA 5-15P|C13|1830| |Nigeria|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800|+|North Macedonia|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Norway|250|10|H05Z1Z1 3x0.75|CEE 7|C13|1830| |Oman|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |Pakistan|250|5|H05VV-F 3x0.75|BS 1363 / SS145/A|C13|1800| |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
databox | Data Box System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-system-requirements.md | The software requirements include supported operating systems, file transfer pro [!INCLUDE [data-box-supported-file-systems-clients](../../includes/data-box-supported-file-systems-clients.md)] -> [!IMPORTANT] -> Connection to Data Box shares is not supported via REST for export orders. -+> [!IMPORTANT] +> Connection to Data Box shares is not supported via REST for export orders. +> Transporting data from on-premises NFS clients into Data Box using NFSv4 is supported. However, to copy data from Data Box to Azure, Data Box supports only REST-based transport. Azure file share with NFSv4.1 does not support REST for data access/transfer. ### Supported storage accounts > [!Note] The following table lists the ports that need to be opened in your firewall to a ## Next steps * [Deploy your Azure Data Box](data-box-deploy-ordered.md)+ |
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
databox | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
ddos-protection | Ddos Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md | Distributed denial of service (DDoS) attacks are some of the largest availabilit Azure DDoS Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes. Azure DDoS Protection protects at layer 3 and layer 4 network layers. For web applications protection at layer 7, you need to add protection at the application layer using a WAF offering. For more information, see [Application DDoS protection](../web-application-firewall/shared/application-ddos-protection.md). -## Tiers +## Azure DDoS Protection: Tiers ### DDoS Network Protection Azure DDoS Network Protection, combined with application design best practices, DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added -For more information about the tiers, see [Tier comparison](ddos-protection-sku-comparison.md). -## Key benefits +For more information about the tiers, see [DDoS Protection tier comparison](ddos-protection-sku-comparison.md). +## Azure DDoS Protection: Key Features -### Always-on traffic monitoring +- **Always-on traffic monitoring:** Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. Azure DDoS Protection instantly and automatically mitigates the attack, once it's detected. 
-### Adaptive real time tuning +- **Adaptive real time tuning:** Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time. -### DDoS Protection telemetry, monitoring, and alerting +- **DDoS Protection analytics, metrics, and alerting:** Azure DDoS Protection applies three auto-tuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.+ - **Attack analytics:** +Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more. ++ - **Attack metrics:** + Summarized metrics from each attack are accessible through Azure Monitor. See [View and configure DDoS protection telemetry](telemetry.md) to learn more. ++ - **Attack alerting:** + Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal. See [View and configure DDoS protection alerts +](alerts.md) to learn more. 
-### Azure DDoS Rapid Response +- **Azure DDoS Rapid Response:** During an active attack, Azure DDoS Protection customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation during an attack and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md). -### Native platform integration +- **Native platform integration:** Natively integrated into Azure. Includes configuration through the Azure portal. Azure DDoS Protection understands your resources and resource configuration. -### Turnkey protection +- **Turnkey protection:** Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Network Protection is enabled. No intervention or user definition is required. Similarly, simplified configuration immediately protects a public IP resource when DDoS IP Protection is enabled for it. -### Multi-Layered protection +- **Multi-Layered protection:** When deployed with a web application firewall (WAF), Azure DDoS Protection protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=/azure/virtual-network/toc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall). -### Extensive mitigation scale +- **Extensive mitigation scale:** All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks. -### Attack analytics -Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. 
Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors/azure-ddos-protection.md) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more. --### Attack metrics - Summarized metrics from each attack are accessible through Azure Monitor. See [View and configure DDoS protection telemetry](telemetry.md) to learn more. --### Attack alerting - Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal. See [View and configure DDoS protection alerts -](alerts.md) to learn more. --### Cost guarantee +- **Cost guarantee:** Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks. -## Architecture +## Azure DDoS Protection: Architecture Azure DDoS Protection is designed for [services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default infrastructure-level DDoS protection applies, which defends against common network-layer attacks. To learn more about supported architectures, see [DDoS Protection reference architectures](./ddos-protection-reference-architectures.md). For DDoS IP Protection, there's no need to create a DDoS protection plan. Custom To learn about Azure DDoS Protection pricing, see [Azure DDoS Protection pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). +## Best Practices for DDoS Protection +Maximize the effectiveness of your DDoS protection strategy by following these best practices: + +- Design your applications and infrastructure with redundancy and resilience in mind. 
+- Implement a multi-layered security approach, including network, application, and data protection. +- Prepare an incident response plan to ensure a coordinated response to DDoS attacks. ++To learn more about best practices, see [Fundamental best practices](./fundamental-best-practices.md). + ## DDoS Protection FAQ For frequently asked questions, see the [DDoS Protection FAQ](ddos-faq.yml). |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
dedicated-hsm | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/troubleshoot.md | Only when fully finished with an HSM can it be deprovisioned and then Microsoft **DO NOT DELETE the Resource Group of your Dedicated HSM directly. It will not delete the HSM resource, you will continue to be billed as it places the HSM into a orphaned state. If did not follow correct procedures and end up in this situation, contact Microsoft Support.** -**Step 1** Zeorize the HSM. The Azure resource for an HSM cannot be deleted unless the HSM is in a "zeroized" state. Hence, all key material must have been deleted prior to trying to delete it as a resource. The quickest way to zeroize is to get the HSM admin password wrong 3 times (note: this refers to the HSM admin and not appliance level admin). Use command 'hsm login' and enter wrong password three times. The Luna shell does have a hsm -factoryreset command that zeroizes the HSM but it can only be executed via console on the serial port and customers do not have access to this. +**Step 1:** Zeroize the HSM. The Azure resource for an HSM cannot be deleted unless the HSM is in a "zeroized" state. Hence, all key material must have been deleted prior to trying to delete it as a resource. The quickest way to zeroize is to get the HSM admin password wrong 3 times (note: this refers to the HSM admin and not appliance level admin). Use command 'hsm login' and enter wrong password three times. The Luna shell does have a hsm -factoryreset command that zeroizes the HSM but it can only be executed via console on the serial port and customers do not have access to this.
-**Step 2** Once HSM is zeroized, you can use either of the following commands to initiate the Delete Dedicated HSM resource +**Step 2:** Once HSM is zeroized, you can use either of the following commands to initiate the Delete Dedicated HSM resource > **Azure CLI**: az dedicated-hsm delete --resource-group \<RG name\> --name \<HSM name\> <br /> > **Azure PowerShell**: Remove-AzDedicatedHsm -Name \<HSM name\> -ResourceGroupName \<RG name\> -**Step 3** Once step 2 is successful, you can delete the resource group to delete the other resources associated with the dedicated HSM by using either Azure CLI or Azure PowerShell. +**Step 3:** Once **Step 2** is successful, you can delete the resource group to delete the other resources associated with the dedicated HSM by using either Azure CLI or Azure PowerShell. > **Azure CLI**: az group delete --name \<RG name\> <br /> > **Azure PowerShell**: Remove-AzResourceGroup -Name \<RG name\> |
defender-for-cloud | Adaptive Application Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md | To edit the rules for a group of machines: ![Add a custom rule.](./media/adaptive-application/adaptive-application-add-custom-rule.png) 1. If you're defining a known safe path, change the **Rule type** to 'Path' and enter a single path. You can include wildcards in the path. The following screens show some examples of how to use wildcards.- + :::image type="content" source="media/adaptive-application/wildcard-examples.png" alt-text="Screenshot that shows examples of using wildcards." lightbox="media/adaptive-application/wildcard-examples.png":::- + > [!TIP] > Some scenarios for which wildcards in a path might be useful: > On this page, you learned how to use adaptive application control in Microsoft D - [Understanding just-in-time (JIT) VM access](just-in-time-access-overview.md) - [Securing your Azure Kubernetes clusters](defender-for-kubernetes-introduction.md)-- View common question about [Adaptive application controls](faq-defender-for-servers.yml)+- View common question about [Adaptive application controls](faq-defender-for-servers.yml) |
defender-for-cloud | Advanced Configurations For Malware Scanning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/advanced-configurations-for-malware-scanning.md | Title: Microsoft Defender for Storage - advanced configurations for malware scanning description: Learn about the advanced configurations of Microsoft Defender for Storage malware scanning Previously updated : 08/08/2023 Last updated : 08/21/2023 -# Advanced configurations for malware scanning +# Advanced configurations for malware scanning Malware Scanning can be configured to send scanning results to the following: This configuration can be performed using REST API as well: Request URL: -``` +```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current/providers/Microsoft.Insights/diagnosticSettings/service?api-version=2021-05-01-preview ```+ Request Body: -``` +```rest { "properties": { "workspaceId": "/subscriptions/{subscriptionId}/resourcegroups/{resourceGroup}/providers/microsoft.operationalinsights/workspaces/{workspaceName}", For each storage account enabled with malware scanning, you can configure to sen 1. To configure the Event Grid custom topic destination, go to the relevant storage account, open the **Microsoft Defender for Cloud** tab, and select the settings to configure. > [!NOTE]-> When you set an Event Grid custom topic, you should set **Override Defender for Storage subscription-level settingsΓÇ¥ to **On** to make sure it overrides the subscription-level settings. +> When you set an Event Grid custom topic, you should set **Override Defender for Storage subscription-level settings** to **On** to make sure it overrides the subscription-level settings. 
:::image type="content" source="media/azure-defender-storage-configure/event-grid-settings.png" alt-text="Screenshot that shows where to enable an Event Grid destination for scan logs." lightbox="media/azure-defender-storage-configure/event-grid-settings.png"::: This configuration can be performed using REST API as well: Request URL: -``` +```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview ``` Request Body: -``` +```rest { "properties": { "isEnabled": true, Request Body: } } ```+ ## Override Defender for Storage subscription-level settings The subscription-level settings inherit Defender for Storage settings on each storage account in the subscription. Use Override Defender for Storage subscription-level settings to configure settings for individual storage accounts different from those configured on the subscription level. Overriding the settings of the subscriptions are usually used for the following - Configure custom settings for Malware Scanning. - Disable Microsoft Defender for Storage on specific storage accounts. -> [!NOTE] +> [!NOTE] > We recommend that you enable Defender for Storage on the entire subscription to protect all existing and future storage accounts in it. However, there are some cases where you would want to exclude specific storage accounts from Defender protection. If you've decided to exclude, follow the steps below to use the override setting and then disable the relevant storage account. If you are using Defender for Storage (classic), you can also [exclude storage accounts](defender-for-storage-classic-enable.md). ### Azure portal To configure the settings of individual storage accounts different from those co 1. 
To adjust the monthly threshold for malware scanning in your storage accounts, you can modify the parameter called **Set limit of GB scanned per month** to your desired value. This parameter determines the maximum amount of data that can be scanned for malware each month, specifically for each storage account. If you wish to allow unlimited scanning, you can uncheck this parameter. By default, the limit is set at 5,000 GB. - 1. To disable Defender for Storage on this storage account, set the status of Microsoft Defender for Storage to **Off**. :::image type="content" source="media/azure-defender-storage-configure/defender-for-storage-settings.png" alt-text="Screenshot that shows where to turn off Defender for Storage in the Azure portal." lightbox="media/azure-defender-storage-configure/defender-for-storage-settings.png"::: Create a PUT request with this endpoint. Replace the subscriptionId, resourceGro Request URL: -``` +```rest PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Storage/storageAccounts/{accountName}/providers/Microsoft.Security/DefenderForStorageSettings/current?api-version=2022-12-01-preview ``` Request Body: -``` +```rest { "properties": { "isEnabled": true, Request Body: 1. To disable Defender for Storage on this storage accounts, use the following request body: - ``` + ```rest { "properties": { "isEnabled": false, Make sure you add the parameter `overrideSubscriptionLevelSettings` and its valu ## Next steps -Learn more about [malware scanning settings](defender-for-storage-malware-scan.md). +Learn more about [malware scanning settings](defender-for-storage-malware-scan.md). |
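The per-account settings PUT described in this entry can be assembled programmatically. Below is a minimal Python sketch, not an official client: the subscription, resource group, and account names are placeholders, authentication against `management.azure.com` is out of scope, and the body carries only the two properties this entry documents (`isEnabled` and `overrideSubscriptionLevelSettings`).

```python
import json

API_VERSION = "2022-12-01-preview"

def build_settings_put(subscription_id, resource_group, account_name,
                       is_enabled=True, override=True):
    """Build the URL and JSON body for the Defender for Storage settings PUT."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Storage/storageAccounts/{account_name}"
        "/providers/Microsoft.Security/DefenderForStorageSettings/current"
        f"?api-version={API_VERSION}"
    )
    body = {
        "properties": {
            "isEnabled": is_enabled,
            # Must be true so the per-account settings win over the
            # subscription-level settings, as described above.
            "overrideSubscriptionLevelSettings": override,
        }
    }
    return url, json.dumps(body)

# Placeholder identifiers -- substitute your own before sending the request.
url, payload = build_settings_put(
    "00000000-0000-0000-0000-000000000000", "my-rg", "mystorageaccount")
print(url)
```

Sending the request (with a bearer token) is left to whichever HTTP client you already use.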
defender-for-cloud | Agentless Container Registry Vulnerability Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-container-registry-vulnerability-assessment.md | In every subscription where this capability is enabled, all images stored in ACR Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerability Management) has the following capabilities: -- **Scanning OS packages** - container vulnerability assessment has the ability to scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-imagespowered-by-mdvm).-- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-matrix-defender-for-containers.md#registries-and-imagespowered-by-mdvm). +- **Scanning OS packages** - container vulnerability assessment has the ability to scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-for-azurepowered-by-mdvm). +- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [complete list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-for-azurepowered-by-mdvm). - **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. 
Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services). - **Exploitability information** - Each vulnerability report is searched through exploitability databases to assist our customers with determining actual risk associated with each reported vulnerability. - **Reporting** - Container Vulnerability Assessment for Azure powered by Microsoft Defender Vulnerability Management (MDVM) provides vulnerability reports using following recommendations: Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi | [Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 | - **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg).-- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. 
See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP).+- **Query vulnerability information via subassessment API** - You can get scan results via [REST API](subassessment-rest-api.md). - **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md). - **Support for disabling vulnerabilities** - Learn how to [disable vulnerabilities on images](disable-vulnerability-findings-containers.md). |
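As a small illustration of consuming such query results, the Python sketch below triages a mock list of findings by severity. The record shape is invented for the example only; the real schema is defined by the subassessment REST API.

```python
# Hypothetical finding records -- NOT the real subassessment API schema.
def high_severity_findings(findings):
    """Return only High-severity findings, sorted by CVE id for stable review order."""
    return sorted(
        (f for f in findings if f["severity"] == "High"),
        key=lambda f: f["id"],
    )

sample = [
    {"id": "CVE-2023-1111", "severity": "High"},
    {"id": "CVE-2023-0001", "severity": "Low"},
    {"id": "CVE-2022-9999", "severity": "High"},
]
print([f["id"] for f in high_severity_findings(sample)])
# -> ['CVE-2022-9999', 'CVE-2023-1111']
```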
defender-for-cloud | Alerts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md | +- Alerts are displayed in the portal for 90 days, even if the resource related to the alert was deleted during that time. This is because the alert might indicate a potential breach to your organization that needs to be further investigated. - Alerts can be exported to CSV format. - Alerts can also be streamed directly to a Security Information and Event Management (SIEM) such as Microsoft Sentinel, Security Orchestration Automated Response (SOAR), or IT Service Management (ITSM) solution. - Defender for Cloud leverages the [MITRE Attack Matrix](https://attack.mitre.org/matrices/enterprise/) to associate alerts with their perceived intent, helping formalize security domain knowledge. Alerts have a severity level assigned to help prioritize how to attend to each a - The specific trigger - The confidence level that there was malicious intent behind the activity that led to the alert - | Severity | Recommended response | |-|| | **High** | There is a high probability that your resource is compromised. You should look into it right away. Defender for Cloud has high confidence in both the malicious intent and in the findings used to issue the alert. For example, an alert that detects the execution of a known malicious tool such as Mimikatz, a common tool used for credential theft. | As the breath of threat coverage grows, so does the need to detect even the slig In the cloud, attacks can occur across different tenants, Defender for Cloud can combine AI algorithms to analyze attack sequences that are reported on each Azure subscription. This technique identifies the attack sequences as prevalent alert patterns, instead of just being incidentally associated with each other. -During an investigation of an incident, analysts often need extra context to reach a verdict about the nature of the threat and how to mitigate it. 
For example, even when a network anomaly is detected, without understanding what else is happening on the network or with regard to the targeted resource, it's difficult to understand what actions to take next. To help, a security incident can include artifacts, related events, and information. The additional information available for security incidents varies, depending on the type of threat detected and the configuration of your environment. -+During an investigation of an incident, analysts often need extra context to reach a verdict about the nature of the threat and how to mitigate it. For example, even when a network anomaly is detected, without understanding what else is happening on the network or with regard to the targeted resource, it's difficult to understand what actions to take next. To help, a security incident can include artifacts, related events, and information. The additional information available for security incidents varies, depending on the type of threat detected and the configuration of your environment. ### Correlating alerts into incidents Defender for Cloud correlates alerts and contextual signals into incidents. <a name="detect-threats"> </a> -## How does Defender for Cloud detect threats? +## How does Defender for Cloud detect threats? To detect real threats and reduce false positives, Defender for Cloud monitors resources, collects, and analyzes data for threats, often correlating data from multiple sources. You have a range of options for viewing your alerts outside of Defender for Clou - **Download CSV report** on the alerts dashboard provides a one-time export to CSV. - **Continuous export** from Environment settings allows you to configure streams of security alerts and recommendations to Log Analytics workspaces and Event Hubs. [Learn more](continuous-export.md).-- **Microsoft Sentinel connector** streams security alerts from Microsoft Defender for Cloud into Microsoft Sentinel. 
[Learn more ](../sentinel/connect-azure-security-center.md).+- **Microsoft Sentinel connector** streams security alerts from Microsoft Defender for Cloud into Microsoft Sentinel. [Learn more](../sentinel/connect-azure-security-center.md). Learn about [streaming alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md) and how to [continuously export data](continuous-export.md). - ## Next steps In this article, you learned about the different types of alerts available in Defender for Cloud. For more information, see: |
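The severity guidance in this entry implies a simple triage order once alerts are exported (for example, through the CSV download or continuous export). A minimal Python sketch with mock alert records, not the real export schema; the full severity set assumed here (High, Medium, Low, Informational) should be checked against the alerts documentation.

```python
# Mock alert records -- exported Defender for Cloud alerts have many more fields.
SEVERITY_RANK = {"High": 0, "Medium": 1, "Low": 2, "Informational": 3}

def triage_order(alerts):
    """Sort alerts so the highest-severity ones are reviewed first."""
    return sorted(
        alerts,
        key=lambda a: SEVERITY_RANK.get(a["severity"], len(SEVERITY_RANK)),
    )

alerts = [
    {"name": "Unusual outbound traffic", "severity": "Low"},
    {"name": "Known malicious tool executed", "severity": "High"},
    {"name": "Suspicious sign-in pattern", "severity": "Medium"},
]
print([a["severity"] for a in triage_order(alerts)])
# -> ['High', 'Medium', 'Low']
```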
defender-for-cloud | Alerts Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md | Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in | **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines are not equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium | | **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low | | **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. 
| Execution | High |-| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | +| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | | **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low | | **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. 
Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium | | **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium | | **Suspicious usage of VMAccess extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VMAccess extension was detected on your virtual machines. Attackers may abuse the VMAccess extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Persistence | Medium |-| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. 
The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | +| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | | **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low | | **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd) | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. 
The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | | **Suspicious failed execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousFailure) | Suspicious failure of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such failures may be associated with malicious scripts run by this extension. | Execution | Medium | | **Unusual deletion of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualDeletion) | Unusual deletion of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | | **Unusual execution of custom script extension in your virtual machine**<br>(VM_CustomScriptExtensionUnusualExecution) | Unusual execution of a custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | | **Custom script extension with suspicious entry-point in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousEntryPoint) | Custom script extension with a suspicious entry-point was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. The entry-point refers to a suspicious GitHub repository. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. 
| Execution | Medium |-| **Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | - +| **Custom script extension with suspicious payload in your virtual machine**<br>(VM_CustomScriptExtensionSuspiciousPayload) | Custom script extension with a payload from a suspicious GitHub repository was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use custom script extensions to execute malicious code on your virtual machines via the Azure Resource Manager. | Execution | Medium | + ## <a name="alerts-azureappserv"></a>Alerts for Azure App Service [Further details and notes](defender-for-app-service-introduction.md) Microsoft Defender for Containers provides security alerts on the cluster level |**Suspicious external access to an Azure storage account with overly permissive SAS token (Preview)**<br>Storage.Blob_AccountSas.InternalSasUsedExternally| The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. This type of access is considered suspicious because the SAS token is typically only used in internal networks (from private IP addresses). <br>The activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. 
<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium | |**Suspicious external operation to an Azure storage account with overly permissive SAS token (Preview)**<br>Storage.Blob_AccountSas.UnusualOperationFromExternalIp| The alert indicates that someone with an external (public) IP address accessed the storage account using an overly permissive SAS token with a long expiration date. The access is considered suspicious because operations invoked outside your network (not from private IP addresses) with this SAS token are typically used for a specific set of Read/Write/Delete operations, but other operations occurred, which makes this access suspicious. <br>This activity may indicate that a SAS token has been leaked by a malicious actor or leaked unintentionally from a legitimate source. <br>Even if the access is legitimate, using a high-permission SAS token with a long expiration date goes against security best practices and poses a potential security risk. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Medium | |**Unusual SAS token was used to access an Azure storage account from a public IP address (Preview)**<br>Storage.Blob_AccountSas.UnusualExternalAccess| The alert indicates that someone with an external (public) IP address has accessed the storage account using an account SAS token. The access is highly unusual and considered suspicious, as access to the storage account using SAS tokens typically comes only from internal (private) IP addresses. <br>It's possible that a SAS token was leaked or generated by a malicious actor either from within your organization or externally to gain access to this storage account. 
<br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan. | Exfiltration / Resource Development / Impact | Low |-|**Malicious file uploaded to storage account (Preview)**<br>Storage.Blob_AM.MalwareFound| The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | LateralMovement | High | +|**Malicious file uploaded to storage account**<br>Storage.Blob_AM.MalwareFound| The alert indicates that a malicious blob was uploaded to a storage account. This security alert is generated by the Malware Scanning feature in Defender for Storage. <br>Potential causes may include an intentional upload of malware by a threat actor or an unintentional upload of a malicious file by a legitimate user. <br>Applies to: Azure Blob (Standard general-purpose v2, Azure Data Lake Storage Gen2 or premium block blobs) storage accounts with the new Defender for Storage plan with the Malware Scanning feature enabled. | LateralMovement | High | ## <a name="alerts-azurecosmos"></a>Alerts for Azure Cosmos DB Microsoft Defender for Containers provides security alerts on the cluster level | **Access from a suspicious IP**<br>(CosmosDB_SuspiciousIp) | This Azure Cosmos DB account was successfully accessed from an IP address that was identified as a threat by Microsoft Threat Intelligence. 
| Initial Access | Medium | | **Access from an unusual location**<br>(CosmosDB_GeoAnomaly) | This Azure Cosmos DB account was accessed from a location considered unfamiliar, based on the usual access pattern. <br><br> Either a threat actor has gained access to the account, or a legitimate user has connected from a new or unusual geographic location | Initial Access | Low | | **Unusual volume of data extracted**<br>(CosmosDB_DataExfiltrationAnomaly) | An unusually large volume of data has been extracted from this Azure Cosmos DB account. This might indicate that a threat actor exfiltrated data. | Exfiltration | Medium |-| **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. <br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | High | +| **Extraction of Azure Cosmos DB accounts keys via a potentially malicious script**<br>(CosmosDB_SuspiciousListKeys.MaliciousScript) | A PowerShell script was run in your subscription and performed a suspicious pattern of key-listing operations to get the keys of Azure Cosmos DB accounts in your subscription. Threat actors use automated scripts, like Microburst, to list keys and find Azure Cosmos DB accounts they can access. 
<br><br> This operation might indicate that an identity in your organization was breached, and that the threat actor is trying to compromise Azure Cosmos DB accounts in your environment for malicious intentions. <br><br> Alternatively, a malicious insider could be trying to access sensitive data and perform lateral movement. | Collection | Medium | | **Suspicious extraction of Azure Cosmos DB account keys** (AzureCosmosDB_SuspiciousListKeys.SuspiciousPrincipal) | A suspicious source extracted Azure Cosmos DB account access keys from your subscription. If this source isn't legitimate, this may be a high-impact issue. The access key that was extracted provides full control over the associated databases and the data stored within. See the details of each specific alert to understand why the source was flagged as suspicious. | Credential Access | High | | **SQL injection: potential data exfiltration**<br>(CosmosDB_SqlInjection.DataExfiltration) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> The injected statement might have succeeded in exfiltrating data that the threat actor isn't authorized to access. <br><br> Due to the structure and capabilities of Azure Cosmos DB queries, many known SQL injection attacks on Azure Cosmos DB accounts can't work. However, the variation used in this attack may work and threat actors can exfiltrate data. | Exfiltration | Medium | | **SQL injection: fuzzing attempt**<br>(CosmosDB_SqlInjection.FailedFuzzingAttempt) | A suspicious SQL statement was used to query a container in this Azure Cosmos DB account. <br><br> Like other well-known SQL injection attacks, this attack won't succeed in compromising the Azure Cosmos DB account. <br><br> Nevertheless, it's an indication that a threat actor is trying to attack the resources in this account, and your application may be compromised. <br><br> Some SQL injection attacks can succeed and be used to exfiltrate data. 
This means that if the attacker continues performing SQL injection attempts, they may be able to compromise your Azure Cosmos DB account and exfiltrate data. <br><br> You can prevent this threat by using parameterized queries. | Pre-attack | Low | VM_VbScriptHttpObjectAllocation | VBScript HTTP object allocation detected | High - [Security alerts in Microsoft Defender for Cloud](alerts-overview.md) - [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md) - [Continuously export Defender for Cloud data](continuous-export.md)+ |
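The Cosmos DB SQL-injection alerts above end by noting that parameterized queries prevent this class of attack. A minimal sketch of the idea in Python: the dict below mirrors the query/parameters shape accepted by the `azure-cosmos` SDK's `query_items` call, but the container field and inputs are hypothetical, and no real account is contacted.

```python
def build_category_query(user_input: str) -> dict:
    # The query text is a fixed template; user input travels only in the
    # parameters list, so it can never change the structure of the SQL.
    return {
        "query": "SELECT * FROM c WHERE c.category = @category",
        "parameters": [{"name": "@category", "value": user_input}],
    }

# Even a classic injection payload stays inert as a parameter value:
spec = build_category_query("x' OR 1=1 --")
print(spec["query"])       # the template is unchanged
print(spec["parameters"])  # the payload is carried as plain data
```

With the real SDK these fields map to `container.query_items(query=..., parameters=...)`; the point is that attacker-controlled strings are bound as values rather than concatenated into the statement.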
defender-for-cloud | Asset Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md | For more information on related tools, see the following pages: - [Azure Resource Graph (ARG)](../governance/resource-graph/index.yml) - [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/)-- View common questions about [asset inventory](faq-defender-for-servers.yml)+- View common questions about [asset inventory](faq-defender-for-servers.yml) |
defender-for-cloud | Attack Path Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/attack-path-reference.md | Title: Reference list of attack paths and cloud security graph components description: This article lists Microsoft Defender for Cloud's list of attack paths based on resource. Previously updated : 04/13/2023 Last updated : 08/15/2023 # Reference list of attack paths and cloud security graph components Prerequisite: For a list of prerequisites, see the [Availability table](how-to-m | Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | | VM has high severity vulnerabilities and has insecure secret that is used to authenticate to a SQL server | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an SQL server | | VM has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to storage account | An Azure virtual machine has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an Azure storage account |-| Internet expsed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account | +| Internet exposed VM has high severity vulnerabilities and has insecure secret that is used to authenticate to storage account | An Azure virtual machine is reachable from the internet, has high severity vulnerabilities and has secret that can authenticate to an Azure storage account | ### AWS EC2 instances Prerequisite: [Enable agentless 
scanning](enable-vulnerability-assessment-agentl | EC2 instance has high severity vulnerabilities and has insecure plaintext secret that is used to authenticate to an RDS resource | An AWS EC2 instance has high severity vulnerabilities and has plaintext SSH private key that can authenticate to an AWS RDS resource | | Internet exposed AWS EC2 instance has high severity vulnerabilities and has insecure secret that has permission to S3 bucket via an IAM policy, or via a bucket policy, or via both an IAM policy and a bucket policy. | An AWS EC2 instance is reachable from the internet, has high severity vulnerabilities and has insecure secret that has permissions to S3 bucket via an IAM policy, a bucket policy or both | +### GCP VM Instances ++| Attack path display name | Attack path description | +|--|--| +| Internet exposed VM instance has high severity vulnerabilities | GCP VM instance '[VMInstanceName]' is reachable from the internet and has high severity vulnerabilities [Remote Code Execution]. | +| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. | +| Internet exposed VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities allowing remote code execution on the machine and assigned with Service Account with read permission to GCP Storage bucket '[BucketName]' containing sensitive data. | +| Internet exposed VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. 
| +| Internet exposed VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. | +| Internet exposed VM instance has high severity vulnerabilities and a hosted database installed | GCP VM instance '[VMInstanceName]' with a hosted [DatabaseType] database is reachable from the internet and has high severity vulnerabilities. | +| Internet exposed VM with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' is reachable from the internet, has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. | +| VM instance with high severity vulnerabilities has read permissions to a data store | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to a data store. | +| VM instance with high severity vulnerabilities has read permissions to a data store with sensitive data | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions to GCP Storage bucket '[BucketName]' containing sensitive data. | +| VM instance has high severity vulnerabilities and high permission to a project | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has '[Permissions]' permission to project '[ProjectName]'. | +| VM instance with high severity vulnerabilities has read permissions to a Secret Manager | GCP VM instance '[VMInstanceName]' has high severity vulnerabilities [Remote Code Execution] and has read permissions through IAM policy to GCP Secret Manager's secret '[SecretName]'. 
| +| VM instance with high severity vulnerabilities has plaintext SSH private key | GCP VM instance '[MachineName]' has high severity vulnerabilities [Remote Code Execution] and has plaintext SSH private key [SSHPrivateKey]. | + ### Azure data | Attack path display name | Attack path description | Prerequisite: [Enable agentless scanning](enable-vulnerability-assessment-agentl | Private AWS S3 bucket with sensitive data replicates data to internet exposed and publicly accessible AWS S3 bucket | Private AWS S3 bucket with sensitive data is replicating data to internet exposed and publicly accessible AWS S3 bucket | | RDS snapshot is publicly available to all AWS accounts (Preview) | RDS snapshot is publicly available to all AWS accounts | +### GCP data ++| Attack path display name | Attack path description | +|--|--| +| GCP Storage Bucket with sensitive data is publicly accessible | GCP Storage Bucket [BucketName] with sensitive data allows public read access without authorization required. | + ### Azure containers Prerequisite: [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This will also give you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. This section lists all of the cloud security graph components (connections and i | Insight | Description | Supported entities | |--|--|--|-| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. 
[Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance | +| Exposed to the internet | Indicates that a resource is exposed to the internet. Supports port filtering. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure virtual machine, AWS EC2, Azure storage account, Azure SQL server, Azure Cosmos DB, AWS S3, Kubernetes pod, Azure SQL Managed Instance, Azure MySQL Single Server, Azure MySQL Flexible Server, Azure PostgreSQL Single Server, Azure PostgreSQL Flexible Server, Azure MariaDB Single Server, Synapse Workspace, RDS Instance, GCP VM instance, GCP SQL admin instance | | Allows basic authentication (Preview) | Indicates that a resource allows basic (local user/password or key-based) authentication | Azure SQL Server, RDS Instance |-| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). | Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts | +| Contains sensitive data <br/> <br/> Prerequisite: [Enable data-aware security for storage accounts in Defender CSPM](data-security-posture-enable.md), or [leverage Microsoft Purview Data Catalog to protect sensitive data](information-protection.md). 
| Indicates that a resource contains sensitive data. | Azure Storage Account, Azure Storage Account Container, AWS S3 bucket, Azure SQL Server, Azure SQL Database, Azure Data Lake Storage Gen2, Azure Database for PostgreSQL, Azure Database for MySQL, Azure Synapse Analytics, Azure Cosmos DB accounts, GCP cloud storage bucket | | Moves data to (Preview) | Indicates that a resource transfers its data to another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster | | Gets data from (Preview) | Indicates that a resource gets its data from another resource | Storage account container, AWS S3, AWS RDS instance, AWS RDS cluster |-| Has tags | Lists the resource tags of the cloud resource | All Azure and AWS resources | +| Has tags | Lists the resource tags of the cloud resource | All Azure, AWS, and GCP resources | | Installed software | Lists all software installed on the machine. This insight is applicable only for VMs that have threat and vulnerability management integration with Defender for Cloud enabled and are connected to Defender for Cloud. | Azure virtual machine, AWS EC2 |-| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. [Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository | +| Allows public access | Indicates that a public read access is allowed to the resource with no authorization required. 
[Learn more](concept-data-security-posture-prepare.md#exposed-to-the-internetallows-public-access) | Azure storage account, AWS S3 bucket, GitHub repository, GCP cloud storage bucket | | Doesn't have MFA enabled | Indicates that the user account does not have a multi-factor authentication solution enabled | Azure AD User account, IAM user | | Is external user | Indicates that the user account is outside the organization's domain | Azure AD User account | | Is managed | Indicates that an identity is managed by the cloud provider | Azure Managed Identity | This section lists all of the cloud security graph components (connections and i | DEASM findings | Microsoft Defender External Attack Surface Management (DEASM) internet scanning findings | Public IP | | Privileged container | Indicates that a Kubernetes container runs in a privileged mode | Kubernetes container | | Uses host network | Indicates that a Kubernetes pod uses the network namespace of its host machine | Kubernetes pod |-| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image | -| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image | +| Has high severity vulnerabilities | Indicates that a resource has high severity vulnerabilities | Azure VM, AWS EC2, Container image, GCP VM instance | +| Vulnerable to remote code execution | Indicates that a resource has vulnerabilities allowing remote code execution | Azure VM, AWS EC2, Container image, GCP VM instance | | Public IP metadata | Lists the metadata of a Public IP | Public IP | | Identity metadata | Lists the metadata of an identity | Azure AD Identity |
VMSS, Azure Storage Account, Azure App Services, SQL Servers | Azure AD managed identity | | Has permission to | Indicates that an identity has permissions to a resource or a group of resources | Azure AD user account, Managed Identity, IAM user, EC2 instance | All Azure & AWS resources|-| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server | All Azure & AWS resources, All Kubernetes entities, All DevOps entities, Azure SQL database | -| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service | +| Contains | Indicates that the source entity contains the target entity | Azure subscription, Azure resource group, AWS account, Kubernetes namespace, Kubernetes pod, Kubernetes cluster, GitHub owner, Azure DevOps project, Azure DevOps organization, Azure SQL server, GCP project, GCP Folder, GCP Organization | All Azure, AWS, and GCP resources, All Kubernetes entities, All DevOps entities, Azure SQL database | +| Routes traffic to | Indicates that the source entity can route network traffic to the target entity | Public IP, Load Balancer, VNET, Subnet, VPC, Internet Gateway, Kubernetes service, Kubernetes pod| Azure VM, Azure VMSS, AWS EC2, Subnet, Load Balancer, Internet gateway, Kubernetes pod, Kubernetes service, GCP VM instance, GCP instance group | | Is running | Indicates that the source entity is running the target entity as a process | Azure VM, EC2, Kubernetes container | SQL, Arc-Enabled SQL, Hosted MongoDB, Hosted MySQL, Hosted Oracle, Hosted PostgreSQL, Hosted SQL Server, Container image, Kubernetes pod 
| | Member of | Indicates that the source identity is a member of the target identities group | Azure AD group, Azure AD user | Azure AD group | | Maintains | Indicates that the source Kubernetes entity manages the life cycle of the target Kubernetes entity | Kubernetes workload controller, Kubernetes replica set, Kubernetes stateful set, Kubernetes daemon set, Kubernetes jobs, Kubernetes cron job | Kubernetes pod | |
defender-for-cloud | Auto Deploy Vulnerability Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md | To assess your machines for vulnerabilities, you can use one of the following so Learn more in [View and remediate findings from vulnerability assessment solutions on your machines](remediate-vulnerability-findings-vm.md). - ## Next steps+ > [!div class="nextstepaction"] > [Remediate the discovered vulnerabilities](remediate-vulnerability-findings-vm.md) |
defender-for-cloud | Azure Devops Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md | Microsoft Security DevOps uses the following open-source tools: | [Terrascan](https://github.com/accurics/terrascan) | Terraform (HCL2), Kubernetes (JSON/YAML), Helm v3, Kustomize, Dockerfiles, Cloud Formation | [Apache License 2.0](https://github.com/accurics/terrascan/blob/master/LICENSE) | | [Trivy](https://github.com/aquasecurity/trivy) | container images, file systems, git repositories | [Apache License 2.0](https://github.com/aquasecurity/trivy/blob/main/LICENSE) | -## Prerequisites +## Prerequisites -- Admin privileges to the Azure DevOps organization are required to install the extension. +- Admin privileges to the Azure DevOps organization are required to install the extension. If you don't have access to install the extension, you must request access from your Azure DevOps organization's administrator during the installation process. If you don't have access to install the extension, you must request access from :::image type="content" source="media/msdo-azure-devops-extension/repo-git.png" alt-text="Screenshot that shows you where to navigate to, to select Azure repo git."::: -1. Select the relevant repository. +1. Select the relevant repository. :::image type="content" source="media/msdo-azure-devops-extension/repository.png" alt-text="Screenshot showing where to select your repository."::: -5. Select **Starter pipeline**. +1. Select **Starter pipeline**. :::image type="content" source="media/msdo-azure-devops-extension/starter-piepline.png" alt-text="Screenshot showing where to select starter pipeline."::: -1. Paste the following YAML into the pipeline: +1. Paste the following YAML into the pipeline: ```yml # Starter pipeline If you don't have access to install the extension, you must request access from displayName: 'Microsoft Security DevOps' ``` -9. 
To commit the pipeline, select **Save and run**. +1. To commit the pipeline, select **Save and run**. The pipeline will run for a few minutes and save the results. |
defender-for-cloud | Concept Agentless Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md | When you enable the agentless discovery for Kubernetes extension, the following Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature). - - **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS. - **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role "Microsoft.Security/pricings/microsoft-defender-operator". The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.- + ### What's the refresh interval? Agentless information in Defender CSPM is updated through a snapshot mechanism. It can take up to **24 hours** to see results in attack paths and the cloud security explorer. |
defender-for-cloud | Concept Agentless Data Collection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md | Agentless scanning for VMs provides vulnerability assessment and software invent |Release state:| GA | |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)| | Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png":::Secret scanning (Preview) |-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected GCP accounts | +| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux |-| Instance and disk types: | **Azure**<br>:::image 
type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs) | -| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK | +| Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" 
source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) | +| Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted – customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Google-managed encryption key<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Customer-managed encryption key (CMEK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Customer-supplied encryption key (CSEK) | ## How agentless scanning for VMs works |
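The agentless-scanning support matrix above can be read as a simple eligibility lookup. Below is a minimal illustrative sketch; the category keys and the helper name `is_supported` are hypothetical (not part of any Defender for Cloud API), and the Azure encryption entry is simplified because the original table cell is partly garbled.

```python
# Illustrative reading of the agentless-scanning support matrix above.
# Category keys and the helper name are hypothetical; the Azure encryption
# entry is simplified because the original table cell is partly garbled.

SUPPORT = {
    "azure": {
        "instances": {"standard-vm": True, "unmanaged-disk": False,
                      "vmss-flex": True, "vmss-uniform": False},
        "encryption": {"pmk-managed-disk": True, "pmk-other": False,
                       "cmk": False},
    },
    "aws": {
        "instances": {"ec2": True, "auto-scale": True, "paid-ami": False},
        "encryption": {"unencrypted": True, "pmk": True, "cmk": True},
    },
    "gcp": {
        "instances": {"compute-instance": True, "instance-group": True},
        "encryption": {"google-managed": True, "cmek": True, "csek": False},
    },
}

def is_supported(cloud: str, instance: str, encryption: str) -> bool:
    """A VM is eligible only if both its instance type and its disk
    encryption mode are marked as supported in the matrix."""
    entry = SUPPORT[cloud]
    return entry["instances"][instance] and entry["encryption"][encryption]
```

For example, an AWS EC2 instance with CMK-encrypted disks is in scope, while an Azure virtual machine scale set in Uniform orchestration mode is not.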
defender-for-cloud | Concept Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-attack-path.md | Last updated 05/07/2023 # Identify and analyze risks across your environment -<iframe src="https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> +> [!VIDEO https://aka.ms/docs/player?id=36a5c440-00e6-4bd8-be1f-a27fbd007119] One of the biggest challenges that security teams face today is the number of security issues they face on a daily basis. There are numerous security issues that need to be resolved and never enough resources to address them all. |
defender-for-cloud | Concept Cloud Security Posture Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md | Title: Overview of Cloud Security Posture Management (CSPM) description: Learn more about the new Defender CSPM plan and the other enhanced security features that can be enabled for your multicloud environment through the Defender Cloud Security Posture Management (CSPM) plan. Previously updated : 06/20/2023 Last updated : 08/10/2023 # Cloud Security Posture Management (CSPM) Microsoft Defender CSPM protects across all your multicloud workloads, but billi > > - The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1, 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines. >-> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. For subscription that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +> - This price includes free vulnerability assessments for 20 unique images per charged resource, whereby the count will be based on the previous month's consumption. Every subsequent scan will be charged at $0.29 per image digest. The majority of customers are not expected to incur any additional image scan charges. 
For subscriptions that are both under the Defender CSPM and Defender for Containers plans, free vulnerability assessment will be calculated based on free image scans provided via the Defender for Containers plan, as specified [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ## Plan availability The following table summarizes each plan and their cloud availability. | [Data exporting](export-to-siem.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Workflow automation](workflow-automation.md) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | Tools for remediation | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |-| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | +| Microsoft Cloud Security Benchmark | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Governance](governance-rules.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises | | [Regulatory compliance](concept-regulatory-compliance.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP, on-premises |-| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | -| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | -| [Agentless scanning for 
machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | +| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | +| [Attack path analysis](how-to-manage-attack-path.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | +| [Agentless scanning for machines](concept-agentless-data-collection.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | | [Agentless discovery for Kubernetes](concept-agentless-containers.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure | | [Container registries vulnerability assessment](concept-agentless-containers.md), including registry scanning | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure |-| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | -| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS | +| [Data aware security posture](concept-data-security-posture.md) | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | +| EASM insights in network exposure | - | :::image type="icon" source="./media/icons/yes-icon.png"::: | Azure, AWS, GCP | > [!NOTE] > If you have enabled Defender for DevOps, you will only gain cloud security graph and attack path analysis for the artifacts that arrive through those connectors. |
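The free-tier arithmetic in the pricing note above (20 free unique image scans per charged resource, counted from the previous month's consumption, then $0.29 per image digest) can be illustrated with a short calculation. This is a sketch with a hypothetical helper name; actual charges are governed by the linked pricing page.

```python
def estimate_image_scan_cost(charged_resources: int, image_digests: int,
                             price_per_digest: float = 0.29,
                             free_per_resource: int = 20) -> float:
    """Rough estimate of the additional image-scan charge: each charged
    resource includes 20 free unique image scans; digests beyond the free
    allotment are billed at $0.29 each."""
    free_scans = charged_resources * free_per_resource
    billable = max(0, image_digests - free_scans)
    return round(billable * price_per_digest, 2)
```

For instance, two charged resources scanning 30 unique digests stay within the free allotment, while a single resource scanning 25 digests would incur five billable scans.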
defender-for-cloud | Concept Data Security Posture Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md | The table summarizes support for data-aware posture management. | | | |What Azure data resources can I discover? | [Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).| |What AWS data resources can I discover? | AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.|-|What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).| +|What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region | +|What permissions do I need for discovery? 
| Storage account: Subscription Owner<br/> **or**<br/> Microsoft.Authorization/roleAssignments/* (read, write, delete) **and** Microsoft.Security/pricings/* (read, write, delete) **and** Microsoft.Security/pricings/SecurityOperators (read, write)<br/><br/> Amazon S3 buckets: AWS account permission to run Cloud Formation (to create a role).<br/><br/>GCP storage buckets: Google account permission to run script (to create a role).| |What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.| |What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Australia Central; Australia Central 2; Australia East; Australia Southeast; Brazil South; Canada Central; Canada East; Central India; Central US; East Asia; East US; East US 2; France Central; Germany West Central; Japan East; Japan West; Jio India West; North Central US; North Europe; Norway East; South Africa North; South Central US; South India; Sweden Central; Switzerland North; UAE North; UK South; UK West; West Central US; West Europe; West US, West US3.<br/><br/> Discovery is done locally in the region.| |What AWS regions are supported? | Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/> Discovery is done locally in the region.|+|What GCP regions are supported? | europe-west1, us-east1, us-west1, us-central1, us-east4, asia-south1, northamerica-northeast1| |Do I need to install an agent? | No, discovery is agentless.| |What's the cost? 
| The feature is included with the Defender CSPM and Defender for Storage plans, and doesn’t include other costs except for the respective plan costs.| |What permissions do I need to view/edit data sensitivity settings? | You need one of these Azure Active Directory roles: Global Administrator, Compliance Administrator, Compliance Data Administrator, Security Administrator, Security Operator.|+| What permissions do I need to perform onboarding? | You need one of these Azure Active Directory roles: Security Admin, Contributor, Owner on the subscription level (where the GCP projects reside). For consuming the security findings: Security Reader, Security Admin, Reader, Contributor, Owner on the subscription level (where the GCP projects reside). | ## Configuring data sensitivity settings Defender for Cloud starts discovering data immediately after enabling a plan, or - It takes up to 24 hours to see the results for a first-time discovery. - After files are updated in the discovered resources, data is refreshed within eight days. - A new Azure storage account that's added to an already discovered subscription is discovered within 24 hours or less.-- A new AWS S3 bucket that's added to an already discovered AWS account is discovered within 48 hours or less.+- A new AWS S3 bucket or GCP storage bucket that's added to an already discovered AWS account or Google account is discovered within 48 hours or less. ### Discovering AWS S3 buckets In order to protect AWS resources in Defender for Cloud, you set up an AWS conne - To connect AWS accounts, you need Administrator permissions on the account. - The role allows these permissions: S3 read only; KMS decrypt. +### Discovering GCP storage buckets ++In order to protect GCP resources in Defender for Cloud, you can set up a Google connector using a script template to onboard the GCP account. ++- To discover GCP storage buckets, Defender for Cloud updates the script template. 
+- The script template creates a new role in the Google account to allow permission for the Defender for Cloud scanner to access data in the GCP storage buckets. +- To connect Google accounts, you need Administrator permissions on the account. + ## Exposed to the internet/allows public access Defender CSPM attack paths and cloud security graph insights include information about storage resources that are exposed to the internet and allow public access. The following table provides more details. -**State** | **Azure storage accounts** | **AWS S3 Buckets** - | | -**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. -**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. 
| An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. -+**State** | **Azure storage accounts** | **AWS S3 Buckets** | **GCP Storage Buckets** | + | | | +**Exposed to the internet** | An Azure storage account is considered exposed to the internet if either of these settings enabled:<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enabled from all networks**<br/><br/> or<br/><br/> Storage_account_name > **Networking** > **Public network access** > **Enable from selected virtual networks and IP addresses**. | An AWS S3 bucket is considered exposed to the internet if the AWS account/AWS S3 bucket policies don't have a condition set for IP addresses. | All GCP storage buckets are exposed to the internet by default. | +**Allows public access** | An Azure storage account container is considered as allowing public access if these settings are enabled on the storage account:<br/><br/> Storage_account_name > **Configuration** > **Allow blob public access** > **Enabled**.<br/><br/>and **either** of these settings:<br/><br/> Storage_account_name > **Containers** > container_name > **Public access level** set to **Blob (anonymous read access for blobs only)**<br/><br/> Or, storage_account_name > **Containers** > container_name > **Public access level** set to **Container (anonymous read access for containers and blobs)**. 
| An AWS S3 bucket is considered to allow public access if both the AWS account and the AWS S3 bucket have **Block all public access** set to **Off**, and **either** of these settings is set:<br/><br/> In the policy, **RestrictPublicBuckets** isn't enabled, and the **Principal** setting is set to * and **Effect** is set to **Allow**.<br/><br/> Or, in the access control list, **IgnorePublicAcl** isn't enabled, and permission is allowed for **Everyone**, or for **Authenticated users**. | A GCP storage bucket is considered to allow public access if it has an IAM (Identity and Access Management) role that meets these criteria: <br/><br/> The role is granted to the principal **allUsers** or **allAuthenticatedUsers**. <br/><br/>The role has at least one storage permission that *isn't* **storage.buckets.create** or **storage.buckets.list**. Public access in GCP is called "Public to internet". ## Next steps |
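The GCP criteria in the table above amount to a small predicate: a bucket allows public access when some role is granted to `allUsers` or `allAuthenticatedUsers` and carries a storage permission beyond bucket listing/creation. Below is a sketch of that check; the binding representation and function name are hypothetical — the real evaluation is performed by Defender for Cloud, not by this code.

```python
# Hypothetical representation: each IAM binding is (members, permissions),
# both given as collections of strings. The predicate mirrors the table's
# stated criteria for "allows public access" on a GCP storage bucket.

PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}
LISTING_ONLY = {"storage.buckets.create", "storage.buckets.list"}

def gcp_bucket_allows_public_access(bindings) -> bool:
    """True if some role is granted to allUsers/allAuthenticatedUsers and
    carries at least one storage permission other than
    storage.buckets.create / storage.buckets.list."""
    for members, permissions in bindings:
        if PUBLIC_PRINCIPALS & set(members):
            storage_perms = {p for p in permissions if p.startswith("storage.")}
            if storage_perms - LISTING_ONLY:
                return True
    return False
```

For example, a role granting `allUsers` the `storage.objects.get` permission would flag the bucket, while a role granting only `storage.buckets.list` would not.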
defender-for-cloud | Defender For Cloud Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-glossary.md | Advanced Persistent Threats See the [video: Understanding APTs](/events/teched-2 ### **Arc-enabled Kubernetes** -Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center. See [What is Azure Arc-enabled Logic Apps? (Preview)](../logic-apps/azure-arc-enabled-logic-apps-overview.md). +Azure Arc-enabled Kubernetes allows you to attach and configure Kubernetes clusters running anywhere. You can connect your clusters running on other public cloud providers or clusters running on your on-premises data center. See [What is Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview). ### **ARG** To make sure that your server resources are secure, Microsoft Defender for Cloud ### Azure Policy for Kubernetes -A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). +A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. It's deployed as an AKS add-on in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. 
For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). ## B |
defender-for-cloud | Defender For Containers Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md | To learn more about implementation details such as supported operating systems, ### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a> -When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and frictionless. These are the required components: +When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and collected automatically through Azure infrastructure with no additional cost or configuration considerations. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: -- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. The Defender agent is deployed as an AKS Security profile.--- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). 
The Azure Policy for Kubernetes pod is deployed as an AKS add-on.+- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an AKS Security profile. +- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an AKS add-on. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). :::image type="content" source="./media/defender-for-containers/architecture-aks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Azure Kubernetes Service, and Azure Policy." lightbox="./media/defender-for-containers/architecture-aks-cluster.png"::: When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, t ### Architecture diagram of Defender for Cloud and Arc-enabled Kubernetes clusters -For all clusters hosted outside of Azure, [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) is required to connect the clusters to Azure and provide Azure services such as Defender for Containers. 
+These components are required in order to receive the full protection offered by Microsoft Defender for Containers: ++- **[Azure Arc-enabled Kubernetes](/azure/azure-arc/kubernetes/overview)** - An agent based solution that connects your clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](/azure/azure-arc/kubernetes/extensions). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions. -When a non-Azure container is connected to Azure with Arc, the [Arc extension](../azure-arc/kubernetes/extensions.md) collects Kubernetes audit logs data from all control plane nodes in the cluster. The extension sends the log data to the Microsoft Defender for Cloud backend in the cloud for further analysis. The extension is registered with a Log Analytics workspace used as a data pipeline, but the audit log data isn't stored in the Log Analytics workspace. +- **Defender agent**: The DaemonSet that is deployed on each node, collects host signals using [eBPF technology](https://ebpf.io/) and Kubernetes audit logs, to provide runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension. -Workload configuration information is collected by Azure Policy for Kubernetes. As explained in [this Azure Policy for Kubernetes page](../governance/policy/concepts/policy-for-kubernetes.md), the policy extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) admission controller webhook for [Open Policy Agent](https://www.openpolicyagent.org/). Kubernetes admission controllers are plugins that enforce how your clusters are used. 
The add-on registers as a web hook to Kubernetes admission control and makes it possible to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. +- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](/azure/defender-for-cloud/kubernetes-workload-protections) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). > [!NOTE] > Defender for Containers support for Arc-enabled Kubernetes clusters is a preview feature. Workload configuration information is collected by Azure Policy for Kubernetes. ### Architecture diagram of Defender for Cloud and EKS clusters -These components are required in order to receive the full protection offered by Microsoft Defender for Containers: +When Defender for Cloud protects a cluster hosted in Elastic Kubernetes Service, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: - **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [AWS account’s CloudWatch](https://aws.amazon.com/cloudwatch/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis.--- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. 
Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).--- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. The Defender agent is deployed as an Arc-enabled Kubernetes extension. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension.--- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your EKS clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions. +- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. 
However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension. +- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). > [!NOTE] > Defender for Containers support for AWS EKS clusters is a preview feature. These components are required in order to receive the full protection offered by ### Architecture diagram of Defender for Cloud and GKE clusters<a name="jit-asc"></a> -These components are required in order to receive the full protection offered by Microsoft Defender for Containers: +When Defender for Cloud protects a cluster hosted in Google Kubernetes Engine, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers: - **[Kubernetes audit logs](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/)** – [GCP Cloud Logging](https://cloud.google.com/logging/) enables, and collects audit log data through an agentless collector, and sends the collected information to the Microsoft Defender for Cloud backend for further analysis. -- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. 
Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md).--- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. However, the audit log data isn't stored in the Log Analytics workspace. It is deployed under AKS Security profile in AKS clusters and as an Arc extension in Arc enabled Kubernetes clusters. The Defender agent is deployed as an Arc-enabled Kubernetes extension. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension.--- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes).+- **[Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md)** - An agent based solution that connects your GKE clusters to Azure. Azure then is capable of providing services such as Defender, and Policy as [Arc extensions](../azure-arc/kubernetes/extensions.md). For more information, see [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md). The following two components are the required Arc extensions. +- **Defender agent**: The DaemonSet that is deployed on each node, collects signals from hosts using [eBPF technology](https://ebpf.io/), and provides runtime protection. The agent is registered with a Log Analytics workspace, and used as a data pipeline. 
However, the audit log data isn't stored in the Log Analytics workspace. The Defender agent is deployed as an Arc-enabled Kubernetes extension. +- **Azure Policy for Kubernetes**: A pod that extends the open-source [Gatekeeper v3](https://github.com/open-policy-agent/gatekeeper) and registers as a web hook to Kubernetes admission control making it possible to apply at-scale enforcements, and safeguards on your clusters in a centralized, consistent manner. The Azure Policy for Kubernetes pod is deployed as an Arc-enabled Kubernetes extension. For more information, see [Protect your Kubernetes workloads](kubernetes-workload-protections.md) and [Understand Azure Policy for Kubernetes clusters](/azure/governance/policy/concepts/policy-for-kubernetes). > [!NOTE] > Defender for Containers support for GCP GKE clusters is a preview feature. |
defender-for-cloud | Defender For Containers Vulnerability Assessment Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md | In every subscription where this capability is enabled, all images stored in ACR Container vulnerability assessment powered by Qualys has the following capabilities: -- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-akspowered-by-qualys).+- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurepowered-by-qualys). -- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-akspowered-by-qualys).+- **Language specific packages** – support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurepowered-by-qualys). - **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services). 
|
defender-for-cloud | Defender For Dns Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-alerts.md | -## Step 1. Contact +## Step 1: Contact 1. Contact the resource owner to determine whether the behavior was expected or intentional. 1. If the activity is expected, dismiss the alert. 1. If the activity is unexpected, treat the resource as potentially compromised and mitigate as described in the next step. -## Step 2. Immediate mitigation +## Step 2: Immediate mitigation 1. Isolate the resource from the network to prevent lateral movement. 1. Run a full antimalware scan on the resource, following any resulting remediation advice. |
defender-for-cloud | Defender For Dns Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md | Microsoft Defender for DNS doesn't use any agents. In this article, you learned about Microsoft Defender for DNS. -To protect your DNS layer, enable Microsoft Defender for DNS for each of your subscriptions as described in [Enable enhanced protections](enable-enhanced-security.md). --> [!div class="nextstepaction"] -> [Enable enhanced protections](enable-enhanced-security.md) - For related material, see the following article: -- Security alerts might be generated by Defender for Cloud or received from other security products. To export all of these alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).+Security alerts might be generated by Defender for Cloud or received from other security products. To export all of these alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md). + |
defender-for-cloud | Defender For Key Vault Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md | Depending on the *type* of access that occurred, some fields might not be availa > [!TIP] > Azure virtual machines are assigned Microsoft IPs. This means that an alert might contain a Microsoft IP even though it relates to activity performed from outside of Microsoft. So even if an alert has a Microsoft IP, you should still investigate as described on this page. -### Step 1. Identify the source +### Step 1: Identify the source 1. Verify whether the traffic originated from within your Azure tenant. If the key vault firewall is enabled, it's likely that you've provided access to the user or application that triggered this alert. 1. If you can't verify the source of the traffic, continue to [Step 2. Respond accordingly](#step-2-respond-accordingly). Depending on the *type* of access that occurred, some fields might not be availa > Microsoft Defender for Key Vault is designed to help identify suspicious activity caused by stolen credentials. **Don't** dismiss the alert simply because you recognize the user or application. Contact the owner of the application or the user and verify the activity was legitimate. You can create a suppression rule to eliminate noise if necessary. Learn more in [Suppress security alerts](alerts-suppression-rules.md). -### Step 2. Respond accordingly +### Step 2: Respond accordingly If you don't recognize the user or application, or if you think the access shouldn't have been authorized: - If the traffic came from an unrecognized IP Address: If you don't recognize the user or application, or if you think the access shoul 1. Contact your administrator. 1. Determine whether there's a need to reduce or revoke Azure Active Directory permissions. -### Step 3. 
Measure the impact +### Step 3: Measure the impact When the event has been mitigated, investigate the secrets in your key vault that were affected: 1. Open the **Security** page on your Azure key vault and view the triggered alert. 1. Select the specific alert that was triggered and review the list of the secrets that were accessed and the timestamp. 1. Optionally, if you have key vault diagnostic logs enabled, review the previous operations for the corresponding caller IP, user principal, or object ID. -### Step 4. Take action +### Step 4: Take action When you've compiled your list of the secrets, keys, and certificates that were accessed by the suspicious user or application, you should rotate those objects immediately. 1. Affected secrets should be disabled or deleted from your key vault. |
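
The Key Vault triage in steps 3 and 4 above boils down to set logic: compare the secrets the suspicious caller accessed against those the owners could verify as legitimate, and rotate or disable the rest. A minimal sketch of that bookkeeping; the function and secret names are illustrative, not part of Defender for Key Vault:

```python
def secrets_to_rotate(accessed, verified_legitimate):
    """Return the secrets accessed in the alert window that could not be
    verified as legitimate use; these should be rotated or disabled."""
    return sorted(set(accessed) - set(verified_legitimate))

# Secrets listed in the alert's access details (hypothetical names).
accessed = ["db-connection-string", "api-signing-key", "smtp-password"]
# Accesses the owning teams confirmed as expected.
verified = ["smtp-password"]

print(secrets_to_rotate(accessed, verified))
# ['api-signing-key', 'db-connection-string']
```

The same comparison works for keys and certificates; only the set difference needs to be acted on, which keeps the rotation list as small as possible.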
defender-for-cloud | Defender For Resource Manager Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md | -When you receive an alert from Microsoft Defender for Resource Manager, we recommend you investigate and respond to the alert as described below. Microsoft Defender for Resource Manager protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert. +When you receive an alert from Microsoft Defender for Resource Manager, we recommend you investigate and respond to the alert as described below. Defender for Resource Manager protects all connected resources, so even if you're familiar with the application or user that triggered the alert, it's important to verify the situation surrounding every alert. -## Step 1. Contact +## Step 1: Contact 1. Contact the resource owner to determine whether the behavior was expected or intentional. 1. If the activity is expected, dismiss the alert. 1. If the activity is unexpected, treat the related user accounts, subscriptions, and virtual machines as compromised and mitigate as described in the following step. -## Step 2. Investigate alerts from Microsoft Defender for Resource Manager +## Step 2: Investigate alerts from Microsoft Defender for Resource Manager -Security alerts from Microsoft Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events. +Security alerts from Defender for Resource Manager are based on threats detected by monitoring Azure Resource Manager operations. 
Defender for Cloud uses internal log sources of Azure Resource Manager as well as Azure Activity log, a platform log in Azure that provides insight into subscription-level events. -Microsoft Defender for Resource Manager provides visibility into activity that comes from third party service providers that have delegated access as part of the resource manager alerts. For example, `Azure Resource Manager operation from suspicious proxy IP address - delegated access`. +Defender for Resource Manager provides visibility into activity that comes from third party service providers that have delegated access as part of the resource manager alerts. For example, `Azure Resource Manager operation from suspicious proxy IP address - delegated access`. `Delegated access` refers to access with [Azure Lighthouse](/azure/lighthouse/overview) or with [Delegated administration privileges](/partner-center/dap-faq). Alerts that show `Delegated access` also include a customized description and re Learn more about [Azure Activity log](../azure-monitor/essentials/activity-log.md). -To investigate security alerts from Microsoft Defender for Resource +To investigate security alerts from Defender for Resource 1. Open Azure Activity log. To investigate security alerts from Microsoft Defender for Resource > [!TIP] > For a better, richer investigation experience, stream your Azure activity logs to Microsoft Sentinel as described in [Connect data from Azure Activity log](../sentinel/data-connectors/azure-activity.md). -## Step 3. Immediate mitigation +## Step 3: Immediate mitigation 1. Remediate compromised user accounts: - If they're unfamiliar, delete them as they may have been created by a threat actor To investigate security alerts from Microsoft Defender for Resource ## Next steps
For related information see the following pages: +This page explained the process of responding to an alert from Defender for Resource Manager. For related information, see the following pages: - [Overview of Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md) - [Suppress security alerts](alerts-suppression-rules.md) |
defender-for-cloud | Defender For Storage Classic Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-classic-migrate.md | Storage accounts that were previously excluded from protected subscriptions in t ### Migrating from the classic Defender for Storage plan enabled with per-storage account pricing -If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The pricing plan remains the same in the new Defender for Storage, except for extra charges for malware scanning, which are charged per GB scanned (free during preview). +If the classic Defender for Storage plan is enabled with per-storage account pricing, you can switch to the new plan at either the subscription or resource level. The new Defender for Storage plan has the same pricing plan with the exception of malware scanning which may incur extra charges and is billed per GB scanned. ++You can learn more about Defender for Storage's pricing model on the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h). You can also [exclude specific storage accounts](../storage/common/azure-defender-storage-configure.md#) from protected subscriptions. In this article, you learned about migrating to the new Microsoft Defender for S > [!div class="nextstepaction"] > [Enable Defender for Storage](enable-enhanced-security.md)+ |
defender-for-cloud | Defender For Storage Configure Malware Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-configure-malware-scan.md | With Malware Scanning, you can build your automation response using the followin - Blob index tags > [!TIP]-> We recommend you try the Ninja training instructions, a hands-on lab with detailed step-by-step instructions on how to try out and test malware scanning end-to-end with setting up responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provide hands-on practical experience with its capabilities. +> We invite you to explore the Malware Scanning feature in Defender for Storage through our hands-on lab. Follow the [Ninja training](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/main/Labs/Modules/Module%2019%20-%20Defender%20for%20Storage.md) instructions for a detailed, step-by-step guide on how to set up and test Malware Scanning end-to-end, including configuring responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provide hands-on practical experience with its capabilities. Here are some response options that you can use to automate your response: The event message is a JSON object that contains key-value pairs that provide de Here's an example of an event message: -``` ++```json { "id": "52d00da0-8f1a-4c3c-aa2c-24831967356b", "subject": "storageAccounts/<storage_account_name>/containers/app-logs-storage/blobs/EICAR - simulating malware.txt", In this article, you learned about Microsoft Defender for Storage. > [!div class="nextstepaction"] > [Enable Defender for Storage](enable-enhanced-security.md)+ |
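
The `subject` field in the sample event message above encodes the storage account, container, and blob path, so an automated response (Function App, Logic App) typically starts by parsing it. A minimal sketch, based only on the subject format shown in the example:

```python
def parse_scan_subject(subject):
    """Split an Event Grid subject of the form
    storageAccounts/<account>/containers/<container>/blobs/<blob path>
    into its three components."""
    parts = subject.split("/")
    account = parts[parts.index("storageAccounts") + 1]
    container = parts[parts.index("containers") + 1]
    # Blob names may themselves contain '/', so rejoin the tail.
    blob = "/".join(parts[parts.index("blobs") + 1:])
    return account, container, blob

subject = ("storageAccounts/mystorage/containers/app-logs-storage/"
           "blobs/EICAR - simulating malware.txt")
print(parse_scan_subject(subject))
# ('mystorage', 'app-logs-storage', 'EICAR - simulating malware.txt')
```

With the account, container, and blob name in hand, a response handler can then block access, move the blob to quarantine, or delete it, per the response options listed above.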
defender-for-cloud | Defender For Storage Infrastructure As Code Enablement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-infrastructure-as-code-enablement.md | We recommend that you enable Defender for Storage on the subscription level. Doi ## [Enable on a subscription](#tab/enable-subscription/) +### Terraform template ++To enable and configure Microsoft Defender for Storage at the subscription level using Terraform, you can use the following code snippet: ++``` +resource "azurerm_security_center_subscription_pricing" "DefenderForStorage" { + tier = "Standard" + resource_type = "StorageAccounts" + subplan = "DefenderForStorageV2" + + extension { + name = "OnUploadMalwareScanning" + additional_extension_properties = { + CapGBPerMonthPerStorageAccount = "5000" + } + } + + extension { + name = "SensitiveDataDiscovery" + } +} +``` ++**Modifying the monthly cap for malware scanning** ++To modify the monthly cap for malware scanning per storage account, adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month per storage account. If you want to permit unlimited scanning, assign the value "-1". The default limit is set at 5,000 GB. ++**Disabling features** ++If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can remove the corresponding extension block from the Terraform code. ++**Disabling the entire Defender for Storage plan** ++To disable the entire Defender for Storage plan, set the `tier` property value to **"Free"** and remove the `subPlan` and `extension` properties. ++Learn more about the `azurerm_security_center_subscription_pricing` resource by referring to the [azurerm_security_center_subscription_pricing documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/security_center_subscription_pricing). 
Additionally, you can find comprehensive details on the Terraform provider for Azure in the [Terraform AzureRM Provider documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs). + ### Bicep template To enable and configure Microsoft Defender for Storage at the subscription level using [Bicep](/azure/azure-resource-manager/bicep/overview?tabs=bicep), make sure your [target scope is set to subscription](/azure/azure-resource-manager/bicep/deploy-to-subscription?tabs=azure-cli#scope-to-subscription), and add the following to your Bicep template: resource StorageAccounts 'Microsoft.Security/pricings@2023-01-01' = { } ``` +**Modifying the monthly cap for malware scanning** + To modify the monthly cap for malware scanning per storage account, adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB. -If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **False** under Sensitive data discovery. +**Disabling features** -To disable the entire Defender for Storage plan, set the `pricingTier` property value to **Free** and remove the subPlan and extensions properties. +If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **False** under sensitive data discovery. ++**Disabling the entire Defender for Storage plan** ++To disable the entire Defender for Storage plan, set the `pricingTier` property value to **Free** and remove the `subPlan` and `extensions` properties. Learn more about the [Bicep template in the Microsoft security/pricings documentation](/azure/templates/microsoft.security/pricings?pivots=deployment-language-bicep&source=docs). 
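
The cap semantics described for `CapGBPerMonthPerStorageAccount` (a per-storage-account GB limit per month, `-1` for unlimited, 5,000 GB default) can be expressed as a small guard. This is a hypothetical sketch of how a consumer of the setting might apply it, not Defender for Storage's actual enforcement logic:

```python
DEFAULT_CAP_GB = 5000  # documented default monthly cap per storage account
UNLIMITED = -1         # documented sentinel for unlimited scanning

def within_cap(scanned_gb_this_month, upload_gb, cap_gb=DEFAULT_CAP_GB):
    """Return True if scanning this upload stays within the monthly cap."""
    if cap_gb == UNLIMITED:
        return True
    return scanned_gb_this_month + upload_gb <= cap_gb

print(within_cap(4999, 0.5))                    # True: 4,999.5 GB <= 5,000 GB
print(within_cap(4999, 2))                      # False: would exceed the default cap
print(within_cap(10**6, 1, cap_gb=UNLIMITED))   # True: -1 disables the cap
```

The same `-1`/default-5,000 convention applies across the Terraform, Bicep, and ARM examples in this article, so a helper like this would behave identically regardless of how the plan was enabled.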
To enable and configure Microsoft Defender for Storage at the subscription level } ``` +**Modifying the monthly cap for malware scanning** + To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `CapGBPerMonthPerStorageAccount` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB. -If you want to turn off the on-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **False** under Sensitive data discovery. +**Disabling features** -To disable the entire Defender plan, set the `pricingTier` property value to **Free** and remove the subPlan and extensions properties. +If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **False** under sensitive data discovery. ++**Disabling the entire Defender for Storage plan** ++To disable the entire Defender plan, set the `pricingTier` property value to **Free** and remove the `subPlan` and `extension` properties. Learn more about the ARM template in the Microsoft.Security/Pricings documentation. ## [Enable on a storage account](#tab/enable-storage-account/) +### Terraform template - storage account ++To enable and configure Microsoft Defender for Storage at the storage account level using Terraform, import the [AzAPI provider](https://registry.terraform.io/providers/Azure/azapi/latest/docs) and use the following code snippet: ++``` +resource "azurerm_storage_account" "example" { ... 
} ++resource "azapi_resource_action" "enable_defender_for_Storage" { + type = "Microsoft.Security/defenderForStorageSettings@2022-12-01-preview" + resource_id = "${azurerm_storage_account.example.id}/providers/Microsoft.Security/defenderForStorageSettings/current" + method = "PUT" ++ body = jsonencode({ + properties = { + isEnabled = true + malwareScanning = { + onUpload = { + isEnabled = true + capGBPerMonth = 5000 + } + } + sensitiveDataDiscovery = { + isEnabled = true + } + overrideSubscriptionLevelSettings = true + } + }) +} +``` ++> [!NOTE] +> The `azapi_resource_action` used here is an action that is specific to the configuration of Microsoft Defender for Storage. It's different from the typical resource declarations in Terraform, and it's used to perform specific actions on the resource, such as enabling or disabling features. ++**Modifying the monthly cap for malware scanning** ++To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value "-1". The default limit is set at 5,000 GB. ++**Disabling features** ++If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **False** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections. ++**Disabling the entire Defender for Storage plan** ++To disable the entire Defender for Storage plan for the storage account, you can use the following code snippet: ++``` +resource "azurerm_storage_account" "example" { ... 
} ++resource "azapi_resource_action" "disable_defender_for_Storage" { + type = "Microsoft.Security/defenderForStorageSettings@2022-12-01-preview" + resource_id = "${azurerm_storage_account.example.id}/providers/Microsoft.Security/defenderForStorageSettings/current" + method = "PUT" ++ body = jsonencode({ + properties = { + isEnabled = true + overrideSubscriptionLevelSettings = false + } + }) +} +``` ++You can change the value of `overrideSubscriptionLevelSettings` to **True** to disable Defender for Storage plan for the storage account under subscriptions with Defender for Storage enabled at the subscription level. If you want to keep some features enabled, you can modify the properties accordingly. +Learn more about the __[Microsoft.Security/defenderForStorageSettings](/rest/api/defenderforcloud/defender-for-storage/create)__ API documentation for further customization and control over your storage account's security settings. Additionally, you can find comprehensive details on the Terraform provider for Azure in the [Terraform AzureRM Provider documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs). + ### Bicep template - storage account To enable and configure Microsoft Defender for Storage at the storage account level using Bicep, add the following to your Bicep template: resource defenderForStorageSettings 'Microsoft.Security/DefenderForStorageSettin } ``` -To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the capGBPerMonth parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB. +**Modifying the monthly cap for malware scanning** ++To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. 
This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB. -If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the `isEnabled` value to **false** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections. +**Disabling features** -To disable the entire Defender plan for the storage account, set the `isEnabled` property value to **false** and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties. +If you want to turn off the On-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **False** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections. ++**Disabling the entire Defender for Storage plan** ++To disable the entire Defender plan for the storage account, set the `isEnabled` property value to **False** and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties. Learn more about the [Microsoft.Security/DefenderForStorageSettings API](/rest/api/defenderforcloud/defender-for-storage/create) documentation. > [!TIP] > Malware Scanning can be configured to send scanning results to the following: <br> **Event Grid custom topic** - for near-real time automatic response based on every scanning result. Learn more how to [configure malware scanning to send scanning events to an Event Grid custom topic](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-storage-account#setting-up-event-grid-for-malware-scanning). <br> **Log Analytics workspace** - for storing every scan result in a centralized log repository for compliance and audit. 
Learn more how to [configure malware scanning to send scanning results to a Log Analytics workspace](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-storage-account#setting-up-logging-for-malware-scanning). -Learn more on how to set up response for malware scanning results. +Learn more on how to [set up response for malware scanning results.](/azure/defender-for-cloud/defender-for-storage-configure-malware-scan) ### ARM template - storage account To enable and configure Microsoft Defender for Storage at the storage account le } ``` -To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the capGBPerMonth parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value -1. The default limit is set at 5,000 GB. +**Modifying the monthly cap for malware scanning** ++To modify the monthly threshold for malware scanning in your storage accounts, simply adjust the `capGBPerMonth` parameter to your preferred value. This parameter sets a cap on the maximum data that can be scanned for malware each month, per storage account. If you want to permit unlimited scanning, assign the value "-1". The default limit is set at 5,000 GB. -If you want to turn off the On-upload malware scanning or Sensitive data threat detection features, you can change the isEnabled value to false under the malwareScanning or sensitiveDataDiscovery properties sections. +**Disabling features** -To disable the entire Defender plan for the storage account, set the isEnabled property value to false and remove the malwareScanning and sensitiveDataDiscovery sections from the properties. 
+If you want to turn off the on-upload malware scanning or sensitive data threat detection features, you can change the `isEnabled` value to **False** under the `malwareScanning` or `sensitiveDataDiscovery` properties sections. ++**Disabling the entire Defender for Storage plan** ++To disable the entire Defender plan for the storage account, set the `isEnabled` property value to **False** and remove the `malwareScanning` and `sensitiveDataDiscovery` sections from the properties. ## Next steps -Learn more about the [Microsoft.Security/DefenderForStorageSettings](/rest/api/defenderforcloud/defender-for-storage/create) API documentation. +Learn more about the [Microsoft.Security/DefenderForStorageSettings](/rest/api/defenderforcloud/defender-for-storage/create) API documentation. + |
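The Terraform, Bicep, and ARM fragments above all drive the same `defenderForStorageSettings` properties (`isEnabled`, `capGBPerMonth`, `malwareScanning`, `sensitiveDataDiscovery`). A minimal sketch of composing that request body programmatically — the property layout mirrors the templates shown in the article for the `2022-12-01-preview` API and should be treated as illustrative rather than authoritative:

```python
# Sketch: build the `properties` body for a
# Microsoft.Security/defenderForStorageSettings PUT request.
# Property names follow the templates shown above; verify against the
# API reference before using in production.
import json

def defender_for_storage_properties(enabled=True, cap_gb_per_month=5000,
                                    malware_scanning=True,
                                    sensitive_data_discovery=True,
                                    override_subscription_settings=False):
    """Return the settings body; cap_gb_per_month=-1 means unlimited scanning."""
    return {
        "properties": {
            "isEnabled": enabled,
            "malwareScanning": {
                "onUpload": {
                    "isEnabled": malware_scanning,
                    "capGBPerMonth": cap_gb_per_month,
                }
            },
            "sensitiveDataDiscovery": {"isEnabled": sensitive_data_discovery},
            "overrideSubscriptionLevelSettings": override_subscription_settings,
        }
    }

# Unlimited malware scanning for this storage account:
body = defender_for_storage_properties(cap_gb_per_month=-1)
print(json.dumps(body, indent=2))
```

Setting `enabled=False` and dropping the `malwareScanning`/`sensitiveDataDiscovery` keys would correspond to disabling the whole plan, as described above.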
defender-for-cloud | Defender For Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md | Title: Microsoft Defender for Storage - the benefits and features- description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 06/15/2023 Last updated : 08/21/2023 Defender for Storage includes: ## Getting started -With a simple agentless setup at scale, you can [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions. +With a simple agentless setup at scale, you can [enable Defender for Storage](tutorial-enable-storage-plan.md) at the subscription or resource levels through the portal or programmatically. When enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions. > [!NOTE] > If you already have the Defender for Storage (classic) enabled and want to access the new security features and pricing, you'll need to [migrate to the new pricing plan](defender-for-storage-classic-migrate.md). With a simple agentless setup at scale, you can [enable Defender for Storage](.. 
|-|:-| |Release state:|General Availability (GA)| |Feature availability:|- Activity monitoring (security alerts) - General Availability (GA)<br>- Malware Scanning – Preview, **General Availability (GA) on September 1, 2023** <br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview|-|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\* <br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Malware Scanning is offered for free during the public preview but will **start being billed on September 1, 2023, at $0.15/GB (USD) of data ingested.** Customers are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per month per storage account and control costs using this feature.| +|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): Free during public preview\*\* <br><br>Above pricing applies to commercial clouds. 
Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Malware Scanning is offered for free during the public preview but will **start being billed on September 3, 2023, at $0.15/GB (USD) of data ingested.** Customers are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per month per storage account and control costs using this feature.| | Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring | |Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.| |Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the [classic plan](/azure/defender-for-cloud/defender-for-storage-classic))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts| Defender for Storage continuously analyzes data and control plane logs from prot ### Malware Scanning (powered by Microsoft Defender Antivirus) > [!NOTE]-> Malware Scanning is offered for free during public preview. 
**Billing will begin when generally available (GA) on September 1, 2023 and priced at $0.15 (USD)/GB of data scanned.** You are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per storage account per month and control costs. +> Malware Scanning is offered for free during public preview. **Billing will begin when generally available (GA) on September 3, 2023 and priced at $0.15 (USD)/GB of data scanned.** You are encouraged to use the "Monthly capping" feature to define the cap on GB scanned per storage account per month and control costs. Malware Scanning in Defender for Storage helps protect storage accounts from malicious content by performing a full malware scan on uploaded content in near real time, applying Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements to handle untrusted content. Every file type is scanned, and scan results are returned for every file. The Malware Scanning capability is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale. This is a configurable feature in the new Defender for Storage plan that is priced per GB scanned. In summary, Malware Scanning, which is only available on the new plan for Blob s In this article, you learned about Microsoft Defender for Storage. -- [Enable Defender for Storage](enable-enhanced-security.md)+- [Enable Defender for Storage](tutorial-enable-storage-plan.md) - Check out [common questions](faq-defender-for-storage.yml) about Defender for Storage.- |
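The pricing rows above combine three components: a $10 flat fee per storage account per month, a $0.1492 overage per 1 million transactions beyond 73 million, and (once billing starts) $0.15 per GB scanned up to the monthly cap. A small illustrative calculation, useful for sizing the "Monthly capping" value — the arithmetic reflects the rates quoted in this table, which may change; always confirm against the pricing page:

```python
# Sketch: estimate a monthly Defender for Storage bill from the rates
# described above. Illustrative only -- rates come from this article's
# pricing table and may not be current.
def estimate_monthly_cost(transactions, gb_scanned, cap_gb=5000):
    base = 10.0                                           # per storage account
    extra_millions = max(0, transactions - 73_000_000) / 1_000_000
    overage = extra_millions * 0.1492                     # transaction overage
    scanning = min(gb_scanned, cap_gb) * 0.15             # capped malware scanning
    return round(base + overage + scanning, 2)

# 80M transactions and 100 GB scanned, under the default 5,000 GB cap:
print(estimate_monthly_cost(80_000_000, 100))
```

Lowering `cap_gb` is exactly what the "Monthly capping" feature does: it bounds the scanning term of the bill regardless of how much data is uploaded.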
defender-for-cloud | Defender For Storage Malware Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md | Title: Malware scanning in Microsoft Defender for Storage description: Learn about the benefits and features of malware scanning in Microsoft Defender for Storage. Previously updated : 08/15/2023 Last updated : 08/23/2023 Some common use-cases and scenarios for malware scanning in Defender for Storage - **Machine learning training data:** the quality and security of the training data are critical for effective machine learning models. It's important to ensure these data sets are clean and safe, especially if they include user-generated content or data from external sources. + :::image type="content" source="media/defender-for-storage-malware-scan/malware-scan-tax-app-demo.gif" alt-text="animated GIF showing user-generated-content and data from external sources." lightbox="media/defender-for-storage-malware-scan/malware-scan-tax-app-demo.gif"::: + ## Prerequisites To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md). -You can [enable and configure Malware Scanning at scale](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription) for your subscriptions while maintaining granular control over configuring the feature for individual storage accounts. 
There are several ways to enable and configure Malware Scanning: [Azure built-in policy](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-at-scale-with-an-azure-built-in-policy) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#bicep-template) and [ARM template](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#arm-template), using the [Azure portal](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal), or directly with [REST API](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-with-rest-api). +You can [enable and configure Malware Scanning at scale](tutorial-enable-storage-plan.md) for your subscriptions while maintaining granular control over configuring the feature for individual storage accounts. 
There are several ways to enable and configure Malware Scanning: [Azure built-in policy](defender-for-storage-policy-enablement.md) (the recommended method), programmatically using Infrastructure as Code templates, including [Terraform](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#terraform-template), [Bicep](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription&branch=pr-en-us-248836#bicep-template), and [ARM](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#azure-resource-manager-template) templates, using the [Azure portal](defender-for-storage-azure-portal-enablement.md?tabs=enable-subscription), or directly with the [REST API](defender-for-storage-rest-api-enablement.md?tabs=enable-subscription). To enable and configure Malware Scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md). Learn how to configure Malware Scanning so that [every scan result is sent autom You may want to log your scan results for compliance evidence or investigating scan results. By setting up a Log Analytics Workspace destination, you can store every scan result in a centralized log repository that is easy to query. You can view the results by navigating to the Log Analytics destination workspace and looking for the `StorageMalwareScanningResults` table. -Learn more about [setting up Log Analytics results](../azure-monitor/logs/quick-create-workspace.md). +Learn more about [setting up logging for malware scanning](advanced-configurations-for-malware-scanning.md#setting-up-logging-for-malware-scanning). 
> [!TIP]-> We recommend you try a hands-on lab to try out Malware Scanning in Defender for Storage: the [Ninja](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/main/Labs/Modules/Module%2019%20-%20Defender%20for%20Storage.md) training instructions for detailed step-by-step instructions on how to test Malware Scanning end-to-end with setting up responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provide hands-on practical experience with its capabilities. +> We invite you to explore the Malware Scanning feature in Defender for Storage through our hands-on lab. Follow the [Ninja training](https://github.com/Azure/Microsoft-Defender-for-Cloud/blob/main/Labs/Modules/Module%2019%20-%20Defender%20for%20Storage.md) instructions for a detailed, step-by-step guide on how to set up and test Malware Scanning end-to-end, including configuring responses to scanning results. This is part of the 'labs' project that helps customers get ramped up with Microsoft Defender for Cloud and provides hands-on practical experience with its capabilities. ## Cost control By default, a limit of 5 TB (5,000 GB) is established if no specific capping mec > [!TIP] > You can set the capping mechanism on either individual storage accounts or across an entire subscription (every storage account on the subscription will be allocated the limit defined on the subscription level). -Follow [these steps](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal) to configure the capping mechanism. +Follow [these steps](tutorial-enable-storage-plan.md#set-up-and-configure-microsoft-defender-for-storage) to configure the capping mechanism. 
## Handling possible false positives Despite the scanning process, access to uploaded data remains unaffected, and th ## Next steps Learn more on how to [set up response for malware scanning](defender-for-storage-configure-malware-scan.md#setting-up-response-to-malware-scanning) results.++ |
defender-for-cloud | Defender For Storage Rest Api Enablement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-rest-api-enablement.md | We recommend that you enable Defender for Storage on the subscription level. Doi To enable and configure Microsoft Defender for Storage at the subscription level using the REST API, create a PUT request with this endpoint (replace the `subscriptionId` in the endpoint URL with your own Azure subscription ID): -**PUT** https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01 +``` +PUT +https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Security/pricings/StorageAccounts?api-version=2023-01-01 ++``` And add the following request body: ``` Learn more on how to [set up response for malware scanning](defender-for-storage ## Next steps -- Learn how to [enable and Configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).+- Learn how to [enable and configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md). ++ |
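A sketch of composing that subscription-level PUT request in code. The URL comes from the article; the body fields (`pricingTier`, `subPlan`) are the commonly used `Microsoft.Security/pricings` payload and are an assumption here — confirm them against the request body shown in the full article before sending:

```python
# Sketch: compose the PUT request for enabling Defender for Storage at the
# subscription level. Body fields are assumed, not quoted from the article.
API_VERSION = "2023-01-01"

def build_enable_request(subscription_id: str):
    url = (f"https://management.azure.com/subscriptions/{subscription_id}"
           f"/providers/Microsoft.Security/pricings/StorageAccounts"
           f"?api-version={API_VERSION}")
    body = {"properties": {"pricingTier": "Standard",
                           "subPlan": "DefenderForStorageV2"}}
    return url, body

url, body = build_enable_request("00000000-0000-0000-0000-000000000000")
print(url)
```

Sending the request (with a bearer token from Azure AD) is then a single authenticated PUT to `url` with `body` as JSON.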
defender-for-cloud | Defender For Storage Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-test.md | -After you [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md), you can test the service and run a proof of concept to familiarize yourself with its features and validate the advanced security capabilities effectively protect your storage accounts by generating real security alerts. This guide will walk you through testing various aspects of the security coverage offered by Defender for Storage. +After you [enable Microsoft Defender for Storage](tutorial-enable-storage-plan.md), you can test the service and run a proof of concept to familiarize yourself with its features and validate that the advanced security capabilities effectively protect your storage accounts by generating real security alerts. This guide will walk you through testing various aspects of the security coverage offered by Defender for Storage. There are three main components to test: To test the sensitive data threat detection feature by uploading test data that :::image type="content" source="media/defender-for-storage-test/testing-sensitivity-2.png" alt-text="Screenshot showing how to test a file in Malware Scanning for Social Security Number information."::: - 1. Save the file with the updated information. -- 1. Upload the file you created to the **test-container** in the storage account. + 1. Save and upload the file to the **test-container** in the storage account. :::image type="content" source="media/defender-for-storage-test/testing-sensitivity-3.png" alt-text="Screenshot showing how to upload a file in Malware Scanning to test for Social Security Number information."::: To test the sensitive data threat detection feature by uploading test data that 1. Enable Defender for Storage on the storage account with the Sensitive Data Discovery feature enabled. 
- Allow 1-2 hours for the Sensitive Data Discovery engine to scan the storage account. Be aware that the process may take up to 24 hours to complete. + Sensitive data discovery scans for sensitive information within the first 24 hours when enabled at the storage account level or when a new storage account is created under a subscription protected by this feature at the subscription level. Following this initial scan, the service will scan for sensitive information every 7 days from the time of enablement. ++ > [!NOTE] + > If you enable the feature and then add sensitive data on the days after enablement, the next scan for that newly added data will occur within the next 7-day scanning cycle, depending on the day of the week the data was added. 1. Change access level: Learn more about: + |
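The updated cadence above (an initial sensitive-data scan within 24 hours of enablement, then a recurring scan every 7 days from the enablement time) can be expressed as simple scheduling arithmetic. A sketch, purely illustrative — the service controls the actual schedule:

```python
# Sketch: compute upcoming sensitive-data-discovery scan deadlines from the
# cadence described above. Illustrative arithmetic only.
from datetime import datetime, timedelta

def scan_windows(enabled_at: datetime, count: int = 3):
    """First scan completes within 24h; later scans run on a 7-day cycle."""
    windows = [enabled_at + timedelta(hours=24)]
    windows += [enabled_at + timedelta(days=7 * i) for i in range(1, count)]
    return windows

enabled = datetime(2023, 8, 21, 9, 0)
for deadline in scan_windows(enabled):
    print(deadline.isoformat())
```

This also illustrates the note above: data added between cycles is only picked up at the next 7-day boundary, so the wait depends on when in the cycle the data lands.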
defender-for-cloud | Enable Agentless Scanning Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md | If you have Defender for Servers P2 already enabled and agentless scanning is tu After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud. +## Enable agentless scanning in GCP ++1. From Defender for Cloud's menu, select **Environment settings**. +1. Select the relevant project or organization. +1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**. ++ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-plan.png" alt-text="Screenshot that shows where to select the plan for GCP projects." lightbox="media/enable-agentless-scanning-vms/gcp-select-plan.png"::: ++1. In the settings pane, turn on **Agentless scanning**. ++ :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-agentless.png" alt-text="Screenshot that shows where to select agentless scanning." lightbox="media/enable-agentless-scanning-vms/gcp-select-agentless.png"::: ++1. Select **Save and Next: Configure Access**. +1. Copy the onboarding script. +1. Run the onboarding script in the GCP organization/project scope (GCP portal or gcloud CLI). +1. Select **Next: Review and generate**. +1. Select **Update**. + ## Exclude machines from scanning Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped. |
defender-for-cloud | Episode Thirty Five | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-five.md | Last updated 08/08/2023 # Security alert correlation -**Episode description**: In this episode of Defender for Cloud in the Field, Daniel Davrayev joins Yuri Diogenes to talk about security alert correlation capability in Defender for Cloud. Daniel talks about the importance of have a built-in capability to correlate alerts in Defender for Cloud, how this saves time for SOC analysts to investigate alert and respond to potential threats. Daniel also explains how data correlation works and demonstrate how this correlation appears in Defender for Cloud dashboard as a security incident. +**Episode description**: In this episode of Defender for Cloud in the Field, Daniel Davrayev joins Yuri Diogenes to talk about the security alert correlation capability in Defender for Cloud. Daniel talks about the importance of having a built-in capability to correlate alerts in Defender for Cloud and how this capability saves time for SOC analysts when they investigate alerts and respond to potential threats. Daniel also explains how data correlation works and demonstrates how this correlation appears in the Defender for Cloud dashboard as a security incident. > [!VIDEO https://aka.ms/docs/player?id=6573561d-70a6-4b4c-ad16-9efe747c9a61] Last updated 08/08/2023 ## Next steps > [!div class="nextstepaction"]-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md) +> [Defender CSPM support for GCP and more updates](episode-thirty-six.md) |
defender-for-cloud | Episode Thirty Seven | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-seven.md | + + Title: Capabilities to counter identity-based supply chain attacks | Defender for Cloud in the Field +description: Learn about Defender for Cloud's capability to counter identity-based supply chain attacks. + Last updated : 08/29/2023+++# Capabilities to counter identity-based supply chain attacks ++**Episode description**: In this episode of Defender for Cloud in the Field, Security Researcher Hagai Kestenberg joins Yuri Diogenes to talk about Defender for Cloud capabilities to counter identity-based supply chain attacks. Hagai explains the different types of supply chain attacks and focuses on the risks of identity-based supply chain attacks. Hagai makes recommendations to mitigate this type of attack and explains the new capability in Defender for Resource Manager that can be used to identify this type of attack. Hagai also demonstrates the new alert generated by Defender for Resource Manager when this type of attack is identified. 
++> [!VIDEO https://aka.ms/docs/player?id=d69fb652-46a7-4f8c-8632-8cf2cbc3685a] ++- [01:41](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=01m41s) - Intro +- [04:04](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=04m04s) - Understanding identity-based supply chain attacks +- [06:50](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=06m50s) - Identity-based supply chain attacks sample scenario +- [08:26](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=08m26s) - Best practices to prevent identity-based supply chain attacks +- [10:29](/shows/mdc-in-the-field/counter-identity-based-supply-chain-attacks#time=10m29s) - Demonstration ++## Recommended resources ++- [Learn more](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/announcing-microsoft-defender-for-cloud-capabilities-to-counter/ba-p/3876012) +- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) +- Learn more about [Microsoft Security](https://msft.it/6002T9HQY) ++- Follow us on social media: ++ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/) + - [Twitter](https://twitter.com/msftsecurity) ++- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) ++## Next steps ++> [!div class="nextstepaction"] +> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md) |
defender-for-cloud | Episode Thirty Six | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-thirty-six.md | + + Title: Defender CSPM support for GCP and more updates | Defender for Cloud in the Field +description: Learn about Defender for CSPM's support for GCP and more updates for Defender for Cloud. + Last updated : 08/29/2023+++# Defender CSPM support for GCP and more updates ++**Episode description**: In this episode of Defender for Cloud in the Field, Amit Biton joins Yuri Diogenes to talk about the new Defender CSPM support for GCP. Amit talks about the recent investments in multicloud and the alignment with the Microsoft CNAPP strategy. Amit covers the capabilities that were released in Defender CSPM to cover GCP, including the new Microsoft Cloud Security Benchmark for GCP. Amit also demonstrates the use of Attack Path and Cloud Security Explorer in a multicloud environment. ++> [!VIDEO https://aka.ms/docs/player?id=673a8d91-3b0e-4bfb-986c-888ae7532320] ++- [01:23](/shows/mdc-in-the-field/support-gcp#time=01m23s) - Overview of the new announcements for multicloud +- [05:09](/shows/mdc-in-the-field/support-gcp#time=05m09s) - Microsoft CNAPP strategy +- [08:55](/shows/mdc-in-the-field/support-gcp#time=08m55s) - Agentless capability +- [12:54](/shows/mdc-in-the-field/support-gcp#time=12m54s) - Demonstration ++## Recommended resources ++- [Learn more](/azure/defender-for-cloud/concept-cloud-security-posture-management) +- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) +- Learn more about [Microsoft Security](https://msft.it/6002T9HQY) ++- Follow us on social media: ++ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/) + - [Twitter](https://twitter.com/msftsecurity) ++- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) ++## Next steps ++> [!div class="nextstepaction"] +> [Capabilities to counter identity-based supply chain 
attacks](episode-thirty-seven.md) |
defender-for-cloud | Export To Siem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md | Before you set up the Azure services for exporting alerts, make sure you have: - if it **has the SecurityCenterFree solution**, you'll need a minimum of read permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/read` - if it **doesn't have the SecurityCenterFree solution**, you'll need write permissions for the workspace solution: `Microsoft.OperationsManagement/solutions/action` --> -### Step 1. Set up the Azure services +### Step 1: Set up the Azure services You can set up your Azure environment to support continuous export using either: You can set up your Azure environment to support continuous export using either: For more detailed instructions, see [Prepare Azure resources for exporting to Splunk and QRadar](export-to-splunk-or-qradar.md). -### Step 2. Connect the event hub to your preferred solution using the built-in connectors +### Step 2: Connect the event hub to your preferred solution using the built-in connectors Each SIEM platform has a tool to enable it to receive alerts from Azure Event Hubs. Install the tool for your platform to start receiving alerts. |
defender-for-cloud | Export To Splunk Or Qradar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-splunk-or-qradar.md | In order to stream Microsoft Defender for Cloud security alerts to IBM QRadar an To configure the Azure resources for QRadar and Splunk in the Azure portal: -## Step 1. Create an Event Hubs namespace and event hub with send permissions +## Step 1: Create an Event Hubs namespace and event hub with send permissions 1. In the [Event Hubs service](../event-hubs/event-hubs-create.md), create an Event Hubs namespace: 1. Select **Create**. To configure the Azure resources for QRadar and Splunk in the Azure portal: 1. Select **Create** to create the policy. :::image type="content" source="media/export-to-siem/create-shared-access-policy.png" alt-text="Screenshot of creating a shared policy in Microsoft Event Hubs." lightbox="media/export-to-siem/create-shared-access-policy.png"::: -## Step 2. **For streaming to QRadar SIEM** - Create a Listen policy +## Step 2: **For streaming to QRadar SIEM** - Create a Listen policy 1. Select **Add**, enter a unique policy name, and select **Listen**. 1. Select **Create** to create the policy. To configure the Azure resources for QRadar and Splunk in the Azure portal: :::image type="content" source="media/export-to-siem/create-shared-listen-policy.png" alt-text="Screenshot of creating a listen policy in Microsoft Event Hubs." lightbox="media/export-to-siem/create-shared-listen-policy.png"::: -## Step 3. Create a consumer group, then copy and save the name to use in the SIEM platform +## Step 3: Create a consumer group, then copy and save the name to use in the SIEM platform 1. In the Entities section of the Event Hubs event hub menu, select **Event Hubs** and select the event hub you created. To configure the Azure resources for QRadar and Splunk in the Azure portal: 1. Select **Consumer group**. -## Step 4. 
Enable continuous export for the scope of the alerts +## Step 4: Enable continuous export for the scope of the alerts 1. In the Azure search box, search for "policy" and go to the Policy. 1. In the Policy menu, select **Definitions**. To configure the Azure resources for QRadar and Splunk in the Azure portal: 1. Select **Review and Create** and **Create** to finish the process of defining the continuous export to Event Hubs. - Notice that when you activate continuous export policy on the tenant (root management group level), it automatically streams your alerts on any **new** subscription that will be created under this tenant. -## Step 5. **For streaming alerts to QRadar SIEM** - Create a storage account +## Step 5: **For streaming alerts to QRadar SIEM** - Create a storage account 1. Go to the Azure portal, select **Create a resource**, and select **Storage account**. If that option isn't shown, search for "storage account". 1. Select **Create**. To configure the Azure resources for QRadar and Splunk in the Azure portal: :::image type="content" source="media/export-to-siem/copy-storage-account-key.png" alt-text="Screenshot of copying storage account key." lightbox="media/export-to-siem/copy-storage-account-key.png"::: -## Step 6. **For streaming alerts to Splunk SIEM** - Create an Azure AD application +## Step 6: **For streaming alerts to Splunk SIEM** - Create an Azure AD application 1. In the menu search box, search for "Azure Active Directory" and go to Azure Active Directory. 1. Go to the Azure portal, select **Create a resource**, and select **Azure Active Directory**. If that option isn't shown, search for "active directory". To configure the Azure resources for QRadar and Splunk in the Azure portal: 1. After the secret is created, copy the Secret ID and save it for later use together with the Application ID and Directory (tenant) ID. -## Step 7. 
**For streaming alerts to Splunk SIEM** - Allow Azure AD to read from the event hub +## Step 7: **For streaming alerts to Splunk SIEM** - Allow Azure AD to read from the event hub 1. Go to the Event Hubs namespace you created. 1. In the menu, go to **Access control**. |
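The Send and Listen shared access policies created in the steps above each expose a connection string, and the SIEM side of the setup is largely a matter of copying its parts into the right fields. A minimal sketch of splitting one into its components — the namespace `contoso-ns`, hub name `defender-alerts`, and key value below are placeholders, not values from the article:

```python
def parse_event_hubs_connection_string(connection_string: str) -> dict:
    """Split an Event Hubs connection string into its key/value fields."""
    fields = {}
    for segment in connection_string.strip().rstrip(";").split(";"):
        key, _, value = segment.partition("=")  # split on the first '=' only
        fields[key] = value
    return fields

# Placeholder connection string shaped like the one copied from the portal.
example = (
    "Endpoint=sb://contoso-ns.servicebus.windows.net/;"
    "SharedAccessKeyName=Listen;"
    "SharedAccessKey=fakeKeyValue123=;"
    "EntityPath=defender-alerts"
)
fields = parse_event_hubs_connection_string(example)
print(fields["Endpoint"], fields["SharedAccessKeyName"], fields["EntityPath"])
```

Splitting only on the first `=` matters because the key material itself is base64 and often ends in `=`.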
defender-for-cloud | How To Manage Attack Path | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-attack-path.md | Title: Identify and remediate attack paths description: Learn how to manage your attack path analysis and build queries to locate vulnerabilities in your multicloud environment. Previously updated : 07/10/2023 Last updated : 08/10/2023 # Identify and remediate attack paths You can check out the full list of [Attack path names and descriptions](attack-p | Aspect | Details | |--|--|-| Release state | GA (General Availability) | +| Release state | GA (General Availability) for Azure, AWS <Br> Preview for GCP | | Prerequisites | - [Enable agentless scanning](enable-vulnerability-assessment-agentless.md), or [Enable Defender for Server P1 (which includes MDVM)](defender-for-servers-introduction.md) or [Defender for Server P2 (which includes MDVM and Qualys)](defender-for-servers-introduction.md). <br> - [Enable Defender CSPM](enable-enhanced-security.md) <br> - Enable agentless container posture extension in Defender CSPM, or [Enable Defender for Containers](defender-for-containers-enable.md), and install the relevant agents in order to view attack paths that are related to containers. This also gives you the ability to [query](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) containers data plane workloads in security explorer. 
| | Required plans | - Defender Cloud Security Posture Management (CSPM) enabled | | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | +| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS, GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) | ## Features of the attack path overview page |
defender-for-cloud | How To Manage Cloud Security Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-manage-cloud-security-explorer.md | Title: Build queries with cloud security explorer description: Learn how to build queries in cloud security explorer to find vulnerabilities that exist on your multicloud environment. Previously updated : 08/10/2023 Last updated : 08/16/2023 # Build queries with cloud security explorer Defender for Cloud's contextual security capabilities assists security teams in Use the cloud security explorer, to proactively identify security risks in your cloud environment by running graph-based queries on the cloud security graph, which is Defender for Cloud's context engine. You can prioritize your security team's concerns, while taking your organization's specific context and conventions into account. -With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure and AWS). +With the cloud security explorer, you can query all of your security issues and environment context such as assets inventory, exposure to internet, permissions, and lateral movement between resources and across multiple clouds (Azure AWS, and GCP). Learn more about [the cloud security graph, attack path analysis, and the cloud security explorer](concept-attack-path.md). Learn more about [the cloud security graph, attack path analysis, and the cloud | Release state | GA (General Availability) | | Required plans | - Defender Cloud Security Posture Management (CSPM) enabled<br>- Defender for Servers P2 customers can use the explorer UI to query for keys and secrets, but must have Defender CSPM enabled to get the full value of the Explorer. 
| | Required roles and permissions: | - **Security Reader** <br> - **Security Admin** <br> - **Reader** <br> - **Contributor** <br> - **Owner** |-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/no-icon.png"::: Commercial clouds (GCP) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | +| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds (Azure, AWS) <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds - GCP (Preview) <br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | ## Prerequisites |
defender-for-cloud | Kubernetes Workload Protections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md | You can enable the Azure Policy for Kubernetes by one of two ways: - Enable for all current and future clusters using plan/connector settings - [Enabling for Azure subscriptions or on-premises](#enabling-for-azure-subscriptions-or-on-premises) - [Enabling for GCP projects](#enabling-for-gcp-projects)-- [Enable for existing clusters using recommendations (specific clusters or all clusters)](#manually-deploy-the-add-on-to-clusters-using-recommendations-on-specific-clusters).+- [Deploy Azure Policy for Kubernetes on existing clusters](#deploy-azure-policy-for-kubernetes-on-existing-clusters) -### Enable for all current and future clusters using plan/connector settings +### Enable Azure Policy for Kubernetes for all current and future clusters using plan/connector settings > [!NOTE] > When you enable this setting, the Azure Policy for Kubernetes pods are installed on the cluster. Doing so allocates a small amount of CPU and memory for the pods to use. This allocation might reach maximum capacity, but it doesn't affect the rest of the CPU and memory on the resource. You can enable the Azure Policy for Kubernetes by one of two ways: #### Enabling for Azure subscriptions or on-premises -When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for the Azure Kubernetes Service, and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. If you disable the setting on initial configuration you can enable it afterwards manually. +When you enable Microsoft Defender for Containers, the "Azure Policy for Kubernetes" setting is enabled by default for the Azure Kubernetes Service, and for Azure Arc-enabled Kubernetes clusters in the relevant subscription. 
If you disable the setting on initial configuration, you can enable it afterwards manually. If you disabled the "Azure Policy for Kubernetes" settings under the containers plan, you can follow the below steps to enable it across all clusters in your subscription: If you disabled the "Azure Policy for Kubernetes" settings under the containers #### Enabling for GCP projects -When you enable Microsoft Defender for Containers on a GCP connector, the "Azure Policy Extension for Azure Arc" setting is enabled by default for the Google Kubernetes Engine in the relevant project. If you disable the setting on initial configuration you can enable it afterwards manually. +When you enable Microsoft Defender for Containers on a GCP connector, the "Azure Policy Extension for Azure Arc" setting is enabled by default for the Google Kubernetes Engine in the relevant project. If you disable the setting on initial configuration, you can enable it afterwards manually. If you disabled the "Azure Policy Extension for Azure Arc" settings under the GCP connector, you can follow the below steps to [enable it on your GCP connector](defender-for-containers-enable.md?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-gke&preserve-view=true#protect-google-kubernetes-engine-gke-clusters). 
-### Manually deploy the add-on to clusters using recommendations on specific clusters +### Deploy Azure Policy for Kubernetes on existing clusters -You can manually configure the Kubernetes data plane hardening add-on, or extension on specific cluster through the Recommendations page using the following recommendations: --- **Azure Recommendations** - `"Azure Policy add-on for Kubernetes should be installed and enabled on your clusters"`, or `"Azure policy extension for Kubernetes should be installed and enabled on your clusters"`.-- **GCP Recommendation** - `"GKE clusters should have Microsoft Defender's extension for Azure Arc installed"`.-- **AWS Recommendation** - `"EKS clusters should have Microsoft Defender's extension for Azure Arc installed"`.--Once enabled, the hardening recommendation becomes available (some of the recommendations require another configuration to work). +You can manually configure the Azure Policy for Kubernetes on existing Kubernetes clusters through the Recommendations page. Once enabled, the hardening recommendations become available (some of the recommendations require another configuration to work). > [!NOTE]-> For AWS it isn't possible to do onboarding at scale using the connector, but it can be installed on all clusters or specific clusters using the recommendation ["EKS clusters should have Microsoft Defender's extension for Azure Arc installed"](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/38307993-84fb-4636-8ce7-3a64466bb5cc). 
+> For AWS it isn't possible to do onboarding at scale using the connector, but it can be installed on all existing clusters or on specific clusters using the recommendation [Azure Arc-enabled Kubernetes clusters should have the Azure policy extension for Kubernetes should be installed](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/0642d770-b189-42ef-a2ce-9dcc3ec6c169/subscriptionIds~/%5B%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%2204cd6fff-ef34-415e-b907-3c90df65c0e5%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null). -**To deploy the add-on to specified clusters**: +**To deploy the** **Azure Policy for Kubernetes** **to specified clusters**: 1. From the recommendations page, search for the relevant recommendation:- - **Azure** - `Azure Kubernetes Service clusters should have the Azure Policy add-on for Kubernetes installed` or `Azure policy extension for Kubernetes should be installed and enabled on your clusters` - - **AWS** - `EKS clusters should have Microsoft Defender's extension for Azure Arc installed` - - **GCP** - `GKE clusters should have Microsoft Defender's extension for Azure Arc installed` - :::image type="content" source="./media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png" alt-text="Screenshot showing the Azure Kubernetes service clusters recommendation." lightbox="media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png"::: + - **Azure -** `"Azure Kubernetes Service clusters should have the Azure Policy add-on for Kubernetes installed"` + - **GCP** - `"GKE clusters should have the Azure Policy extension"`. + - **AWS and On-premises** - `"Azure Arc-enabled Kubernetes clusters should have the Azure policy extension for Kubernetes should be installed"`. 
+ :::image type="content" source="./media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png" alt-text="Screenshot showing the Azure Kubernetes service clusters recommendation." lightbox="media/kubernetes-workload-protections/azure-kubernetes-service-clusters-recommendation.png"::: - > [!TIP] - > The recommendation is included in five different security controls and it doesn't matter which one you select in the next step. + > [!TIP] + > The recommendation is included in different security controls, and it doesn't matter which one you select in the next step. 1. From any of the security controls, select the recommendation to see the resources on which you can install the add-on. Once enabled, the hardening recommendation becomes available (some of the recomm ## View and configure the bundle of recommendations -Approximately 30 minutes after the add-on installation completes, Defender for Cloud shows the clusters’ health status for the following recommendations, each in the relevant security control as shown: +Approximately 30 minutes after the Azure Policy for Kubernetes installation completes, Defender for Cloud shows the clusters’ health status for the following recommendations, each in the relevant security control as shown: > [!NOTE]-> If you're installing the add-on/extension for the first time, these recommendations will appear as new additions in the list of recommendations. +> If you're installing the Azure Policy for Kubernetes for the first time, these recommendations will appear as new additions in the list of recommendations. > [!TIP] > Some recommendations have parameters that must be customized via Azure Policy to use them effectively. For example, to benefit from the recommendation **Container images should be deployed only from trusted registries**, you'll have to define your trusted registries. 
If you don't enter the necessary parameters for the recommendations that require configuration, your workloads will be shown as unhealthy. |
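The per-environment recommendation names listed above are easy to mix up. As an illustration only — the helper is hypothetical, not part of the product, and the AWS/on-premises name is lightly cleaned up from the quoted text — they can be paired with the Azure Resource Graph query pattern for finding clusters that still fail a given recommendation:

```python
# Recommendation display names quoted in the section above (illustrative).
POLICY_RECOMMENDATION = {
    "Azure": "Azure Kubernetes Service clusters should have the Azure Policy "
             "add-on for Kubernetes installed",
    "GCP": "GKE clusters should have the Azure Policy extension",
    "AWS": "Azure Arc-enabled Kubernetes clusters should have the Azure "
           "policy extension for Kubernetes installed",
}

def unhealthy_clusters_query(environment: str) -> str:
    """Build a Resource Graph query for clusters failing the recommendation."""
    name = POLICY_RECOMMENDATION[environment]
    return (
        "securityresources\n"
        '| where type == "microsoft.security/assessments"\n'
        f'| where properties.displayName == "{name}"\n'
        '| where properties.status.code == "Unhealthy"'
    )

print(unhealthy_clusters_query("GCP"))
```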
defender-for-cloud | Management Groups Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/management-groups-roles.md | You can add subscriptions to the management group that you created. Once the Azure roles have been assigned to the users, the tenant administrator should remove itself from the user access administrator role. -1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Active Directory admin center](https://aad.portal.azure.com). +1. Sign in to the [Azure portal](https://portal.azure.com). 2. In the navigation list, select **Azure Active Directory** and then select **Properties**. |
defender-for-cloud | Multi Factor Authentication Enforcement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/multi-factor-authentication-enforcement.md | Title: Security recommendations for multi-factor authentication description: Learn how to enforce multi-factor authentication for your Azure subscriptions using Microsoft Defender for Cloud Previously updated : 06/28/2023 Last updated : 08/22/2023 -# Manage multi-factor authentication (MFA) enforcement on your subscriptions +# Manage multi-factor authentication (MFA) on your subscriptions -If you're using passwords, only to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO). +If you're using passwords only to authenticate your users, you're leaving an attack vector open. Users often use weak passwords or reuse them for multiple services. With [MFA](https://www.microsoft.com/security/business/identity/mfa) enabled, your accounts are more secure, and users can still authenticate to almost any application with single sign-on (SSO). -There are multiple ways to enable MFA for your Azure Active Directory (AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud. +There are multiple ways to enable MFA for your Azure Active Directory (Azure AD) users based on the licenses that your organization owns. This page provides the details for each in the context of Microsoft Defender for Cloud. ## MFA and Microsoft Defender for Cloud Defender for Cloud places a high value on MFA. The security control that contributes the most to your secure score is **Enable MFA**. 
-The recommendations in the Enable MFA control ensure you're meeting the recommended practices for users of your subscriptions: +The following recommendations in the Enable MFA control ensure you're meeting the recommended practices for users of your subscriptions: - Accounts with owner permissions on Azure resources should be MFA enabled - Accounts with write permissions on Azure resources should be MFA enabled - Accounts with read permissions on Azure resources should be MFA enabled -There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, conditional access (CA) policy. ++There are three ways to enable MFA and be compliant with the two recommendations in Defender for Cloud: security defaults, per-user assignment, and conditional access (CA) policy. ### Free option - security defaults Customers with Microsoft 365 can use **Per-user assignment**. In this scenario, ### MFA for Azure AD Premium customers -For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you'll need [Azure Active Directory (AD) tenant permissions](../active-directory/roles/permissions-reference.md). +For an improved user experience, upgrade to Azure AD Premium P1 or P2 for **conditional access (CA) policy** options. To configure a CA policy, you need [Azure Active Directory (Azure AD) tenant permissions](../active-directory/roles/permissions-reference.md). Your CA policy must: Learn more in the [Azure Conditional Access documentation](../active-directory/c ## Identify accounts without multi-factor authentication (MFA) enabled -You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or using Azure Resource Graph. 
+You can view the list of user accounts without MFA enabled from either the Defender for Cloud recommendations details page, or by using the Azure Resource Graph. ### View the accounts without MFA enabled in the Azure portal To see which accounts don't have MFA enabled, use the following Azure Resource G 1. Open **Azure Resource Graph Explorer**. - :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Launching Azure Resource Graph Explorer** recommendation page" ::: + :::image type="content" source="./media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png" alt-text="Screenshot showing launching the Azure Resource Graph Explorer** recommendation page." lightbox="media/multi-factor-authentication-enforcement/opening-resource-graph-explorer.png"::: 1. Enter the following query and select **Run query**. - ```kusto + ``` securityresources- | where type == "microsoft.security/assessments" - | where properties.displayName contains "Accounts with owner permissions on Azure resources should be MFA enabled" - | where properties.status.code == "Unhealthy" + | where type =~ "microsoft.security/assessments/subassessments" + | where id has "assessments/dabc9bc4-b8a8-45bd-9a5a-43000df8aa1c" or id has "assessments/c0cb17b2-0607-48a7-b0e0-903ed22de39b" or id has "assessments/6240402e-f77c-46fa-9060-a7ce53997754" + | parse id with start "/assessments/" assessmentId "/subassessments/" userObjectId + | summarize make_list(userObjectId) by strcat(tostring(properties.displayName), " (", assessmentId, ")") + | project ["Recommendation Name"] = Column1 , ["Account ObjectIDs"] = list_userObjectId ``` 1. The `additionalData` property reveals the list of account object IDs for accounts that don't have MFA enforced. > [!NOTE]- > The accounts are shown as object IDs rather than account names to protect the privacy of the account holders. 
+ > The 'Account ObjectIDs' column contains the list of account object IDs for accounts that don't have MFA enforced per recommendation. ++ > [!TIP] + > Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get). ++## Limitations ++- Conditional Access feature to enforce MFA on external users/tenants isn't supported yet. +- Conditional Access policy applied to Azure AD roles (such as all global admins, external users, external domain, etc.) isn't supported yet. +- External MFA solutions such as Okta, Ping, Duo, and more aren't supported within the identity MFA recommendations. -> [!TIP] -> Alternatively, you can use the Defender for Cloud REST API method [Assessments - Get](/rest/api/defenderforcloud/assessments/get). ## Next steps -To learn more about recommendations that apply to other Azure resource types, see the following article: +To learn more about recommendations that apply to other Azure resource types, see the following articles: - [Protecting your network in Microsoft Defender for Cloud](protect-network-resources.md) - Check out [common questions](faq-general.yml) about MFA. |
defender-for-cloud | Plan Multicloud Security Determine Multicloud Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-multicloud-security-determine-multicloud-dependencies.md | In Defender for Cloud, you enable specific plans to get Cloud Workload Platform - [Defender for Containers](./defender-for-containers-introduction.md): Help secure your Kubernetes clusters with security recommendations and hardening, vulnerability assessments, and runtime protection. - [Defender for SQL](./defender-for-sql-usage.md): Protect SQL databases running in AWS and GCP. -### What agent do I need? +### What extension do I need? -The following table summarizes agent requirements for CWPP. +The following table summarizes extension requirements for CWPP. -| Agent |Defender for Servers|Defender for Containers|Defender fo SQL on Machines| +| Extension |Defender for Servers|Defender for Containers|Defender for SQL on Machines| |::|::|::|::| |Azure Arc Agent | ✔ | ✔ | ✔ |-|Microsoft Defender for Endpoint extension |✔| -|Vulnerability assessment| ✔| | +|Microsoft Defender for Endpoint extension |✔||| +|Vulnerability assessment| ✔| || +|Agentless Disk Scanning| ✔ | ✔ || |Log Analytics or Azure Monitor Agent (preview) extension|✔| |✔| |Defender agent| | ✔| | |Azure Policy for Kubernetes | | ✔| | The following components and requirements are needed to receive full protection - **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them. 
- The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.-To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. + To autoprovision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. - **Defender for Endpoint capabilities**: The [Microsoft Defender for Endpoint](./integration-defender-for-endpoint.md?tabs=linux) agent provides comprehensive endpoint detection and response (EDR) capabilities. - **Vulnerability assessment**: Using either the integrated [Qualys vulnerability scanner](./deploy-vulnerability-assessment-vm.md), or the [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) solution. - **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines. #### Check networking requirements -Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Auto-provisioning is enabled by default. +Machines must meet [network requirements](../azure-arc/servers/network-requirements.md?tabs=azure-cloud) before onboarding the agents. Autoprovisioning is enabled by default. 
### Defender for Containers To receive the full benefits of Defender for SQL on your multicloud workload, yo - **Azure Arc agent**: AWS and GCP machines connect to Azure using Azure Arc. The Azure Arc agent connects them. - The Azure Arc agent is needed to read security information on the host level and allow Defender for Cloud to deploy the agents/extensions required for complete protection.- - To auto-provision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. + - To autoprovision the Azure Arc agent, the OS configuration agent on [GCP VM instances](./quickstart-onboard-gcp.md?pivots=env-settings) and the AWS Systems Manager (SSM) agent for [AWS EC2 instances](./quickstart-onboard-aws.md?pivots=env-settings) must be configured. [Learn more](../azure-arc/servers/agent-overview.md) about the agent. - **Log Analytics agent/[Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) (in preview)**: Collects security-related configuration information and event logs from machines - **Automatic SQL server discovery and registration**: Supports automatic discovery and registration of SQL servers |
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
defender-for-cloud | Quickstart Onboard Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md | In this section of the wizard, you select the Defender for Cloud plans that you 1. By default, the **Containers** plan is set to **On**. This setting is necessary to have Defender for Containers protect your AWS EKS clusters. Ensure that you've fulfilled the [network requirements](./defender-for-containers-enable.md?pivots=defender-for-container-eks&source=docs&tabs=aks-deploy-portal%2ck8s-deploy-asc%2ck8s-verify-asc%2ck8s-remove-arc%2caks-removeprofile-api#network-requirements) for the Defender for Containers plan. > [!NOTE]- > Azure Arc-enabled Kubernetes, the Azure Arc extension for Microsoft Defender, and the Azure Arc extension for Azure Policy should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Azure Arc, if necessary), as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). + > Azure Arc-enabled Kubernetes, the Azure Arc extensions for Defender agent, and Azure Policy for Kubernetes should be installed. Use the dedicated Defender for Cloud recommendations to deploy the extensions (and Azure Arc, if necessary), as explained in [Protect Amazon Elastic Kubernetes Service clusters](defender-for-containers-enable.md?tabs=defender-for-container-eks). Optionally, select **Configure** to edit the configuration as required. If you choose to turn off this configuration, the **Threat detection (control plane)** feature is also disabled. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md). |
defender-for-cloud | Quickstart Onboard Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md | To complete this quickstart, you need: | Regions: | Australia East, Central US, West Europe | | Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Microsoft Azure operated by 21Vianet) | -During the preview, the maximum number of GitHub repositories that you can onboard to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 GitHub repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded. --If your organization is interested in onboarding more than 2,000 GitHub repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding). ## Connect your GitHub account |
defender-for-cloud | Regulatory Compliance Dashboard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/regulatory-compliance-dashboard.md | When you enable Defender for Cloud on an Azure subscription, the [Microsoft clou The regulatory compliance dashboard shows the status of all the assessments within your environment for your chosen standards and regulations. As you act on the recommendations and reduce risk factors in your environment, your compliance posture improves. +> [!TIP] +> Compliance data from Defender for Cloud now seamlessly integrates with [Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager), allowing you to centrally assess and manage compliance across your organization's entire digital estate. When you add any standard to your compliance dashboard (including compliance standards monitoring other clouds like AWS and GCP), the resource-level compliance data is automatically surfaced in Compliance Manager for the same standard. Compliance Manager thus provides improvement actions and status across your cloud infrastructure and all other digital assets in this central tool. For more information, see [Multicloud support in Microsoft Purview Compliance Manager](/microsoft-365/compliance/compliance-manager-multicloud). 
+ In this tutorial you'll learn how to: > [!div class="checklist"]-> * Evaluate your regulatory compliance using the regulatory compliance dashboard -> * Check Microsoft’s compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products -> * Improve your compliance posture by taking action on recommendations -> * Download PDF/CSV reports as well as certification reports of your compliance status -> * Setup alerts on changes to your compliance status -> * Export your compliance data as a continuous stream and as weekly snapshots +> +> - Evaluate your regulatory compliance using the regulatory compliance dashboard +> - Check Microsoft’s compliance offerings (currently in preview) for Azure, Dynamics 365 and Power Platform products +> - Improve your compliance posture by taking action on recommendations +> - Download PDF/CSV reports as well as certification reports of your compliance status +> - Setup alerts on changes to your compliance status +> - Export your compliance data as a continuous stream and as weekly snapshots If you don’t have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. Use the regulatory compliance dashboard to help focus your attention on the gaps :::image type="content" source="./media/regulatory-compliance-dashboard/compliance-drilldown.png" alt-text="Screenshot that shows the exploration of the details of compliance with a specific standard." lightbox="media/regulatory-compliance-dashboard/compliance-drilldown.png"::: - The following list has a numbered item that matches each location in the image above, and describes what is in the image: -- Select a compliance standard to see a list of all controls for that standard. (1) + The following list has a numbered item that matches each location in the image above, and describes what is in the image: ++- Select a compliance standard to see a list of all controls for that standard. 
(1) - View the subscription(s) that the compliance standard is applied on. (2) - Select a Control to see more details. Expand the control to view the assessments associated with the selected control. Select an assessment to view the list of resources associated and the actions to remediate compliance concerns. (3)-- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4) +- Select Control details to view Overview, Your Actions and Microsoft Actions tabs. (4) - In the Your Actions tab, you can see the automated and manual assessments associated with the control. (5) - Automated assessments show the number of failed resources and resource types, and link you directly to the remediation experience to address those recommendations. (6) - The manual assessments can be manually attested, and evidence can be linked to demonstrate compliance. (7) The regulatory compliance has both automated and manual assessments that may nee 1. Select a compliance control to expand it. -1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue. +1. Select any of the failing assessments that appear in the dashboard to view the details for that recommendation. Each recommendation includes a set of remediation steps to resolve the issue. 1. Select a particular resource to view more details and resolve the recommendation for that resource. <br>For example, in the **Azure CIS 1.1.0** standard, select the recommendation **Disk encryption should be applied on virtual machines**. The regulatory compliance has both automated and manual assessments that may nee For more information about how to apply recommendations, see [Implementing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md). -1. 
After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves. +1. After you take action to resolve recommendations, you'll see the result in the compliance dashboard report because your compliance score improves. > [!NOTE] > Assessments run approximately every 12 hours, so you will see the impact on your compliance data only after the next run of the relevant assessment. The regulatory compliance has automated and manual assessments that may need to :::image type="content" source="media/release-notes/audit-reports-list-regulatory-compliance-dashboard-ga.png" alt-text="Filtering the list of available Azure Audit reports using tabs and filters."::: - For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate. + For example, from the PCI tab you can download a ZIP file containing a digitally signed certificate demonstrating Microsoft Azure, Dynamics 365, and Other Online Services' compliance with ISO22301 framework, together with the necessary collateral to interpret and present the certificate. > [!NOTE] > When you download one of these certification reports, you'll be shown the following privacy notice:- > + > > _By downloading this file, you are giving consent to Microsoft to store the current user and the selected subscriptions at the time of download. This data is used in order to notify you in case of changes or updates to the downloaded audit report. 
This data is used by Microsoft and the audit firms that produce the certification/reports only when notification is required._ ### Check compliance offerings status -Transparency provided by the compliance offerings (currently in preview) , allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform. +Transparency provided by the compliance offerings (currently in preview) allows you to view the certification status for each of the services provided by Microsoft prior to adding your product to the Azure platform. **To check the compliance offerings status**: Use continuous export data to an Azure Event Hubs or a Log Analytics workspace: :::image type="content" source="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png" alt-text="Continuously export a weekly snapshot of regulatory compliance data." lightbox="media/regulatory-compliance-dashboard/export-compliance-data-snapshot.png"::: > [!TIP]-> You can also manually export reports about a single point in time directly from the regulatory compliance dashboard. Generate these **PDF/CSV reports** or **Azure and Dynamics certification reports** using the **Download report** or **Audit reports** toolbar options. 
See [Assess your regulatory compliance](#assess-your-regulatory-compliance). ## Run workflow automations when there are changes to your compliance For example, you might want Defender for Cloud to email a specific user when a c In this tutorial, you learned about using Defender for Cloud's regulatory compliance dashboard to: > [!div class="checklist"]-> * View and monitor your compliance posture regarding the standards and regulations that are important to you. -> * Improve your compliance status by resolving relevant recommendations and watching the compliance score improve. +> +> - View and monitor your compliance posture regarding the standards and regulations that are important to you. +> - Improve your compliance status by resolving relevant recommendations and watching the compliance score improve. The regulatory compliance dashboard can greatly simplify the compliance process, and significantly cut the time required for gathering compliance evidence for your Azure, hybrid, and multicloud environment. |
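The continuous export of regulatory compliance data mentioned in the change above can also be configured programmatically by creating a `Microsoft.Security/automations` resource through the Azure Resource Manager REST API. The following is a minimal, hedged sketch: the subscription ID, resource group, workspace resource ID, and automation name are placeholders, and the request-body property names reflect the Security automations API as I recall it, so verify them against the current REST reference before relying on them.

```python
# Hedged sketch: continuous export of regulatory compliance data via a
# Microsoft.Security/automations resource. All identifiers are placeholders.

def automation_put_url(subscription_id: str, resource_group: str, name: str,
                       api_version: str = "2019-01-01-preview") -> str:
    """Build the ARM PUT endpoint for a Security automation resource."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Security/automations/{name}"
        f"?api-version={api_version}"
    )

# Assumed request body shape (verify field names against the REST reference):
body = {
    "location": "eastus",
    "properties": {
        "isEnabled": True,
        # Source: regulatory compliance assessment changes.
        "sources": [{"eventSource": "RegulatoryComplianceAssessment"}],
        # Action: send to a Log Analytics workspace (resource ID is a placeholder).
        "actions": [{
            "actionType": "Workspace",
            "workspaceResourceId": "/subscriptions/.../workspaces/my-workspace",
        }],
    },
}

url = automation_put_url("00000000-0000-0000-0000-000000000000",
                         "my-rg", "export-compliance")
# With an Azure AD bearer token you would then issue:
#   requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(url)
```

The same automation resource type backs both continuous export and workflow automations, which is why the portal's two features can be mirrored with one API surface.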
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 08/07/2023 Last updated : 08/22/2023 # What's new in Microsoft Defender for Cloud? Updates in August include: |Date |Update | |-|-|+| August 22 | [Recommendation release: Microsoft Defender for Storage should be enabled with malware scanning and sensitive data threat detection](#recommendation-release-microsoft-defender-for-storage-should-be-enabled-with-malware-scanning-and-sensitive-data-threat-detection) +| August 17 | [Extended properties in Defender for Cloud security alerts are masked from activity logs](#extended-properties-in-defender-for-cloud-security-alerts-are-masked-from-activity-logs) +| August 15 | [Preview release of GCP support in Defender CSPM](#preview-release-of-gcp-support-in-defender-cspm)| | August 7 | [New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions](#new-security-alerts-in-defender-for-servers-plan-2-detecting-potential-attacks-abusing-azure-virtual-machine-extensions)+| August 1 | [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | ++### Recommendation release: Microsoft Defender for Storage should be enabled with malware scanning and sensitive data threat detection ++August 22, 2023 ++A new recommendation in Defender for Storage has been released. This recommendation ensures that Defender for Storage is enabled at the subscription level with malware scanning and sensitive data threat detection capabilities. 
++| Recommendation | Description | +|--|--| +| Microsoft Defender for Storage should be enabled with malware scanning and sensitive data threat detection | Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes malware scanning and sensitive data threat detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. With a simple agentless setup at scale, when enabled at the subscription level, all existing and newly created storage accounts under that subscription will be automatically protected. You can also exclude specific storage accounts from protected subscriptions.| ++This new recommendation will replace the current recommendation `Microsoft Defender for Storage should be enabled` (assessment key 1be22853-8ed1-4005-9907-ddad64cb1417). However, this recommendation will still be available in Azure Government clouds. ++Learn more about [Microsoft Defender for Storage](defender-for-storage-introduction.md). ++### Extended properties in Defender for Cloud security alerts are masked from activity logs ++August 17, 2023 ++We recently changed the way security alerts and activity logs are integrated. To better protect sensitive customer information, we no longer include this information in activity logs. Instead, we mask it with asterisks. However, this information is still available through the alerts API, continuous export, and the Defender for Cloud portal. ++Customers who rely on activity logs to export alerts to their SIEM solutions should consider using a different solution, as it isn't the recommended method for exporting Defender for Cloud security alerts. 
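Since the full alert details (including extended properties) remain available through the alerts API, a SIEM pipeline can pull them from the `Microsoft.Security/alerts` list endpoint instead of activity logs. A minimal sketch follows; the subscription ID is a placeholder, and the bearer-token acquisition is assumed to happen elsewhere.

```python
# Hedged sketch: reading Defender for Cloud alerts from the REST API rather
# than from activity logs. The subscription ID below is a placeholder.

def alerts_list_url(subscription_id: str,
                    api_version: str = "2022-01-01") -> str:
    """Build the Microsoft.Security alerts list endpoint for a subscription."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Security/alerts?api-version={api_version}"
    )

url = alerts_list_url("00000000-0000-0000-0000-000000000000")
# With an Azure AD bearer token you would then issue:
#   requests.get(url, headers={"Authorization": f"Bearer {token}"})
# and read the "value" array of alert objects, whose properties include the
# extended properties that are now masked in activity logs.
print(url)
```

For production pipelines, continuous export to Event Hubs (as described above) is generally preferable to polling this endpoint.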
++For instructions on how to export Defender for Cloud security alerts to SIEM, SOAR and other third party applications, see [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). ++### Preview release of GCP support in Defender CSPM ++August 15, 2023 ++We're announcing the preview release of the Defender CSPM contextual cloud security graph and attack path analysis with support for GCP resources. You can leverage the power of Defender CSPM for comprehensive visibility and intelligent cloud security across GCP resources. ++ Key features of our GCP support include: ++- **Attack path analysis** - Understand the potential routes attackers might take. +- **Cloud security explorer** - Proactively identify security risks by running graph-based queries on the security graph. +- **Agentless scanning** - Scan servers and identify secrets and vulnerabilities without installing an agent. +- **Data-aware security posture** - Discover and remediate risks to sensitive data in Google Cloud Storage buckets. ++Learn more about [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options). ### New security alerts in Defender for Servers Plan 2: Detecting potential attacks abusing Azure virtual machine extensions Azure virtual machine extensions are small applications that run post-deployment - Resetting credentials and creating administrative users - Encrypting disks -Here is a table of the new alerts. +Here's a table of the new alerts. |Alert (alert type)|Description|MITRE tactics|Severity| |-|-|-|-|-| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines are not equipped with such. 
These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium | -| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July, 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low | +| **Suspicious failure installing GPU extension in your subscription (Preview)**<br>(VM_GPUExtensionSuspiciousFailure) | Suspicious intent of installing a GPU extension on unsupported VMs. This extension should be installed on virtual machines equipped with a graphic processor, and in this case the virtual machines aren't equipped with such. These failures can be seen when malicious adversaries execute multiple installations of such extension for crypto-mining purposes. | Impact | Medium | +| **Suspicious installation of a GPU extension was detected on your virtual machine (Preview)**<br>(VM_GPUDriverExtensionUnusualExecution)<br>*This alert was [released in July 2023](#new-security-alert-in-defender-for-servers-plan-2-detecting-potential-attacks-leveraging-azure-vm-gpu-driver-extensions).* | Suspicious installation of a GPU extension was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. 
This activity is deemed suspicious as the principal's behavior departs from its usual patterns. | Impact | Low | | **Run Command with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousScript) | A Run Command with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Run Command to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High |-| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | +| **Suspicious unauthorized Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousFailure) | Suspicious unauthorized usage of Run Command has failed and was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may attempt to use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Medium | | **Suspicious Run Command usage was detected on your virtual machine (Preview)**<br>(VM_RunCommandSuspiciousUsage) | Suspicious usage of Run Command was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. 
Attackers may use Run Command to execute malicious code with high privileges on your virtual machines via the Azure Resource Manager. This activity is deemed suspicious as it hasn't been commonly seen before. | Execution | Low | | **Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines (Preview)**<br>(VM_SuspiciousMultiExtensionUsage) | Suspicious usage of multiple monitoring or data collection extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse such extensions for data collection, network traffic monitoring, and more, in your subscription. This usage is deemed suspicious as it hasn't been commonly seen before. | Reconnaissance | Medium | | **Suspicious installation of disk encryption extensions was detected on your virtual machines (Preview)**<br>(VM_DiskEncryptionSuspiciousUsage) | Suspicious installation of disk encryption extensions was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may abuse the disk encryption extension to deploy full disk encryptions on your virtual machines via the Azure Resource Manager in an attempt to perform ransomware activity. This activity is deemed suspicious as it hasn't been commonly seen before and due to the high number of extension installations. | Impact | Medium | | **Suspicious usage of VMAccess extension was detected on your virtual machines (Preview)**<br>(VM_VMAccessSuspiciousUsage) | Suspicious usage of VMAccess extension was detected on your virtual machines. Attackers may abuse the VMAccess extension to gain access and compromise your virtual machines with high privileges by resetting access or managing administrative users. This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. 
| Persistence | Medium |-| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | +| **Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_DSCExtensionSuspiciousScript) | Desired State Configuration (DSC) extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | | **Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines (Preview)**<br>(VM_DSCExtensionSuspiciousUsage) | Suspicious usage of a Desired State Configuration (DSC) extension was detected on your virtual machines by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the Desired State Configuration (DSC) extension to deploy malicious configurations, such as persistence mechanisms, malicious scripts, and more, with high privileges, on your virtual machines. 
This activity is deemed suspicious as the principal's behavior departs from its usual patterns, and due to the high number of the extension installations. | Impact | Low | | **Custom script extension with a suspicious script was detected on your virtual machine (Preview)**<br>(VM_CustomScriptExtensionSuspiciousCmd)<br>*(This alert already exists and has been improved with more enhanced logic and detection methods.)* | Custom script extension with a suspicious script was detected on your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use Custom script extension to execute malicious code with high privileges on your virtual machine via the Azure Resource Manager. The script is deemed suspicious as certain parts were identified as being potentially malicious. | Execution | High | - See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions). + See the [extension-based alerts in Defender for Servers](alerts-reference.md#alerts-for-azure-vm-extensions). For a complete list of alerts, see the [reference table for all security alerts in Microsoft Defender for Cloud](alerts-reference.md). +### Business model and pricing updates for Defender for Cloud plans ++August 1, 2023 ++Microsoft Defender for Cloud has three plans that offer service layer protection: ++- Defender for Key Vault ++- Defender for Resource Manager +- Defender for DNS ++These plans have transitioned to a new business model with different pricing and packaging to address customer feedback regarding spending predictability and simplifying the overall cost structure. ++**Business model and pricing changes summary**: ++Existing customers of Defender for Key-Vault, Defender for Resource Manager, and Defender for DNS keep their current business model and pricing unless they actively choose to switch to the new business model and price. 
++- **Defender for Resource Manager**: This plan has a fixed price per subscription per month. Customers can switch to the new business model by selecting the Defender for Resource Manager new per-subscription model. +- **Defender for Key Vault**: This plan has a fixed price per vault, per month with no overage charge. Customers can switch to the new business model by selecting the Defender for Key Vault new per-vault model. +- **Defender for DNS**: Defender for Servers Plan 2 customers gain access to Defender for DNS value as part of Defender for Servers Plan 2 at no extra cost. Customers that have both Defender for Servers Plan 2 and Defender for DNS are no longer charged for Defender for DNS. Defender for DNS is no longer available as a standalone plan. ++Learn more about the pricing for these plans in the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h). + ## July 2023 Updates in July include: We have added four new Azure Active Directory authentication-related recommendat ### Two recommendations related to missing Operating System (OS) updates were released to GA -The recommendations `System updates should be installed on your machines (powered by Update management center)` and `Machines should be configured to periodically check for missing system updates` have been released for General Availability. 
+The recommendations `System updates should be installed on your machines (powered by Azure Update Manager)` and `Machines should be configured to periodically check for missing system updates` have been released for General Availability. To use the new recommendation, you need to: After completing these steps, you can remove the old recommendation `System upda The two versions of the recommendations: - [`System updates should be installed on your machines`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesRecommendationDetailsWithRulesBlade/assessmentKey/4ab6e3c5-74dd-8b35-9ab9-f61b30875b27/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b423
6fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-7
45962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)-- [`System updates should be installed on your machines (powered by Update management 
center)`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesV2RecommendationDetailsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%2255a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4
361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0
c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null)+- [`System updates should be installed on your machines (powered by Azure Update Manager)`](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/SystemUpdatesV2RecommendationDetailsBlade/assessmentKey/e1145ab1-eb4f-43d8-911b-36ddf771d13f/subscriptionIds~/%5B%220cd6095b-b140-41ec-ad1d-32f2f7493386%22%2C%220ee78edb-a0ad-456c-a0a2-901bf542c102%22%2C%2284ca48fe-c942-42e5-b492-d56681d058fa%22%2C%22b2a328a7-ffff-4c09-b643-a4758cf170bc%22%2C%22eef8b6d5-94da-4b36-9327-a662f2674efb%22%2C%228d5565a3-dec1-4ee2-86d6-8aabb315eec4%22%2C%22e0fd569c-e34a-4249-8c24-e8d723c7f054%22%2C%22dad45786-32e5-4ef3-b90e-8e0838fbadb6%22%2C%22a5f9f0d3-a937-4de5-8cf3-387fce51e80c%22%2C%220368444d-756e-4ca6-9ecd-e964248c227a%22%2C%22e686ef8c-d35d-4e9b-92f8-caaaa7948c0a%22%2C%222145a411-d149-4010-84d4-40fe8a55db44%22%2C%22212f9889-769e-45ae-ab43-6da33674bd26%22%2C%22615f5f56-4ba9-45cf-b644-0c09d7d325c8%22%2C%22487bb485-b5b0-471e-9c0d-10717612f869%22%2C%22cb9eb375-570a-4e75-b83a-77dd942bee9f%22%2C%224bbecc02-f2c3-402a-8e01-1dfb1ffef499%22%2C%22432a7068-99ae-4975-ad38-d96b71172cdf%22%2C%22c0620f27-ac38-468c-a26b-264009fe7c41%22%2C%22a1920ebd-59b7-4f19-af9f-5e80599e88e4%22%2C%22b43a6159-1bea-4fa2-9407-e875fdc0ff55%22%2C%22d07c0080-170c-4c24-861d-9c817742986a%22%2C%22ae71ef11-a03f-4b4f-a0e6-ef144727c711%22%2C%22
55a24be0-d9c3-4ecd-86b6-566c7aac2512%22%2C%227afc2d66-d5b4-4e84-970b-a782e3e4cc46%22%2C%2252a442a2-31e9-42f9-8e3e-4b27dbf82673%22%2C%228c4b5b03-3b24-4ed0-91f5-a703cd91b412%22%2C%22e01de573-132a-42ac-9ee2-f9dea9dd2717%22%2C%22b5c0b80f-5932-4d47-ae25-cd617dac90ce%22%2C%22e4e06275-58d1-4081-8f1b-be12462eb701%22%2C%229b4236fe-df75-4289-bf00-40628ed41fd9%22%2C%2221d8f407-c4c4-452e-87a4-e609bfb86248%22%2C%227d411d23-59e5-4e2e-8566-4f59de4544f2%22%2C%22b74d5345-100f-408a-a7ca-47abb52ba60d%22%2C%22f30787b9-82a8-4e74-bb0f-f12d64ecc496%22%2C%22482e1993-01d4-4b16-bff4-1866929176a1%22%2C%2226596251-f2f3-4e31-8a1b-f0754e32ad73%22%2C%224628298e-882d-4f12-abf4-a9f9654960bb%22%2C%224115b323-4aac-47f4-bb13-22af265ed58b%22%2C%22911e3904-5112-4232-a7ee-0d1811363c28%22%2C%22cd0fa82d-b6b6-4361-b002-050c32f71353%22%2C%22dd4c2dac-db51-4cd0-b734-684c6cc360c1%22%2C%22d2c9544f-4329-4642-b73d-020e7fef844f%22%2C%22bac420ed-c6fc-4a05-8ac1-8c0c52da1d6e%22%2C%2250ff7bc0-cd15-49d5-abb2-e975184c2f65%22%2C%223cd95ff9-ac62-4b5c-8240-0cd046687ea0%22%2C%2213723929-6644-4060-a50a-cc38ebc5e8b1%22%2C%2209fa8e83-d677-474f-8f73-2a954a0b0ea4%22%2C%22ca38bc19-cf50-48e2-bbe6-8c35b40212d8%22%2C%22bf163a87-8506-4eb3-8d14-c2dc95908830%22%2C%221278a874-89fc-418c-b6b9-ac763b000415%22%2C%223b2fda06-3ef6-454a-9dd5-994a548243e9%22%2C%226560575d-fa06-4e7d-95fb-f962e74efd7a%22%2C%22c3547baf-332f-4d8f-96bd-0659b39c7a59%22%2C%222f96ae42-240b-4228-bafa-26d8b7b03bf3%22%2C%2229de2cfc-f00a-43bb-bdc8-3108795bd282%22%2C%22a1ffc958-d2c7-493e-9f1e-125a0477f536%22%2C%2254b875cc-a81a-4914-8bfd-1a36bc7ddf4d%22%2C%22407ff5d7-0113-4c5c-8534-f5cfb09298f5%22%2C%22365a62ee-6166-4d37-a936-03585106dd50%22%2C%226d17b59e-06c4-4203-89d2-de793ebf5452%22%2C%229372b318-ed3a-4504-95a6-941201300f78%22%2C%223c1bb38c-82e3-4f8d-a115-a7110ba70d05%22%2C%22c6dcd830-359f-44d0-b4d4-c1ba95e86f48%22%2C%2209e8ad18-7bdb-43b8-80c4-43ee53460e0b%22%2C%22dcbdac96-1896-478d-89fc-c95ed43f4596%22%2C%22d23422cf-c0f2-4edc-a306-6e32b181a341%22%2C%228c2c7b23-848d-40fe-b
817-690d79ad9dfd%22%2C%221163fbbe-27e7-4b0f-8466-195fe5417043%22%2C%223905431d-c062-4c17-8fd9-c51f89f334c4%22%2C%227ea26ded-0260-4e78-9336-285d4d9e33d2%22%2C%225ccdbd03-f1b1-4b59-a609-300685e17ce3%22%2C%22bcdc6eb0-74cd-40b6-b3a9-584b33cea7b6%22%2C%22d557e825-27b1-4819-8af5-dc2429af91c9%22%2C%222bb50811-92b6-43a1-9d80-745962d9c759%22%2C%22409111bf-3097-421c-ad68-a44e716edf58%22%2C%2249e3f635-484a-43d1-b953-b29e1871ba88%22%2C%22b77ec8a9-04ed-48d2-a87a-e5887b978ba6%22%2C%22075423e9-7d33-4166-8bdf-3920b04e3735%22%2C%22ef143bbb-6a7e-4a3f-b64f-2f23330e0116%22%2C%2224afc59a-f969-4f83-95c9-3b70f52d833d%22%2C%22a8783cc5-1171-4c34-924f-6f71a20b21ec%22%2C%220079a9bb-e218-496a-9880-d27ad6192f52%22%2C%226f53185c-ea09-4fc3-9075-318dec805303%22%2C%22588845a8-a4a7-4ab1-83a1-1388452e8c0c%22%2C%22b68b2f37-1d37-4c2f-80f6-c23de402792e%22%2C%22eec2de82-6ab2-4a84-ae5f-57e9a10bf661%22%2C%22227531a4-d775-435b-a878-963ed8d0d18f%22%2C%228cff5d56-95fb-4a74-ab9d-079edb45313e%22%2C%22e72e5254-f265-4e95-9bd2-9ee8e7329051%22%2C%228ae1955e-f748-4273-a507-10159ba940f9%22%2C%22f6869ac6-2a40-404f-acd3-d07461be771a%22%2C%2285b3dbca-5974-4067-9669-67a141095a76%22%2C%228168a4f2-74d6-4663-9951-8e3a454937b7%22%2C%229ec1d932-0f3f-486c-acc6-e7d78b358f9b%22%2C%2279f57c16-00fe-48da-87d4-5192e86cd047%22%2C%22bac044cf-49e1-4843-8dda-1ce9662606c8%22%2C%22009d0e9f-a42a-470e-b315-82496a88cf0f%22%2C%2268f3658f-0090-4277-a500-f02227aaee97%22%5D/showSecurityCenterCommandBar~/false/assessmentOwners~/null) will both be available until the [Log Analytics agent is deprecated on August 31, 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), which is when the older version (`System updates should be installed on your machines`) of the recommendation will be deprecated as well. Both recommendations return the same results and are available under the same control `Apply system updates`. 
-The new recommendation `System updates should be installed on your machines (powered by Update management center)`, has a remediation flow available through the Fix button, which can be used to remediate any results through the Update Management Center (Preview). This remediation process is still in Preview. +The new recommendation `System updates should be installed on your machines (powered by Azure Update Manager)`, has a remediation flow available through the Fix button, which can be used to remediate any results through the Update Management Center (Preview). This remediation process is still in Preview. -The new recommendation `System updates should be installed on your machines (powered by Update management center)`, isn't expected to affect your Secure Score, as it has the same results as the old recommendation `System updates should be installed on your machines`. +The new recommendation `System updates should be installed on your machines (powered by Azure Update Manager)`, isn't expected to affect your Secure Score, as it has the same results as the old recommendation `System updates should be installed on your machines`. The prerequisite recommendation ([Enable the periodic assessment property](../update-center/assessment-options.md#periodic-assessment)) has a negative effect on your Secure Score. You can remediate the negative effect with the available [Fix button](implement-security-recommendations.md). Learn more about [enabling Microsoft Defender for Endpoint](integration-defender You no longer need an agent on your Azure VMs and Azure Arc machines to make sure the machines have all of the latest security or critical system updates. -The new system updates recommendation, `System updates should be installed on your machines (powered by Update management center)` in the `Apply system updates` control, is based on the [Update management center (preview)](../update-center/overview.md). 
The recommendation relies on a native agent embedded in every Azure VM and Azure Arc machines instead of an installed agent. The Quick Fix in the new recommendation leads you to a one-time installation of the missing updates in the Update management center portal. +The new system updates recommendation, `System updates should be installed on your machines (powered by Azure Update Manager)` in the `Apply system updates` control, is based on the [Update management center (preview)](../update-center/overview.md). The recommendation relies on a native agent embedded in every Azure VM and Azure Arc machines instead of an installed agent. The Quick Fix in the new recommendation leads you to a one-time installation of the missing updates in the Update management center portal. To use the new recommendation, you need to: The related [policy definition](https://portal.azure.com/#view/Microsoft_Azure_P ## Next steps For past changes to Defender for Cloud, see [Archive for what's new in Defender for Cloud?](release-notes-archive.md).+++ |
defender-for-cloud | Subassessment Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/subassessment-rest-api.md | ++ + Title: Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments +description: Learn about container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments ++ Last updated : 08/16/2023++++# Container vulnerability assessments powered by Microsoft Defender Vulnerability Management subassessments ++API Version: 2019-01-01-preview ++Get security subassessments on all your scanned resources inside a scope. ++## Overview ++You can access vulnerability assessment results programmatically for both registry and runtime recommendations using the subassessments REST API. ++For more information on how to get started with our REST API, see [Azure REST API reference](/rest/api/azure/). Use the following information for specifics of the container vulnerability assessment results powered by Microsoft Defender Vulnerability Management. ++## HTTP Requests ++### Get ++#### GET ++`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments/{subAssessmentName}?api-version=2019-01-01-preview` ++#### URI Parameters ++| Name | In | Required | Type | Description | +| -- | -- | -- | -- | -- | +| assessmentName | path | True | string | The Assessment Key - Unique key for the assessment type | +| scope | path | True | string | Scope of the query. Can be subscription (/subscriptions/0b06d9ea-afe6-4779-bd59-30e5c2d9d13f) or management group (/providers/Microsoft.Management/managementGroups/mgName).
| +| subAssessmentName | path | True | string | The Sub-Assessment Key - Unique key for the subassessment type | +| api-version | query | True | string | API version for the operation | ++#### Responses ++| Name | Type | Description | +| - | | - | +| 200 OK | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/get#securitysubassessment) | OK | +| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/get#clouderror) | Error response describing why the operation failed. | ++### List ++#### GET ++`https://management.azure.com/{scope}/providers/Microsoft.Security/assessments/{assessmentName}/subAssessments?api-version=2019-01-01-preview` ++#### URI parameters ++| **Name** | **In** | **Required** | **Type** | **Description** | +| | | | -- | | +| **assessmentName** | path | True | string | The Assessment Key - Unique key for the assessment type | +| **scope** | path | True | string | Scope of the query. The scope for AzureContainerVulnerability is the registry itself. | +| **api-version** | query | True | string | API version for the operation | ++#### Responses ++| Name | Type | Description | +| | | | +| 200 OK | [SecuritySubAssessmentList](/rest/api/defenderforcloud/sub-assessments/list#securitysubassessmentlist) | OK | +| Other Status Codes | [CloudError](/rest/api/defenderforcloud/sub-assessments/list#clouderror) | Error response describing why the operation failed. 
| ++## Security ++### azure_auth ++Azure Active Directory OAuth2 Flow ++Type: oauth2 +Flow: implicit +Authorization URL: `https://login.microsoftonline.com/common/oauth2/authorize` ++Scopes ++| Name | Description | +| -- | -- | +| user_impersonation | impersonate your user account | ++### Example ++### HTTP ++#### GET ++`https://management.azure.com/subscriptions/6ebb89c4-0e91-4f62-888f-c9518e662293/resourceGroups/myResourceGroup/providers/Microsoft.ContainerRegistry/registries/myRegistry/providers/Microsoft.Security/assessments/cf02effd-8e33-4b84-a012-1e61cf1a5638/subAssessments?api-version=2019-01-01-preview` ++#### Sample Response ++```json +{ + "value": [ + { + "type": "Microsoft.Security/assessments/subAssessments", + "id": "/subscriptions/3905431d-c062-4c17-8fd9-c51f89f334c4/resourceGroups/PytorchEnterprise/providers/Microsoft.ContainerRegistry/registries/ptebic/providers/Microsoft.Security/assessments/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5/subassessments/3f069764-2777-3731-9698-c87f23569a1d", + "name": "3f069764-2777-3731-9698-c87f23569a1d", + "properties": { + "id": "CVE-2021-39537", + "displayName": "CVE-2021-39537", + "status": { + "code": "NotApplicable", + "severity": "High", + "cause": "Exempt", + "description": "Disabled parent assessment" + }, + "remediation": "Create new image with updated package libncursesw5 with version 6.2-0ubuntu2.1 or higher.", + "description": "This vulnerability affects the following vendors: Gnu, Apple, Red_Hat, Ubuntu, Debian, Suse, Amazon, Microsoft, Alpine.
To view more details about this vulnerability please visit the vendor website.", + "timeGenerated": "2023-08-08T08:14:13.742742Z", + "resourceDetails": { + "source": "Azure", + "id": "/repositories/public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121/images/sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6" + }, + "additionalData": { + "assessedResourceType": "AzureContainerRegistryVulnerability", + "artifactDetails": { + "repositoryName": "public/azureml/aifx/stable-ubuntu2004-cu116-py39-torch1121", + "registryHost": "ptebic.azurecr.io", + "digest": "sha256:7f107db187ff32acfbc47eaa262b44d13d725f14dd08669a726a81fba87a12d6", + "tags": [ + "biweekly.202305.2" + ], + "artifactType": "ContainerImage", + "mediaType": "application/vnd.docker.distribution.manifest.v2+json", + "lastPushedToRegistryUTC": "2023-05-15T16:00:40.2938142Z" + }, + "softwareDetails": { + "osDetails": { + "osPlatform": "linux", + "osVersion": "ubuntu_linux_20.04" + }, + "packageName": "libncursesw5", + "category": "OS", + "fixReference": { + "id": "USN-6099-1", + "url": "https://ubuntu.com/security/notices/USN-6099-1", + "description": "USN-6099-1: ncurses vulnerabilities 2023 May 23", + "releaseDate": "2023-05-23T00:00:00+00:00" + }, + "vendor": "ubuntu", + "version": "6.2-0ubuntu2", + "evidence": [ + "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s", + "dpkg-query -f '${Package}:${Source}:\\n' -W | grep -e ^libncursesw5:.* -e .*:libncursesw5: | cut -f 1 -d ':' | xargs dpkg-query -s" + ], + "language": "", + "fixedVersion": "6.2-0ubuntu2.1", + "fixStatus": "FixAvailable" + }, + "vulnerabilityDetails": { + "cveId": "CVE-2021-39537", + "references": [ + { + "title": "CVE-2021-39537", + "link": "https://nvd.nist.gov/vuln/detail/CVE-2021-39537" + } + ], + "cvss": { + "2.0": null, + "3.0": { + "base": 7.8, + "cvssVectorString": "CVSS:3.0/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H/E:P/RL:U/RC:R" + } + 
}, + "workarounds": [], + "publishedDate": "2020-08-04T00:00:00", + "lastModifiedDate": "2023-07-07T00:00:00", + "severity": "High", + "cpe": { + "uri": "cpe:2.3:a:ubuntu:libncursesw5:*:*:*:*:*:ubuntu_linux_20.04:*:*", + "part": "Applications", + "vendor": "ubuntu", + "product": "libncursesw5", + "version": "*", + "update": "*", + "edition": "*", + "language": "*", + "softwareEdition": "*", + "targetSoftware": "ubuntu_linux_20.04", + "targetHardware": "*", + "other": "*" + }, + "weaknesses": { + "cwe": [ + { + "id": "CWE-787" + } + ] + }, + "exploitabilityAssessment": { + "exploitStepsVerified": false, + "exploitStepsPublished": false, + "isInExploitKit": false, + "types": [], + "exploitUris": [] + } + }, + "cvssV30Score": 7.8 + } + } + } + ] +} +``` ++## Definitions ++| Name | Description | +| | | +| AzureResourceDetails | Details of the Azure resource that was assessed | +| CloudError | Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This definition also follows the OData error response format.). | +| CloudErrorBody | The error detail | +| AzureContainerVulnerability | More context fields for container registry Vulnerability assessment | +| CVE | CVE Details | +| CVSS | CVSS Details | +| ErrorAdditionalInfo | The resource management error additional info. 
| +| SecuritySubAssessment | Security subassessment on a resource | +| SecuritySubAssessmentList | List of security subassessments | +| ArtifactDetails | Details for the affected container image | +| SoftwareDetails | Details for the affected software package | +| FixReference | Details on the fix, if available | +| OS Details | Details on the OS information | +| VulnerabilityDetails | Details on the detected vulnerability | +| CPE | Common Platform Enumeration | +| Cwe | Common weakness enumeration | +| VulnerabilityReference | Reference links to vulnerability | +| ExploitabilityAssessment | Reference links to an example exploit | ++### AzureContainerRegistryVulnerability (MDVM) ++Additional context fields for Azure container registry vulnerability assessment ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| assessedResourceType | string: AzureContainerRegistryVulnerability | Subassessment resource type | +| cvssV30Score | Numeric | CVSS V3 Score | +| vulnerabilityDetails | VulnerabilityDetails | | +| artifactDetails | ArtifactDetails | | +| softwareDetails | SoftwareDetails | | ++### ArtifactDetails ++Context details for the affected container image ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| repositoryName | String | Repository name | +| RepositoryHost | String | Repository host | +| lastPublishedToRegistryUTC | Timestamp | UTC timestamp for last publish date | +| artifactType | String: ContainerImage | | +| mediaType | String | Layer media type | +| Digest | String | Digest of vulnerable image | +| Tags | String[] | Tags of vulnerable image | ++### Software Details ++Details for the affected software package ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| fixedVersion | String | Fixed Version | +| category | String | Vulnerability category – OS or Language | +| osDetails | OsDetails | | +| language | String | Language of affected package (for example, Python, .NET) could also be empty | +| version | String | | +|
vendor | String | | +| packageName | String | | +| fixStatus | String | Unknown, FixAvailable, NoFixAvailable, Scheduled, WontFix | +| evidence | String[] | Evidence for the package | +| fixReference | FixReference | | ++### FixReference ++Details on the fix, if available ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| ID | String | Fix ID | +| Description | String | Fix Description | +| releaseDate | Timestamp | Fix timestamp | +| url | String | URL to fix notification | ++### OS Details ++Details on the OS information ++| **Name** | **Type** | **Description** | +| - | -- | -- | +| osPlatform | String | For example: Linux, Windows | +| osName | String | For example: Ubuntu | +| osVersion | String | | ++### VulnerabilityDetails ++Details on the detected vulnerability ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| Severity | Severity | The sub-assessment severity level | +| LastModifiedDate | Timestamp | | +| publishedDate | Timestamp | Published date | +| ExploitabilityAssessment | ExploitabilityAssessment | | +| CVSS | Dictionary <string, CVSS> | Dictionary from cvss version to cvss details object | +| Workarounds | Workaround[] | Published workarounds for vulnerability | +| References | VulnerabilityReference | | +| Weaknesses | Weakness[] | | +| cveId | String | CVE ID | +| Cpe | CPE | | ++### CPE (Common Platform Enumeration) ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| language | String | Language tag | +| softwareEdition | String | | +| Version | String | Package version | +| targetSoftware | String | Target Software | +| vendor | String | Vendor | +| product | String | Product | +| edition | String | | +| update | String | | +| other | String | | +| part | String | Applications, Hardware, OperatingSystems | +| uri | String | CPE 2.3 formatted URI | ++### Weakness ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| Cwe | Cwe[] | | ++### Cwe (Common weakness enumeration) ++CWE details ++| **Name** | **Type** | **Description** | +| -- | -- | -- |
+| ID | String | CWE ID | ++### VulnerabilityReference ++Reference links to vulnerability ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| link | String | Reference URL | +| title | String | Reference title | ++### ExploitabilityAssessment ++Reference links to an example exploit ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| exploitUris | String[] | | +| exploitStepsPublished | Boolean | Whether the exploit steps have been published | +| exploitStepsVerified | Boolean | Whether the exploit steps have been verified | +| isInExploitKit | Boolean | Whether the exploit is part of an exploit kit | +| types | String[] | Exploit types, for example: NotAvailable, Dos, Local, Remote, WebApps, PrivilegeEscalation | ++### AzureResourceDetails ++Details of the Azure resource that was assessed ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| ID | string | Azure resource ID of the assessed resource | +| source | string: Azure | The platform where the assessed resource resides | ++### CloudError ++Common error response for all Azure Resource Manager APIs to return error details for failed operations. (This response also follows the OData error response format.) ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| error.additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. | +| error.code | string | The error code. | +| error.details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#clouderrorbody)[] | The error details. | +| error.message | string | The error message. | +| error.target | string | The error target. | ++### CloudErrorBody ++The error detail. ++| **Name** | **Type** | **Description** | +| -- | -- | -- | +| additionalInfo | [ErrorAdditionalInfo](/rest/api/defenderforcloud/sub-assessments/list#erroradditionalinfo)[] | The error additional info. | +| code | string | The error code.
| +| details | [CloudErrorBody](/rest/api/defenderforcloud/sub-assessments/list#clouderrorbody)[] | The error details. | +| message | string | The error message. | +| target | string | The error target. | ++### ErrorAdditionalInfo ++The resource management error additional info. ++| **Name** | **Type** | **Description** | +| -- | -- | - | +| info | object | The additional info. | +| type | string | The additional info type. | ++### SecuritySubAssessment ++Security subassessment on a resource ++| **Name** | **Type** | **Description** | +| -- | | | +| ID | string | Resource ID | +| name | string | Resource name | +| properties.additionalData | AdditionalData: AzureContainerRegistryVulnerability | Details of the subassessment | +| properties.category | string | Category of the subassessment | +| properties.description | string | Human readable description of the assessment status | +| properties.displayName | string | User friendly display name of the subassessment | +| properties.id | string | Vulnerability ID | +| properties.impact | string | Description of the impact of this subassessment | +| properties.remediation | string | Information on how to remediate this subassessment | +| properties.resourceDetails | ResourceDetails: [AzureResourceDetails](/rest/api/defenderforcloud/sub-assessments/list#azureresourcedetails) | Details of the resource that was assessed | +| properties.status | [SubAssessmentStatus](/rest/api/defenderforcloud/sub-assessments/list#subassessmentstatus) | Status of the subassessment | +| properties.timeGenerated | string | The date and time the subassessment was generated | +| type | string | Resource type | ++### SecuritySubAssessmentList ++List of security subassessments ++| **Name** | **Type** | **Description** | +| -- | | - | +| nextLink | string | The URI to fetch the next page. | +| value | [SecuritySubAssessment](/rest/api/defenderforcloud/sub-assessments/list?tabs=HTTP#securitysubassessment)[] | Security subassessment on a resource | |
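The Get and List requests documented in the subassessment article above can be sketched as small URL builders. This is an illustrative sketch only: the function names are our own, not part of any Azure SDK, and an Azure AD bearer token from the `azure_auth` flow is still required to actually call the endpoints.

```python
# Illustrative sketch: construct the Get/List request URLs for the
# subassessments REST API (api-version 2019-01-01-preview) described above.
# Function names are hypothetical; a bearer token is still needed to call them.
API_VERSION = "2019-01-01-preview"
BASE = "https://management.azure.com"


def subassessment_get_url(scope: str, assessment_name: str,
                          sub_assessment_name: str) -> str:
    """URL for the Get operation on a single subassessment."""
    return (f"{BASE}/{scope.strip('/')}/providers/Microsoft.Security/"
            f"assessments/{assessment_name}/subAssessments/{sub_assessment_name}"
            f"?api-version={API_VERSION}")


def subassessment_list_url(scope: str, assessment_name: str) -> str:
    """URL for the List operation over all subassessments in a scope."""
    return (f"{BASE}/{scope.strip('/')}/providers/Microsoft.Security/"
            f"assessments/{assessment_name}/subAssessments"
            f"?api-version={API_VERSION}")
```

The `scope` argument takes the same forms as the `scope` URI parameter in the tables above: a subscription path, a management group path, or a registry resource ID for AzureContainerVulnerability.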
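A `SecuritySubAssessment` item like the sample response above can be reduced to the handful of fields most triage workflows need (severity, fix status, CVSS score, CWE IDs). The helper below is a hedged sketch: its name and output shape are our own convention, not part of the API, and it only reads fields shown in the schema tables above.

```python
def summarize_subassessment(sub: dict) -> dict:
    """Pull common triage fields out of one SecuritySubAssessment item.

    Missing fields come back as None/empty rather than raising, since
    additionalData varies by assessedResourceType.
    """
    props = sub.get("properties", {})
    extra = props.get("additionalData", {})
    vuln = extra.get("vulnerabilityDetails", {})
    return {
        "cve": props.get("id"),                      # e.g. "CVE-2021-39537"
        "severity": props.get("status", {}).get("severity"),
        "statusCode": props.get("status", {}).get("code"),
        "fixStatus": extra.get("softwareDetails", {}).get("fixStatus"),
        "cvssV30Score": extra.get("cvssV30Score"),
        "cweIds": [w.get("id")
                   for w in vuln.get("weaknesses", {}).get("cwe", [])],
    }
```

For a List response, apply it to each element of the `value` array and follow `nextLink` for further pages.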
defender-for-cloud | Support Matrix Cloud Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-cloud-environment.md | In the support table, **NA** indicates that the feature isn't available. [Defender CSPM](concept-cloud-security-posture-management.md)| GA | NA | NA [Defender for APIs](defender-for-apis-introduction.md). [Review support preview regions](defender-for-apis-prepare.md#cloud-and-region-support). | Preview | NA | NA [Defender for App Service](defender-for-app-service-introduction.md) | GA | NA | NA-[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | Preview | NA | NA +[Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) | GA | NA | NA [Defender for Azure SQL database servers](defender-for-sql-introduction.md) | GA | GA | GA<br/><br/>A subset of alerts/vulnerability assessments is available.<br/>Behavioral threat protection isn't available. [Defender for Containers](defender-for-containers-introduction.md)<br/>[Review detailed feature support](support-matrix-defender-for-containers.md) | GA | GA | GA [Defender for DevOps](defender-for-devops-introduction.md) |Preview | NA | NA |
defender-for-cloud | Support Matrix Defender For Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md | This article summarizes support information for the [Defender for Containers pla | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing Tier | Azure clouds availability | |--|--|--|--|--|--|--| | Compliance-Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet |-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | -| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-akspowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | +| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | +| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | 
Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | | [Vulnerability assessment (powered by Qualys) - running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender agent | Defender for Containers | Commercial clouds | | [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - registry scan | ACR, Private ACR | Preview | | Agentless | Defender for Containers | Commercial clouds | | [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - running images | AKS | Preview | | Defender agent | Defender for Containers | Commercial clouds | This article summarizes support information for the [Defender for Containers pla | Discovery/provisioning-Defender agent auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | | Discovery/provisioning-Azure Policy for Kubernetes auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | -### Registries and images support for AKS - powered by Qualys +### Registries and images support for Azure - powered by Qualys | Aspect | Details | |--|--| This article summarizes support information for the [Defender for Containers pla | OS Packages | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6, 7, 8 <br> • CentOS 6, 7 <br> • Oracle Linux 6, 7, 8 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap 42, 15 <br> • SUSE Enterprise Linux 11, 12, 15 <br> • Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> • Ubuntu 10.10-22.04 <br> • FreeBSD 11.1-13.1 <br> • Fedora 32, 33, 34, 35| | Language specific packages (Preview) <br><br> (**Only supported for
Linux images**) | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go | -### Registries and images - powered by MDVM +### Registries and images for Azure - powered by MDVM [!INCLUDE [Registries and images support powered by MDVM](./includes/registries-images-mdvm.md)] -### Kubernetes distributions and configurations +### Kubernetes distributions and configurations - Azure | Aspect | Details | |--|--|-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> | +| Kubernetes distributions and configurations | **Supported**<br> • [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> •
[Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)| -<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested. +<sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested on Azure. -<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. +<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. > [!NOTE] > For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az | Discovery and provisioning | Auto provisioning of Defender agent | - | - | - | - | - | | Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | - | - | - | - | - | -### Images support-EKS +### Images support - AWS | Aspect | Details | |--|--| | Registries and images | **Unsupported** <br>• Images that have at least one layer over 2 GB<br> • Public repositories and manifest lists <br>• Images in the AWS management account aren't scanned so that we don't create resources in the management account.
| -### Kubernetes distributions/configurations support-EKS +### Kubernetes distributions/configurations support - AWS | Aspect | Details | |--|--|-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> | +| Kubernetes distributions and configurations | **Supported**<br>• [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Kubernetes](https://kubernetes.io/docs/home/)| <sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. +<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. > [!NOTE]-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). +> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). ### Private link restrictions Outbound proxy without authentication and outbound proxy with basic authenticati | Discovery and provisioning | Auto provisioning of Defender agent | GKE | Preview | - | Agentless | Defender for Containers | | Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | GKE | Preview | - | Agentless | Defender for Containers | -### Kubernetes distributions/configurations support-GKE +### Kubernetes distributions/configurations support - GCP | Aspect | Details | |--|--|-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled
Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br><br />**Unsupported**<br /> • Private network clusters<br /> • GKE autopilot<br /> • GKE AuthorizedNetworksConfig | +| Kubernetes distributions and configurations | **Supported**<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br> • [Kubernetes](https://kubernetes.io/docs/home/)<br><br />**Unsupported**<br /> • Private network clusters<br /> • GKE autopilot<br /> • GKE AuthorizedNetworksConfig | <sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
+<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. > [!NOTE]-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). +> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). ### Private link restrictions Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported. -## On-premises Arc-enabled machines +## On-premises, Arc-enabled Kubernetes clusters | Domain | Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing tier | |--|--| -- | -- | -- | -- | --| | Compliance | Docker CIS | Arc enabled VMs | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |-| Vulnerability Assessment | Registry scan - [OS packages](#registries-and-images-support--on-premises) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | -| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-support--on-premises) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | +| Vulnerability Assessment | Registry scan - [OS packages](#registries-and-images-supporton-premises) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | +| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-supporton-premises) | ACR, Private ACR | Preview | 
- | Agentless | Defender for Containers | | Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - | | Hardening | Control plane recommendations | - | - | - | - | - | | Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy for Kubernetes | Defender for Containers | | Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers |-| Runtime protection for [supported OS](#registries-and-images-support--on-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender agent | Defender for Containers | +| Runtime protection for [supported OS](#registries-and-images-supporton-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender agent | Defender for Containers | | Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free | | Discovery and provisioning | Collection of control plane threat data | Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers | | Discovery and provisioning | Auto provisioning of Defender agent | Arc enabled K8s clusters | Preview | Preview | Agentless | Defender for Containers | | Discovery and provisioning | Auto provisioning of Azure Policy for Kubernetes | Arc enabled K8s clusters | Preview | - | Agentless | Defender for Containers | -### Registries and images support -on-premises +### Registries and images support - on-premises | Aspect | Details | |--|--| Outbound proxy without authentication and outbound proxy with basic authenticati | Aspect | Details | |--|--|-| Kubernetes distributions and configurations | **Supported**<br> • Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters<br>• [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) with [Kubernetes
RBAC](../aks/concepts-identity.md#kubernetes-rbac) <br> • [Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)<br> • [Google Kubernetes Engine (GKE) Standard](https://cloud.google.com/kubernetes-engine/) <br><br> **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> | +| Kubernetes distributions and configurations | **Supported via Arc enabled Kubernetes** <sup>[1](#footnote1)</sup> <sup>[2](#footnote2)</sup><br>• [Azure Kubernetes Service hybrid](/azure/aks/hybrid/aks-hybrid-options-overview)<br> • [Kubernetes](https://kubernetes.io/docs/home/)<br> • [AKS Engine](https://github.com/Azure/aks-engine)<br> • [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/)<br> • [Red Hat OpenShift](https://www.openshift.com/learn/topics/kubernetes/) (version 4.6 or newer)<br> • [VMware Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid)<br> • [Rancher Kubernetes Engine](https://rancher.com/docs/rke/latest/en/)<br> | <sup><a name="footnote1"></a>1</sup> Any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters should be supported, but only the specified clusters have been tested.
-<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. +<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension. > [!NOTE]-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). +> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations). #### Supported host operating systems Defender for Containers relies on the **Defender agent** for several features. T - Ubuntu 20.04 - Ubuntu 22.04 -Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems, will only get partial coverage. +Ensure your Kubernetes node is running on one of the verified supported operating systems. Clusters with different host operating systems only get partial coverage. #### Network restrictions Outbound proxy without authentication and outbound proxy with basic authenticati - Learn how [Defender for Cloud collects data using the Log Analytics Agent](monitoring-components.md). - Learn how [Defender for Cloud manages and safeguards data](data-security.md). - Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).+ |
defender-for-cloud | Support Matrix Defender For Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-storage.md | description: Learn about the permissions required to enable Defender for Storage Previously updated : 08/14/2023 Last updated : 08/21/2023 # Required permissions for enabling Defender for Storage and its features -This article lists the permissions required to [enable Defender for Storage](../storage/common/azure-defender-storage-configure.md) and its features. +This article lists the permissions required to [enable Defender for Storage](tutorial-enable-storage-plan.md) and its features. Microsoft Defender for Storage is an Azure-native layer of security intelligence that detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. |
defender-for-cloud | Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/troubleshooting-guide.md | Common connector issues: - Standards should be assigned on the security connector. To check, go to the **Environment settings** in the Defender for Cloud left menu, select the connector, and select **Settings**. There should be standards assigned. You can select the three dots to check if you have permissions to assign standards. - Connector resource should be present in Azure Resource Graph (ARG). Use the following ARG query to check: `resources | where ['type'] =~ "microsoft.security/securityconnectors"` - Make sure that sending Kubernetes audit logs is enabled on the AWS or GCP connector so that you can get [threat detection alerts for the control plane](alerts-reference.md#alerts-k8scluster).-- Make sure that Azure Arc and the Azure Policy Arc extension were installed successfully.-- Make sure that agents are installed to your Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters. You can verify and install the agent with the following Defender for Cloud recommendations:- - **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed** - - **GKE clusters should have the Azure Policy extension installed** +- Make sure that the Defender agent and the Azure Policy for Kubernetes Arc extensions were installed successfully to your Elastic Kubernetes Service (EKS) and Google Kubernetes Engine (GKE) clusters.
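The ARG query above filters resources on their type with KQL's case-insensitive `=~` operator. A minimal Python sketch of what that match does, run against a hypothetical local resource list rather than Azure Resource Graph (the resource names below are invented for illustration):

```python
# Illustrates what the ARG query
#   resources | where ['type'] =~ "microsoft.security/securityconnectors"
# matches: a case-insensitive comparison on the resource type string.
# The sample resources are hypothetical, for illustration only.

sample_resources = [
    {"name": "aws-connector", "type": "Microsoft.Security/securityConnectors"},
    {"name": "web-app", "type": "Microsoft.Web/sites"},
]

def security_connectors(resources):
    """Return resources whose type matches, ignoring case (KQL's =~)."""
    target = "microsoft.security/securityconnectors"
    return [r for r in resources if r["type"].lower() == target]

# A healthy connector setup returns at least one row from the real query;
# an empty result suggests the connector resource was never created.
print([r["name"] for r in security_connectors(sample_resources)])
```

Run the real query in the Azure Resource Graph Explorer blade of the portal; an empty result set for a connector you expect to exist points at an onboarding failure.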
You can verify and install the agent with the following Defender for Cloud recommendations: - **EKS clusters should have Microsoft Defender's extension for Azure Arc installed** - **GKE clusters should have Microsoft Defender's extension for Azure Arc installed**+ - **Azure Arc-enabled Kubernetes clusters should have the Azure Policy extension installed** + - **GKE clusters should have the Azure Policy extension installed** - If you're experiencing issues with deleting the AWS or GCP connector, check if you have a lock (in this case there might be an error in the Azure Activity log, hinting at the presence of a lock). - Check that workloads exist in the AWS account or GCP project. You should [check which account](https://app.vssps.visualstudio.com/profile/view :::image type="content" source="./media/troubleshooting-guide/authorize-select-tenant.png" alt-text="Screenshot of the Azure DevOps profile page that is used to select an account."::: -The first time you authorize the Microsoft Security application, you are given the ability to select an account. However, each time you login after that, the page defaults to the logged in account without giving you the chance to select an account. +The first time you authorize the Microsoft Security application, you are given the ability to select an account. However, each time you log in after that, the page defaults to the logged-in account without giving you the chance to select an account. **To change the default account**:
defender-for-cloud | Tutorial Enable Container Aws | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md | -# Protect your Amazon Web Service (AWS) accounts containers with Defender for Containers +# Protect your Amazon Web Service (AWS) containers with Defender for Containers Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. To protect your EKS clusters, you need to enable the Containers plan on the rele > [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md). -## Deploy the Defender agent in Azure +## Deploy the Defender agent in EKS clusters Azure Arc-enabled Kubernetes, the Defender agent, and Azure Policy for Kubernetes should be installed and running on your EKS clusters. There's a dedicated Defender for Cloud recommendation that can be used to install these extensions (and Azure Arc if necessary): |
defender-for-cloud | Tutorial Enable Container Gcp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-gcp.md | -# Protect your Google Cloud Platform (GCP) project containers with Defender for Containers +# Protect your Google Cloud Platform (GCP) containers with Defender for Containers Defender for Containers in Microsoft Defender for Cloud is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. There are two dedicated Defender for Cloud recommendations you can use to instal 1. In the Defender for Cloud menu, select **Recommendations**. -1. From Defender for Cloud's **Recommendations** page, search for one of the recommendations by name. +1. From Defender for Cloud's **Recommendations** page, search for each one of the recommendations above by name. :::image type="content" source="media/tutorial-enable-containers-gcp/recommendation-search.png" alt-text="Screenshot showing how to search for the recommendation." lightbox="media/tutorial-enable-containers-gcp/recommendation-search-expanded.png"::: |
defender-for-cloud | Tutorial Enable Containers Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-arc.md | If you would prefer to [assign a custom workspace](defender-for-containers-enabl > [!NOTE] > To enable or disable individual Defender for Containers capabilities, either globally or for specific resources, see [How to enable Microsoft Defender for Containers components](defender-for-containers-enable.md). -## Deploy the Defender agent on Arc-enabled Kubernetes clusters that were onboarded to an Azure subscription +## Deploy the Defender agent on Arc-enabled Kubernetes clusters You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender agent](/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc&tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api#deploy-the-defender-agent) with REST API, Azure CLI or with a Resource Manager template. You can enable the Defender for Containers plan and deploy all of the relevant c 1. Navigate to the Recommendations page. -1. Search for and select the `Azure Arc-enabled Kubernetes clusters should have Defender for Cloud's extension installed` recommendation. +1. Search for and select the `Azure Arc-enabled Kubernetes clusters should have the Defender extension installed` recommendation. :::image type="content" source="media/tutorial-enable-containers-azure/extension-recommendation.png" alt-text="Microsoft Defender for Cloud's recommendation for deploying the Defender agent for Azure Arc-enabled Kubernetes clusters." lightbox="media/tutorial-enable-containers-azure/extension-recommendation.png"::: |
defender-for-cloud | Tutorial Enable Cspm Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-cspm-plan.md | You have the ability to enable the **Defender CSPM** plan, which offers extra pr For availability and to learn more about the features offered by each plan, see the [Defender CSPM plan options](concept-cloud-security-posture-management.md#defender-cspm-plan-options). -You can learn more about Defender for CSPM's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +You can learn more about Defender CSPM's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ## Prerequisites You can learn more about Defender for CSPM's pricing on [the pricing page](https - In order to gain access to all of the features available from the CSPM plan, the plan must be enabled by the **Subscription Owner**. -## Enable the Defender for CSPM plan +## Enable the Defender CSPM plan When you enable Defender for Cloud, you automatically receive the protections offered by the Foundational CSPM capabilities. In order to gain access to the other features provided by Defender CSPM, you need to enable the Defender CSPM plan on your subscription. -**To enable the Defender for CSPM plan on your subscription**: +**To enable the Defender CSPM plan on your subscription**: 1. Sign in to the [Azure portal](https://portal.azure.com). |
defender-for-cloud | Tutorial Enable Databases Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-databases-plan.md | Database protection includes: - [Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md) - [Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md) -Defender for Databases protects four database protection plans at their own cost. You can learn more about Defender for Clouds pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). +These four database protection plans are priced separately. Get more info about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). ## Prerequisites Defender for Databases protects four database protection plans at their own cost When you enable database protection, you enable all four of the Defender plans and protect all of the supported databases on your subscription. -**To enable Defender for App Service on your subscription**: +**To enable Defender for Databases on your subscription**: 1. Sign in to the [Azure portal](https://portal.azure.com). These plans protect all of the supported databases in your subscription. - [Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) - [Overview of Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md) - [Overview of Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)+ |
defender-for-cloud | Tutorial Enable Storage Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md | Title: Protect your storage accounts with the Microsoft Defender for Storage plan description: Learn how to enable the Defender for Storage on your Azure subscription for Microsoft Defender for Cloud. Previously updated : 08/01/2023 Last updated : 08/21/2023 # Deploy Microsoft Defender for Storage To enable and configure Microsoft Defender for Storage and ensure maximum protec > [!TIP] > The Malware Scanning feature has advanced configurations to help security teams support different workflows and requirements. -- [Override subscription-level settings to configure specific storage accounts](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#override-defender-for-storage-subscription-level-settings) with custom configurations that differ from the settings configured at the subscription level.+- [Override subscription-level settings to configure specific storage accounts](advanced-configurations-for-malware-scanning.md#override-defender-for-storage-subscription-level-settings) with custom configurations that differ from the settings configured at the subscription level. 
-There are several ways to enable and configure Defender for Storage: [Azure built-in policy](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-at-scale-with-an-azure-built-in-policy) (recommended method), programmatically using Infrastructure as Code templates, including [Bicep](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#bicep-template) and [ARM template](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#arm-template), using the [Azure portal](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#azure-portal), or directly with [REST API](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription#enable-and-configure-with-rest-api). +There are several ways to enable and configure Defender for Storage: using the [Azure built-in policy](defender-for-storage-policy-enablement.md) (the recommended method), programmatically using Infrastructure as Code templates, including [Terraform](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#terraform-template), [Bicep](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#bicep-template), and [ARM](defender-for-storage-infrastructure-as-code-enablement.md?tabs=enable-subscription#azure-resource-manager-template) templates, using the [Azure portal](defender-for-storage-azure-portal-enablement.md?tabs=enable-subscription), or directly with the [REST API](defender-for-storage-rest-api-enablement.md?tabs=enable-subscription). 
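Under the hood, the programmatic enablement options all set the `StorageAccounts` pricing under the `Microsoft.Security/pricings` resource provider. A minimal sketch that only builds the REST request URL and body — the `api-version` and `subPlan` values are assumptions to verify against the REST reference, the subscription ID is a placeholder, and no call is made because a real request needs an Azure AD bearer token:

```python
import json

# Hypothetical subscription ID, for illustration only.
subscription_id = "00000000-0000-0000-0000-000000000000"
api_version = "2023-01-01"  # assumption: confirm the current api-version in the REST reference

# A PUT to this URL (with an "Authorization: Bearer <token>" header) enables the plan.
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/providers/Microsoft.Security/pricings/StorageAccounts"
    f"?api-version={api_version}"
)

# "Standard" turns the plan on ("Free" turns it off); the subPlan name shown
# here is an assumption based on the new per-storage-account plan.
body = json.dumps({
    "properties": {
        "pricingTier": "Standard",
        "subPlan": "DefenderForStorageV2",
    }
})

print(url)
```

The same resource can be templated in Bicep or Terraform, which is why the policy and IaC paths above scale better than per-account portal changes.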
Enabling Defender for Storage via a policy is recommended because it facilitates enablement at scale and ensures that a consistent security policy is applied across all existing and future storage accounts within the defined scope (such as entire management groups). This keeps the storage accounts protected with Defender for Storage according to the organization's defined configuration. Enabling Defender for Storage via a policy is recommended because it facilitates ## Next steps - Learn how to [enable and Configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).+ |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 08/14/2023 Last updated : 08/22/2023 # Important upcoming changes to Microsoft Defender for Cloud > [!IMPORTANT] > The information on this page relates to pre-release products or features, which may be substantially modified before they are commercially released, if ever. Microsoft makes no commitments or warranties, express or implied, with respect to the information provided here.+ [Defender for Servers](#defender-for-servers) On this page, you can learn about changes that are planned for Defender for Cloud. It describes planned modifications to the product that might affect things like your secure score or workflows. If you're looking for the latest release notes, you can find them in the [What's | [Replacing the "Key Vaults should have purge protection enabled" recommendation with combined recommendation "Key Vaults should have deletion protection enabled"](#replacing-the-key-vaults-should-have-purge-protection-enabled-recommendation-with-combined-recommendation-key-vaults-should-have-deletion-protection-enabled) | June 2023| | [Changes to the Defender for DevOps recommendations environment source and resource ID](#changes-to-the-defender-for-devops-recommendations-environment-source-and-resource-id) | August 2023 | | [DevOps Resource Deduplication for Defender for DevOps](#devops-resource-deduplication-for-defender-for-devops) | August 2023 |-| [Business model and pricing updates for Defender for Cloud plans](#business-model-and-pricing-updates-for-defender-for-cloud-plans) | August 2023 | | [Update naming format of Azure Center for Internet Security standards in regulatory 
compliance](#update-naming-format-of-azure-center-for-internet-security-standards-in-regulatory-compliance) | August 2023 | | [Preview alerts for DNS servers to be deprecated](#preview-alerts-for-dns-servers-to-be-deprecated) | August 2023 | | [Deprecate and replace recommendations App Service Client Certificates](#deprecate-and-replace-recommendations-app-service-client-certificates) | August 2023 | | [Classic connectors for multicloud will be retired](#classic-connectors-for-multicloud-will-be-retired) | September 2023 |+| [Replacing secret scanning recommendation results in Defender for DevOps from CredScan with GitHub Advanced Security for Azure DevOps powered secret scanning](#replacing-secret-scanning-recommendation-results-in-defender-for-devops-from-credscan-with-github-advanced-security-for-azure-devops-powered-secret-scanning) | September 2023 | | [Change to the Log Analytics daily cap](#change-to-the-log-analytics-daily-cap) | September 2023 |+| [Deprecating and replacing "Microsoft Defender for Storage plan should be enabled" recommendation](#deprecating-and-replacing-microsoft-defender-for-storage-plan-should-be-enabled-recommendation) | September 2023| | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | August 2024 | +### Replacing secret scanning recommendation results in Defender for DevOps from CredScan with GitHub Advanced Security for Azure DevOps powered secret scanning ++**Estimated date for change: September 2023** ++Currently, the recommendations for secret scanning in Azure DevOps repositories by Defender for DevOps are based on the results of CredScan, which is manually run using the Microsoft Security DevOps Extension. However, this mechanism of running secret scanning is being deprecated in September 2023. Instead, you can see secret scanning results generated by GitHub Advanced Security for Azure DevOps (GHAzDO). 
++As GHAzDO enters Public Preview, we're working towards unifying the secret scanning experience across both GitHub Advanced Security and GHAzDO. This unification enables you to receive detections across all branches, git history, and secret leak protection via push protection to your repositories. This process can all be done with a single button press, without requiring any pipeline runs. ++For more information about GHAzDO Secret Scanning, see [Set up secret scanning](/azure/devops/repos/security/configure-github-advanced-security-features#set-up-secret-scanning). + ### Classic connectors for multicloud will be retired **Estimated date for change: September 15, 2023** The Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA The following table explains how each capability will be provided after the Log Analytics agent retirement: -| **Feature** | **Support** | **Alternative** | +| **Feature** | **Deprecation plan** | **Alternative** | | | | | | Defender for Endpoint/Defender for Cloud integration for down level machines (Windows Server 2012 R2, 2016) | Defender for Endpoint integration that uses the legacy Defender for Endpoint sensor and the Log Analytics agent (for Windows Server 2016 and Windows Server 2012 R2 machines) won't be supported after August 2024. | Enable the GA [unified agent](/microsoft-365/security/defender-endpoint/configure-server-endpoints#new-windows-server-2012-r2-and-2016-functionality-in-the-modern-unified-solution) integration to maintain support for machines, and receive the full extended feature set. For more information, see [Enable the Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md#windows). | | OS-level threat detection (agent-based) | OS-level threat detection based on the Log Analytics agent won't be available after August 2024. A full list of deprecated detections will be provided soon.
| OS-level detections are provided by Defender for Endpoint integration and are already GA. |-| Adaptive application controls | The [current GA version](adaptive-application-controls.md) based on the Log Analytics agent will be deprecated in August 2024, along with the preview version based on the Azure monitoring agent. | The next generation of this feature is currently under evaluation, further information will be provided soon. | -| Endpoint protection discovery recommendations | The current [GA and preview recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender for CSPM, and won't cover on-premises or Arc-connected machines. | -| Missing OS patches (system updates) | Recommendations to apply system updates based on the Log Analytics agent won't be available after August 2024. | [New recommendations](release-notes.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), based on integration with Update Management Center, are already in GA, with no agent dependencies. | +| Adaptive application controls | The [current GA version](adaptive-application-controls.md) based on the Log Analytics agent will be deprecated in August 2024, along with the preview version based on the Azure monitoring agent. | Adaptive Application Controls feature as it is today will be discontinued, and new capabilities in the application control space (on top of what Defender for Endpoint and Windows Defender Application Control offer today) will be considered as part of future Defender for Servers roadmap.
| +| Endpoint protection discovery recommendations | The current [GA recommendations](endpoint-protection-recommendations-technical.md) to install endpoint protection and fix health issues in the detected solutions will be deprecated in August 2024. The preview recommendations available today over Azure Monitor agent (AMA) will be deprecated when the alternative is provided over Agentless Disk Scanning capability. | A new agentless version will be provided for discovery and configuration gaps by April 2024. As part of this upgrade, this feature will be provided as a component of Defender for Servers plan 2 and Defender for CSPM, and won't cover on-premises or Arc-connected machines. | +| Missing OS patches (system updates) | Recommendations to apply system updates based on the Log Analytics agent won't be available after August 2024. The preview version available today over Guest Configuration agent will be deprecated when the alternative is provided over MDVM premium capabilities. Support of this feature for Docker-hub and VMMS will be deprecated in Aug 2024 and will be considered as part of future Defender for Servers roadmap.| [New recommendations](release-notes.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), based on integration with Update Management Center, are already in GA, with no agent dependencies. | | OS misconfigurations (Azure Security Benchmark recommendations) | The [current GA version](apply-security-baseline.md) based on the Log Analytics agent won't be available after August 2024. The current preview version that uses the Guest Configuration agent will be deprecated as the Microsoft Defender Vulnerability Management integration becomes available. | A new version, based on integration with Premium Microsoft Defender Vulnerability Management, will be available early in 2024, as part of Defender for Servers plan 2.
|-| File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won't be available after August 2024. | A new version of this feature, either agent-based or agentless, will be available by April 2024. | +| File integrity monitoring | The [current GA version](file-integrity-monitoring-enable-log-analytics.md) based on the Log Analytics agent won't be available after August 2024. The FIM [Public Preview version](file-integrity-monitoring-enable-ama.md) based on Azure Monitor Agent (AMA), will be deprecated when the alternative is provided over Defender for Endpoint.| A new version of this feature will be provided based on Microsoft Defender for Endpoint integration by April 2024. | | The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion | The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion over the defined tables will remain supported via the AMA agent for the machines under subscriptions covered by Defender for Servers P2. Every machine is eligible for the benefit only once, even if both Log Analytics agent and Azure Monitor agent are installed on it. | | ##### Log analytics and Azure Monitoring agents autoprovisioning experience Customers that rely on the `resourceID` to query DevOps recommendation data will Queries will need to be updated to include both the old and new `resourceID` to show both, for example, total over time. -Additionally, customers that have created custom queries using the DevOps workbook will need to update the assessment keys for the impacted DevOps security recommendations.
The template DevOps workbook is planned to be updated to reflect the new recommendations, although during the actual migration, customers may experience some errors with the workbook. -The recommendations page's experience will have minimal impact and deprecated assessments may continue to show for a maximum of 14 days if new scan results aren't submitted. +The experience on the recommendations page will be impacted and require customers to query under "All recommendations" to view the new DevOps recommendations. For Azure DevOps, deprecated assessments may continue to show for a maximum of 14 days if new pipelines are not run. Refer to [Defender for DevOps Common questions](/azure/defender-for-cloud/faq-defender-for-devops#why-don-t-i-see-recommendations-for-findings-) for details. ### DevOps Resource Deduplication for Defender for DevOps If you don't have an instance of a DevOps organization onboarded more than once Customers will have until July 31, 2023 to resolve this issue. After this date, only the most recent DevOps Connector created where an instance of the DevOps organization exists will remain onboarded to Defender for DevOps. For example, if Organization Contoso exists in both connectorA and connectorB, and connectorB was created after connectorA, then connectorA will be removed from Defender for DevOps. -### Business model and pricing updates for Defender for Cloud plans --**Estimated date for change: August 2023** --Microsoft Defender for Cloud has three plans that offer service layer protection: --- Defender for Key Vault--- Defender for Azure Resource Manager--- Defender for DNS--These plans are transitioning to a new business model with different pricing and packaging to address customer feedback regarding spending predictability and simplifying the overall cost structure. 
--**Business model and pricing changes summary**: --Existing customers of Defender for Key-Vault, Defender for Azure Resource Manager, and Defender for DNS will keep their current business model and pricing unless they actively choose to switch to the new business model and price. --- **Defender for Azure Resource Manager**: This plan will have a fixed price per subscription per month. Customers will have the option to switch to the new business model by selecting the Defender for Azure Resource Manager new per-subscription model.--- **Defender for Key Vault**: This plan will have a fixed price per vault at per month with no overage charge. Customers will have the option to switch to the new business model by selecting the Defender for Key Vault new per-vault model--- **Defender for DNS**: Defender for Servers Plan 2 customers will gain access to Defender for DNS value as part of Defender for Servers Plan 2 at no extra cost. Customers that have both Defender for Server Plan 2 and Defender for DNS will no longer be charged for Defender for DNS.
Defender for DNS will no longer be available as a standalone plan.--For more information on all of these plans, check out the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h) - ### Update naming format of Azure Center for Internet Security standards in regulatory compliance **Estimated date for change: August 2023** At that time, all billable data types will be capped if the daily cap is met. Th Learn more about [workspaces with Microsoft Defender for Cloud](../azure-monitor/logs/daily-cap.md#workspaces-with-microsoft-defender-for-cloud). +## Deprecating and replacing "Microsoft Defender for Storage plan should be enabled" recommendation ++**Estimated date for change: September 2023** ++The recommendation `Microsoft Defender for Storage plan should be enabled` will be deprecated on public clouds and will remain available on Azure Government cloud. This recommendation will be replaced by a new recommendation: `Microsoft Defender for Storage plan should be enabled with Malware Scanning and Sensitive Data Threat Detection`. This recommendation ensures that Defender for Storage is enabled at the subscription level with malware scanning and sensitive data threat detection capabilities. ++| Policy Name | Description | Policy Effect | Version | +|--|--|--|--| +| [Microsoft Defender for Storage should be enabled](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f640d2586-54d2-465f-877f-9ffc1d2109f4) | Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. 
The new Defender for Storage plan includes malware scanning and sensitive data threat detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. | Audit, disabled | 1.0.0 | ++Learn more about [Microsoft Defender for Storage](defender-for-storage-introduction.md). + ## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).++ |
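The policy row above references the built-in definition ID `640d2586-54d2-465f-877f-9ffc1d2109f4`. As a rough sketch of how such a built-in definition could be assigned at subscription scope in an ARM template — the assignment `name`, `displayName`, and `apiVersion` below are illustrative assumptions, not values from the article:

```json
{
  "type": "Microsoft.Authorization/policyAssignments",
  "apiVersion": "2022-06-01",
  "name": "enable-defender-for-storage",
  "properties": {
    "displayName": "Microsoft Defender for Storage should be enabled",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/640d2586-54d2-465f-877f-9ffc1d2109f4"
  }
}
```

Because the definition's effect is `Audit`, an assignment like this only reports non-compliant subscriptions; it doesn't enable the plan by itself.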
defender-for-cloud | Workflow Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md | This article describes the workflow automation feature of Microsoft Defender for 1. Select **(+) Add**. - :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of the screen to create a logic app." lightbox="media/workflow-automation/logic-apps-create-new.png"::: + :::image type="content" source="media/workflow-automation/logic-apps-create-new.png" alt-text="Screenshot of where to create a logic app." lightbox="media/workflow-automation/logic-apps-create-new.png"::: 1. Fill out all required fields and select **Review + Create**. |
defender-for-iot | Cli Ot Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md | Use the following commands to restore data on your OT network sensor using the m |User |Command |Full command syntax | |||| |**support** | `system restore` | No attributes |-|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-restore` | No attributes | +|**cyberx**, or **support** with [root access](references-work-with-defender-for-iot-cli-commands.md#access-the-system-root-as-a-support-user) | ` cyberx-xsense-system-restore` | `-f` `<filename>` | For example, for the *support* user: |
defender-for-iot | Concept Supported Protocols | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md | Horizon provides: :::image type="content" source="media/concept-supported-protocols/sdk-horizon.png" alt-text="Infographic that describes features provided by the Horizon SDK." border="false"::: -### Collaborate with the Horizon community --Join our community to help lead the way towards digital transformation and industry-wide collaboration for protocol support! --The Horizon ICS community shares knowledge between domain experts in critical infrastructures, building management, production lines, transportation systems, and leading industries. For example, our community shares tutorials, discussion forums, instructor-led training, educational white papers, and more. --To join the Horizon community, email us at: [horizon-community@microsoft.com](mailto:horizon-community@microsoft.com) -- ## Next steps For more information: |
defender-for-iot | How To Create Data Mining Queries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md | Title: Create data mining queries and reports in Defender for IoT description: Learn how to create granular reports about network devices. Previously updated : 12/05/2022 Last updated : 08/28/2023 The following out-of-the-box reports are listed in the **Recommended** area, rea | **Excluded CVEs** | Lists all detected devices that have CVEs that were manually excluded from the **CVEs** report. | | **Active Devices (Last 24 Hours)** | Lists all detected devices that have had active traffic within the last 24 hours. | | **Remote Access** | Lists all detected devices that communicate through remote session protocols. |-| **CVEs** | Lists all detected devices with known vulnerabilities, along with CVSSv2 risk scores. <br> <br> Select **Edit** to delete and exclude specific CVEs from the report. <br><br> **Tip**: Delete CVEs to exclude them from the list so that your attack vector reports reflect your network more accurately. | +| **CVEs** | Lists all detected devices with known vulnerabilities, along with CVSS risk scores. <br> <br> Select **Edit** to delete and exclude specific CVEs from the report. <br><br> **Tip**: Delete CVEs to exclude them from the list so that your attack vector reports reflect your network more accurately. | | **Nonactive Devices (Last 7 Days)** | Lists all detected devices that haven't communicated for the past seven days. | Select a report to view today's data.
Use the :::image type="icon" source="media/how-to-generate-reports/refresh-icon.png" border="false"::: **Refresh**, :::image type="icon" source="media/how-to-generate-reports/expand-all-icon.png" border="false"::: **Expand all**, and :::image type="icon" source="media/how-to-generate-reports/collapse-all-icon.png" border="false"::: **Collapse all** options to update and change your report views. |
defender-for-iot | How To Work With The Sensor Device Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md | The following table lists available responses for each notification, and when we | **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnet Configuration** and [configure subnets](how-to-manage-individual-sensors.md#update-the-ot-sensor-network-configuration). <br />- **Dismiss**: Remove the notification. |**Dismiss** | | **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. | Set with new operating system only if not already configured manually. <br><br>If the operating system has already been configured: **Dismiss**. | | **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**: <br />Remove the notification. |**Dismiss** |-| **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling| ## View a device map for a specific zone |
defender-for-iot | How To Work With Threat Intelligence Packages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages.md | Title: Maintain threat intelligence packages on OT network sensors - Microsoft Defender for IoT description: Learn how to maintain threat intelligence packages on OT network sensors. Previously updated : 02/09/2023 Last updated : 08/28/2023 Microsoft Defender for IoT regularly delivers threat intelligence package update Threat intelligence packages contain signatures, such as malware signatures, CVEs, and other security content. +CVE scores shown are aligned with the [National Vulnerability Database (NVD)](https://nvd.nist.gov/vuln-metrics/cvss), and CVSS v3 scores are shown if they're relevant. If there's no relevant CVSS v3 score, the CVSS v2 score is shown instead. + > [!TIP] > We recommend ensuring that your OT network sensors always have the latest threat intelligence package installed so that you always have the full context of a threat before an environment is affected, and increased relevancy, accuracy, and actionable recommendations. > |
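The v3-over-v2 precedence described above is simple to express in code. Here's a minimal sketch of the display rule — the `cvss_v3`/`cvss_v2` field names are illustrative assumptions, not Defender for IoT's actual data model:

```python
def displayed_cvss_score(cve):
    """Prefer the CVSS v3 score when one is relevant; otherwise fall back to CVSS v2."""
    if cve.get("cvss_v3") is not None:
        return cve["cvss_v3"]
    return cve.get("cvss_v2")

# A CVE with both scores displays its v3 score.
print(displayed_cvss_score({"cvss_v3": 9.8, "cvss_v2": 10.0}))  # prints 9.8
```

Note that v2 and v3 scores for the same CVE often differ, so a sensor updated to the August package can show different numbers for familiar CVEs.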
defender-for-iot | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md | Title: OT monitoring software versions - Microsoft Defender for IoT description: This article lists Microsoft Defender for IoT on-premises OT monitoring software versions, including release and support dates and highlights for new features. Previously updated : 07/03/2023 Last updated : 08/09/2023 # OT monitoring software versions This version includes the following updates and enhancements: - [UI enhancements for downloading PCAP files from the sensor](how-to-view-alerts.md#access-alert-pcap-data) - [*cyberx* and *cyberx_host* users aren't enabled by default](roles-on-premises.md#default-privileged-on-premises-users) +> [!NOTE] +> Due to internal improvements to the OT sensor's device inventory, column edits made to your device inventory aren't retained after updating to version 23.1.2. If you'd previously edited the columns shown in your device inventory, you'll need to make those same edits again after updating your sensor. +> + ## Versions 22.3.x ### 22.3.10 |
defender-for-iot | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md | Title: What's new in Microsoft Defender for IoT description: This article describes features available in Microsoft Defender for IoT, across both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 08/09/2023 Last updated : 08/28/2023 Features released earlier than nine months ago are described in the [What's new > Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > +## August 2023 ++|Service area |Updates | +||| +| **OT networks** | [Defender for IoT's CVEs align to CVSS v3](#defender-for-iots-cves-align-to-cvss-v3) | ++### Defender for IoT's CVEs align to CVSS v3 ++CVE scores shown in the OT sensor and on the Azure portal are aligned with the [National Vulnerability Database (NVD)](https://nvd.nist.gov/vuln-metrics/cvss), and starting with Defender for IoT's August threat intelligence update, CVSS v3 scores are shown if they're relevant. If there's no relevant CVSS v3 score, the CVSS v2 score is shown instead. ++View CVE data from the Azure portal, on the **Vulnerabilities** tab of a device's details page, with resources available in the Microsoft Sentinel solution, or in a data mining query on your OT sensor.
For more information, see: ++- [Maintain threat intelligence packages on OT network sensors](how-to-work-with-threat-intelligence-packages.md) +- [View full device details](how-to-manage-device-inventory-for-organizations.md#view-full-device-details) +- [Tutorial: Investigate and detect threats for IoT devices with Microsoft Sentinel](iot-advanced-threat-monitoring.md) +- [Create data mining queries](how-to-create-data-mining-queries.md) + ## July 2023 |Service area |Updates | |
deployment-environments | Concept Environments Reliability Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-reliability-availability.md | + + Title: Reliability and availability in Azure Deployment Environments +description: Learn how Azure Deployment Environments supports disaster recovery. Understand reliability and availability within a single region and across regions. ++++ Last updated : 08/25/2023++++# Reliability in Azure Deployment Environments ++This article describes reliability support in Azure Deployment Environments, and covers intra-regional resiliency with availability zones and inter-regional resiliency with disaster recovery. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/well-architected/resiliency/overview). ++## Availability zone support ++Azure availability zones consist of at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. Availability zones are designed to ensure high availability. If a local zone fails, services fail over to the other availability zones to provide continuity in service with minimal interruption. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. ++Availability zone support for all resources in Azure Deployment Environments is enabled automatically. There's no action for you to take.
++Regions supported: +- West US 2 +- South Central US +- UK South +- West Europe +- East US +- Australia East +- East US 2 +- North Europe +- West US 3 +- Japan East +- East Asia +- Central India +- Korea Central +- Canada Central ++For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/reliability/availability-zones-overview). ++## Disaster recovery: cross-region failover ++Azure provides protection from regional or large geography disasters by making use of another region if there's a region-wide disaster. ++You can replicate the following Deployment Environments resources in an alternate region to prevent data loss if a cross-region failover occurs: + +- Dev center +- Project +- Catalog +- Catalog items +- Dev center environment type +- Project environment type +- Environments ++++For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](/azure/site-recovery/azure-to-azure-architecture). ++## Related content ++- To learn more about how Azure supports reliability, see [Azure reliability](/azure/reliability). +- To learn more about Deployment Environments resources, see [Azure Deployment Environments key concepts](./concept-environments-key-concepts.md). +- To get started with Deployment Environments, see [Quickstart: Create and configure the Azure Deployment Environments dev center](./quickstart-create-and-configure-devcenter.md). |
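The replication approach listed above amounts to redeploying the same resource definitions with a different `location`. As an illustrative sketch only — `Microsoft.DevCenter/devcenters` is the dev center resource type, but the `apiVersion`, resource name, and region below are assumptions, not values from the article:

```json
{
  "type": "Microsoft.DevCenter/devcenters",
  "apiVersion": "2023-04-01",
  "name": "contoso-devcenter-secondary",
  "location": "westus3"
}
```

Keeping a deployable copy of the dev center, projects, catalogs, and environment types in a second region lets you recreate the service there if the primary region becomes unavailable.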
deployment-environments | Tutorial Deploy Environments In Cicd Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/tutorial-deploy-environments-in-cicd-github.md | You use a workflow that features three branches: main, dev, and test. This workflow is a small example for the purposes of this tutorial. Real world workflows may be more complex. +Before beginning this tutorial, you can familiarize yourself with Deployment Environments resources and concepts by reviewing [Key concepts for Azure Deployment Environments](/azure/deployment-environments/concept-environments-key-concepts). + In this tutorial, you learn how to: > [!div class="checklist"] |
devtest-labs | How To Move Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-labs.md | -# Move DevTest Labs to another region +# Move DevTest Labs and schedules to another region -To move a lab, create a copy of an existing lab in another region. +You can move DevTest Labs and their associated schedules to another region. To move a lab, create a copy of an existing lab in another region. When you've moved your lab, and you have a virtual machine (VM) in the target region, you can move your lab schedules. In this article, you learn how to: > [!div class="checklist"] In this article, you learn how to: > - Deploy the template to create the new lab in the target region. > - Configure the new lab. > - Move data to the new lab.+> - Move schedules to the new lab. > - Delete the resources in the source region. ## Prerequisites In this article, you learn how to: - the Stored Secrets - PAT tokens of the private Artifact Repos to move the private repos together with the lab. -## Prepare to move +- When moving a lab schedule, ensure a Compute VM exists in the target region. -To get started, export and modify a Resource Manager template. +## Move a lab ++The following section describes how to create and customize an ARM template to move a lab from one region to another. ++You can move a schedule without moving a lab, if you have a VM in the target region. To do so, see [Move a schedule](#move-a-schedule). ++### Prepare to move a lab -### Prepare your Virtual Network +When you move a lab, there are some steps you must take to prepare for the move.
You need to: ++- Prepare the virtual network +- Export an ARM template of the lab +- Modify the template +- Deploy the template to move the lab +- Configure the new lab +- Swap the OS disks of the Compute VMs under the new VMs +- Clean up the original lab ++#### Prepare the Virtual Network ++To get started, export and modify a Resource Manager template. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. If you don't have [Resource Group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) under the target region, create one now. -1. Move your current Virtual Network to the new region and resource group using the steps included in the article, "[Move an Azure virtual network to another region](../virtual-network/move-across-regions-vnet-portal.md)". +1. Move the current Virtual Network to the new region and resource group using the steps included in the article, "[Move an Azure virtual network to another region](../virtual-network/move-across-regions-vnet-portal.md)". Alternately, you can create a new virtual network, if you don't have to keep the original one. -### Export an ARM template of your lab -Next, you export a JSON template that contains settings that describe your lab. +#### Export an ARM template of the lab -Next, you export a JSON template that contains settings that describe the lab. To export a template by using Azure portal:
Select **Template deployment**.-- ![Azure Resource Manager templates library](../storage/common/media/storage-account-move/azure-resource-manager-template-library.png) + + :::image type="content" source="./media/how-to-move-labs/azure-resource-manager-template-library.png" alt-text="Screenshot that shows the Azure Marketplace with template deployment selected."::: 1. Select **Create**. To update the template by using Azure portal: } ``` - 1. Find the `"type": "microsoft.devtestlab/labs/virtualnetworks"` resource. If you created a new virtual network earlier in these steps, you must add the actual subnet name in `/subnets/[SUBNET_NAME]`. If you chose to move the Vnet to a new region, you should skip this step. + 1. Find the `"type": "microsoft.devtestlab/labs/virtualnetworks"` resource. If you created a new virtual network earlier in these steps, you must add the actual subnet name in `/subnets/[SUBNET_NAME]`. If you chose to move the virtual network to a new region, you should skip this step. 1. Find the `"type": "microsoft.devtestlab/labs/virtualmachines"` resource. To update the template by using Azure portal: 1. In the editor, save the template. -## Deploy to move +### Deploy the template to move the lab Deploy the template to create a new lab in the target region. Deploy the template to create a new lab in the target region. 1. Select the bell icon (notifications) from the top of the screen to see the deployment status. You shall see **Deployment in progress**. Wait until the deployment is completed. -### Configure the new lab +#### Configure the new lab While most Lab resources have been replicated under the new region using the ARM template, a few edits still need to be moved manually. 1. Add the Compute Gallery back to the lab if there are any in the original one. 1. Add the policies "Virtual machines per user", "Virtual machines per lab" and "Allowed Virtual machine sizes" back to the moved lab -### Swap the OS disks of the Compute VMs under the new VMs. 
+#### Swap the OS disks of the Compute VMs under the new VMs Note that the VMs under the new lab have the same specs as the ones under the old lab; the only difference is their OS disks. 1. Swap the OS disk of the Compute VM under the new lab with the new disk. To learn how, see the article, "[Change the OS disk used by an Azure VM using PowerShell](../virtual-machines/windows/os-disk-swap.md)". +## Move a schedule ++There are two ways to move a schedule: ++ - Manually recreate the schedules on the moved VMs. This process can be time-consuming and error-prone. This approach is most useful when you have only a few schedules and VMs. + - Export and redeploy the schedules by using ARM templates. ++Use the following steps to export and redeploy your schedule in another Azure region by using an ARM template: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++2. Go to the source resource group that held your VMs. ++3. On the **Resource Group Overview** page, under **Resources**, select **Show hidden types**. ++4. Select all resources with the type **microsoft.devtestlab/schedules**. + +5. Select **Export template**. ++ :::image type="content" source="./media/how-to-move-labs/move-compute-schedule.png" alt-text="Screenshot that shows the hidden resources in a resource group, with schedules selected."::: ++6. On the **Export resource group template** page, select **Deploy**. ++7. On the **Custom deployment** page, select **Edit template**. + +8. In the template code, change all instances of `"location": "<old location>"` to `"location": "<new location>"` and then select **Save**. ++9. On the **Custom deployment** page, enter values that match the target VM: ++ |Name|Value| + |-|-| + |**Subscription**|Select an Azure subscription.| + |**Resource group**|Select the resource group name. | + |**Region**|Select a location for the lab schedule. For example, **Central US**.
| + |**Schedule Name**|Must be a globally unique name. | + |**VirtualMachine_xxx_externalId**|Must be the target VM. | + + :::image type="content" source="./media/how-to-move-labs/move-schedule-custom-deployment.png" alt-text="Screenshot that shows the custom deployment page, with new location values for the relevant settings."::: ++ >[!IMPORTANT] + >Each schedule must have a globally unique name; you will need to change the schedule name for the new location. ++10. Select **Review and create** to create the deployment. ++11. When the deployment is complete, verify that the new schedule is configured correctly on the new VM. + ## Discard or clean up -After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare-to-move) and [Move](#deploy-to-move) sections of this article. +After the deployment, if you want to start over, you can delete the target lab, and repeat the steps described in the [Prepare](#prepare-to-move-a-lab) and [Move](#deploy-the-template-to-move-the-lab) sections of this article. To commit the changes and complete the move, you must delete the original lab. To remove a lab by using the Azure portal: 1. Select **Delete**, and confirm. +You can also choose to clean up the original schedules if they're no longer used. Go to the original schedule resource group (where you exported templates from in step 5 above) and delete the schedule resource. + ## Next steps In this article, you moved DevTest Labs from one region to another and cleaned up the source resources. 
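The find-and-replace in step 8 above can also be scripted before you deploy. The following is a minimal sketch, not part of the documented procedure; the `exported` template and its values are made-up examples. It rewrites every `location` property in an exported ARM template:

```python
def retarget_locations(template, new_location):
    """Recursively replace every string-valued 'location' property in an
    exported ARM template, returning a new structure (input is unchanged)."""
    if isinstance(template, dict):
        return {
            key: new_location if key == "location" and isinstance(value, str)
            else retarget_locations(value, new_location)
            for key, value in template.items()
        }
    if isinstance(template, list):
        return [retarget_locations(item, new_location) for item in template]
    return template

# Hypothetical exported template fragment with one schedule resource.
exported = {
    "resources": [
        {
            "type": "microsoft.devtestlab/schedules",
            "location": "eastus",
            "properties": {"status": "Enabled"},
        }
    ]
}

# Retarget the template to the new region before redeploying it.
moved = retarget_locations(exported, "centralus")
```

Because the function walks nested dictionaries and lists, it also catches `location` values on child resources, which is easy to miss during a manual edit.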
To learn more about moving resources between regions and disaster recovery in Azure, refer to: - [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)-- [Move Microsoft.DevtestLab/schedules to another region](./how-to-move-schedule-to-new-region.md)+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md) |
devtest-labs | How To Move Schedule To New Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/how-to-move-schedule-to-new-region.md | - Title: Move a schedule to another region -description: This article explains how to move a top level schedule to another Azure region. --- Previously updated : 05/09/2022---# Move a schedule to another region --In this article, you'll learn how to move a schedule by using an Azure Resource Manager (ARM) template. --DevTest Labs supports two types of schedules. --- Schedules apply only to compute virtual machines (VMs): schedules are stored as microsoft.devtestlab/schedules resources, and often referred to as top level schedules, or simply schedules. --- Lab schedules apply only to DevTest Labs (DTL) VMs: lab schedules. They are stored as microsoft.devtestlab/labs/schedules resources. This type of schedule is not covered in this article.--In this article, you'll learn how to: -> [!div class="checklist"] -> > -> - Export an ARM template that contains your schedules. -> - Modify the template by adding or updating the target region and other parameters. -> - Delete the resources in the source region. --## Prerequisites --- Ensure that the services and features that your account uses are supported in the target region.-- For preview features, ensure that your subscription is allowlisted for the target region.-- Ensure a Compute VM exists in the target region.--## Move an existing schedule -There are two ways to move a schedule: ---Use the following steps to export and redeploy your schedule in another Azure region by using an ARM template: --1. Sign in to the [Azure portal](https://portal.azure.com). --2. Go to the source resource group that held your VMs. --3. On the **Resource Group Overview** page, under **Resources**, select **Show hidden types**. --4. Select all resources with the type **microsoft.devtestlab/schedules**. - -5. Select **Export template**. 
-- :::image type="content" source="./media/how-to-move-schedule-to-new-region/move-compute-schedule.png" alt-text="Screenshot that shows the hidden resources in a resource group, with schedules selected."::: --6. On the **Export resource group template** page, select **Deploy**. --7. On the **Custom deployment** page, select **Edit template**. - -8. In the template code, change all instances of `"location": "<old location>"` to `"location": "<new location>"` and then select **Save**. --9. On the **Custom deployment** page, enter values that match the target VM: -- |Name|Value| - |-|-| - |**Subscription**|Select an Azure subscription.| - |**Resource group**|Select the resource group name. | - |**Region**|Select a location for the lab schedule. For example, **Central US**. | - |**Schedule Name**|Must be a globally unique name. | - |**VirtualMachine_xxx_externalId**|Must be the target VM. | - - :::image type="content" source="./media/how-to-move-schedule-to-new-region/move-schedule-custom-deployment.png" alt-text="Screenshot that shows the custom deployment page, with new location values for the relevant settings."::: -- >[!IMPORTANT] - >Each schedule must have a globally unique name; you will need to change the schedule name for the new location. --10. Select **Review and create** to create the deployment. --11. When the deployment is complete, verify that the new schedule is configured correctly on the new VM. --## Discard or clean up --Now you can choose to clean up the original schedules if they're no longer used. Go to the original schedule resource group (where you exported templates from in step 5 above) and delete the schedule resource. --## Next steps --In this article, you moved a schedule from one region to another and cleaned up the source resources. 
To learn more about moving resources between regions and disaster recovery in Azure, refer to: --- [Move a DevTest Labs to another region](./how-to-move-labs.md).-- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).-- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md). |
digital-twins | Concepts Data History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md | Once graph updates are historized to Azure Data Explorer, you can run joint quer For more of an introduction to data history, including a quick demo, watch the following IoT show video: -<iframe src="https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15" width="1080" height="530"></iframe> +> [!VIDEO https://aka.ms/docs/player?id=2f9a9af4-1556-44ea-ab5f-afcfd6eb9c15] ## Resources and data flow |
digital-twins | How To Use Power Platform Logic Apps Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-power-platform-logic-apps-connector.md | The connector is a wrapper around the Azure Digital Twins [data plane APIs](conc For an introduction to the connector, including a quick demo, watch the following IoT show video: -<iframe src="https://aka.ms/docs/player?id=d6c200c2-f622-4254-b61f-d5db613bbd11" width="1080" height="530"></iframe> +> [!VIDEO https://aka.ms/docs/player?id=d6c200c2-f622-4254-b61f-d5db613bbd11] You can also complete a basic walkthrough in the blog post [Simplify building automated workflows and apps powered by Azure Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things-blog/simplify-building-automated-workflows-and-apps-powered-by-azure/ba-p/3763051). For more information about the connector, including a complete list of the connector's actions and their parameters, see the [Azure Digital Twins connector reference documentation](/connectors/azuredigitaltwins). |
dms | Concepts Migrate Azure Mysql Replicate Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-replicate-changes.md | You must run an offline migration scenario with "Enable Transactional Consistency" While running the replicate changes migration, when your target is almost caught up with the source server, stop all incoming transactions to the source database and wait until all pending transactions have been applied to the target database. To confirm that the target database is up to date with the source server, run the query 'SHOW MASTER STATUS;' on the source, then compare that position to the last committed binlog event (displayed under Migration Progress). When the two positions are the same, the target has caught up with all changes, and you can start the cutover. + ### How Replicate Changes works The current implementation is based on streaming binlog changes from the source server and applying them to the target server. Like Data-in replication, this is easier to set up and doesn't require a physical connection between the source and the target servers. The server can send the binlog as a stream that contains binary data, as documented [here](https://dev.mysql.com/doc/dev/mysql-server/latest/page_protocol_replication.html). The client can specify the initial log position to start the stream with. The log position is identified by the log file name, the position within that file, and optionally a GTID (Global Transaction ID) if GTID mode is enabled at the source. -The data changes are sent as "row" events, which contain data for individual rows (prior and/or after the change depending on the operation type, which is insert, delete, or update). The row events are then applied in their raw format using a BINLOG statement: [MySQL 8.0 Reference Manual :: 13.7.8.1 BINLOG Statement](https://dev.mysql.com/doc/refman/8.0/en/binlog.html).
+The data changes are sent as "row" events, which contain data for individual rows (prior and/or after the change depending on the operation type, which is insert, delete, or update). The row events are then applied in their raw format using a BINLOG statement: [MySQL 8.0 Reference Manual :: 13.7.8.1 BINLOG Statement](https://dev.mysql.com/doc/refman/8.0/en/binlog.html). But for a DMS migration to a 5.7 server, DMS doesn't apply changes as BINLOG statements (since DMS doesn't have the necessary privileges to do so) and instead translates the row events into INSERT, UPDATE, or DELETE statements. ## Prerequisites To complete the replicate changes migration successfully, ensure that the following requirements are met: - When performing a replicate changes migration, the name of the database on the target server must be the same as the name on the source server. - Support is limited to the ROW binlog format. - DDL changes replication is supported only when you have selected the option for migrating the entire server in the DMS UI.+- Renaming databases or tables is not supported when replicating changes. ## Next steps |
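The cutover check described above, comparing the position from `SHOW MASTER STATUS;` on the source with the last committed binlog event shown under Migration Progress, can be sketched as a small helper. This is a hypothetical illustration, not part of DMS, and it assumes the standard `mysql-bin.NNNNNN` binlog file naming:

```python
def caught_up(source_status, last_applied):
    """Return True when the target has applied every binlog event the
    source has written.

    source_status: (file, position) from SHOW MASTER STATUS on the source.
    last_applied:  (file, position) of the last committed binlog event
                   shown under Migration Progress.

    Binlog file names carry a numeric suffix (e.g. 'mysql-bin.000042'),
    so comparing (suffix, position) tuples orders events correctly even
    across a log file rollover.
    """
    def key(status):
        file_name, position = status
        return (int(file_name.rsplit(".", 1)[1]), position)

    return key(last_applied) >= key(source_status)

# Ready for cutover only once both positions match (after stopping writes):
ready = caught_up(("mysql-bin.000042", 154), ("mysql-bin.000042", 154))
```

When `caught_up` returns `True` after incoming transactions have been stopped, no further events can arrive, so the target holds all committed changes and cutover is safe.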
dms | Dms Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md | Title: What is Azure Database Migration Service? description: Overview of Azure Database Migration Service, which provides seamless migrations from many database sources to Azure Data platforms.--++ Last updated 02/08/2023 -With Azure Database Migration Service currently we offer two options: +With Azure Database Migration Service, we currently offer two versions: -1. [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) +1. Database Migration Service - via [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md), [Azure portal](https://portal.azure.com/#create/Microsoft.AzureDMS), PowerShell, and Azure CLI. 1. Database Migration Service (classic) - via Azure portal, PowerShell, and Azure CLI. -**Azure SQL Migration extension for Azure Data Studio** is powered by the latest version of Database Migration Service and provides more features. Currently, it supports SQL Database modernization to Azure. For improved functionality and supportability, consider migrating to Azure SQL Database by using the Azure SQL migration extension for Azure Data Studio. +**Database Migration Service** powers the "Azure SQL Migration" extension for Azure Data Studio and provides more features. The Azure portal, PowerShell, and Azure CLI can also be used to access DMS. Currently, it supports SQL Database modernization to Azure. For improved functionality and supportability, consider migrating to Azure SQL Database by using DMS. **Database Migration Service (classic)** via the Azure portal, PowerShell, and Azure CLI is an older version of the Azure Database Migration Service. It offers database modernization to Azure and supports scenarios like SQL Server, PostgreSQL, MySQL, and MongoDB.
With Azure Database Migration Service currently we offer two options: ## Compare versions -In 2021, a newer version of the Azure Database Migration Service was released as an extension for Azure Data Studio, which improved the functionality, user experience and supportability of the migration service. Consider using the [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) whenever possible. +A newer version of the Azure Database Migration Service is available as an extension for Azure Data Studio and can be accessed from the Azure portal; it improves the functionality, user experience, and supportability of the migration service. Consider using the [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) and the DMS Azure portal experience whenever possible. The following table compares the functionality of the versions of the Database Migration Service: -|Feature |DMS (classic) |Azure SQL extension for Azure Data Studio |Notes| -||||| -|Assessment | No | Yes | Assess compatibility of the source. | -|SKU recommendation | No | Yes | SKU recommendations for the target based on the assessment of the source. | -|Azure SQL Database - Offline migration | Yes | Yes | Migrate to Azure SQL Database offline. | -|Azure SQL Managed Instance - Online migration | Yes |Yes | Migrate to Azure SQL Managed Instance online with minimal downtime. | -|Azure SQL Managed Instance - Offline migration | Yes |Yes | Migrate to Azure SQL Managed Instance offline. | -|SQL Server on Azure SQL VM - Online migration | No | Yes |Migrate to SQL Server on Azure VMs online with minimal downtime.| -|SQL Server on Azure SQL VM - Offline migration | Yes |Yes | Migrate to SQL Server on Azure VMs offline. | -|Migrate logins|Yes | Yes | Migrate logins from your source to your target.| -|Migrate schemas| Yes | No | Migrate schemas from your source to your target. | -|Azure portal support |Yes | Yes | Monitor your migration by using the Azure portal.
| -|Integration with Azure Data Studio | No | Yes | Migration support integrated with Azure Data Studio. | -|Regional availability|Yes |Yes | More regions are available with the extension. | -|Improved user experience| No | Yes | The extension is faster, more secure, and easier to troubleshoot. | -|Automation| Yes | Yes |The extension supports PowerShell and Azure CLI. | -|Private endpoints| No | Yes| Connect to your source and target using private endpoints. -|TDE support|No | Yes |Migrate databases encrypted with TDE. | +|Feature |DMS (classic) |DMS - via Azure SQL extension for ADS|DMS - via Azure portal |Notes| +|||||| +|Assessment | No | Yes | No | Assess compatibility of the source. | +|SKU recommendation | No | Yes | No | SKU recommendations for the target based on the assessment of the source. | +|Azure SQL Database - Offline migration | Yes | Yes | Yes | Migrate to Azure SQL Database offline. | +|Azure SQL Managed Instance - Online migration | Yes |Yes | Yes | Migrate to Azure SQL Managed Instance online with minimal downtime. | +|Azure SQL Managed Instance - Offline migration | Yes |Yes | Yes | Migrate to Azure SQL Managed Instance offline. | +|SQL Server on Azure SQL VM - Online migration | No | Yes | Yes |Migrate to SQL Server on Azure VMs online with minimal downtime.| +|SQL Server on Azure SQL VM - Offline migration | Yes |Yes | Yes | Migrate to SQL Server on Azure VMs offline. | +|Migrate logins|Yes | Yes | No | Migrate logins from your source to your target.| +|Migrate schemas| Yes | No | No | Migrate schemas from your source to your target. | +|Azure portal support |Yes | Partial | Yes | Create and monitor your migration by using the Azure portal. | +|Integration with Azure Data Studio | No | Yes | No | Migration support integrated with Azure Data Studio. | +|Regional availability|Yes |Yes | Yes | More regions are available with the extension. | +|Improved user experience| No | Yes | Yes | The DMS is faster, more secure, and easier to troubleshoot.
| +|Automation| Yes | Yes | Yes |The DMS supports PowerShell and Azure CLI. | +|Private endpoints| No | Yes| Yes | Connect to your source and target using private endpoints. +|TDE support|No | Yes | No |Migrate databases encrypted with TDE. | ## Migrate databases to Azure with familiar tools For up-to-date info about the regional availability of Azure Database Migration * [Services and tools available for data migration scenarios](./dms-tools-matrix.md) * [Migrate databases with Azure SQL Migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) * [FAQ about using Azure Database Migration Service](./faq.yml)+ |
dms | Dms Tools Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md | Title: Azure Database Migration Service tools matrix description: Learn about the services and tools available to migrate databases and to support various phases of the migration process.--- Previously updated : 03/21/2020+++ Last updated : 08/23/2023 The following tables identify the services and tools you can use to plan for dat | Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) | | | | | | |-| SQL Server | Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| SQL Server | Azure SQL DB | [SQL Database Projects 
extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | SQL Server | Azure Synapse Analytics | | | |-| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| Amazon RDS for SQL 
Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/sql/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration 
extension](./migration-using-azure-data-studio.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | 
<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |-| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | 
[SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | 
[DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | -| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| PostgreSQL 
| Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Attunity*](https://www.attunity.com/products/replicate/)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | +| Sybase - SAP ASE | Azure SQL DB, MI, VM | 
[SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) | | Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | | | | | | | | |
dms | Known Issues Azure Mysql Fs Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-mysql-fs-online.md | One or more incompatible SQL modes can cause many different errors. Below is an - **Error**: An error occurred as referencing table cannot be found. - **Potential error message**: The pipeline was unable to create the schema of object '{object}' for activity '{activity}' using strategy MySqlSchemaMigrationViewUsingTableStrategy because of a query execution. + **Potential error message**: The pipeline was unable to create the schema of object '{object}' for activity '{activity}' using strategy MySqlSchemaMigrationViewUsingTableStrategy because of a query execution. - **Limitation**: The error can occur when the view is referring to a table that has been deleted or renamed, or when the view was created with incorrect or incomplete information. + **Limitation**: The error can occur when the view is referring to a table that has been deleted or renamed, or when the view was created with incorrect or incomplete information. This error can happen if a subset of tables are migrated, but the tables they depend on are not. - **Workaround**: We recommend migrating views manually. + **Workaround**: We recommend migrating views manually. Check if all tables referenced in foreign keys and CREATE VIEW statements are selected for migration. ## All pooled connections broken - **Error**: All connections on the source server were broken. - **Limitation**: The error occurs when all the connections that are acquired at the start of initial load are lost due to server restart, network issues, heavy traffic on the source server or other transient problems. This error isn't recoverable. + **Limitation**: The error occurs when all the connections that are acquired at the start of initial load are lost due to server restart, network issues, heavy traffic on the source server or other transient problems. This error isn't recoverable. 
Additionally, this error occurs if an attempt to migrate a server is made during the maintenance window. **Workaround**: The migration must be restarted, and we recommend increasing the performance of the source server. Also, if scripts that kill long-running connections are configured on the source server, prevent those scripts from running during the migration. |
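The workaround above asks you to check that every table referenced by a view (or foreign key) is also selected for migration. That check can be sketched as a simple set comparison; the table and view names below are hypothetical, and you would need to supply the real reference lists from your own schema:

```python
# Sketch: verify that every table referenced by the views you plan to
# migrate is also part of the migration selection. All names here are
# hypothetical examples.

def missing_dependencies(selected_tables, view_references):
    """Return {view: missing tables} for views whose referenced tables
    are not part of the migration selection (empty dict means OK)."""
    selected = set(selected_tables)
    gaps = {}
    for view, tables in view_references.items():
        missing = set(tables) - selected
        if missing:
            gaps[view] = sorted(missing)
    return gaps

# Tables chosen for migration, and the tables each CREATE VIEW references.
selected = ["orders", "customers"]
views = {
    "v_order_totals": ["orders", "order_items"],  # order_items not selected
    "v_customers": ["customers"],
}

print(missing_dependencies(selected, views))  # → {'v_order_totals': ['order_items']}
```

Any view reported here would hit the schema-migration error above, so either add the missing tables to the selection or migrate the view manually.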
dms | Known Issues Troubleshooting Dms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-troubleshooting-dms.md | - - seo-lt-2019 - - ignite-2022 + # Troubleshoot common Azure Database Migration Service issues and errors |
energy-data-services | How To Manage Data Security And Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-data-security-and-encryption.md | In addition to TLS, when you interact with Azure Data Manager for Energy, all tr ### Prerequisites -**Step 1- Configure the key vault** +**Step 1: Configure the key vault** 1. You can use a new or existing key vault to store customer-managed keys. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../key-vault/general/overview.md) and [What is Azure Key Vault](../key-vault/general/basic-concepts.md)? 2. Using customer-managed keys with Azure Data Manager for Energy requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created. In addition to TLS, when you interact with Azure Data Manager for Energy, all tr 2. Under **Settings**, choose **Properties**. 3. In the **purge protection** section, choose **Enable purge protection**. -**Step 2 - Add a key** +**Step 2: Add a key** 1. Next, add a key to the key vault. 2. To learn how to add a key with the Azure portal, see [Quickstart: Set and retrieve a key from Azure Key Vault using the Azure portal](../key-vault/keys/quick-create-portal.md). 3. It is recommended that the RSA key size is 3072, see [Configure customer-managed keys for your Azure Cosmos DB account | Microsoft Learn](../cosmos-db/how-to-setup-customer-managed-keys.md#generate-a-key-in-azure-key-vault). -**Step 3 - Choose a managed identity to authorize access to the key vault** +**Step 3: Choose a managed identity to authorize access to the key vault** 1. 
When you enable customer-managed keys for an existing Azure Data Manager for Energy instance you must specify a managed identity that will be used to authorize access to the key vault that contains the key. The managed identity must have permissions to access the key in the key vault. 2. You can create a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). |
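Steps 1 and 2 above require soft delete and purge protection on the key vault. As a minimal sketch, the prerequisite check can be expressed against an ARM-style Key Vault properties payload; the property names follow the Key Vault resource schema, and the payload values shown are hypothetical:

```python
# Sketch: check the key vault prerequisites for customer-managed keys.
# The dict mirrors an ARM-style Key Vault resource's properties; the
# values shown are hypothetical.

def cmk_ready(vault_properties: dict) -> list:
    """Return a list of prerequisite problems (empty list means ready)."""
    problems = []
    if not vault_properties.get("enableSoftDelete", False):
        problems.append("soft delete must be enabled")
    if not vault_properties.get("enablePurgeProtection", False):
        problems.append("purge protection must be enabled")
    return problems

vault = {"enableSoftDelete": True, "enablePurgeProtection": False}
print(cmk_ready(vault))  # → ['purge protection must be enabled']
```

Soft delete can't be disabled on new vaults, so in practice the purge-protection check is the one most likely to fail.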
event-grid | Authenticate With Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-active-directory.md | Title: Authenticate Event Grid publishing clients using Azure Active Directory description: This article describes how to authenticate Azure Event Grid publishing client using Azure Active Directory. Previously updated : 01/05/2022 Last updated : 08/17/2023 # Authentication and authorization with Azure Active Directory With RBAC privileges taken care of, you can now [build your client application t Use [Event Grid's data plane SDK](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) to publish events to Event Grid. Event Grid's SDKs support all authentication methods, including Azure AD authentication. +Here's the sample code that publishes events to Event Grid using the .NET SDK. You can get the topic endpoint on the **Overview** page for your Event Grid topic in the Azure portal. It's in the format: `https://<TOPIC-NAME>.<REGION>-1.eventgrid.azure.net/api/events`. ++```csharp +ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredential(); +EventGridPublisherClient client = new EventGridPublisherClient( new Uri("<TOPIC ENDPOINT>"), managedIdentityCredential); +++EventGridEvent egEvent = new EventGridEvent( + "ExampleEventSubject", + "Example.EventType", + "1.0", + "This is the event data"); ++// Send the event +await client.SendEventAsync(egEvent); +``` + ### Prerequisites Following are the prerequisites to authenticate to Event Grid. |
event-grid | Consume Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/consume-private-endpoints.md | Title: Deliver events using private link service description: This article describes how to work around the limitation of not able to deliver events using private link service. Previously updated : 03/01/2023 Last updated : 08/16/2023 # Deliver events using private link service To deliver events to Storage queues using managed identity, follow these steps: 1. [Add the identity to the **Storage Queue Data Message Sender**](../storage/blobs/assign-azure-role-data-access.md) role on Azure Storage queue. 1. [Configure the event subscription](managed-service-identity.md#create-event-subscriptions-that-use-an-identity) that uses a Storage queue as an endpoint to use the system-assigned or user-assigned managed identity. -> [!NOTE] -> - If there's no firewall or virtual network rules configured for the Azure Storage account, you can use both user-assigned and system-assigned identities to deliver events to the Azure Storage account. -> - If a firewall or virtual network rule is configured for the Azure Storage account, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the storage account. You can't use user-assigned managed identity whether this option is enabled or not. +## Firewall and virtual network rules +If there's no firewall or virtual network rules configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use both user-assigned and system-assigned identities to deliver events. 
++If a firewall or virtual network rule is configured for the destination Storage account, Event Hubs namespace, or Service Bus namespace, you can use only the system-assigned managed identity if **Allow Azure services on the trusted service list to access the storage account** is also enabled on the destinations. You can't use user-assigned managed identity whether this option is enabled or not. ## Next steps For more information about delivering events using a managed identity, see [Event delivery using a managed identity](managed-service-identity.md). |
event-grid | Delivery Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/delivery-properties.md | To set headers with a fixed value, provide the name of the header and its value :::image type="content" source="./media/delivery-properties/static-header-property.png" alt-text="Delivery properties - static"::: -You may want check **Is secret?** when providing sensitive data. Sensitive data won't be displayed on the Azure portal. +You might want to check **Is secret?** when you're providing sensitive data. The visibility of sensitive data on the Azure portal depends on the user's RBAC permission. ## Setting dynamic header values You can set the value of a header based on a property in an incoming event. Use JsonPath syntax to refer to an incoming event's property value to be used as the value for a header in outgoing requests. For example, to set the value of a header named **Channel** using the value of the incoming event property **system** in the event data, configure your event subscription in the following way: |
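The dynamic header mapping described above can be illustrated with a small sketch that walks a dotted path through an incoming event — a simplified stand-in for Event Grid's full JsonPath support. The event payload and the **Channel** header follow the example in the article; the path-walking helper is an assumption for illustration:

```python
import json

# Sketch: resolve an outgoing header value from a property of an
# incoming event, using a dotted path as a simplified stand-in for
# JsonPath. The event payload below is a hypothetical example.

def resolve_header(event: dict, path: str):
    """Walk a dotted path such as 'data.system' through the event."""
    value = event
    for part in path.split("."):
        value = value[part]
    return value

event = json.loads("""
{
  "id": "1",
  "eventType": "Example.EventType",
  "data": { "system": "BUGS" }
}
""")

# Header named "Channel" taking its value from the event's data.system.
headers = {"Channel": resolve_header(event, "data.system")}
print(headers)  # → {'Channel': 'BUGS'}
```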
event-grid | Mqtt Client Authorization Use Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-authorization-use-rbac.md | - Title: RBAC authorization for clients with Azure AD identity -description: Describes RBAC roles to authorize clients with Azure AD identity to publish or subscribe MQTT messages - Previously updated : 8/11/2023-----# Authorizing access to publish or subscribe to MQTT messages in Event Grid namespace -You can use Azure role-based access control (Azure RBAC) to enable MQTT clients, with Azure Active Directory identity, to publish or subscribe access to specific topic spaces. --## Prerequisites -- You need an Event Grid namespace with MQTT enabled. [Learn about creating Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace)-- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal)--## Operation types -You can use following two data actions to provide publish or subscribe permissions to clients with Azure AD identities on specific topic spaces. --**Topic spaces publish** data action -Microsoft.EventGrid/topicSpaces/publish/action --**Topic spaces subscribe** data action -Microsoft.EventGrid/topicSpaces/subscribe/action --> [!NOTE] -> Currently, we recommend using custom roles with the actions provided. --## Custom roles --You can create custom roles using the publish and subscribe actions. --The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at topic space scope. You can also create roles to provide permissions at subscription, resource group scope. --**EventGridMQTTPublisherRole.json**: MQTT messages publish operation. 
--```json -{ - "roleName": "Event Grid namespace MQTT publisher", - "description": "Event Grid namespace MQTT message publisher role", - "assignableScopes": [ - "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>" - ], - "permissions": [ - { - "actions": [], - "notActions": [], - "dataActions": [ - "Microsoft.EventGrid/topicSpaces/publish/action" - ], - "notDataActions": [] - } - ] -} -``` --**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation. --```json -{ - "roleName": "Event Grid namespace MQTT subscriber", - "description": "Event Grid namespace MQTT message subscriber role", - "assignableScopes": [ - "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>" - ] - "permissions": [ - { - "actions": [], - "notActions": [], - "dataActions": [ - "Microsoft.EventGrid/topicSpaces/subscribe/action" - ], - "notDataActions": [] - } - ] -} -``` --## Create custom roles in Event Grid namespace -1. Navigate to topic spaces page in Event Grid namespace -1. Select the topic space for which the custom RBAC role needs to be created -1. Navigate to the Access control (IAM) page within the topic space -1. In the Roles tab, right select any of the roles to clone a new custom role. Provide the custom role name. -1. Switch the Baseline permissions to **Start from scratch** -1. On the Permissions tab, select **Add permissions** -1. In the selection page, find and select Microsoft Event Grid - :::image type="content" source="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions."::: -1. Navigate to Data Actions -1. 
Select **Topic spaces publish** data action and select **Add** - :::image type="content" source="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-rbac-authorization-aad-clients/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection."::: -1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed. -1. Select **Create** in Review + create tab to create the custom role. -1. Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal). --> [!NOTE] -> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space. --## Next steps -See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md) |
event-grid | Mqtt Client Azure Ad Token And Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-azure-ad-token-and-rbac.md | + + Title: JWT authentication and RBAC authorization for clients with Azure AD identity +description: Describes JWT authentication and RBAC roles to authorize clients with Azure AD identity to publish or subscribe MQTT messages + Last updated : 8/11/2023+++++# Authenticating and Authorizing access to publish or subscribe to MQTT messages +You can authenticate MQTT clients with Azure AD JWT to connect to Event Grid namespace. You can use Azure role-based access control (Azure RBAC) to enable MQTT clients, with Azure Active Directory identity, to publish or subscribe access to specific topic spaces. ++> [!IMPORTANT] +> This feature is supported only when using MQTT v5 ++## Prerequisites +- You need an Event Grid namespace with MQTT enabled. Learn about [creating Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace) +- Review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal) +++## Authentication using Azure AD JWT +You can use the MQTT v5 CONNECT packet to provide the Azure AD JWT token to authenticate your client, and you can use the MQTT v5 AUTH packet to refresh the token. ++In CONNECT packet, you can provide required values in the following fields: ++|Field | Value | +||| +|Authentication Method | OAUTH2-JWT | +|Authentication Data | JWT token | ++In AUTH packet, you can provide required values in the following fields: ++|Field | Value | +||| +| Authentication Method | OAUTH2-JWT | +| Authentication Data | JWT token | +| Authentication Reason Code | 25 | + +Authentication Reason Code with value 25 signifies reauthentication. ++> [!NOTE] +> Audience: "aud" claim must be set to "https://eventgrid.azure.net/". 
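The note above says the token's `aud` claim must be `https://eventgrid.azure.net/`. As a minimal sketch of checking that claim before placing the token in the CONNECT packet's Authentication Data field, the example below decodes a JWT payload with only the standard library. The token here is an unsigned, hypothetical example; real tokens come from Azure AD and are signed:

```python
import base64
import json

# Sketch: inspect the "aud" claim of a JWT before using it as the
# Authentication Data in an MQTT v5 CONNECT packet. The token built
# below is unsigned and purely illustrative; real tokens come from
# Azure AD.

def b64url(data: dict) -> str:
    """Base64url-encode a dict without padding, as JWT segments are."""
    raw = json.dumps(data).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

header = {"alg": "none", "typ": "JWT"}
payload = {"aud": "https://eventgrid.azure.net/", "sub": "client1"}
token = f"{b64url(header)}.{b64url(payload)}."

def audience(jwt: str) -> str:
    """Decode the payload segment and return its 'aud' claim."""
    body = jwt.split(".")[1]
    body += "=" * (-len(body) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(body))["aud"]

print(audience(token))  # → https://eventgrid.azure.net/
```

A client library would then set Authentication Method to `OAUTH2-JWT` and carry this token as the Authentication Data bytes.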
++## Authorization to grant access permissions +A client using Azure AD based JWT authentication needs to be authorized to communicate with the Event Grid namespace. You can create custom roles to enable the client to communicate with Event Grid instances in your resource group, and then assign the roles to the client. You can use following two data actions to provide publish or subscribe permissions, to clients with Azure AD identities, on specific topic spaces. ++**Topic spaces publish** data action +Microsoft.EventGrid/topicSpaces/publish/action ++**Topic spaces subscribe** data action +Microsoft.EventGrid/topicSpaces/subscribe/action ++> [!NOTE] +> Currently, we recommend using custom roles with the actions provided. ++### Custom roles ++You can create custom roles using the publish and subscribe actions. ++The following are sample role definitions that allow you to publish and subscribe to MQTT messages. These custom roles give permissions at topic space scope. You can also create roles to provide permissions at subscription, resource group scope. ++**EventGridMQTTPublisherRole.json**: MQTT messages publish operation. ++```json +{ + "roleName": "Event Grid namespace MQTT publisher", + "description": "Event Grid namespace MQTT message publisher role", + "assignableScopes": [ + "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>" + ], + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.EventGrid/topicSpaces/publish/action" + ], + "notDataActions": [] + } + ] +} +``` ++**EventGridMQTTSubscriberRole.json**: MQTT messages subscribe operation. 
++```json +{ + "roleName": "Event Grid namespace MQTT subscriber", + "description": "Event Grid namespace MQTT message subscriber role", + "assignableScopes": [ + "/subscriptions/<subscription ID>/resourceGroups/<resource group name>/Microsoft.EventGrid/namespaces/<namespace name>/topicSpaces/<topicspace name>" + ] + "permissions": [ + { + "actions": [], + "notActions": [], + "dataActions": [ + "Microsoft.EventGrid/topicSpaces/subscribe/action" + ], + "notDataActions": [] + } + ] +} +``` ++## Create custom roles +1. Navigate to topic spaces page in your Event Grid namespace +1. Select the topic space for which the custom RBAC role needs to be created +1. Navigate to the Access control (IAM) page within the topic space +1. In the Roles tab, right select any of the roles to clone a new custom role. Provide the custom role name. +1. Switch the Baseline permissions to **Start from scratch** +1. On the Permissions tab, select **Add permissions** +1. In the selection page, find and select Microsoft Event Grid + :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions.png" alt-text="Screenshot showing the Microsoft Event Grid option to find the permissions."::: +1. Navigate to Data Actions +1. Select **Topic spaces publish** data action and select **Add** + :::image type="content" source="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" lightbox="./media/mqtt-client-azure-ad-token-and-rbac/event-grid-custom-role-permissions-data-actions.png" alt-text="Screenshot showing the data action selection."::: +1. Select Next to see the topic space in the Assignable scopes tab. You can add other assignable scopes if needed. +1. Select **Create** in Review + create tab to create the custom role. +1. 
Once the custom role is created, you can assign the role to an identity to provide the publish permission on the topic space. You can learn how to assign roles [here](/azure/role-based-access-control/role-assignments-portal). ++## Assign the custom role to your Azure AD identity +1. In the Azure portal, navigate to your Event Grid namespace +1. Navigate to the topic space to which you want to authorize access. +1. Go to the Access control (IAM) page of the topic space +1. Select the **Role assignments** tab to view the role assignments at this scope. +1. Select **+ Add** and Add role assignment. +1. On the Role tab, select the role that you created in the previous step. +1. On the Members tab, select User, group, or service principal to assign the selected role to one or more service principals (applications). + - Users and groups work when user/group belong to fewer than 200 groups. +1. Select **Select members**. +1. Find and select the users, groups, or service principals. +1. Select **Review + assign** on the Review + assign tab. ++> [!NOTE] +> You can follow similar steps to create and assign a custom Event Grid MQTT subscriber permission to a topic space. 
++## Next steps +- See [Publish and subscribe to MQTT message using Event Grid](mqtt-publish-and-subscribe-portal.md) +- To learn more about how Managed Identities work, you can refer to [How managed identities for Azure resources work with Azure virtual machines - Microsoft Entra](/azure/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm) +- To learn more about how to obtain tokens from Azure AD, you can refer to [obtaining Azure AD tokens](/azure/active-directory/develop/v2-oauth2-client-creds-grant-flow#get-a-token) +- To learn more about Azure Identity client library, you can refer to [using Azure Identity client library](/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-the-azure-identity-client-library) +- To learn more about implementing an interface for credentials that can provide a token, you can refer to [TokenCredential Interface](/java/api/com.azure.core.credential.tokencredential) +- To learn more about how to authenticate using Azure Identity, you can refer to [examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples) |
event-grid | Mqtt Publish And Subscribe Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-publish-and-subscribe-portal.md | step certificate fingerprint client1-authnID.pem 1. On the Review + create tab of the Create namespace page, select **Create**. > [!NOTE]- > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see Create a Namespace. + > To keep the QuickStart simple, you'll be using only the Basics page to create a namespace. For detailed steps about configuring network, security, and other settings on other pages of the wizard, see Create a Namespace. + 1. After the deployment succeeds, select **Go to resource** to navigate to the Event Grid Namespace Overview page for your namespace. 1. In the Overview page, you see that the MQTT is in Disabled state. To enable MQTT, select the **Disabled** link, it will redirect you to Configuration page. 1. On Configuration page, select the Enable MQTT option, and Apply the settings. step certificate fingerprint client1-authnID.pem :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-client1-metadata.png" alt-text="Screenshot of client 1 configuration."::: 6. Select **Create** to create the client.-7. Repeat the above steps to create another client called ΓÇ£client2ΓÇ¥. +7. Repeat the above steps to create another client called "client2". :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqtt-client2-metadata.png" alt-text="Screenshot of client 2 configuration."::: step certificate fingerprint client1-authnID.pem :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-1.png" alt-text="Screenshot showing creation of first permission binding."::: 4. Select **Create** to create the permission binding. 5. 
Create one more permission binding by selecting **+ Permission binding** on the toolbar.-6. Provide a name and give $all client group Subscriber access to the Topicspace1 as shown. +6. Provide a name and give $all client group Subscriber access to the "Topicspace1" as shown. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/create-permission-binding-2.png" alt-text="Screenshot showing creation of second permission binding."::: 7. Select **Create** to create the permission binding. step certificate fingerprint client1-authnID.pem :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client1-configuration-2.png" alt-text="Screenshot showing client 1 configuration part 2 on MQTTX app."::: 1. Select Connect to connect the client to the Event Grid MQTT service.-1. Repeat the above steps to connect the second client ΓÇ£client2ΓÇ¥, with corresponding authentication information as shown. +1. Repeat the above steps to connect the second client "client2", with corresponding authentication information as shown. :::image type="content" source="./media/mqtt-publish-and-subscribe-portal/mqttx-app-client2-configuration-1.png" alt-text="Screenshot showing client 2 configuration part 1 on MQTTX app."::: |
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
event-grid | Powershell Webhook Secure Delivery Azure Ad App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-app.md | Title: Azure PowerShell - Secure WebHook delivery with Azure AD Application in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD Application using Azure Event Grid ms.devlang: powershell-+ Last updated 10/14/2021 |
event-grid | Powershell Webhook Secure Delivery Azure Ad User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-azure-ad-user.md | Title: Azure PowerShell - Secure WebHook delivery with Azure AD User in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure AD User using Azure Event Grid ms.devlang: powershell-+ Last updated 09/29/2021 |
event-grid | Secure Webhook Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md | Title: Secure WebHook delivery with Azure AD in Azure Event Grid description: Describes how to deliver events to HTTPS endpoints protected by Azure Active Directory using Azure Event Grid + Last updated 10/12/2022 |
event-grid | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
event-hubs | Event Hubs Dedicated Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md | Approximately, one capacity unit (CU) in a legacy cluster provides *ingress capa With Legacy cluster, you can purchase up to 20 CUs. > [!Note] -> Event Hubs Dedicated clusters require at least 8 Capacity Units(CUs) to enable availability zones. Clusters with self-serve scaling does not support availability zones yet. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones). +> Legacy Event Hubs Dedicated clusters require at least 8 Capacity Units(CUs) to enable availability zones. Availability zone support is only available in [Azure regions with availability zones](../availability-zones/az-overview.md#azure-regions-with-availability-zones). > [!IMPORTANT] > Migrating an existing Legacy cluster to a Self-Serve Cluster isn't currently supported. For more information, see [migrating a Legacy cluster to Self-Serve Scalable cluster.](#can-i-migrate-my-standard-or-premium-namespaces-to-a-dedicated-tier-cluster). |
event-hubs | Event Hubs Ip Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-ip-filtering.md | Use the following Azure PowerShell commands to add, list, remove, update, and de - [`New-AzEventHubIPRuleConfig`](/powershell/module/az.eventhub/new-azeventhubipruleconfig) and [`Set-AzEventHubNetworkRuleSet`](/powershell/module/az.eventhub/set-azeventhubnetworkruleset) together to add an IP firewall rule - [`Remove-AzEventHubIPRule`](/powershell/module/az.eventhub/remove-azeventhubiprule) to remove an IP firewall rule. - ## Default action and public network access ### REST API From API version **2021-06-01-preview onwards**, the default value of the `defau The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet. -For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/preview/private-endpoint-connections/create-or-update). +For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/controlplane-preview/private-endpoint-connections/create-or-update). > [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings. Azure portal always uses the latest API version to get and set properties. 
If y ++ ## Next steps For constraining access to Event Hubs to Azure virtual networks, see the following link: For constraining access to Event Hubs to Azure virtual networks, see the followi <!-- Links --> [express-route]: ../expressroute/expressroute-faqs.md#supported-services+ [lnk-deploy]: ../azure-resource-manager/templates/deploy-powershell.md+ [lnk-vnet]: event-hubs-service-endpoints.md++ |
event-hubs | Event Hubs Kafka Connect Debezium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md | Last updated 10/18/2021 This tutorial walks you through how to set up a change data capture based system on Azure using [Event Hubs](./event-hubs-about.md?WT.mc_id=devto-blog-abhishgu) (for Kafka), [Azure DB for PostgreSQL](../postgresql/overview.md) and Debezium. It will use the [Debezium PostgreSQL connector](https://debezium.io/documentation/reference/1.2/connectors/postgresql.html) to stream database modifications from PostgreSQL to Kafka topics in Event Hubs > [!NOTE]-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. In this tutorial, you take the following steps: |
event-hubs | Event Hubs Kafka Mirror Maker Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-mirror-maker-tutorial.md | This tutorial shows how to mirror a Kafka broker into an Azure Event Hubs using > This sample is available on [GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/mirror-maker) > [!NOTE]-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. In this tutorial, you learn how to: > [!div class="checklist"] |
event-hubs | Event Hubs Service Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-service-endpoints.md | From API version **2021-06-01-preview onwards**, the default value of the `defau The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet. -For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/preview/private-endpoint-connections/create-or-update). +For more information about these properties, see [Create or Update Network Rule Set](/rest/api/eventhub/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/eventhub/controlplane-preview/private-endpoint-connections/create-or-update). > [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings. |
event-hubs | Monitor Event Hubs Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/monitor-event-hubs-reference.md | Azure Event Hubs supports the following dimensions for metrics in Azure Monitor. |Dimension name|Description| | - | -- |-|Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension will see a value of '-NamespaceOnlyMetric-' in addition to all your Event Hubs. This represents request which were made at the namespace level. Examples include a request to list all Event Hubs under the namespace or requests to entities which failed authentication or authorization.| +|Entity Name| Name of the event hub. With the 'Incoming Requests' metric, the Entity Name dimension has a value of '-NamespaceOnlyMetric-' in addition to all your event hubs. It represents the requests that were made at the namespace level. Examples include a request to list all event hubs in the namespace or requests to entities that failed authentication or authorization.| ## Resource logs [!INCLUDE [event-hubs-diagnostic-log-schema](./includes/event-hubs-diagnostic-log-schema.md)] Here's an example of a runtime audit log entry: Application metrics logs capture the aggregated information on certain metrics related to data plane operations. The captured information includes the following runtime metrics. > [!NOTE] -> Application metrics logs are available only in **premium** and **dedicated** tiers. Application Metric logs for following metrics- **IncomingBytes**. **IncomingMessages** ,**OutgoingBytes** ,**OutgoingMessages** are only generated if you have already created [Application Groups](resource-governance-overview.md#application-groups),in your environment. Application Groups should have the same security context - AAD ID or SAS key, which is being used to send/receive data to Azure Event Hubs. +> Application metrics logs are available only in **premium** and **dedicated** tiers. 
Name | Description - | - Name | Description `IncomingBytes` | Details of Publisher throughput sent to Event Hubs `OutgoingMessages` | Details of number of messages consumed from Event Hubs. `OutgoingBytes` | Details of Consumer throughput from Event Hubs.+`OffsetCommit` | Number of offset commit calls made to the event hub. +`OffsetFetch` | Number of offset fetch calls made to the event hub. + ## Azure Monitor Logs tables |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
event-hubs | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
expressroute | About Fastpath | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md | FastPath will honor UDRs configured on the GatewaySubnet and send traffic direct > * FastPath UDR connectivity is not supported for Azure Dedicated Host workloads. > * FastPath UDR connectivity is not supported for IPv6 workloads. +To enroll in the Public preview, send an email to **exrpm@microsoft.com** with the following information: +- Azure subscription ID +- Virtual Network(s) Azure Resource ID(s) +- ExpressRoute Circuit(s) Azure Resource ID(s) +- ExpressRoute Connection(s) Azure Resource ID(s) +- Number of Virtual Network peering connections +- Number of UDRs configured in the hub Virtual Network + ### Private Link Connectivity for 10Gbps ExpressRoute Direct Private Link traffic sent over ExpressRoute FastPath will bypass the ExpressRoute virtual network gateway in the data path. |
expressroute | Expressroute Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-faqs.md | ExpressRoute Local is a SKU of ExpressRoute circuit, in addition to the Standard ExpressRoute Local may not be available for an ExpressRoute Location. For peering location and supported Azure local region, see [locations and connectivity providers](expressroute-locations-providers.md#partners). - > [!NOTE] - > The restriction of Azure regions in the same metro doesn't apply for ExpressRoute Local in Virtual WAN. - ### What are the benefits of ExpressRoute Local? While you need to pay egress data transfer for your Standard or Premium ExpressRoute circuit, you don't pay egress data transfer separately for your ExpressRoute Local circuit. In other words, the price of ExpressRoute Local includes data transfer fees. ExpressRoute Local is an economical solution if you have a massive amount of data to transfer and want to send your data over a private connection to an ExpressRoute peering location near your desired Azure regions. |
expressroute | Expressroute Howto Add Gateway Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-add-gateway-resource-manager.md | The steps for this task use a VNet based on the values in the following configur ## Add a gateway +> [!IMPORTANT] +> If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **-GatewaySku** or use Non-AZ SKU (Standard, HighPerformance, UltraPerformance) for -GatewaySKU with Standard and Static Public IP. +> + 1. To connect with Azure, run `Connect-AzAccount`. 1. Declare your variables for this tutorial. Be sure to edit the sample to reflect the settings that you want to use. The steps for this task use a VNet based on the values in the following configur ```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG -Location $Location -IpConfigurations $ipconf -GatewayType Expressroute -GatewaySku Standard ```-> [!IMPORTANT] -> If you plan to use IPv6-based private peering over ExpressRoute, make sure to select an AZ SKU (ErGw1AZ, ErGw2AZ, ErGw3AZ) for **-GatewaySku** or use Non-AZ SKU (Standard, HighPerformance, UltraPerformance) for -GatewaySKU with Standard and Static Public IP. -> -> ## Verify the gateway was created Use the following commands to verify that the gateway has been created: |
expressroute | Expressroute Howto Linkvnet Arm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-linkvnet-arm.md | To enroll in the preview, send an email to **exrpm@microsoft.com**, providing th * Azure Subscription ID * Virtual Network (VNet) Resource ID * ExpressRoute Circuit Resource ID+* ExpressRoute Connection(s) Resource ID(s) +* Number of Private Endpoints deployed to the local/Hub VNet. +* Resource ID of any User-Defined-Routes (UDRs) configured in the local/Hub VNet. **FastPath support for virtual network peering and UDRs is only available for ExpressRoute Direct connections**. |
expressroute | Expressroute Howto Set Global Reach Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-cli.md | az account set --subscription <your subscription ID> You can enable ExpressRoute Global Reach between any two ExpressRoute circuits. The circuits are required to be in supported countries/regions and were created at different peering locations. If your subscription owns both circuits, you may select either circuit to run the configuration. However, if the two circuits are in different Azure subscriptions you must create an authorization key from one of the circuits. Using the authorization key generated from the first circuit you can enable Global Reach on the second circuit. +> [!NOTE] +> ExpressRoute Global Reach configurations can only be seen from the configured circuit. + ## Enable connectivity between your on-premises networks When running the command to enable connectivity, note the following requirements for parameter values: |
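Enabling Global Reach between two circuits in the same subscription can be sketched with the Azure CLI as below. All resource names, the peer circuit resource ID, and the /29 address prefix are placeholders; verify the parameters against the `az network express-route peering connection` reference before running.

```azurecli
# Placeholders throughout: substitute your own resource group, circuit names,
# peer circuit resource ID, and an unused /29 prefix for the Global Reach link.
az network express-route peering connection create \
  --resource-group "MyResourceGroup" \
  --circuit-name "Circuit1" \
  --peering-name "AzurePrivatePeering" \
  --name "Circuit1ToCircuit2" \
  --peer-circuit "<circuit-2-resource-id>" \
  --address-prefix "192.168.8.0/29"
```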
expressroute | Expressroute Howto Set Global Reach Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach-portal.md | Before you start configuration, confirm the following criteria: * If your subscription owns both circuits, you can choose either circuit to run the configuration in the following sections. * If the two circuits are in different Azure subscriptions, you need authorization from one Azure subscription. Then you pass in the authorization key when you run the configuration command in the other Azure subscription. +> [!NOTE] +> ExpressRoute Global Reach configurations can only be seen from the configured circuit. + ## Enable connectivity Enable connectivity between your on-premises networks. There are separate sets of instructions for circuits that are in the same Azure subscription, and circuits that are different subscriptions. |
expressroute | Expressroute Howto Set Global Reach | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-set-global-reach.md | Before you start configuration, confirm the following information: * If your subscription owns both circuits, you can choose either circuit to run the configuration in the following sections. * If the two circuits are in different Azure subscriptions, you need authorization from one Azure subscription. Then you pass in the authorization key when you run the configuration command in the other Azure subscription. +> [!NOTE] +> ExpressRoute Global Reach configurations can only be seen from the configured circuit. + ## Enable connectivity Enable connectivity between your on-premises networks. There are separate sets of instructions for circuits that are in the same Azure subscription, and circuits that are different subscriptions. |
expressroute | Expressroute Locations Providers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md | The following table shows connectivity locations and the service providers for e | **Toronto** | [Cologix TOR1](https://www.cologix.com/data-centers/toronto/tor1/) | 1 | Canada Central | Supported | AT&T NetBond<br/>Bell Canada<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Equinix<br/>IX Reach Megaport<br/>Orange<br/>Telus<br/>Verizon<br/>Zayo | | **Toronto2** | [Allied REIT](https://www.alliedreit.com/property/905-king-st-w/) | 1 | Canada Central | Supported | Fibrenoire | | **Vancouver** | [Cologix VAN1](https://www.cologix.com/data-centers/vancouver/van1/) | 1 | n/a | Supported | Bell Canada<br/>Cologix<br/>Megaport<br/>Telus<br/>Zayo |-| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix | +| **Warsaw** | [Equinix WA1](https://www.equinix.com/data-centers/europe-colocation/poland-colocation/warsaw-data-centers/wa1) | 1 | Poland Central | Supported | Equinix, Orange Poland | | **Washington DC** | [Equinix DC2](https://www.equinix.com/locations/americas-colocation/united-states-colocation/washington-dc-data-centers/dc2/)<br/>[Equinix DC6](https://www.equinix.com/data-centers/americas-colocation/united-states-colocation/washington-dc-data-centers/dc6) | 1 | East US<br/>East US 2 | Supported | Aryaka Networks<br/>AT&T NetBond<br/>British Telecom<br/>CenturyLink Cloud Connect<br/>Cologix<br/>Colt<br/>Comcast<br/>Coresite<br/>Cox Business Cloud Port<br/>Crown Castle<br/>Equinix<br/>Internet2<br/>InterCloud<br/>Iron Mountain<br/>IX Reach<br/>Level 3 Communications<br/>Lightpath<br/>Megaport<br/>Neutrona Networks<br/>NTT Communications<br/>Orange<br/>PacketFabric<br/>SES<br/>Sprint<br/>Tata Communications<br/>Telia Carrier<br/>Verizon<br/>Zayo | | **Washington DC2** | [Coresite 
VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US<br/>East US 2 | n/a | CenturyLink Cloud Connect<br/>Coresite<br/>Intelsat<br/>Megaport<br/>Viasat<br/>Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt<br/>Equinix<br/>Intercloud<br/>Interxion<br/>Megaport<br/>Swisscom<br/>Zayo | |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[NTT SmartConnect](https://cloud.nttsmc.com/cxc/azure.html)** | Supported | Supported | Osaka | | **[Ooredoo Cloud Connect](https://www.ooredoo.com.kw/portal/en/b2bOffConnAzureExpressRoute)** | Supported | Supported | Doha<br/>Doha2<br/>London2<br/>Marseille | | **[Optus](https://www.optus.com.au/enterprise/networking/network-connectivity/express-link/)** | Supported | Supported | Melbourne<br/>Sydney |-| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2 Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Sydney<br/>Tokyo<br/>Toronto<br/>Washington DC | +| **[Orange](https://www.orange-business.com/en/products/business-vpn-galerie)** | Supported | Supported | Amsterdam<br/>Amsterdam2<br/>Chicago<br/>Dallas<br/>Dubai2<br/>Dublin2 Frankfurt<br/>Hong Kong<br/>Johannesburg<br/>London<br/>London2<br/>Mumbai2<br/>Melbourne<br/>Paris<br/>Sao Paulo<br/>Silicon +| **Orange Poland** | Supported | Supported | Warsaw | | **[Orixcom](https://www.orixcom.com/solutions/azure-expressroute)** | Supported | Supported | Dubai2 | | **[PacketFabric](https://www.packetfabric.com/cloud-connectivity/microsoft-azure)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Denver<br/>Las Vegas<br/>London<br/>Los Angeles2<br/>Miami<br/>New York<br/>Seattle<br/>Silicon Valley<br/>Toronto<br/>Washington DC | | **[PCCW Global Limited](https://consoleconnect.com/clouds/#azureRegions)** | Supported | Supported | Chicago<br/>Hong Kong<br/>Hong Kong2<br/>London<br/>Singapore<br/>Singapore2<br/>Tokyo2 | |
expressroute | Expressroute Monitoring Metrics Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md | Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](.. | Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | | | [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | Yes | -| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes | -| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | ConnectionName | Yes | -| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | ConnectionName | Yes | +| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes | ### ExpressRoute Direct |
expressroute | Expressroute Troubleshooting Expressroute Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-expressroute-overview.md | At line:1 char:1 The Address Resolution Protocol (ARP) table provides a mapping of the IP address and MAC address for a particular peering. The ARP table for an ExpressRoute circuit peering provides the following information for each interface (primary and secondary): * Mapping of the IP address for the on-premises router interface to the MAC address-* Mapping of the IP address for the ExpressRoute router interface to the MAC address +* Mapping of the IP address for the ExpressRoute router interface to the MAC address (optional) * Age of the mapping ARP tables can help validate layer 2 configuration and troubleshoot basic layer 2 connectivity issues. +>[!NOTE] +> Depending on the hardware platform, the ARP results may vary and only display the *On-premises* interface. + To learn how to view the ARP table of an ExpressRoute peering and how to use the information to troubleshoot layer 2 connectivity issues, see [Getting ARP tables in the Resource Manager deployment model][ARP]. ## Validate BGP and routes on the MSEE When your results are ready, you have two sets of them for the primary and secon * **If you're testing PsPing from on-premises to Azure, received results show matches, but sent results show no matches**: This result indicates that traffic is coming in to Azure but isn't returning to on-premises. Check for return-path routing issues. For example, are you advertising the appropriate prefixes to Azure? Is a user-defined route (UDR) overriding prefixes? * **If you're testing PsPing from Azure to on-premises, sent results show matches, but received results show no matches**: This result indicates that traffic is coming in to on-premises but isn't returning to Azure. 
Work with your provider to find out why traffic isn't being routed to Azure via your ExpressRoute circuit. * **One MSEE shows no matches, but the other shows good matches**: This result indicates that one MSEE isn't receiving or passing any traffic. It might be offline (for example, BGP/ARP is down).+ * You can run additional testing to confirm the unhealthy path by advertising a unique /32 on-premises route over the BGP session on this path. + * Run "Test your private peering connectivity" using the unique /32 advertised as the on-premises destination address and review the results to confirm the path health. Your test results for each MSEE device look like the following example: |
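A PsPing TCP test like the one sketched below can generate the traffic that these sent/received comparisons are based on. The target IP, port, and count are placeholders for illustration only.

```shell
:: Sysinternals PsPing TCP test from an on-premises host toward an Azure VM.
:: 10.0.0.4:443 and the ping count are placeholders; -i 0 sends pings back to back.
psping -n 600 -i 0 10.0.0.4:443
```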
expressroute | Expressroute Troubleshooting Network Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-network-performance.md | There are three basic steps to use this toolkit for Performance testing. If the performance test isn't giving you expected results, figuring out why should be a progressive step-by-step process. Given the number of components in the path, a systematic approach will provide a faster path to resolution than jumping around. By troubleshooting systematically, you can prevent doing unnecessary testing multiple times. >[!NOTE]->The scenario here is a performance issue, not a connectivity issue. The steps would be different if traffic wasn't passing at all. +>The scenario here is a performance issue, not a connectivity issue. To isolate a connectivity problem to the Azure network, follow the [Verifying ExpressRoute connectivity](expressroute-troubleshooting-expressroute-overview.md) article. > First, challenge your assumptions. Is your expectation reasonable? For instance, if you have a 1-Gbps ExpressRoute circuit and 100 ms of latency, it's not reasonable to expect the full 1 Gbps of traffic given the performance characteristics of TCP over high latency links. See the [References section](#references) for more on performance assumptions. Test setup: \* The latency to Brazil is a good example where the straight-line distance significantly differs from the fiber run distance. The expected latency would be in the neighborhood of 160 ms, but is actually 189 ms. The difference in latency would seem to indicate a network issue somewhere. But the reality is the fiber line doesn't go to Brazil in a straight line. So you should expect an extra 1,000 km or so of travel to get to Brazil from Seattle. >[!NOTE]->While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. |
In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor and uses a much lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with `-w` switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi-threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM. +>While these numbers should be taken into consideration, they were tested using AzureCT which is based in IPERF in Windows via PowerShell. In this scenario, IPERF does not honor default Windows TCP options for Scaling Factor and uses a much lower Shift Count for the TCP Window size. The numbers represented here were performed using default IPERF values and are for general reference only. By tuning IPERF commands with `-w` switch and a big TCP Window size, better throughput can be obtained over long distances, showing significantly better throughput figures. Also, to ensure an ExpressRoute is using the full bandwidth, it's ideal to run the IPERF in multi-threaded option from multiple machines simultaneously to ensure computing capacity is able to reach maximum link performance and is not limited by processing capacity of a single VM. To get the best Iperf results on Windows, use "Set-NetTCPSetting -AutoTuningLevelLocal Experimental". Please check your organizational policies before making any changes. ## Next steps |
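The tuning advice above (a bigger TCP window via `-w` plus multiple parallel streams) can be sketched with iperf3 as follows. The server IP, window size, stream count, and duration are placeholders, and iperf3 is shown here for illustration; the AzureCT tooling referenced in the article drives IPERF for you.

```shell
# Illustrative only — 10.0.0.4, the 2M window, 8 streams, and 30s run are placeholders.
# On the receiving endpoint:
iperf3 -s
# On the sending endpoint: larger TCP window (-w) and parallel streams (-P)
# help saturate a high-latency ExpressRoute path:
iperf3 -c 10.0.0.4 -w 2M -P 8 -t 30
```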
expressroute | Reset Circuit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/reset-circuit.md |  Title: 'Reset a failed circuit - ExpressRoute: PowerShell: Azure | Microsoft Docs' + Title: 'Reset a failed circuit - ExpressRoute | Microsoft Docs' description: This article helps you reset an ExpressRoute circuit that is in a failed state. +## Azure portal ++1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. ++1. Search for **ExpressRoute circuits** in the search box at the top of the portal. ++1. Select the ExpressRoute circuit that you want to reset. -## Reset a circuit +1. Select **Refresh** from the top menu. ++ :::image type="content" source="./media/reset-circuit/refresh-circuit.png" alt-text="Screenshot of refresh button for an ExpressRoute circuit."::: ++## Azure PowerShell + 1. Install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information, see [Install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). |
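The Azure PowerShell path described above boils down to re-applying the circuit's configuration, which resets a circuit in a failed state. The circuit and resource group names below are placeholders.

```azurepowershell-interactive
# Get the failed circuit, then write its configuration back to trigger a reset.
# "ExpressRouteCircuit" and "ExpressRouteRG" are placeholder names.
$ckt = Get-AzExpressRouteCircuit -Name "ExpressRouteCircuit" -ResourceGroupName "ExpressRouteRG"
Set-AzExpressRouteCircuit -ExpressRouteCircuit $ckt
```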
external-attack-surface-management | Understanding Asset Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-asset-details.md | The following fields are included in the table in the **Values** section on the Many organizations opt to obfuscate their registry information. Sometimes contact email addresses end in *@anonymised.email*. This placeholder is used instead of a real contact address. Many fields are optional during registration configuration, so any field with an empty value wasn't included by the registrant. +++### Change history ++The "Change history" tab displays a list of modifications that have been applied to an asset over time. This information helps you track these changes over time and better understand the lifecycle of the asset. This tab displays a variety of changes, including but not limited to asset states, labels and external IDs. For each change, we list the user who implemented the change and a timestamp. ++[ ![Screenshot that shows the Change history tab.](media/change-history-1.png) ](media/change-history-1.png#lightbox) +++ ## Next steps - [Understand dashboards](understanding-dashboards.md) |
external-attack-surface-management | Understanding Inventory Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md | All assets are labeled as one of the following states: These asset states are uniquely processed and monitored to ensure that customers have clear visibility into the most critical assets by default. For instance, "Approved Inventory" assets are always represented in dashboard charts and are scanned daily to ensure data recency. All other kinds of assets are not included in dashboard charts by default; however, users can adjust their inventory filters to view assets in different states as needed. Similarly, "Candidate" assets are only scanned during the discovery process; it's important to review these assets and change their state to "Approved Inventory" if they are owned by your organization. ++ ## Tracking inventory changes ++Your attack surface is constantly changing, which is why Defender EASM continuously analyzes and updates your inventory to ensure accuracy. Assets are frequently added and removed from inventory, so it's important to track these changes to understand your attack surface and identify key trends. The inventory changes dashboard provides an overview of these changes, displaying the "added" and "removed" counts for each asset type. You can filter the dashboard by two date ranges: either the last 7 or 30 days. For a more granular view of these inventory changes, refer to the "Changes by date" section. +++[ ![Screenshot of Inventory Changes screen.](media/inventory-changes-1.png)](media/inventory-changes-1.png#lightbox) ++++ ## Next steps -- [Deploying the EASM Azure resource](deploying-the-defender-easm-azure-resource.md)+- [Modifying inventory assets](labeling-inventory-assets.md) - [Understanding asset details](understanding-asset-details.md) - [Using and managing discovery](using-and-managing-discovery.md) |
firewall | Firewall Preview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md | With the Azure Firewall Resource Health check, you can now diagnose and get supp Starting in August 2023, this preview will be automatically enabled on all firewalls and no action will be required to enable this functionality. For more information, see [Resource Health overview](../service-health/resource-health-overview.md). +### Top flows (preview) and Flow trace logs (preview) ++- The Top flows log shows the top connections that contribute to the highest throughput through the firewall. +- Flow trace logs show the full journey of a packet in the TCP handshake. ++For more information, see [Enable Top flows (preview) and Flow trace logs (preview) in Azure Firewall](enable-top-ten-and-flow-trace.md). ++### Auto-learn SNAT routes (preview) ++You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. For information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md#auto-learn-snat-routes-preview). ++### Embedded Firewall Workbooks (preview) ++Azure Firewall predefined workbooks are two clicks away and fully available from the **Monitoring** section in the Azure Firewall portal UI. ++For more information, see [Azure Firewall: New Monitoring and Logging Updates](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-new-monitoring-and-logging-updates/ba-p/3897897#:~:text=Embedded%20Firewall%20Workbooks%20are%20now%20in%20public%20preview) + ## Next steps To learn more about Azure Firewall, see [What is Azure Firewall?](overview.md). |
firewall | Integrate With Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md | az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet -- ## Next steps +- For more information, see [Scale Azure Firewall SNAT ports with NAT Gateway for large workloads](https://azure.microsoft.com/blog/scale-azure-firewall-snat-ports-with-nat-gateway-for-large-workloads/). - [Design virtual networks with NAT gateway](../virtual-network/nat-gateway/nat-gateway-resource.md) - [Integrate NAT gateway with Azure Firewall in a hub and spoke network](../virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md) |
firewall | Logs And Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/logs-and-metrics.md | The following metrics are available for Azure Firewall: - Status: Possible values are *Healthy*, *Degraded*, *Unhealthy*. - Reason: Indicates the reason for the corresponding status of the firewall. - If SNAT ports are used > 95%, they are considered exhausted and the health is 50% with status=**Degraded** and reason=**SNAT port**. The firewall keeps processing traffic and existing connections are not affected. However, new connections may not be established intermittently. + If SNAT ports are used > 95%, they're considered exhausted and the health is 50% with status=**Degraded** and reason=**SNAT port**. The firewall keeps processing traffic and existing connections aren't affected. However, new connections may not be established intermittently. If SNAT ports are used < 95%, then the firewall is considered healthy and health is shown as 100%. The following metrics are available for Azure Firewall: - There may be various reasons that can cause high latency in Azure Firewall. For example, high CPU utilization, high throughput, or a possible networking issue. - This metric does not measure end-to-end latency of a given network path. In other words, this latency health probe does not measure how much latency Azure Firewall adds. + This metric doesn't measure end-to-end latency of a given network path. In other words, this latency health probe doesn't measure how much latency Azure Firewall adds. - - When the latency metric is not functioning as expected, a value of 0 appears in the metrics dashboard. - - As a reference, the average expected latency for a firewall is approximately 1 m/s. This may vary depending on deployment size and environment. + - When the latency metric isn't functioning as expected, a value of 0 appears in the metrics dashboard. + - As a reference, the average expected latency for a firewall is approximately 1 ms. 
This may vary depending on deployment size and environment. + - The latency probe is based on Microsoft's Ping Mesh technology. So, intermittent spikes in the latency metric are to be expected. These spikes are normal and don't signal an issue with the Azure Firewall. They're part of the standard host networking setup that supports the system. + + As a result, if you experience consistent high latency that lasts longer than typical spikes, consider filing a support ticket for assistance. ## Next steps |
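The SNAT port health thresholds described in this entry can be sketched as a small calculation. This is illustrative only — the function name and return shape are invented for demonstration; the real values come from the Azure Firewall health metric:

```python
def snat_health(used_ports: int, total_ports: int) -> dict:
    """Map SNAT port utilization to the health states described above:
    > 95% utilization -> health 50%, Degraded, reason "SNAT port";
    otherwise the firewall is considered healthy (100%)."""
    utilization = used_ports / total_ports * 100
    if utilization > 95:
        return {"health": 50, "status": "Degraded", "reason": "SNAT port"}
    return {"health": 100, "status": "Healthy", "reason": None}

# 980 of 1024 ports is ~95.7% utilization, so this reports Degraded.
print(snat_health(980, 1024))
print(snat_health(500, 1024))
```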
firewall | Policy Analytics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/policy-analytics.md | Policy analytics starts monitoring the flows in the DNAT, Network, and Applicati ## Next steps +- To learn more about Policy Analytics, see [Optimize performance and strengthen security with Policy Analytics for Azure Firewall](https://azure.microsoft.com/blog/optimize-performance-and-strengthen-security-with-policy-analytics-for-azure-firewall/). - To learn more about Azure Firewall logs and metrics, see [Azure Firewall logs and metrics](logs-and-metrics.md). - To learn more about Azure Firewall structured logs, see [Azure Firewall structured logs](firewall-structured-logs.md). |
firewall | Premium Deploy Certificates Enterprise Ca | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy-certificates-enterprise-ca.md | To use an Enterprise CA to generate a certificate to use with Azure Firewall Pre ## Next steps -[Azure Firewall Premium in the Azure portal](premium-portal.md) +- [Azure Firewall Premium in the Azure portal](premium-portal.md) +- [Building a POC for TLS inspection in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723) + |
firewall | Premium Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/premium-deploy.md | Let's create an application rule to allow access to sports web sites. ## Next steps +- [Building a POC for TLS inspection in Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723) - [Azure Firewall Premium in the Azure portal](premium-portal.md) |
firewall | Rule Processing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md | As a result, there's no need to create an explicit deny rule from VNet-B to VNet ## Next steps -- Learn how to [deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md).+- [Learn more about Azure Firewall NAT behaviors](https://techcommunity.microsoft.com/t5/azure-network-security-blog/azure-firewall-nat-behaviors/ba-p/3825834) +- [Learn how to deploy and configure an Azure Firewall](tutorial-firewall-deploy-portal.md) - [Learn more about Azure network security](../networking/security/index.yml) |
firewall | Snat Private Range | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/snat-private-range.md | You can use the Azure portal to specify private IP address ranges for the firewa ## Auto-learn SNAT routes (preview) -You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network and hence traffic to destinations in the learned ranges aren't SNATed. Configure auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The Firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use JSON, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes. +You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. These learned address ranges are considered to be internal to the network, so traffic to destinations in the learned ranges isn't SNATed. Configuring auto-learn SNAT ranges requires Azure Route Server to be deployed in the same VNet as the Azure Firewall. The firewall must be associated with the Azure Route Server and configured to auto-learn SNAT ranges in the Azure Firewall Policy. You can currently use an ARM template, Azure PowerShell, or the Azure portal to configure auto-learn SNAT routes. -### Configure using JSON +### Configure using an ARM template You can use the following JSON to configure auto-learn. Azure Firewall must be associated with an Azure Route Server. Use the following JSON to associate an Azure Route Server: You can use the portal to associate a Route Server with Azure Firewall to configure auto-learn SNAT routes (preview). -1. Select your resource group, and then select your firewall. -2. Select **Overview**. -3. Add a Route Server. +Use the portal to complete the following tasks: -Review learned routes: --1. 
Select your resource group, and then select your firewall. -2. Select **Learned SNAT IP Prefixes (preview)** in the **Settings** column. +- Add a subnet named **RouteServerSubnet** to your existing firewall VNet. The size of the subnet should be at least /27. +- Deploy a Route Server into the existing firewall VNet. For information about Azure Route Server, see [Quickstart: Create and configure Route Server using the Azure portal](../route-server/quickstart-configure-route-server-portal.md). +- Add the route server on the firewall **Learned SNAT IP Prefixes (preview)** page. + :::image type="content" source="media/snat-private-range/add-route-server.png" alt-text="Screenshot showing firewall add a route server." lightbox="media/snat-private-range/add-route-server.png"::: +- Modify your firewall policy to enable **Auto-learn IP prefixes (preview)** in the **Private IP ranges (SNAT)** section. + :::image type="content" source="media/snat-private-range/auto-learn.png" alt-text="Screenshot showing firewall policy Private IP ranges (SNAT) settings." lightbox="media/snat-private-range/auto-learn.png"::: +- You can see the learned routes on the **Learned SNAT IP Prefixes (preview)** page. ## Next steps |
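As a rough sketch of what the ARM template configuration for auto-learn might look like: the fragment below enables the `autoLearnPrivateRanges` setting in a firewall policy's SNAT block. This is illustrative, not the exact JSON from the linked article — verify the property names and API version against the current `Microsoft.Network/firewallPolicies` schema before use:

```json
{
  "type": "Microsoft.Network/firewallPolicies",
  "apiVersion": "2023-02-01",
  "name": "demo-firewall-policy",
  "location": "eastus",
  "properties": {
    "snat": {
      "autoLearnPrivateRanges": "Enabled",
      "privateRanges": [ "IANAPrivateRanges" ]
    }
  }
}
```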
frontdoor | Front Door Wildcard Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-wildcard-domain.md | You can add as many single-level subdomains of the wildcard domain in front-end - Defining a different route for a subdomain than the rest of the domains (from the wildcard domain). -- Having a different WAF policy for a specific subdomain. For example, `*.contoso.com` allows adding `foo.contoso.com` without having to again prove domain ownership. But it doesn't allow `foo.bar.contoso.com` because it isn't a single level subdomain of `*.contoso.com`. To add `foo.bar.contoso.com` without extra domain ownership validation, `*.bar.contosonews.com` needs to be added.+- Having a different WAF policy for a specific subdomain. For example, `*.contoso.com` allows adding `foo.contoso.com` without having to again prove domain ownership. But it doesn't allow `foo.bar.contoso.com` because it isn't a single level subdomain of `*.contoso.com`. To add `foo.bar.contoso.com` without extra domain ownership validation, `*.bar.contoso.com` needs to be added. You can add wildcard domains and their subdomains with certain limitations: |
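The single-level subdomain rule described in this entry — `foo.contoso.com` is covered by `*.contoso.com`, but `foo.bar.contoso.com` is not — can be sketched as a small check. This is illustrative only; Front Door's actual matching logic lives in the service:

```python
def is_single_level_subdomain(host: str, wildcard: str) -> bool:
    """Return True when exactly one label precedes the wildcard's base
    domain, mirroring the rule described for Front Door wildcard domains."""
    if not wildcard.startswith("*."):
        return False
    base = wildcard[2:]
    if not host.endswith("." + base):
        return False
    # The part of the host before the base domain must be one label.
    prefix = host[: -(len(base) + 1)]
    return prefix != "" and "." not in prefix

print(is_single_level_subdomain("foo.contoso.com", "*.contoso.com"))          # True
print(is_single_level_subdomain("foo.bar.contoso.com", "*.contoso.com"))      # False
print(is_single_level_subdomain("foo.bar.contoso.com", "*.bar.contoso.com"))  # True
```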
frontdoor | Integrate Storage Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/integrate-storage-account.md | + + Title: Integrate an Azure Storage account with Azure Front Door ++description: This article shows you how to use Azure Front Door to deliver high-bandwidth content by caching blobs from Azure Storage. ++++ Last updated : 08/22/2023+++++# Integrate an Azure Storage account with Azure Front Door ++Azure Front Door can be used to deliver high-bandwidth content by caching blobs from Azure Storage. In this article, you create an Azure Storage account and then enable Front Door to cache and accelerate contents from Azure Storage. ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). ++## Sign in to the Azure portal ++Sign in to the [Azure portal](https://portal.azure.com) with your Azure account. ++## Create a storage account ++A storage account gives access to the Azure Storage services. The storage account represents the highest level of the namespace for accessing each of the Azure Storage service components: Azure Blob, Queue, and Table storage. For more information, see [Introduction to Microsoft Azure Storage](../storage/common/storage-introduction.md). ++1. In the Azure portal, select **+ Create a resource** in the upper left corner. The **Create a resource** pane appears. ++1. On the **Create a resource** page, search for **Storage account** and select **Storage account** from the list. Then select **Create**. ++ :::image type="content" source="./media/integrate-storage-account/create-new-storage-account.png" alt-text="Screenshot of create a storage account."::: ++1. 
On the **Create a storage account** page, enter or select the following information for the new storage account: ++ | Setting | Value | + | | | + | Resource group | Select **Create new** and enter the name **AFDResourceGroup**. You may also select an existing resource group. | + | Storage account name | Enter a name for the account using 3-24 lowercase letters and numbers only. The name must be unique across Azure, and becomes the host name in the URL that's used to address blob, queue, or table resources for the subscription. To address a container resource in Blob storage, use a URI in the following format: http://*<storageaccountname>*.blob.core.windows.net/*<container-name>*. + | Region | Select an Azure region closest to you from the drop-down list. | + + Leave all other settings as default. Select the **Review** tab, and then select **Create**. ++1. The creation of the storage account may take a few minutes to complete. Once creation is complete, select **Go to resource** to go to the new storage account resource. ++## Enable Azure Front Door CDN for the storage account ++1. From the storage account resource, select **Front Door and CDN** from under **Security + networking** on the left side menu pane. ++ :::image type="content" source="./media/integrate-storage-account/storage-endpoint-configuration.png" alt-text="Screenshot of create an AFD endpoint."::: + +1. In the **New endpoint** section, enter the following information: ++ | Setting | Value | + | -- | -- | + | Service type | Select **Azure Front Door**. | + | Create new/use existing profile | You can create a new Front Door profile or select an existing one. | + | Profile name | Enter a name for the Front Door profile. You have a list of available Front Door profiles if you selected **Use existing**.| + | Endpoint name | Enter your endpoint hostname, such as *contoso1234*. 
This name is used to access your cached resources at the URL _<endpoint-name + hash value>_.z01.azurefd.net. | + | Origin hostname | By default, a new Front Door endpoint uses the hostname of your storage account as the origin server. | + | Pricing tier | Select **Standard** if you want to do content delivery or select **Premium** if you want to do content delivery and use security features. | + | Caching | *Optional* - Toggle on if you want to [enable caching](front-door-caching.md) for your static content. Choose an appropriate query string behavior. Enable compression if required.| + | WAF | *Optional* - Toggle on if you want to protect your endpoint from common vulnerabilities, malicious actors, and bots with [Web Application Firewall](web-application-firewall.md). You can use an existing policy from the WAF policy dropdown or create a new one. | + | Private link | *Optional* - Toggle on if you want to keep your storage account private, that is, not exposed to the public internet. Select the same region as your storage account, or the region closest to your origin. Set the target sub-resource to **blob**. | ++ :::image type="content" source="./media/integrate-storage-account/security-settings.png" alt-text="Screenshot of the caching, WAF and private link settings for an endpoint."::: ++ > [!NOTE] + > * With Standard tier, you can only use custom rules with WAF. To deploy managed rules and bot protection, choose Premium tier. For detailed comparison, see [Azure Front Door tier comparison](./standard-premium/tier-comparison.md). + > * Private Link feature is **only** available with Premium tier. ++1. Select **Create** to create the new endpoint. After the endpoint is created, it appears in the endpoint list. 
++ :::image type="content" source="./media/integrate-storage-account/endpoint-created.png" alt-text="Screenshot of new Front Door endpoint created from Storage account."::: ++## Extra features ++From the storage account **Front Door and CDN** page, select the endpoint from the list to open the Front Door endpoint configuration page. You can enable more Front Door features for your delivery, such as [rules engine](front-door-rules-engine.md) and configure how traffic gets [load balanced](routing-methods.md). ++For best practices, refer to [Use Azure Front Door with Azure Storage blobs](scenario-storage-blobs.md). ++## Enable SAS ++If you want to grant limited access to private storage containers, you can use the Shared Access Signature (SAS) feature of your Azure Storage account. A SAS is a URI that grants restricted access rights to your Azure Storage resources without exposing your account key. ++## Access CDN content ++To access cached content with Azure Front Door, use the Front Door URL provided in the portal. The address for a cached blob has the following format: ++http://<*endpoint-name-with-hash-value*\>.z01.azurefd.net/<*myPublicContainer*\>/<*BlobName*\> ++> [!NOTE] +> After you enable Azure Front Door access to a storage account, all publicly available objects are eligible for Front Door POP (Point-of-presence) caching. If you modify an object that is currently cached in Front Door, the new content won't be available through Azure Front Door until Front Door refreshes its content after the time-to-live period for the cached content expires. ++## Add a custom domain ++When you use Azure Front Door for content delivery, a custom domain is necessary if you would like your own domain name to be visible in your end-user requests. Having a visible domain name can be convenient for your customers and useful for branding purposes. ++From the storage account **Front Door and CDN** page, select **View custom domains** for the Front Door endpoint. 
On the domains page, you can add a new custom domain to access your storage account. For more information, see [Configure a custom domain with Azure Front Door](./standard-premium/how-to-add-custom-domain.md). ++## Purge cached content from Front Door ++If you no longer want to cache an object in Azure Front Door, you can purge the cached content. ++From the storage account **Front Door and CDN** page, select the Front Door endpoint from the list to open the Front Door endpoint configuration page. Select the **Purge cache** option at the top of the page and then select the endpoint, domain, and path to purge. ++> [!NOTE] +> An object that's already cached in Azure Front Door remains cached until the time-to-live period for the object expires or until the endpoint is purged. ++## Clean up resources ++In the preceding steps, you created an Azure Front Door profile and an endpoint in a resource group. However, if you don't expect to use these resources in the future, you can delete them by deleting the resource group to avoid any charges. ++1. From the left-hand menu in the Azure portal, select **Resource groups** and then select **AFDResourceGroup**. ++2. On the **Resource group** page, select **Delete resource group**, enter *AFDResourceGroup* in the text box, then select **Delete**. ++ This action deletes the resource group, profile, and endpoint that you created in this article. ++3. To delete your storage account, select it from the dashboard, then select **Delete** from the top menu. ++## Next steps ++* Learn how to use [Azure Front Door with Azure Storage blobs](scenario-storage-blobs.md) +* Learn how to [enable Azure Front Door Private Link with Azure Blob Storage](standard-premium/how-to-enable-private-link-storage-account.md) +* Learn how to [enable Azure Front Door Private Link with Storage Static Website](how-to-enable-private-link-storage-static-website.md) ++ |
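The SAS mechanism mentioned in this article's **Enable SAS** section boils down to an HMAC-SHA256 signature over a service-defined, newline-delimited string-to-sign. The sketch below shows only that signing step with hypothetical inputs — the exact string-to-sign layout depends on the storage service version, so real SAS tokens should be generated with the Azure SDK or the portal:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_sas(string_to_sign: str, account_key_b64: str) -> str:
    """Illustrative core of SAS generation: HMAC-SHA256 of the
    string-to-sign, keyed with the base64-decoded account key,
    then base64- and URL-encoded for use as the `sig` parameter."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return quote(base64.b64encode(digest).decode("utf-8"), safe="")

# Hypothetical demo inputs -- not a real storage account key or token layout.
demo_key = base64.b64encode(b"not-a-real-storage-account-key").decode()
sig = sign_sas("r\n2023-08-22T00:00:00Z\n2023-08-23T00:00:00Z", demo_key)
print(sig)
```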
global-secure-access | How To Simulate Remote Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/how-to-simulate-remote-network.md | + + Title: Extend remote network connectivity to Azure virtual networks +description: Configure Azure resources to simulate remote network connectivity to Microsoft's Security Edge Solutions, Microsoft Entra Internet Access and Microsoft Entra Private Access. ++ Last updated : 08/28/2023++++++# Create a remote network using Azure virtual networks ++Organizations may want to extend the capabilities of Microsoft Entra Internet Access to entire networks, not just the individual devices on which they can [install the Global Secure Access Client](how-to-install-windows-client.md). This article shows how to extend these capabilities to an Azure virtual network hosted in the cloud. Similar principles may be applied to a customer's on-premises network equipment. +++## Prerequisites ++In order to complete the following steps, you must have these prerequisites in place. ++- An Azure subscription and permission to create resources in the [Azure portal](https://portal.azure.com). + - A basic understanding of [site-to-site VPN connections](/azure/vpn-gateway/tutorial-site-to-site-portal). +- A Microsoft Entra ID tenant with the [Global Secure Access Administrator](/azure/active-directory/roles/permissions-reference#global-secure-access-administrator) role assigned. +- Completed the [remote network onboarding steps](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks). ++## Infrastructure creation ++Building this functionality out in Azure helps organizations understand how Microsoft Entra Internet Access works in a broader implementation. The resources we create in Azure correspond to on-premises concepts in the following ways: ++- The **[virtual network](#virtual-network)** corresponds to your on-premises IP address space. 
+- The **[virtual network gateway](#virtual-network-gateway)** corresponds to an on-premises virtual private network (VPN) router. This device is sometimes referred to as customer premises equipment (CPE). +- The **[local network gateway](#local-network-gateway)** corresponds to the Microsoft side of the connection, the destination to which traffic flows from your on-premises VPN router. The information provided by Microsoft as part of the [remote network onboarding steps](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks) is used here. +- The **[connection](#create-site-to-site-vpn-connection)** links the two network gateways and contains the settings required to establish and maintain connectivity. +- The **[virtual machine](#virtual-machine)** corresponds to client devices on your on-premises network. ++In this document, we use the following default values. Feel free to configure these settings according to your own requirements. ++**Subscription:** Visual Studio Enterprise +**Resource group name:** Network_Simulation +**Region:** East US ++### Resource group ++Create a resource group to contain all of the necessary resources. ++1. Sign in to the [Azure portal](https://portal.azure.com) with permission to create resources. +1. Select **Create a resource**. +1. Search for **Resource group** and choose **Create** > **Resource group**. +1. Select your **Subscription**, **Region**, and provide a name for your **Resource group**. +1. Select **Review + create**. +1. Confirm your details, then select **Create**. ++> [!TIP] +> If you're using this article for testing Microsoft Entra Internet Access, you may clean up all related Azure resources by deleting the resource group you create after you're done. ++### Virtual network ++Next we need to create a virtual network inside of our resource group, then add a gateway subnet that we'll use in a future step. ++1. From the Azure portal, select **Create a resource**. +1. Select **Networking** > **Virtual Network**. 
+1. Select the **Resource group** created previously. +1. Provide your network with a **Name**. +1. Leave the default values for the other fields. +1. Select **Review + create**. +1. Select **Create**. ++When the virtual network is created, select **Go to resource** or browse to it inside of the resource group and complete the following steps: ++1. Select **Subnets**. +1. Select **+ Gateway subnet**. +1. Leave the defaults and select **Save**. ++### Virtual network gateway ++Next we need to create a virtual network gateway inside of our resource group. ++1. From the Azure portal, select **Create a resource**. +1. Select **Networking** > **Virtual network gateway**. +1. Provide your virtual network gateway with a **Name**. +1. Select the appropriate region. +1. Select the **Virtual network** created in the previous section. +1. Create a **Public IP address** and **SECOND PUBLIC IP ADDRESS** and provide them with descriptive names. + 1. Set their **Availability zone** to **Zone-redundant**. +1. Set **Configure BGP** to **Enabled**. + 1. Set the **Autonomous system number (ASN)** to an appropriate value. + 1. Don't use any reserved ASNs or the ASN provided as part of [onboarding to Microsoft Entra Internet Access](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks). For more information, see the article [Global Secure Access remote network configurations](reference-remote-network-configurations.md#valid-autonomous-system-number-asn). +1. Leave all other settings at their defaults or blank. +1. Select **Review + create**, confirm your settings. +1. Select **Create**. + 1. You can continue to the following sections while the gateway is created. +++### Local network gateway ++You need to create two local network gateways: one for your primary endpoint and one for your secondary endpoint. 
++You use the BGP IP addresses, Public IP addresses, and ASN values provided by Microsoft when you [onboard to Microsoft Entra Internet Access](how-to-create-remote-networks.md#onboard-your-tenant-for-remote-networks) in this section. ++1. From the Azure portal, select **Create a resource**. +1. Select **Networking** > **Local network gateway**. +1. Select the **Resource group** created previously. +1. Select the appropriate region. +1. Provide your local network gateway with a **Name**. +1. For **Endpoint**, select **IP address**, then provide the IP address provided in the Microsoft Entra admin center. +1. Select **Next: Advanced**. +1. Set **Configure BGP** to **Yes**. + 1. Set the **Autonomous system number (ASN)** to the appropriate value provided in the Microsoft Entra admin center. + 1. Set the **BGP peer IP address** to the appropriate value provided in the Microsoft Entra admin center. +1. Select **Review + create**, confirm your settings. +1. Select **Create**. +++### Virtual machine ++1. From the Azure portal, select **Create a resource**. +1. Select **Virtual machine**. +1. Select the **Resource group** created previously. +1. Provide a **Virtual machine name**. +1. Select the image you want to use. For this example, we choose **Windows 11 Pro, version 22H2 - x64 Gen2**. +1. Select **Run with Azure Spot discount** for this test. +1. Provide a **Username** and **Password** for your VM. +1. Move to the **Networking** tab. + 1. Select the **Virtual network** created previously. + 1. Keep the other networking defaults. +1. Move to the **Management** tab. + 1. Check the box **Login with Azure AD**. + 1. Keep the other management defaults. +1. Select **Review + create**, confirm your settings. +1. Select **Create**. ++You may choose to lock down remote access to the network security group to only a specific network or IP. ++### Create Site-to-site VPN connection ++You create two connections: one for your primary gateway and one for your secondary gateway. ++1. 
From the Azure portal, select **Create a resource**. +1. Select **Networking** > **Connection**. +1. Select the **Resource group** created previously. +1. Under **Connection type**, select **Site-to-site (IPsec)**. +1. Provide a **Name** for the connection, and select the appropriate **Region**. +1. Move to the **Settings** tab. + 1. Select your **Virtual network gateway** and **Local network gateway** created previously. + 1. Create a **Shared key (PSK)** that you'll use in a future step. + 1. Check the box for **Enable BGP**. + 1. Keep the other default settings. +1. Select **Review + create**, confirm your settings. +1. Select **Create**. +++## Enable remote connectivity in Microsoft Entra ++### Create a remote network ++You need the public IP addresses of your virtual network gateway. These IP addresses can be found by browsing to the Configuration page of your virtual and local network gateways. You complete the **Add a link** sections twice to create a link for your primary and secondary connections. +++1. Sign in to the **[Microsoft Entra admin center](https://entra.microsoft.com)** as a [Global Secure Access Administrator](../active-directory/roles/permissions-reference.md#global-secure-access-administrator). +1. Browse to **Global Secure Access Preview** > **Remote network** > **Create remote network**. +1. Provide a **Name** for your network, select an appropriate **Region**, then select **Next: Connectivity**. +1. On the **Connectivity** tab, select **Add a link**. + 1. On the **General** tab: + 1. Provide a **Link name** and set **Device type** to **Other**. + 1. Set the **IP address** to the primary IP address of your virtual network gateway. + 1. Set the **Local BGP address** to the primary private BGP IP address of your local network gateway. + 1. Set the **Peer BGP address** to the BGP IP address of your virtual network gateway. + 1. Set the **Link ASN** to the ASN of your virtual network gateway. + 1. Leave **Redundancy** set to **No redundancy**. 
+ 1. Set **Bandwidth capacity (Mbps)** to the appropriate setting. + 1. Select Next to continue to the **Details** tab. + 1. On the **Details** tab: + 1. Leave the defaults selected unless you made a different selection previously. + 1. Select Next to continue to the **Security** tab. + 1. On the **Security** tab: + 1. Enter the **Pre-shared key (PSK)** set in the [previous section when creating the site to site connection](#create-site-to-site-vpn-connection). + 1. Select **Add link**. + 1. Select **Next: Traffic profiles**. +1. On the **Traffic profiles** tab: + 1. Check the box for the **Microsoft 365 traffic profile**. + 1. Select **Next: Review + create**. +1. Confirm your settings and select **Create remote network**. ++For more information about remote networks, see the article [How to create a remote network](how-to-create-remote-networks.md) ++## Verify connectivity ++After you create the remote networks in the previous steps, it may take a few minutes for the connection to be established. From the Azure portal, we can validate that the VPN tunnel is connected and that BGP peering is successful. ++1. In the Azure portal, browse to the **virtual network gateway** created earlier and select **Connections**. +1. Each of the connections should show a **Status** of **Connected** once the configuration is applied and successful. +1. Browsing to **BGP peers** under the **Monitoring** section allows you to confirm that BGP peering is successful. Look for the peer addresses provided by Microsoft. Once configuration is applied and successful, the **Status** should show **Connected**. +++You can also use the virtual machine you created to validate that traffic is flowing to Microsoft 365 locations like SharePoint Online. Browsing to resources in SharePoint or Exchange Online should result in traffic on your virtual network gateway. 
This traffic can be seen by browsing to [Metrics on the virtual network gateway](/azure/vpn-gateway/monitor-vpn-gateway#analyzing-metrics) or by [Configuring packet capture for VPN gateways](/azure/vpn-gateway/packet-capture). ++## Next steps ++- [Tutorial: Create a site-to-site VPN connection in the Azure portal](/azure/vpn-gateway/tutorial-site-to-site-portal) |
global-secure-access | Reference Remote Network Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/global-secure-access/reference-remote-network-configurations.md | You can use any values *except* for the following reserved ASNs: - Azure reserved ASNs: 12076, 65517,65518, 65519, 65520, 8076, 8075 - IANA reserved ASNs: 23456, >= 64496 && <= 64511, >= 65535 && <= 65551, 4294967295-- 65486+- 65476 ### Valid enums |
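The reserved ASN rules in this entry (including the corrected 65476 value) can be expressed as a small validation sketch. This is illustrative only — the excerpt is truncated, so the real article may list additional reserved values:

```python
# Reserved values taken from the excerpt above.
AZURE_RESERVED = {8075, 8076, 12076, 65517, 65518, 65519, 65520}
OTHER_RESERVED = {23456, 65476, 4294967295}

def is_reserved_asn(asn: int) -> bool:
    """Return True when the ASN falls in any reserved set or range
    listed in the remote network configuration reference."""
    return (
        asn in AZURE_RESERVED
        or asn in OTHER_RESERVED
        or 64496 <= asn <= 64511
        or 65535 <= asn <= 65551
    )

print(is_reserved_asn(65517))  # True: Azure reserved
print(is_reserved_asn(65000))  # False: usable
```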
governance | Attestation Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/attestation-structure.md | -> Attestations are available only through the [Azure Resource Manager (ARM) API](/rest/api/policy/attestations). +> Attestations can be created and managed only through the Azure Policy [Azure Resource Manager (ARM) API](/rest/api/policy/attestations), [PowerShell](/powershell/module/az.policyinsights), or [Azure CLI](/cli/azure/policy/attestation). ## Best practices |
governance | Compliance States | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/compliance-states.md | An applicable resource has a compliance state of exempt for a policy assignment > [!NOTE] > _Exempt_ is different than _excluded_. For more details, see [scope](./scope.md). -### Unknown (preview) +### Unknown Unknown is the default compliance state for definitions with `manual` effect, unless the default has been explicitly set to compliant or non-compliant. This state indicates that an [attestation](./attestation-structure.md) of compliance is warranted. This compliance state only occurs for policy assignments with `manual` effect. + ### Protected (preview) ++ Protected state signifies that the resource is covered under an assignment with a [denyAction](./effects.md#denyaction-preview) effect. + ### Not registered This compliance state is visible in portal when the Azure Policy Resource Provider hasn't been registered, or when the account logged in doesn't have permission to read compliance data. So how is the aggregate compliance state determined if multiple resources or pol 1. Compliant 1. Error 1. Conflicting+1. Protected (preview) 1. Exempted 1. Unknown (preview) |
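The precedence list in this entry determines the rolled-up state when multiple states apply: the present state with the highest precedence wins. A small sketch, assuming "Non-compliant" takes highest precedence — the excerpt's list is truncated before its first entry, so that top entry is an assumption:

```python
# Precedence from highest to lowest; "Non-compliant" at the top is an
# assumption (the excerpt's list is truncated before its first entry).
PRECEDENCE = [
    "Non-compliant", "Compliant", "Error", "Conflicting",
    "Protected", "Exempted", "Unknown",
]

def aggregate_state(states):
    """Return the state with the highest precedence among those present."""
    return min(states, key=PRECEDENCE.index)

print(aggregate_state(["Exempted", "Error", "Unknown"]))   # Error
print(aggregate_state(["Compliant", "Non-compliant"]))     # Non-compliant
```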
governance | Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md | Title: Details of the policy definition structure description: Describes how policy definitions are used to establish conventions for Azure resources in your organization. Previously updated : 08/29/2022 Last updated : 08/15/2023 + # Azure Policy definition structure Azure Policy establishes conventions for resources. Policy definitions describe resource compliance always stay the same, however their values change based on the individual fillin Parameters work the same way when building policies. By including parameters in a policy definition, you can reuse that policy for different scenarios by using different values. -> [!NOTE] -> Parameters may be added to an existing and assigned definition. The new parameter must include the -> **defaultValue** property. This prevents existing assignments of the policy or initiative from -> indirectly being made invalid. +Parameters may be added to an existing and assigned definition. The new parameter must include the +**defaultValue** property. This prevents existing assignments of the policy or initiative from +indirectly being made invalid. -> [!NOTE] -> Parameters can't be removed from a policy definition that's been assigned. +Parameters can't be removed from a policy definition because there may be an assignment that sets the parameter value, and that reference would become broken. Instead of removing, you can classify the parameter as deprecated in the parameter metadata. ### Parameter properties |
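Adding a parameter (with the required **defaultValue**) to an already-assigned definition, and flagging an older parameter as deprecated, might look like the following sketch. All names and values are placeholders, and the `deprecated` metadata key is shown as one plausible way to classify a parameter per the guidance above — treat it as illustrative:

```json
{
  "parameters": {
    "allowedLocations": {
      "type": "Array",
      "metadata": {
        "displayName": "Allowed locations",
        "description": "Locations resources may be deployed to."
      },
      "defaultValue": [ "eastus", "westus" ]
    },
    "legacyTagName": {
      "type": "String",
      "metadata": {
        "displayName": "Legacy tag name",
        "deprecated": true
      },
      "defaultValue": "costCenter"
    }
  }
}
```

Because both parameters carry a `defaultValue`, existing assignments that never set them remain valid.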
governance | Effects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md | These effects are currently supported in a policy definition: ## Interchanging effects -Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies will assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies will assess a resource's compliance based on a child or extension resource's properties. +Sometimes multiple effects can be valid for a given policy definition. Parameters are often used to specify allowed effect values so that a single definition can be more versatile. However, it's important to note that not all effects are interchangeable. Resource properties and logic in the policy rule can determine whether a certain effect is considered valid to the policy definition. For example, policy definitions with effect **AuditIfNotExists** require other details in the policy rule that aren't required for policies with effect **Audit**. The effects also behave differently. **Audit** policies assess a resource's compliance based on its own properties, while **AuditIfNotExists** policies assess a resource's compliance based on a child or extension resource's properties. The following list is some general guidance around interchangeable effects: - **Audit**, **Deny**, and either **Modify** or **Append** are often interchangeable. related resources to match. 
- When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource. However, an [audit](#audit) effect should be considered instead.++> [!NOTE] +> +> **Type** and **Name** segments can be combined to generically retrieve nested resources. +> +> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`. +> +> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`." + - **ResourceGroupName** (optional) - Allows the matching of the related resource to come from a different resource group. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource. location of the Constraint template to use in Kubernetes to limit the allowed co When a request call with an applicable action name and targeted scope is submitted, `denyAction` prevents the request from succeeding. The request is returned as a `403 (Forbidden)`. In the portal, the Forbidden can be viewed as a status on the deployment that was prevented by the policy assignment. -`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios. --> [!NOTE] -> Under preview, assignments with `denyAction` effect will show a `Not Started` compliance state. 
+`Microsoft.Authorization/policyAssignments`, `Microsoft.Authorization/denyAssignments`, `Microsoft.Blueprint/blueprintAssignments`, `Microsoft.Resources/deploymentStacks`, `Microsoft.Resources/subscriptions`, and `Microsoft.Authorization/locks` are all exempt from DenyAction enforcement to prevent lockout scenarios. #### Subscription deletion -Policy won't block removal of resources that happens during a subscription deletion. +Policy doesn't block removal of resources that happens during a subscription deletion. #### Resource group deletion -Policy will evaluate resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule will block a resource group deletion. Policy won't block removal of resources that don't support location and tags nor any policy with `mode:all`. +Policy evaluates resources that support location and tags against `DenyAction` policies during a resource group deletion. Only policies that have the `cascadeBehaviors` set to `deny` in the policy rule block a resource group deletion. Policy doesn't block removal of resources that don't support location and tags nor any policy with `mode:all`. #### Cascade deletion -Cascade deletion occurs when deleting of a parent resource is implicitly deletes all its child resources. +Cascade deletion occurs when deleting a parent resource implicitly deletes all its child resources.
Policy doesn't block removal of child resources when a delete action targets the parent resources. For example, `Microsoft.Insights/diagnosticSettings` is a child resource of `Microsoft.Storage/storageaccounts`. If a `denyAction` policy targets `Microsoft.Insights/diagnosticSettings`, a delete call to the diagnostic setting (child) will fail, but a delete to the storage account (parent) will implicitly delete the diagnostic setting (child). [!INCLUDE [policy-denyAction](../../../../includes/azure-policy-deny-action.md)] related resources to match and the template deployment to execute. resource instead of all resources of the specified type. - When the condition values for **if.field.type** and **then.details.type** match, then **Name** becomes _required_ and must be `[field('name')]`, or `[field('fullName')]` for a child resource.++> [!NOTE] +> +> **Type** and **Name** segments can be combined to generically retrieve nested resources. +> +> To retrieve a specific resource, you can use `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/nestedResourceName"`. +> +> To retrieve a collection of nested resources, a wildcard character `?` can be provided in place of the last name segment. For example, `"type": "Microsoft.ExampleProvider/exampleParentType/exampleNestedType"` and `"name": "parentResourceName/?"`. This can be combined with field functions to access resources related to the evaluated resource, such as `"name": "[concat(field('name'), '/?')]"`." + - **ResourceGroupName** (optional) - Allows the matching of the related resource to come from a different resource group. - Doesn't apply if **type** is a resource that would be underneath the **if** condition resource. logs, and the policy effect don't occur. For more information, see ## Manual -The new `manual` effect enables you to self-attest the compliance of resources or scopes. 
Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you'll need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance need attesting. +The new `manual` effect enables you to self-attest the compliance of resources or scopes. Unlike other policy definitions that actively scan for evaluation, the Manual effect allows for manual changes to the compliance state. To change the compliance of a resource or scope targeted by a manual policy, you need to create an [attestation](attestation-structure.md). The [best practice](attestation-structure.md#best-practices) is to design manual policies that target the scope that defines the boundary of resources whose compliance needs attesting. > [!NOTE] > Support for manual policy is available through various Microsoft Defender |
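A `manual` policy rule is mostly a **then** block that declares the effect; a hedged sketch of what such a rule might look like — the `details.defaultState` property reflects the note that the default compliance state (Unknown) can be set explicitly, but treat the exact shape as illustrative rather than a definitive schema:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Resources/subscriptions/resourceGroups"
  },
  "then": {
    "effect": "manual",
    "details": {
      "defaultState": "Unknown"
    }
  }
}
```

Resources matched by the **if** condition then stay in the default state until an attestation changes them.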
governance | Policy For Kubernetes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md | Title: Learn Azure Policy for Kubernetes description: Learn how Azure Policy uses Rego and Open Policy Agent to manage clusters running Kubernetes in Azure or on-premises. Previously updated : 06/17/2022 Last updated : 08/29/2023 Azure Policy for Kubernetes supports the following cluster environments: - [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md) > [!IMPORTANT]-> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Instructions can be found below for [removal of those add-ons](#remove-the-add-on). +> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Follow the instructions to [remove the add-ons](#remove-the-add-on). ## Overview To enable and use Azure Policy with your Kubernetes cluster, take the following The following general limitations apply to the Azure Policy Add-on for Kubernetes clusters: -- Azure Policy Add-on for Kubernetes is supported on [supported Kubernetes versions in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md).+- Azure Policy Add-on for Kubernetes supports [supported Kubernetes versions in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md). - Azure Policy Add-on for Kubernetes can only be deployed to Linux node pools.-- Maximum number of pods supported by the Azure Policy Add-on: **10,000**+- Maximum number of pods supported by the Azure Policy Add-on per cluster: **10,000** - Maximum number of Non-compliant records per policy per cluster: **500** - Maximum number of Non-compliant records per subscription: **1 million** - Installations of Gatekeeper outside of the Azure Policy Add-on aren't supported.
Uninstall any The following are general recommendations for using the Azure Policy Add-on: policy assignments increases in the cluster, which requires audit and enforcement operations. - For fewer than 500 pods in a single cluster with a max of 20 constraints: two vCPUs and 350 MB- memory per component. + of memory per component. - For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB- memory per component. + of memory per component. - Windows pods [don't support security contexts](https://kubernetes.io/docs/concepts/security/pod-security-standards/#what-profiles-should-i-apply-to-my-windows-pods). The following recommendation applies only to AKS and the Azure Policy Add-on: ## Install Azure Policy Add-on for AKS -Before installing the Azure Policy Add-on or enabling any of the service features, your subscription -must enable the **Microsoft.PolicyInsights** resource providers. +Before you install the Azure Policy Add-on or enable any of the service features, your subscription +must enable the `Microsoft.PolicyInsights` resource providers. 1. You need the Azure CLI version 2.12.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see must enable the **Microsoft.PolicyInsights** resource providers. - Azure portal: - Register the **Microsoft.PolicyInsights** resource providers. For steps, see + Register the `Microsoft.PolicyInsights` resource providers. For steps, see [Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). - Azure CLI: must enable the **Microsoft.PolicyInsights** resource providers. 1. If limited preview policy definitions were installed, remove the add-on with the **Disable** button on your AKS cluster under the **Policies** page. -1.
The AKS cluster must be a [supported AKS cluster version](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli). Use the following script to validate your AKS +1. The AKS cluster must be a [supported Kubernetes version in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md). Use the following script to validate your AKS cluster version: ```azurecli-interactive must enable the **Microsoft.PolicyInsights** resource providers. 1. Install version _2.12.0_ or higher of the Azure CLI. For more information, see [Install the Azure CLI](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli). -Once the above prerequisite steps are completed, install the Azure Policy Add-on in the AKS cluster +After the prerequisites are completed, install the Azure Policy Add-on in the AKS cluster you want to manage. - Azure portal similar to the following output: ```output {- "config": null, - "enabled": true, - "identity": null + "config": null, + "enabled": true, + "identity": null } ``` ## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes For an overview of the extensions platform, see [Azure Arc cluster extensions](. ### Prerequisites -> Note: If you have already deployed Azure Policy for Kubernetes on an Azure Arc cluster using Helm directly without extensions, follow the instructions listed to [delete the Helm chart](#remove-the-add-on-from-azure-arc-enabled-kubernetes). Once the deletion is done, you can then proceed. +If you have already deployed Azure Policy for Kubernetes on an Azure Arc cluster using Helm directly without extensions, follow the instructions to [delete the Helm chart](#remove-the-add-on-from-azure-arc-enabled-kubernetes). After the deletion is done, you can then proceed. + 1. Ensure your Kubernetes cluster is a supported distribution. 
- > Note: Azure Policy for Arc extension is supported on [the following Kubernetes distributions](../../../azure-arc/kubernetes/validation-program.md). + > [!NOTE] + > Azure Policy for Arc extension is supported on [the following Kubernetes distributions](../../../azure-arc/kubernetes/validation-program.md). + 1. Ensure you have met all the common prerequisites for Kubernetes extensions listed [here](../../../azure-arc/kubernetes/extensions.md) including [connecting your cluster to Azure Arc](../../../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli). - > Note: Azure Policy extension is supported for Arc enabled Kubernetes clusters [in these regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). + > [!NOTE] + > Azure Policy extension is supported for Arc enabled Kubernetes clusters [in these regions](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc). + 1. Open ports for the Azure Policy extension. The Azure Policy extension uses these domains and ports to fetch policy definitions and assignments and report compliance of the cluster back to Azure Policy. For an overview of the extensions platform, see [Azure Arc cluster extensions](. |`login.windows.net` |`443` | |`dc.services.visualstudio.com` |`443` | -1. Before installing the Azure Policy extension or enabling any of the service features, your subscription must enable the **Microsoft.PolicyInsights** resource providers. - > Note: To enable the resource provider, follow the steps in - [Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) - or run either the Azure CLI or Azure PowerShell command: +1. Before you install the Azure Policy extension or enable any of the service features, your subscription must enable the `Microsoft.PolicyInsights` resource providers.
++ > [!NOTE] + > To enable the resource provider, follow the steps in [Resource providers and types](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal) + or run either the Azure CLI or Azure PowerShell command. + - Azure CLI ```azurecli-interactive For an overview of the extensions platform, see [Azure Arc cluster extensions](. ### Create Azure Policy extension +> [!NOTE] > Note the following for Azure Policy extension creation: > - Auto-upgrade is enabled by default which will update Azure Policy extension minor version if any new changes are deployed. > - Any proxy variables passed as parameters to `connectedk8s` will be propagated to the Azure Policy extension to support outbound proxy.-> + To create an extension instance, for your Arc enabled cluster, run the following command substituting `<>` with your values: ```azurecli-interactive you created it with. 1. Give the policy assignment a **Name** and **Description** that you can use to identify it easily. -1. Set the [Policy enforcement](./assignment-structure.md#enforcement-mode) to one of the values - below. +1. Set the [Policy enforcement](./assignment-structure.md#enforcement-mode) to one of the following values: - **Enabled** - Enforce the policy on the cluster. Kubernetes admission requests with violations are denied. - **Disabled** - Don't enforce the policy on the cluster. Kubernetes admission requests with- violations aren't denied. Compliance assessment results are still available. When rolling out + violations aren't denied. Compliance assessment results are still available. When you roll out new policy definitions to running clusters, _Disabled_ option is helpful for testing the policy definition as admission requests with violations aren't denied. you created it with. 1. Select **Review + create**. Alternately, use the [Assign a policy - Portal](../assign-policy-portal.md) quickstart to find and-assign a Kubernetes policy. 
Search for a Kubernetes policy definition instead of the sample 'audit -vms'. +assign a Kubernetes policy. Search for a Kubernetes policy definition instead of the sample _audit +vms_. > [!IMPORTANT] > Built-in policy definitions are available for Kubernetes clusters in category **Kubernetes**. For field of the failed constraint. For details on _Non-compliant_ resources, see Some other considerations: -- If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud- Kubernetes policies are applied on the cluster automatically. +- If the cluster subscription is registered with Microsoft Defender for Cloud, then Microsoft Defender for Cloud Kubernetes policies are applied on the cluster automatically. - When a deny policy is applied on cluster with existing Kubernetes resources, any pre-existing- resource that is not compliant with the new policy continues to run. When the non-compliant + resource that isn't compliant with the new policy continues to run. When the non-compliant resource gets rescheduled on a different node the Gatekeeper blocks the resource creation. -- When a cluster has a deny policy that validates resources, the user will not see a rejection+- When a cluster has a deny policy that validates resources, the user doesn't get a rejection message when creating a deployment. For example, consider a Kubernetes deployment that contains- replicasets and pods. When a user executes `kubectl describe deployment $MY_DEPLOYMENT`, it does - not return a rejection message as part of events. However, + replicasets and pods. When a user executes `kubectl describe deployment $MY_DEPLOYMENT`, it doesn't return a rejection message as part of events. However, `kubectl describe replicasets.apps $MY_DEPLOYMENT` returns the events associated with rejection. 
> [!NOTE] Two policy definitions reference the same `template.yaml` file stored at differe such as the Azure Policy template store (`store.policy.core.windows.net`) and GitHub. When policy definitions and their constraint templates are assigned but aren't already installed on-the cluster and are in conflict, they are reported as a conflict and won't be installed into the +the cluster and are in conflict, they're reported as a conflict and aren't installed into the cluster until the conflict is resolved. Likewise, any existing policy definitions and their-constraint templates that are already on the cluster that conflict with newly assigned policy -definitions continue to function normally. If an existing assignment is updated and there is a +constraint templates that are already on the cluster that conflict with newly assigned policy +definitions continue to function normally. If an existing assignment is updated and there's a failure to sync the constraint template, the cluster is also marked as a conflict. For all conflict messages, see [AKS Resource Provider mode compliance reasons](../how-to/determine-non-compliance.md#aks-resource-provider-mode-compliance-reasons) constraints on the cluster, it annotates both with Azure Policy information like assignment ID and the policy definition ID. To configure your client to view the add-on related artifacts, use the following steps: -1. Setup `kubeconfig` for the cluster. +1. Set up `kubeconfig` for the cluster. For an Azure Kubernetes Service cluster, use the following Azure CLI: For more information about troubleshooting the Add-on for Kubernetes, see the [Kubernetes section](../troubleshoot/general.md#add-on-for-kubernetes-general-errors) of the Azure Policy troubleshooting article.
-For Azure Policy extension for Arc extension related issues, please see: +For Azure Policy extension for Arc related issues, go to: - [Azure Arc enabled Kubernetes troubleshooting](../../../azure-arc/kubernetes/troubleshooting.md) -For Azure Policy related issues, please see: +For Azure Policy related issues, go to: - [Inspect Azure Policy logs](#logging) - [General troubleshooting for Azure Policy on Kubernetes](../troubleshoot/general.md#add-on-for-kubernetes-general-errors) |
governance | Remediation Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/remediation-structure.md | this property is a _string_. The value must match the value in the initiative de Use **resource count** to determine how many non-compliant resources to remediate in a given remediation task. The default value is 500, with the maximum number being 50,000. **Parallel deployments** determines how many of those resources to remediate at the same time. The allowed range is between 1 to 30 with the default value being 10. > [!NOTE]-> Parallel deployments are the number of deployments within a singular remediation task with a maxmimum of 30. 100 remediation tasks can be ran simultaneously in the tenant. +> Parallel deployments are the number of deployments within a singular remediation task with a maximum of 30. There can be a maximum of 100 remediation tasks running in parallel for a single policy definition or policy reference within an initiative. ## Failure threshold |
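The interplay of **resource count** and **parallel deployments** can be sketched as a bounded batching loop: remediate at most `resource_count` resources in the task, no more than `parallel` at a time. This is purely illustrative — the service performs the deployments internally, and `deploy` here is a hypothetical stand-in for the per-resource template deployment:

```python
# Illustrative only: how resourceCount and parallelDeployments bound a
# remediation task. `deploy` is a hypothetical placeholder, not an SDK call.

from concurrent.futures import ThreadPoolExecutor

def deploy(resource_id):
    # Placeholder for the ARM deployment that fixes one resource.
    return (resource_id, "Succeeded")

def run_remediation(non_compliant, resource_count=500, parallel=10):
    """Remediate at most `resource_count` resources, `parallel` at a time."""
    batch = non_compliant[:resource_count]            # default 500, max 50,000
    with ThreadPoolExecutor(max_workers=parallel) as pool:  # allowed 1..30
        return list(pool.map(deploy, batch))

out = run_remediation([f"res-{i}" for i in range(1200)])
print(len(out))  # 500
```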
governance | Australia Ism | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md | Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||+|[\[Preview\]: API endpoints in Azure API Management should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8ac833bd-f505-48d5-887e-c993a1d3eea0) |API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. Learn More about the OWASP API Threat for Broken User Authentication here: [https://learn.microsoft.com/azure/api-management/mitigate-owasp-api-threats#broken-user-authentication](../../../api-management/mitigate-owasp-api-threats.md#broken-user-authentication) |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMApiEndpointsShouldbeAuthenticated_AuditIfNotExists.json) | |[API Management calls to API backends should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc15dcc82-b93c-4dcb-9332-fbf121685b54) |Calls from API Management to backends should use some form of authentication, whether via certificates or credentials. 
Does not apply to Service Fabric backends. |Audit, Disabled, Deny |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_BackendAuth_AuditDeny.json) | |[API Management calls to API backends should not bypass certificate thumbprint or name validation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92bb331d-ac71-416a-8c91-02f2cb734ce4) |To improve the API security, API Management should validate the backend server certificate for all API calls. Enable SSL certificate thumbprint and name validation. |Audit, Disabled, Deny |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_BackendCertificateChecks_AuditDeny.json) | initiative definition. |[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | |[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |

### Restrict the exposure of credential and secrets

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[API Management minimum API version should be set to 2019-12-01 or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F549814b6-3212-4203-bdc8-1548d342fb67) |To prevent service secrets from being shared with read-only users, the minimum API version should be set to 2019-12-01 or higher. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_MinimumApiVersion_AuditDeny.json) |
|[API Management secret named values should be stored in Azure Key Vault](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff1cc7827-022c-473e-836e-5a51cae0b249) |Named values are a collection of name and value pairs in each API Management service. Secret values can be stored either as encrypted text in API Management (custom secrets) or by referencing secrets in Azure Key Vault. To improve security of API Management and secrets, reference secret named values from Azure Key Vault. Azure Key Vault supports granular access management and secret rotation policies. |Audit, Disabled, Deny |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_NamedValueSecretsInKV_AuditDeny.json) |
|[Machines should have secret findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ac7c827-eea2-4bde-acc7-9568cd320efa) |Audits virtual machines to detect whether they contain secret findings from the secret scanning solutions on your virtual machines. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSecretAssessment_Audit.json) |

## Privileged Access

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[API Management subscriptions should not be scoped to all APIs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3aa03346-d8c5-4994-a5bc-7652c2a2aef1) |API Management subscriptions should be scoped to a product or an individual API instead of all APIs, which could result in an excessive data exposure. |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_AllApiSubscription_AuditDeny.json) |
|[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |
|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |

## Data Protection

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |

### Encrypt sensitive data in transit

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) |
|[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) |

### Ensure security of asset lifecycle management

**ID**: Microsoft cloud security benchmark AM-3
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: API endpoints that are unused should be disabled and removed from the Azure API Management service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc8acafaf-3d23-44d1-9624-978ef0f8652c) |As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused and should be removed from the Azure API Management service. Keeping unused API endpoints may pose a security risk to your organization. These may be APIs that should have been deprecated from the Azure API Management service but may have been accidentally left active. Such APIs typically do not receive the most up to date security coverage. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMUnusedApiEndpointsShouldbeRemoved_AuditIfNotExists.json) |

### Use only approved applications in virtual machine

**ID**: Microsoft cloud security benchmark AM-5

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |

### Enable threat detection for identity and access management

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |
|[Windows Defender Exploit Guard should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbed48b13-6647-468e-aa2f-1af1d3f4dd40) |Windows Defender Exploit Guard uses the Azure Policy Guest Configuration agent. Exploit Guard has four components that are designed to lock down devices against a wide variety of attack vectors and block behaviors commonly used in malware attacks while enabling enterprises to balance their security risk and productivity requirements (Windows only). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsDefenderExploitGuard_AINE.json) |

### Enable logging for security investigation

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |

### Detection and analysis - investigate an incident

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |

## Posture and Vulnerability Management

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[App Service apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) |
|[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |
|[Azure Arc enabled Kubernetes clusters should have the Azure Policy extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6b2122c1-8120-4ff5-801b-17625a355590) |The Azure Policy extension for Azure Arc provides at-scale enforcements and safeguards on your Arc enabled Kubernetes clusters in a centralized, consistent manner. Learn more at [https://aka.ms/akspolicydoc](https://aka.ms/akspolicydoc). |AuditIfNotExists, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ArcPolicyExtension_Audit.json) |
|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) |
|[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) |
|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
|[Function apps should have remote debugging turned off](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) |

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: Secure Boot should be enabled on supported Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97566dd7-78ae-4997-8b36-1c7bfe0d8121) |Enable Secure Boot on supported Windows virtual machines to mitigate against malicious and unauthorized changes to the boot chain. Once enabled, only trusted bootloaders, kernel and kernel drivers will be allowed to run. This assessment applies to Trusted Launch and Confidential Windows virtual machines. |Audit, Disabled |[4.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableWindowsSB_Audit.json) |
|[\[Preview\]: vTPM should be enabled on supported virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c30f9cd-b84c-49cc-aa2c-9288447cc3b3) |Enable virtual TPM device on supported virtual machines to facilitate Measured Boot and other OS security features that require a TPM. Once enabled, vTPM can be used to attest boot integrity. This assessment only applies to trusted launch enabled virtual machines. |Audit, Disabled |[2.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableVTPM_Audit.json) |
|[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |
|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) |
|[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) |

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) |
|[Machines should have secret findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ac7c827-eea2-4bde-acc7-9568cd320efa) |Audits virtual machines to detect whether they contain secret findings from the secret scanning solutions on your virtual machines.
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerSecretAssessment_Audit.json) | |[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) | |[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) | initiative definition. 
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[\[Preview\]: Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode](https://aka.ms/computevm-windowspatchassessmentmode), for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). |Audit, Deny, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Update%20Management%20Center/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) | +|[\[Preview\]: Machines should be configured to periodically check for missing system updates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd876905-5b84-4f73-ab2d-2e7a7c4568d9) |To ensure periodic assessments for missing system updates are triggered automatically every 24 hours, the AssessmentMode property should be set to 'AutomaticByPlatform'. Learn more about AssessmentMode property for Windows: [https://aka.ms/computevm-windowspatchassessmentmode](https://aka.ms/computevm-windowspatchassessmentmode), for Linux: [https://aka.ms/computevm-linuxpatchassessmentmode](https://aka.ms/computevm-linuxpatchassessmentmode). 
|Audit, Deny, Disabled |[3.3.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Update%20Management%20Center/AzUpdateMgmtCenter_AutoAssessmentMode_Audit.json) | |[\[Preview\]: System updates should be installed on your machines (powered by Update Center)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff85bf3e0-d513-442e-89c3-1784ad63382b) |Your machines are missing system, security, and critical updates. Software updates often include critical patches to security holes. Such holes are frequently exploited in malware attacks so it's vital to keep your software updated. To install all outstanding patches and secure your machines, follow the remediation steps. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdatesV2_Audit.json) | |[Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0f936f-2f01-4bf5-b6be-d423792fa562) |Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerRegistryVulnerabilityAssessment_Audit.json) | |[Container registry images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F090c7b07-b4ed-4561-ad20-e9075f3ccaff) |Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_AzureContainerRegistryVulnerabilityAssessment_Audit.json) | |
governance | Azure Security Benchmarkv1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmarkv1.md | Title: Regulatory Compliance details for Azure Security Benchmark v1 description: Details of the Azure Security Benchmark v1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. 
|Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. 
To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Encrypt sensitive information at rest |
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
governance | Canada Federal Pbmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md | Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 This built-in initiative is deployed as part of the ||||| |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) | |[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) | |[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |-|[Role-Based Access Control 
(RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## 9 AppService |
governance | Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. ||||| |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) | |[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) | |[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |-|[Role-Based Access Control 
(RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## 9 AppService |
governance | Cis Azure 1 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. ||||| |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) | |[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) | |[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | |[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |-|[Role-Based Access Control 
(RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## 9 AppService |
governance | Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 This built-in initiative is deployed as part of the |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | |[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
Machines are non-compliant if Linux machines allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) | |[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). 
This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
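Each row above pairs a built-in policy definition with an effect list such as Audit, Deny, Disabled; the effect is a parameter on the definition itself, which is why the same definition can appear with different effects across initiatives. As an illustrative sketch only (not the text of any real built-in; the display name and field alias are assumptions), a definition with a switchable effect has roughly this shape:

```json
{
  "properties": {
    "displayName": "Illustrative: public network access should be disabled",
    "policyType": "Custom",
    "mode": "Indexed",
    "parameters": {
      "effect": {
        "type": "String",
        "allowedValues": [ "Audit", "Deny", "Disabled" ],
        "defaultValue": "Audit"
      }
    },
    "policyRule": {
      "if": {
        "field": "Microsoft.DBforMySQL/servers/publicNetworkAccess",
        "notEquals": "Disabled"
      },
      "then": { "effect": "[parameters('effect')]" }
    }
  }
}
```

Assigning the definition with `effect` set to `Audit` only reports non-compliant resources, while `Deny` blocks non-compliant deployments and `Disabled` turns the rule off without removing the assignment.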
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. 
To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | |[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) | This built-in initiative is deployed as part of the |[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | |[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) | |[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. 
Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | This built-in initiative is deployed as part of the |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. 
Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | |[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. 
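The secure-transfer and network-restriction rows above both audit properties on the storage account resource itself. As a minimal sketch (property names follow the `Microsoft.Storage/storageAccounts` ARM schema; the IP range is a placeholder), a compliant account's relevant property block would look roughly like this:

```json
{
  "properties": {
    "supportsHttpsTrafficOnly": true,
    "networkAcls": {
      "defaultAction": "Deny",
      "bypass": "AzureServices",
      "virtualNetworkRules": [],
      "ipRules": [ { "value": "203.0.113.0/24", "action": "Allow" } ]
    }
  }
}
```

Here `defaultAction: Deny` restricts network access, `bypass: AzureServices` is what the "trusted Microsoft services" policy checks for, and `supportsHttpsTrafficOnly: true` satisfies the secure-transfer audit.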
To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Use non-privileged accounts or roles when accessing nonsecurity functions. This built-in initiative is deployed as part of the |[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | |[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. 
|audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) | |[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. 
Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. 
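The effect lists in these rows (Audit, Deny, Disabled, and the AuditIfNotExists variant) determine what happens when a resource fails a rule. The toy model below is an illustrative sketch only, not the real policy engine; the function and field names are assumptions, chosen to mirror the public-network-access checks above:

```python
# Toy model of how an Azure Policy "effect" parameter changes the
# outcome of one rule: here, "public network access should be disabled".
# This is NOT the real evaluation engine; names are illustrative.

def evaluate_public_network_access(resource: dict, effect: str) -> str:
    """Return the action a simplified policy evaluation would take."""
    if effect == "Disabled":
        return "skipped"            # assignment exists but the rule is off
    if resource.get("publicNetworkAccess", "Enabled") == "Disabled":
        return "compliant"          # resource already satisfies the rule
    # Non-compliant resource: Audit records it, Deny blocks the deployment.
    return "flagged" if effect == "Audit" else "denied"

exposed = {"publicNetworkAccess": "Enabled"}
locked_down = {"publicNetworkAccess": "Disabled"}

print(evaluate_public_network_access(exposed, "Audit"))      # flagged
print(evaluate_public_network_access(exposed, "Deny"))       # denied
print(evaluate_public_network_access(locked_down, "Deny"))   # compliant
print(evaluate_public_network_access(exposed, "Disabled"))   # skipped
```

The same shape explains why a row like "Audit, Deny, Disabled" offers three assignment-time choices: the rule condition is fixed, and only the consequence of failing it varies.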
Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | This built-in initiative is deployed as part of the |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. 
To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | |[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) | This built-in initiative is deployed as part of the ||||| |[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | |[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline.
|AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |

### Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities.

This built-in initiative is deployed as part of the

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies.
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Windows machines should meet requirements for 'System Audit Policies - Privilege Use'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F87845465-c458-45f3-af66-dcd62176f397) |Windows machines should have the specified Group Policy settings in the category 'System Audit Policies - Privilege Use' for auditing nonsensitive and other privilege use. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SystemAuditPoliciesPrivilegeUse_AINE.json) |

### Control and monitor user-installed software. |
governance | Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | |[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. 
For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md).
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](https://aka.ms/cs/auth). 
|Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) | Policy And Procedures |[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) | |[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). 
|audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline.
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline.
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) | Policy And Procedures |[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | |[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Windows machines that do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md).
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol).
|deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) | |
governance | Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | |[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. 
For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that have accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
+|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md).
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](https://aka.ms/cs/auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) | initiative definition. 
|[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) | |[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. 
This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |
-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) |
+|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline.
|AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | |[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) | initiative definition. |[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. 
A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | |[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol).
Machines are non-compliant if Windows machines do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md).
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) | |
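The two Kubernetes admission policies listed above (mapped to CIS 5.2.1 and 5.2.5) evaluate the `securityContext` of each container in a workload. As an illustrative sketch only (not taken from the policy definitions themselves; the pod and image names are placeholders), a pod manifest that passes both checks would set:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "compliant-pod" },
  "spec": {
    "containers": [
      {
        "name": "app",
        "image": "mcr.microsoft.com/azuredocs/aks-helloworld:v1",
        "securityContext": {
          "privileged": false,
          "allowPrivilegeEscalation": false
        }
      }
    ]
  }
}
```

The authoritative evaluation logic lives in the Gatekeeper constraint templates referenced from each definition's GitHub link.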
governance | Gov Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## Data Protection initiative definition. |[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) | |[App Service apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |-|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) | +|[Azure Machine Learning compute instances should be recreated to get the latest software updates](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff110a506-2dcb-422e-bcea-d533fc8c35e2) |Ensure Azure Machine Learning compute instances run on the latest available operating system. 
Security is improved and vulnerabilities reduced by running with the latest security patches. For more information, visit [https://aka.ms/azureml-ci-updates/](https://aka.ms/azureml-ci-updates/). |[parameters('effects')] |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/MachineLearningServices_ComputeInstanceUpdates_Audit.json) | |[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) | |[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. 
|Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) | |[Function apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e60b895-3786-45da-8377-9c6b4b6ac5f9) |Remote debugging requires inbound ports to be opened on Function apps. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_FunctionApp_Audit.json) | |
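Several rows above list `[parameters('effects')]` rather than a fixed effect, meaning the effect is supplied when the definition is assigned rather than being hard-coded in the definition. A minimal sketch of such an assignment (the display name and the `Audit` value are illustrative choices, not taken from the initiative):

```json
{
  "properties": {
    "displayName": "Audit ML compute instances for pending updates",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/f110a506-2dcb-422e-bcea-d533fc8c35e2",
    "parameters": {
      "effects": { "value": "Audit" }
    }
  }
}
```

For definitions with fixed effect lists such as `AuditIfNotExists, Disabled`, the effect parameter accepts only the values shown in the Effect(s) column.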
governance | Gov Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## 9 AppService |
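The renamed definition above audits AKS clusters that don't use role-based access control. On the resource side, Kubernetes RBAC is enabled at cluster creation through the `enableRBAC` property of the managed cluster; a hedged ARM template fragment (the API version, names, and location are assumptions, and a deployable cluster resource needs additional required properties):

```json
{
  "type": "Microsoft.ContainerService/managedClusters",
  "apiVersion": "2023-05-01",
  "name": "example-aks",
  "location": "usgovvirginia",
  "properties": {
    "enableRBAC": true
  }
}
```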
governance | Gov Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## 9 AppService |
governance | Gov Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 This built-in initiative is deployed as part of the ||||| |[App Service apps should have remote debugging turned off](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcb510bfd-1cba-4d9f-a230-cb0976f4bb71) |Remote debugging requires inbound ports to be opened on an App Service app. Remote debugging should be turned off. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_DisableRemoteDebugging_WebApp_Audit.json) | |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. 
To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | This built-in initiative is deployed as part of the ||||| |[App Service apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5744710e-cc2f-4ee8-8809-3b11e89f4bc9) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your app. Allow only required domains to interact with your app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_WebApp_Audit.json) | |[App Service apps should only be accessible over HTTPS](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. 
|Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that they aren't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink](https://aka.ms/acr/privatelink), [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | This built-in initiative is deployed as part of the |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. 
|Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Secure transfer to storage accounts should be enabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). 
Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | |[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. 
To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_JITNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Monitor and control remote access sessions. This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Cognitive Services accounts should disable public network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0725b4dd-7e76-479c-a735-68e7ee23d5ca) |To improve the security of Cognitive Services accounts, ensure that it isn't exposed to the public internet and can only be accessed from a private endpoint. Disable the public network access property as described in [https://go.microsoft.com/fwlink/?linkid=2129800](https://go.microsoft.com/fwlink/?linkid=2129800). This option disables access from any public address space outside the Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. This reduces data leakage risks. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisablePublicNetworkAccess_Audit.json) | |[Cognitive Services accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037eea7a-bd0a-46c5-9a66-03aea78705d3) |Network access to Cognitive Services accounts should be restricted. Configure network rules so only applications from allowed networks can access the Cognitive Services account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_NetworkAcls_Audit.json) | |[Container registries should not allow unrestricted network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd0793b48-0edc-4296-a390-4c75d1bdfd71) |Azure container registries by default accept connections over the internet from hosts on any network. To protect your registries from potential threats, allow access from only specific private endpoints, public IP addresses or address ranges. If your registry doesn't have network rules configured, it will appear in the unhealthy resources. Learn more about Container Registry network rules here: [https://aka.ms/acr/privatelink,](https://aka.ms/acr/privatelink,) [https://aka.ms/acr/portal/public-network](https://aka.ms/acr/portal/public-network) and [https://aka.ms/acr/vnet](https://aka.ms/acr/vnet). 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_NetworkRulesExist_AuditDeny.json) | |[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app. Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) | |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. 
This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Storage accounts should restrict network access](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | ### Separate the duties of individuals to reduce the risk of malevolent activity without collusion. 
This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Control and monitor user-installed software. |
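Each policy name in the tables above links to the Azure Government portal's `PolicyDetailBlade`, and the link is simply the built-in definition's full resource ID percent-encoded into the URL path. A minimal sketch of that construction (the helper name `policy_portal_link` is illustrative, not part of any Azure SDK):

```python
from urllib.parse import quote

def policy_portal_link(definition_guid: str,
                       portal: str = "https://portal.azure.us") -> str:
    # The deep link embeds the definition's full resource ID,
    # percent-encoded (safe='' so that '/' becomes %2F).
    resource_id = f"/providers/Microsoft.Authorization/policyDefinitions/{definition_guid}"
    return (f"{portal}/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/"
            f"definitionId/{quote(resource_id, safe='')}")

# GUID taken from the RBAC-on-Kubernetes row above.
print(policy_portal_link("ac4a19c2-fa67-49b4-8ae5-0b2e78c49457"))
```

For the public cloud, pass `portal="https://portal.azure.com"` instead of the Azure Government portal default.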
governance | Gov Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Gov Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Gov Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Gov Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Gov Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.us/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Azure%20Government/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Information Flow Enforcement |
governance | Guest Configuration Baseline Docker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-docker.md | For more information, see [Understand the guest configuration feature of Azure P |Ensure the default ulimit's configured appropriately<br /><sub>(2.07)</sub> |Description: If the ulimits aren't set properly, the desired resource control might not be achieved and might even make the system unusable. |Run the docker in daemon mode and pass --default-ulimit as argument with respective ulimits as appropriate in your environment. Alternatively, you can also set a specific resource limitation to each container separately by using the `--ulimit` argument with respective ulimits as appropriate in your environment. | |Enable user namespace support<br /><sub>(2.08)</sub> |Description: The Linux kernel user namespace support in Docker daemon provides additional security for the Docker host system. It allows a container to have a unique range of user and group IDs which are outside the traditional user and group range utilized by the host system. For example, the root user will have expected administrative privilege inside the container but can effectively be mapped to an unprivileged UID on the host system. |Please consult Docker documentation for various ways in which this can be configured depending upon your requirements. Your steps might also vary based on platform - For example, on Red Hat, sub-UIDs and sub-GIDs mapping creation does not work automatically. You might have to create your own mapping. 
However, the high-level steps are as below: **Step 1:** Ensure that the files `/etc/subuid` and `/etc/subgid` exist.```touch /etc/subuid /etc/subgid```**Step 2:** Start the docker daemon with `--userns-remap` flag ```dockerd --userns-remap=default``` | |Ensure base device size isn't changed until needed<br /><sub>(2.10)</sub> |Description: Increasing the base device size allows all future images and containers to be of the new base device size, this may cause a denial of service by ending up in file system being over-allocated or full. |remove `--storage-opt dm.basesize` flag from the dockerd start command until you need it |-|Ensure that authorization for Docker client commands is enabled<br /><sub>(2.11)</sub> |Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to Docker daemon. Third party integrations of Docker may implement their own authorization models to require authorization with the Docker daemon outside of docker's native authorization plugin (i.e. Kubernetes, Cloud Foundry, OpenShift). |**Step 1**: Install/Create an authorization plugin. **Step 2**: Configure the authorization policy as desired. **Step 3**: Start the docker daemon as below: ```dockerd --authorization-plugin=``` | -|Ensure centralized and remote logging is configured<br /><sub>(2.12)</sub> |Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker now supports various such logging drivers. Use the one that suits your environment the best. 
|**Step 1**: Setup the desired log driver by following its documentation. **Step 2**: Start the docker daemon with that logging driver. For example, ```dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx``` | +|Ensure that authorization for Docker client commands is enabled<br /><sub>(2.11)</sub> |Description: Docker's out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command. The same is true for callers using Docker's remote API to contact the daemon. If you require greater access control, you can create authorization plugins and add them to your Docker daemon configuration. Using an authorization plugin, a Docker administrator can configure granular access policies for managing access to Docker daemon. Third party integrations of Docker may implement their own authorization models to require authorization with the Docker daemon outside of docker's native authorization plugin (i.e. Kubernetes, Cloud Foundry, OpenShift). |**Step 1:** Install/Create an authorization plugin. **Step 2:** Configure the authorization policy as desired. **Step 3:** Start the docker daemon as below: ```dockerd --authorization-plugin=``` | +|Ensure centralized and remote logging is configured<br /><sub>(2.12)</sub> |Description: Centralized and remote logging ensures that all important log records are safe despite catastrophic events. Docker now supports various such logging drivers. Use the one that suits your environment the best. |**Step 1:** Setup the desired log driver by following its documentation. **Step 2:** Start the docker daemon with that logging driver. For example, ```dockerd --log-driver=syslog --log-opt syslog-address=tcp://192.xxx.xxx.xxx``` | |Ensure live restore is Enabled<br /><sub>(2.14)</sub> |Description: One of the important security triads is availability. 
Setting `--live-restore` flag in the docker daemon ensures that container execution isn't interrupted when the docker daemon isn't available. This also means that it's now easier to update and patch the docker daemon without execution downtime. |Run the docker in daemon mode and pass `--live-restore` as an argument. For Example,```dockerd --live-restore``` | |Ensure Userland Proxy is Disabled<br /><sub>(2.15)</sub> |Description: Docker engine provides two mechanisms for forwarding ports from the host to containers, hairpin NAT, and a userland proxy. In most circumstances, the hairpin NAT mode is preferred as it improves performance and makes use of native Linux iptables functionality instead of an additional component. Where hairpin NAT is available, the userland proxy should be disabled on startup to reduce the attack surface of the installation. |Run the Docker daemon as below: ```dockerd --userland-proxy=false``` | |Ensure experimental features are avoided in production<br /><sub>(2.17)</sub> |Description: Experimental is now a runtime docker daemon flag instead of a separate build. Passing `--experimental` as a runtime flag to the docker daemon, activates experimental features. Experimental is now considered a stable release, but with a couple of features which might not have tested and guaranteed API stability. |Don't pass `--experimental` as a runtime parameter to the docker daemon. | |Ensure containers are restricted from acquiring new privileges.<br /><sub>(2.18)</sub> |Description: A process can set the `no_new_priv` bit in the kernel. It persists across fork, clone and execve. The `no_new_priv` bit ensures that the process or its children processes don't gain any additional privileges via suid or sgid bits. This way numerous dangerous operations become a lot less dangerous because there's no possibility of subverting privileged binaries. 
Setting this at the daemon level ensures that by default all new containers are restricted from acquiring new privileges. |Run the Docker daemon as below: ```dockerd --no-new-privileges``` |-|Ensure that docker.service file ownership is set to root:root.<br /><sub>(3.01)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.service``` | -|Ensure that docker .service file permissions are set to 644 or more restrictive<br /><sub>(3.02)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any other user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` | -|Ensure that docker.socket file ownership is set to root:root.<br /><sub>(3.03)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. 
|**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.socket``` | -|Ensure that docker.socket file permissions are set to `644` or more restrictive<br /><sub>(3.04)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any other user other than `root` to maintain the integrity of the file. |**Step 1**: Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2**: If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` | +|Ensure that docker.service file ownership is set to root:root.<br /><sub>(3.01)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.service``` | +|Ensure that docker .service file permissions are set to 644 or more restrictive<br /><sub>(3.02)</sub> |Description: `docker.service` file contains sensitive parameters that may alter the behavior of Docker daemon. 
Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.service``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. For example, ```chmod 644 /usr/lib/systemd/system/docker.service``` | +|Ensure that docker.socket file ownership is set to root:root.<br /><sub>(3.03)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker remote API. Hence, it should be owned and group-owned by `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the ownership and group ownership for the file to `root`. For example, ```chown root:root /usr/lib/systemd/system/docker.socket``` | +|Ensure that docker.socket file permissions are set to `644` or more restrictive<br /><sub>(3.04)</sub> |Description: `docker.socket` file contains sensitive parameters that may alter the behavior of Docker daemon. Hence, it shouldn't be writable by any user other than `root` to maintain the integrity of the file. |**Step 1:** Find out the file location: ```systemctl show -p FragmentPath docker.socket``` **Step 2:** If the file does not exist, this recommendation isn't applicable. If the file exists, execute the below command with the correct file path to set the file permissions to `644`. 
For example, ```chmod 644 /usr/lib/systemd/system/docker.socket``` | |Ensure that /etc/docker directory ownership is set to `root:root`.<br /><sub>(3.05)</sub> |Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should be owned and group-owned by `root` to maintain the integrity of the directory. | ```chown root:root /etc/docker``` This would set the ownership and group-ownership for the directory to `root`. | |Ensure that /etc/docker directory permissions are set to `755` or more restrictive<br /><sub>(3.06)</sub> |Description: /etc/docker directory contains certificates and keys in addition to various sensitive files. Hence, it should only be writable by `root` to maintain the integrity of the directory. | ```chmod 755 /etc/docker``` This would set the permissions for the directory to `755`. | |Ensure that registry certificate file ownership is set to root:root<br /><sub>(3.07)</sub> |Description: /etc/docker/certs.d/ directory contains Docker registry certificates. These certificate files must be owned and group-owned by `root` to maintain the integrity of the certificates. | ```chown root:root /etc/docker/certs.d/<registry-name>/*``` This would set the ownership and group-ownership for the registry certificate files to `root`. | |
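The 3.01-3.04 checks above all follow the same two-step remediation flow: discover where systemd keeps the unit file, then tighten its ownership and permissions. A minimal sketch of that flow is below; the fallback to a temporary file and the commented-out `chown` are only concessions so the snippet runs unprivileged on hosts without the Docker units, and are not part of the benchmark itself:

```shell
#!/bin/sh
# CIS 3.01-3.04 remediation sketch: locate the systemd unit file,
# then restrict its permissions.
unit="docker.service"   # or docker.socket for 3.03/3.04
path=$(systemctl show -p FragmentPath "$unit" 2>/dev/null | cut -d= -f2)
# On a real host, a missing file means "recommendation not applicable".
# Here we fall back to a temp file so the commands can still be exercised.
if [ -z "$path" ] || [ ! -f "$path" ] || [ ! -w "$path" ]; then
  path=$(mktemp)
fi
# chown root:root "$path"   # 3.01/3.03 -- requires root; shown for completeness
chmod 644 "$path"           # 3.02/3.04 -- 644 or more restrictive
perms=$(stat -c '%a' "$path" 2>/dev/null || stat -f '%Lp' "$path")
echo "perms=$perms"
```

Running the same sketch with `unit="docker.socket"` covers the socket-file checks; only the unit name changes, which is why the four recommendations share one procedure.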
governance | Hipaa Hitrust 9 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md | Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Contractors are provided with minimal system and physical access only after the organization assesses the contractor's ability to comply with its security requirements and the contractor agrees to comply. This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. 
To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) | |[Document wireless access security controls](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8f835d6a-4d13-9a9c-37dc-176cebd37fda) |CMA_C1695 - Document wireless access security controls |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1695.json) | |[Identify and authenticate network 
devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cosmos DB should use a virtual network service 
endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### 1166.01e1System.12-01.e 01.02 Authorized Access to Information Systems This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Define access authorizations to support separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F341bc9f1-7489-07d9-4ec6-971573e1546a) |CMA_0116 - Define access authorizations to support separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0116.json) | |[Document separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6f7b584-877a-0d69-77d4-ab8b923a9650) |CMA_0204 - Document separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0204.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) | ### 1230.09c2Organizational.1-09.c 09.01 Documented Operating Procedures |
governance | Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | New Zealand Ism | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md | Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 This built-in initiative is deployed as part of the |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. 
The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | ### 17.9.25 Contents of KMPs |
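The policy description above recommends a public-private key pair over password authentication for SSH to Azure Linux VMs. A minimal sketch of generating such a pair with `ssh-keygen` follows; the key path, size, and comment are illustrative choices, not values prescribed by the policy:

```shell
#!/bin/sh
# Generate an RSA key pair for SSH authentication to a Linux VM.
# The directory, key size, and comment below are illustrative only.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N "" -C "azure-vm-access" -f "$keydir/id_rsa" -q
# The public half (id_rsa.pub) is what you supply to the VM, e.g. at
# provisioning time; the private half stays on the client machine.
ls "$keydir"
```

The audited guest configuration checks only that password authentication is not used for SSH; how the key pair is produced and distributed is up to the operator.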
governance | Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) | |[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. 
For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines have accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). 
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) | |[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) | |[Cognitive Services accounts should have local authentication methods disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F71ef260a-8f18-47b7-abcb-62d0673d94dc) |Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Azure Active Directory identities exclusively for authentication. Learn more at: [https://aka.ms/cs/auth](https://aka.ms/cs/auth). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cognitive%20Services/CognitiveServices_DisableLocalAuth_Audit.json) | initiative definition. 
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Information Flow Enforcement initiative definition. |[Kubernetes cluster services should listen only on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F233a2a17-77ca-4fb1-9b6b-69223d272a44) |Restrict services to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. 
For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[8.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ServiceAllowedPorts.json) | |[Kubernetes cluster should not allow privileged containers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95edb821-ddaf-4404-9732-666045e056b4) |Do not allow privileged containers creation in a Kubernetes cluster. This recommendation is part of CIS 5.2.1 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). |audit, Audit, deny, Deny, disabled, Disabled |[9.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilege.json) | |[Kubernetes clusters should not allow container privilege escalation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c6e92c9-99f0-4e55-9cf2-0c234dc48f99) |Do not allow containers to run with privilege escalation to root in a Kubernetes cluster. This recommendation is part of CIS 5.2.5 which is intended to improve the security of your Kubernetes environments. This policy is generally available for Kubernetes Service (AKS), and preview for Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](https://aka.ms/kubepolicydoc). 
|audit, Audit, deny, Deny, disabled, Disabled |[7.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerNoPrivilegeEscalation.json) |-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | +|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. 
|AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | |[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | |[Windows machines should meet requirements of the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72650e9f-97bc-4b2a-ab5f-9781a9fcecbc) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureWindowsBaseline_AINE.json) | initiative definition. |[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. 
A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | |[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | |[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
Machines are non-compliant if Windows machines that do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). 
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | |[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | |[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) | |
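Many of the Guest Configuration policies in these tables use the `AuditIfNotExists` effect: a machine is flagged non-compliant unless a matching `guestConfigurationAssignments` resource exists and reports `Compliant`. As a simplified sketch only (not the exact built-in definition; the assignment name and condition here are illustrative assumptions), the policy rule has roughly this shape:

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Compute/virtualMachines"
  },
  "then": {
    "effect": "auditIfNotExists",
    "details": {
      "type": "Microsoft.GuestConfiguration/guestConfigurationAssignments",
      "name": "LinuxNoPasswordForSSH",
      "existenceCondition": {
        "field": "Microsoft.GuestConfiguration/guestConfigurationAssignments/complianceStatus",
        "equals": "Compliant"
      }
    }
  }
}
```

This is why the prerequisite policies above (system-assigned managed identity, Guest Configuration extension) must be deployed first: without them, the assignment resource the `existenceCondition` checks for is never created.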
governance | Nz Ism Restricted 3 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nz-ism-restricted-3-5.md | Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. 
Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | ### 17.9.25 Contents of KMPs initiative definition. |[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | |[Azure Machine Learning workspaces should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F45e05259-1eb5-4f70-9574-baf73e9d219b) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Machine Learning workspaces, data leakage risks are reduced. 
Learn more about private links at: [https://docs.microsoft.com/azure/machine-learning/how-to-configure-private-link](../../../machine-learning/how-to-configure-private-link.md). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Machine%20Learning/Workspace_PrivateEndpoint_Audit_V2.json) | |[Azure Policy Add-on for Kubernetes service (AKS) should be installed and enabled on your clusters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a15ec92-a229-4763-bb14-0ea34a568f8d) |Azure Policy Add-on for Kubernetes service (AKS) extends Gatekeeper v3, an admission controller webhook for Open Policy Agent (OPA), to apply at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/AKS_AzurePolicyAddOn_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Azure Spring Cloud should use network injection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf35e2a4-ef96-44e7-a9ae-853dd97032c4) |Azure Spring Cloud instances should use virtual network injection for the following purposes: 1. Isolate Azure Spring Cloud from Internet. 2. Enable Azure Spring Cloud to interact with systems in either on premises data centers or Azure service in other virtual networks. 3. Empower customers to control inbound and outbound network communications for Azure Spring Cloud. |Audit, Disabled, Deny |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Platform/Spring_VNETEnabled_Audit.json) | |[Container registries should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8eef0a8-67cf-4eb4-9386-14b0e78733d4) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your container registries instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/acr/private-link](https://aka.ms/acr/private-link).
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Container%20Registry/ACR_PrivateEndpointEnabled_Audit.json) | |[Private endpoint connections on Azure SQL Database should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7698e800-9299-47a6-b3b6-5a0fee576eed) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. |Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PrivateEndpoint_Audit.json) | initiative definition. |[Private endpoint should be enabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a1302fb-a631-4106-9753-f3d494733990) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MariaDB. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_EnablePrivateEndPoint_Audit.json) | |[Private endpoint should be enabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7595c971-233d-4bcf-bd18-596129188c49) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for MySQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_EnablePrivateEndPoint_Audit.json) | |[Private endpoint should be enabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0564d078-92f5-4f97-8398-b9f58a51f70b) |Private endpoint connections enforce secure communication by enabling private connectivity to Azure Database for PostgreSQL. Configure a private endpoint connection to enable access to traffic coming only from known networks and prevent access from all other IP addresses, including within Azure. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnablePrivateEndPoint_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. 
Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) | |[Storage accounts should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6edd7eda-6dd8-40f7-810d-67160c639cd9) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your storage account, data leakage risks are reduced. Learn more about private links at - [https://aka.ms/azureprivatelinkoverview](https://aka.ms/azureprivatelinkoverview) |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountPrivateEndpointEnabled_Audit.json) | |[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). 
|Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) | |
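The effects listed in these tables (`Audit`, `Deny`, `AuditIfNotExists`, `Disabled`) are typically chosen at assignment time through an `effect` parameter. A minimal, illustrative assignment body (property names follow the Azure Policy assignment schema; the scope and display name are placeholders) for the storage network-rules definition referenced above might look like:

```json
{
  "properties": {
    "displayName": "Restrict storage account network access",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/2a1a9cdf-e04d-429a-8416-3bfb72a1b26f",
    "parameters": {
      "effect": { "value": "Deny" }
    }
  }
}
```

Switching `"Deny"` to `"Audit"` changes the same assignment from blocking non-compliant deployments to merely recording them in compliance results.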
governance | Pci Dss 3 2 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md | Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Pci Dss 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md | Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Rbi Itf Banks 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md | Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | |[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | |[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | ### Authentication Framework For Customers-9.3 initiative definition. |[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | |[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | |[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. 
Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | initiative definition. |[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. 
|audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) | |[API Management services should use a virtual network](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef619a2c-cc4d-4d03-b2ba-8c94a834d85b) |Azure Virtual Network deployment provides enhanced security and isolation, and allows you to place your API Management service in a non-internet routable network that you control access to. These networks can then be connected to your on-premises networks using various VPN technologies, which enables access to your backend services within the network and/or on-premises. The developer portal and API gateway can be configured to be accessible either from the Internet or only within the virtual network. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_VNETEnabled_Audit.json) | |[App Configuration should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fca610c1d-041c-4332-9d88-7ed3094967c7) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your app configuration instances instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint).
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_Audit.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). 
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure Event Grid domains should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9830b652-8523-49cc-b1b3-e17dce1127ca) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid domain instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Domains_PrivateEndpoint_Audit.json) | |[Azure Event Grid topics should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4b90e17e-8448-49db-875e-bd83fb6f804f) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Event Grid topic instead of the entire service, you'll also be protected against data leakage risks. Learn more at: [https://aka.ms/privateendpoints](https://aka.ms/privateendpoints). |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Grid/Topics_PrivateEndpoint_Audit.json) | initiative definition. |[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | |[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) | |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies.
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### User Access Control / Management-8.2 initiative definition. |[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone.
Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | |[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error-prone.
Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## Vulnerability Assessment And Penetration Test And Red Team Exercises initiative definition. 
|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | |[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) | |[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. 
Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | +|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | |[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) | |[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) | |[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) | initiative definition. |[Function apps should not have CORS configured to allow every resource to access your apps](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0820b7b9-23aa-4725-a1ce-ae4558f718e5) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your Function app.
Allow only required domains to interact with your Function app. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RestrictCORSAccess_FuntionApp_Audit.json) | |[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions of TLS are released to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes, if any, and/or new functionalities of the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) | |[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol).
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) |-|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | +|[Linux machines should meet requirements for the Azure compute security baseline](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc9b3da7-8347-4380-8e70-0a0361d8dedd) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is not configured correctly for one of the recommendations in the Azure compute security baseline. 
|AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AzureLinuxBaseline_AINE.json) | |[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | |[Storage accounts should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37e0d2fe-28a5-43d6-a273-67d37d1f5606) |Use new Azure Resource Manager for your storage accounts to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Classic_AuditForClassicStorages_Audit.json) | |[Virtual machines should be migrated to new Azure Resource Manager resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d84d5fb-01f6-4d12-ba4f-4a26081d403d) |Use new Azure Resource Manager for your virtual machines to provide security enhancements such as: stronger access control (RBAC), better auditing, Azure Resource Manager based 
deployment and governance, access to managed identities, access to key vault for secrets, Azure AD-based authentication and support for tags and resource groups for easier security management |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ClassicCompute_Audit.json) | initiative definition. |[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) | |[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) | |[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions of TLS are released to address security flaws, include additional functionality, and enhance speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes, if any, and/or new functionalities of the latest version.
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |-|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). 
|AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | |[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | |[Azure Cosmos DB accounts should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f905d99-2ab7-462c-a6b0-f709acca6c8f) |Use customer-managed keys to manage the encryption at rest of your Azure Cosmos DB. By default, the data is encrypted at rest with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance standards. Customer-managed keys enable the data to be encrypted with an Azure Key Vault key created and owned by you. You have full control and responsibility for the key lifecycle, including rotation and management. Learn more at [https://aka.ms/cosmosdb-cmk](https://aka.ms/cosmosdb-cmk). 
|audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_CMK_Deny.json) | |[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) | |
governance | Rbi Itf Nbfc 2017 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md | Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies.
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Segregation of Functions - 3.1 initiative definition. |[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | |[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) | |[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources.
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Azure subscriptions should have a log profile for Activity Log](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7796937f-307b-4598-941c-67d3a05ebfe7) |This policy checks whether a log profile is enabled for exporting activity logs. It audits if there is no log profile created to export the logs either to a storage account or to an event hub. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Logprofile_activityLogs_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in.
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | initiative definition. |[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. 
|AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | |[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | |
governance | Rmit Malaysia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md | Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 initiative definition. |[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) | |[App Configuration should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d9f5e4c-9947-4579-9539-2a7695fbc187) |Disabling public network access improves security by ensuring that the resource isn't exposed on the public internet. You can limit exposure of your resources by creating private endpoints instead. Learn more at: [https://aka.ms/appconfig/private-endpoint](https://aka.ms/appconfig/private-endpoint). 
|Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Configuration/PrivateLink_PublicNetworkAccess_Audit.json) | |[App Service apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Function apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) | initiative definition. |[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | |[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage 
permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | ### Access Control - 10.55 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributor, Reader' instead of custom RBAC roles, which are error prone.
Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ### Access Control - 10.61 initiative definition. 
||||| |[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) | |[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |+|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | |[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | |[Guest Configuration extension should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae89ebca-1c92-4898-ac2c-9f63decb045c) |To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension. In-guest settings that the extension monitors include the configuration of the operating system, application configuration or presence, and environment settings. 
Once installed, in-guest policies will be available such as 'Windows Exploit guard should be enabled'. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVm.json) | |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | |[Virtual machines' Guest Configuration extension should be deployed with system-assigned managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd26f7642-7545-4e18-9b75-8c9bbdee3a9a) |The Guest Configuration extension requires a system assigned managed identity. 
Azure virtual machines in the scope of this policy will be non-compliant when they have the Guest Configuration extension installed but do not have a system assigned managed identity. Learn more at [https://aka.ms/gcpol](https://aka.ms/gcpol) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_GCExtOnVmWithNoSAMI.json) | ### Access Control - 10.62 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Audit usage of custom RBAC roles](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa451c1ef-c6ca-483d-87ed-f49761e3ffb5) |Audit built-in roles such as 'Owner, Contributer, Reader' instead of custom RBAC roles, which are error prone. Using custom roles is treated as an exception and requires a rigorous review and threat modeling |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/Subscription_AuditCustomRBACRoles_Audit.json) |-|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. 
|Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | +|[Azure Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Azure Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) | ## Patch and End-of-Life System Management |
governance | Ukofficial Uknhs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md | Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
governance | Guidance For Throttled Requests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md | In every query response, Azure Resource Graph adds two throttling headers: - `x-ms-user-quota-resets-after` (hh:mm:ss): The time duration until a user's quota consumption is reset. -When a security principal has access to more than 5,000 subscriptions within the tenant or +When a security principal has access to more than 10,000 subscriptions within the tenant or management group [query scope](./query-language.md#query-scope), the response is limited to the-first 5,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`. +first 10,000 subscriptions and the `x-ms-tenant-subscription-limit-hit` header is returned as `true`. To illustrate how the headers work, let's look at a query response that has the header and values of `x-ms-user-quota-remaining: 10` and `x-ms-user-quota-resets-after: 00:00:03`. |
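The throttling headers described in the row above can drive a simple client-side backoff. A minimal sketch in plain Python (no Azure SDK): the header names and the `hh:mm:ss` format come from the text, while the wait-until-reset policy and default values are assumptions for illustration.

```python
from datetime import timedelta

def parse_resets_after(value: str) -> float:
    """Parse the x-ms-user-quota-resets-after header (hh:mm:ss) into seconds."""
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds).total_seconds()

def backoff_seconds(headers: dict) -> float:
    """Return how long to wait before the next query, based on the throttling headers.

    While quota remains, no wait is needed; once x-ms-user-quota-remaining reaches 0,
    wait until the quota window resets (a simple policy assumed for illustration).
    """
    remaining = int(headers.get("x-ms-user-quota-remaining", "0"))
    if remaining > 0:
        return 0.0
    return parse_resets_after(headers.get("x-ms-user-quota-resets-after", "00:00:05"))

# Using the example from the text: 10 calls left, quota window resets in 3 seconds.
print(backoff_seconds({"x-ms-user-quota-remaining": "10",
                       "x-ms-user-quota-resets-after": "00:00:03"}))  # quota left, no wait
print(backoff_seconds({"x-ms-user-quota-remaining": "0",
                       "x-ms-user-quota-resets-after": "00:00:03"}))  # quota exhausted, wait out the window
```

A real client would sleep for the returned duration before retrying; the sketch only computes it.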
governance | Query Language | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/query-language.md | resources. The list of subscriptions or management groups to query can be manually defined to change the scope of the results. For example, the REST API `managementGroups` property takes the management group ID, which is different from the name of the management group. When `managementGroups` is specified,-resources from the first 5,000 subscriptions in or under the specified management group hierarchy +resources from the first 10,000 subscriptions in or under the specified management group hierarchy are included. `managementGroups` can't be used at the same time as `subscriptions`. Example: Query all resources within the hierarchy of the management group named `My Management |
governance | Get Resource Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md | Title: Get resource changes + Title: Get resource configuration changes description: Get resource configuration changes at scale Previously updated : 06/16/2022 Last updated : 08/17/2023 +- Find when changes were detected on an Azure Resource Manager property. +- View property change details. +- Query changes at scale across your subscriptions, management group, or tenant. This article shows how to query resource configuration changes through Resource Graph. - ## Prerequisites - To enable Azure PowerShell to query Azure Resource Graph, [add the module](../first-query-powershell.md#add-the-resource-graph-module). Each change resource has the following properties: | `targetResourceId` | The resourceID of the resource on which the change occurred. | ||| | `targetResourceType` | The resource type of the resource on which the change occurred. |-| `changeType` | Describes the type of change detected for the entire change record. Values are: Create, Update, and Delete. The **changes** property dictionary is only included when `changeType` is _Update_. For the delete case, the change resource will still be maintained as an extension of the deleted resource for 14 days, even if the entire Resource group has been deleted. The change resource won't block deletions or impact any existing delete behavior. | +| `changeType` | Describes the type of change detected for the entire change record. Values are: Create, Update, and Delete. The **changes** property dictionary is only included when `changeType` is _Update_. For the delete case, the change resource is maintained as an extension of the deleted resource for 14 days, even if the entire resource group was deleted. The change resource doesn't block deletions or affect any existing delete behavior. 
| | `changes` | Dictionary of the resource properties (with property name as the key) that were updated as part of the change: | | `propertyChangeType` | This property is deprecated and can be derived as follows `previousValue` being empty indicates Insert, empty `newValue` indicates Remove, when both are present, it's Update.| | `previousValue` | The value of the resource property in the previous snapshot. Value is empty when `changeType` is _Insert_. |-| `newValue` | The value of the resource property in the new snapshot. This property will be empty (absent) when `changeType` is _Remove_. | -| `changeCategory` | This property was optional and has been deprecated, this field will no longer be available| +| `newValue` | The value of the resource property in the new snapshot. This property is empty (absent) when `changeType` is _Remove_. | +| `changeCategory` | This property was optional and has been deprecated, this field is no longer available. | | `changeAttributes` | Array of metadata related to the change: | | `changesCount` | The number of properties changed as part of this change record. |-| `correlationId` | Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template will share the same correlation ID. | +| `correlationId` | Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template share the same correlation ID. | | `timestamp` | The datetime of when the change was detected. | | `previousResourceSnapshotId` | Contains the ID of the resource snapshot that was used as the previous state of the resource. | | `newResourceSnapshotId` | Contains the ID of the resource snapshot that was used as the new state of the resource. |-| `isTruncated` | When the number of property changes reaches beyond a certain number they're truncated and this property becomes present. 
| +| `isTruncated` | When the number of property changes reaches beyond a certain number, they're truncated and this property becomes present. | ## Get change events using Resource Graph resourcechanges ### Best practices -- Query for change events during a specific window of time and evaluate the change details. This query works best during incident management to understand _potentially_ related changes. +- Query for change events during a specific window of time and evaluate the change details. This query works best during incident management to understand _potentially_ related changes. - Keep a Configuration Management Database (CMDB) up to date. Instead of refreshing all resources and their full property sets on a scheduled frequency, only get what changed. - Understand what other properties may have been changed when a resource changed compliance state. Evaluation of these extra properties can provide insights into other properties that may need to be managed via an Azure Policy definition.-- The order of query commands is important. In this example, the `order by` must come before the `limit` command. This command order first orders the query results by the change time and then limits them to ensure that you get the five most recent results.-- Resource configuration changes supports changes to resource types from the [Resources table](../reference/supported-tables-resources.md#resources), `resourcecontainers` and `healthresources` table in Resource Graph. Changes are queryable for 14 days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores (such as [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your desired retention.+- The order of query commands is important. In this example, the `order by` must come before the `limit` command. 
This command orders the query results by the change time and then limits them to ensure that you get the five most recent results. +- Resource configuration changes support changes to resource types from the Resource Graph tables [resources](../reference/supported-tables-resources.md#resources), [resourcecontainers](../reference/supported-tables-resources.md#resourcecontainers), and [healthresources](../reference/supported-tables-resources.md#healthresources). Changes are queryable for 14 days. For longer retention, you can [integrate your Resource Graph query with Azure Logic Apps](../tutorials/logic-app-calling-arg.md) and export query results to any of the Azure data stores like [Log Analytics](../../../azure-monitor/logs/log-analytics-overview.md) for your desired retention. ## Next steps |
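The change-record table above notes that `propertyChangeType` is deprecated and can be derived from the snapshot values: an empty `previousValue` indicates Insert, an empty `newValue` indicates Remove, and both present means Update. A minimal sketch of that derivation in plain Python (the `changes` dictionary shape is simplified, and the property names and values are hypothetical):

```python
def derive_property_change_type(previous_value: str, new_value: str) -> str:
    """Derive the deprecated propertyChangeType from a property's snapshot values,
    following the rule stated in the change-record property table."""
    if not previous_value:
        return "Insert"   # property absent in the previous snapshot
    if not new_value:
        return "Remove"   # property absent in the new snapshot
    return "Update"       # property present in both snapshots

# A simplified 'changes' dictionary from an Update change record (values assumed).
changes = {
    "properties.provisioningState": {"previousValue": "Updating", "newValue": "Succeeded"},
    "tags.env": {"previousValue": "", "newValue": "prod"},
}
for prop, snapshots in changes.items():
    print(prop, derive_property_change_type(snapshots["previousValue"],
                                            snapshots["newValue"]))
```

This keeps older tooling working against records that no longer carry the deprecated field.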
governance | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/overview.md | Title: Overview of Azure Resource Graph description: Understand how the Azure Resource Graph service enables complex querying of resources at scale across subscriptions and tenants. Previously updated : 06/15/2022 Last updated : 08/15/2023 provide the following abilities: - Query resources with complex filtering, grouping, and sorting by resource properties. - Explore resources iteratively based on governance requirements. - Assess the impact of applying policies in a vast cloud environment.-- [Query changes made to resource properties](./how-to/get-resource-changes.md)- (preview). +- [Query changes made to resource properties](./how-to/get-resource-changes.md). -In this documentation, you'll go over each feature in detail. +In this documentation, you review each feature in detail. > [!NOTE] > Azure Resource Graph powers Azure portal's search bar, the new browse **All resources** experience, With Azure Resource Graph, you can: - Access the properties returned by resource providers without needing to make individual calls to each resource provider.-- View the last seven days of resource configuration changes to see what properties changed and- when. (preview) +- View the last 14 days of resource configuration changes to see which properties changed and + when. > [!NOTE] > As a _preview_ feature, some `type` objects have additional non-Resource Manager properties First, for details on operations and functions that can be used with Azure Resou ## Permissions in Azure Resource Graph -To use Resource Graph, you must have appropriate rights in [Azure role-based access -control (Azure RBAC)](../../role-based-access-control/overview.md) with at least read access to the -resources you want to query. Without at least `read` permissions to the Azure object or object -group, results won't be returned. 
+To use Resource Graph, you must have appropriate rights in [Azure role-based access control (Azure +RBAC)](../../role-based-access-control/overview.md) with at least `read` access to the resources you +want to query. No results are returned if you don't have at least `read` permissions to the Azure +object or object group. > [!NOTE] > Resource Graph uses the subscriptions available to a principal during login. To see resources of a > new subscription added during an active session, the principal must refresh the context. This > action happens automatically when logging out and back in. -Azure CLI and Azure PowerShell use subscriptions that the user has access to. When using REST API -directly, the subscription list is provided by the user. If the user has access to any of the +Azure CLI and Azure PowerShell use subscriptions that the user has access to. When you use a REST +API, the subscription list is provided by the user. If the user has access to any of the subscriptions in the list, the query results are returned for the subscriptions the user has access-to. This behavior is the same as when calling -[Resource Groups - List](/rest/api/resources/resourcegroups/list) \- you get resource groups you've -access to without any indication that the result may be partial. If there are no subscriptions in -the subscription list that the user has appropriate rights to, the response is a _403_ (Forbidden). +to. This behavior is the same as when calling [Resource Groups - List](/rest/api/resources/resourcegroups/list) +because you get resource groups that you can access, without any indication that the result may be +partial. If there are no subscriptions in the subscription list that the user has appropriate rights +to, the response is a _403_ (Forbidden). > [!NOTE] > In the **preview** REST API version `2020-04-01-preview`, the subscription list may be omitted. |
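The overview row above explains that, with the REST API, the caller supplies the subscription list and results cover only the listed subscriptions the caller can read (an empty intersection yields a 403). A minimal sketch of building such a request in plain Python; the `api-version` value and the subscription ID are assumptions for illustration, not values from the source.

```python
import json

def build_resource_graph_request(query: str, subscriptions: list,
                                 api_version: str = "2021-03-01") -> tuple:
    """Build the URL and JSON body for an Azure Resource Graph REST query.

    The subscription list is provided by the caller, as the text describes;
    the service returns results only for listed subscriptions the caller can
    read, and responds 403 if none are readable.
    """
    url = ("https://management.azure.com/providers/Microsoft.ResourceGraph/"
           f"resources?api-version={api_version}")
    body = json.dumps({"subscriptions": subscriptions, "query": query})
    return url, body

url, body = build_resource_graph_request(
    "resources | summarize count()",
    ["11111111-1111-1111-1111-111111111111"],  # hypothetical subscription ID
)
print(url)
print(body)
```

The built request would still need an Azure AD bearer token before being POSTed; that step is omitted here.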
governance | Samples By Category | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-category.md | Otherwise, use <kbd>Ctrl</kbd>-<kbd>F</kbd> to use your browser's search feature [!INCLUDE [authorization-resources-role-definitions-permissions-list](../../includes/resource-graph/query/authorization-resources-role-definitions-permissions-list.md)] + [!INCLUDE [authorization-resources-troubleshoot-rbac-limits](../../includes/resource-graph/query/authorization-resources-troubleshoot-rbac-limits.md)] ## Azure Service Health |
governance | Samples By Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/samples/samples-by-table.md | details, see [Resource Graph tables](../concepts/query-language.md#resource-grap [!INCLUDE [authorization-resources-role-definitions-permissions-list](../../includes/resource-graph/query/authorization-resources-role-definitions-permissions-list.md)] + [!INCLUDE [authorization-resources-troubleshoot-rbac-limits](../../includes/resource-graph/query/authorization-resources-troubleshoot-rbac-limits.md)] ## ExtendedLocationResources |
hdinsight | Apache Domain Joined Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md | Title: Azure HDInsight architecture with Enterprise Security Package description: Learn how to plan Azure HDInsight security with Enterprise Security Package. -+ Last updated 05/11/2023 |
hdinsight | Apache Ambari Troubleshoot Metricservice Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-ambari-troubleshoot-metricservice-issues.md | java.lang.OutOfMemoryError: Java heap space 2021-04-13 05:57:37,546 INFO [timeline] timeline.HadoopTimelineMetricsSink: No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times. ``` -2. Get the Apache Ambari Metrics Collector pid and check GC performance +2. Get the Apache Ambari Metrics Collector `pid` and check GC performance ``` ps -fu ams | grep 'org.apache.ambari.metrics.AMSApplicationServer' |
hdinsight | Apache Hadoop Use Sqoop Mac Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-sqoop-mac-linux.md | description: Learn how to use Apache Sqoop to import and export between Apache H Previously updated : 07/18/2022 Last updated : 08/21/2023 # Use Apache Sqoop to import and export data between Apache Hadoop on HDInsight and Azure SQL Database |
hdinsight | Apache Hbase Accelerated Writes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-accelerated-writes.md | Title: Azure HDInsight Accelerated Writes for Apache HBase description: Gives an overview of the Azure HDInsight Accelerated Writes feature, which uses premium managed disks to improve performance of the Apache HBase Write Ahead Log. Previously updated : 07/18/2022 Last updated : 08/21/2023 # Azure HDInsight Accelerated Writes for Apache HBase This article provides background on the **Accelerated Writes** feature for Apach ## Overview of HBase architecture -In HBase, a **row** consists of one or more **columns** and is identified by a **row key**. Multiple rows make up a **table**. Columns contain **cells**, which are timestamped versions of the value in that column. Columns are grouped into **column families**, and all columns in a column-family are stored together in storage files called **HFiles**. +In HBase, a **row** consists of one or more **columns** and is identified by a **row key**. Multiple rows make up a **table**. Columns contain **cells**, which are timestamped versions of the value in that column. Columns are grouped into **column families**, and all columns in a column-family are stored together in storage files called `HFiles`. **Regions** in HBase are used to balance the data processing load. HBase first stores the rows of a table in a single region. The rows are spread across multiple regions as the amount of data in the table increases. **Region Servers** can handle requests for multiple regions. ## Write Ahead Log for Apache HBase -HBase first writes data updates to a type of commit log called a Write Ahead Log (WAL). After the update is stored in the WAL, it's written to the in-memory **MemStore**. When the data in memory reaches its maximum capacity, it's written to disk as an **HFile**. +HBase first writes data updates to a type of commit log called a Write Ahead Log (WAL). 
After the update is stored in the WAL, it's written to the in-memory **MemStore**. When the data in memory reaches its maximum capacity, it's written to disk as an `HFile`. -If a **RegionServer** crashes or becomes unavailable before the MemStore is flushed, the Write Ahead Log can be used to replay updates. Without the WAL, if a **RegionServer** crashes before flushing updates to an **HFile**, all of those updates are lost. +If a **RegionServer** crashes or becomes unavailable before the MemStore is flushed, the Write Ahead Log can be used to replay updates. Without the WAL, if a **RegionServer** crashes before flushing updates to an `HFile`, all of those updates are lost. ## Accelerated Writes feature in Azure HDInsight for Apache HBase Follow similar steps when scaling down your cluster: flush your tables and disab Following these steps will ensure a successful scale-down and avoid the possibility of a namenode going into safe mode due to under-replicated or temporary files. -If your namenode does go into safemode after a scale down, use hdfs commands to re-replicate the under-replicated blocks and get hdfs out of safe mode. This re-replication will allow you to restart HBase successfully. +If your namenode does go into safe mode after a scale down, use hdfs commands to re-replicate the under-replicated blocks and get hdfs out of safe mode. This re-replication will allow you to restart HBase successfully. ## Next steps |
hdinsight | Apache Hbase Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-replication.md | The following list shows you some general usage cases and their parameter settin > 1. Copy head node, worker node and ZooKeeper nodes host and IP mapping from /etc/hosts file of destination(sink) cluster. > 1. Add copied entries source cluster /etc/hosts file. These entries should be added to head nodes, worker nodes and ZooKeeper nodes. -**Step: 1** +**Step 1:** Create keytab file for the user using `ktutil`. `$ ktutil` 1. `addent -password -p admin@ABC.EXAMPLE.COM -k 1 -e RC4-HMAC` Create keytab file for the user using `ktutil`. > [!NOTE] > Make sure the keytab file is stored in `/etc/security.keytabs/` folder in the `<username>.keytab` format. -**Step 2** +**Step 2:** Run script action with `-ku` option 1. Provide `-ku <username>` on ESP clusters. |
hdinsight | Hdinsight Hadoop Create Linux Clusters Arm Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-arm-templates.md | description: Learn how to create clusters for HDInsight by using Resource Manage Previously updated : 07/31/2023 Last updated : 08/22/2023 # Create Apache Hadoop clusters in HDInsight by using Resource Manager templates |
hdinsight | Hdinsight Hadoop Use Data Lake Storage Gen2 Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-azure-cli.md | |
hdinsight | Hdinsight Hadoop Use Data Lake Storage Gen2 Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-use-data-lake-storage-gen2-portal.md | |
hdinsight | Apache Hive Warehouse Connector Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-operations.md | |
hdinsight | Apache Hive Warehouse Connector Zeppelin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md | Last updated 07/18/2022 HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we'll focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector. > [!NOTE]-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ## Prerequisite |
hdinsight | Apache Kafka Mirror Maker 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirror-maker-2.md | This architecture features two clusters in different resource groups and virtual vi /etc/kafka/conf/connect-mirror-maker.properties ``` > [!NOTE]- > This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. + > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. 1. Property file looks like this. ``` |
hdinsight | Apache Kafka Mirroring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-mirroring.md | Configure IP advertising to enable a client to connect by using broker IP addres ## Start MirrorMaker > [!NOTE]-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. 1. From the SSH connection to the secondary cluster, use the following command to start the MirrorMaker process: |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
hdinsight | Apache Spark Job Debugging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-job-debugging.md | description: Use YARN UI, Spark UI, and Spark History server to track and debug Previously updated : 07/31/2023 Last updated : 08/22/2023 # Debug Apache Spark jobs running on Azure HDInsight |
healthcare-apis | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/disaster-recovery.md | Title: Disaster recovery for Azure API for FHIR description: In this article, you'll learn how to enable disaster recovery features for Azure API for FHIR.-+ Last updated 06/03/2022-+ # Disaster recovery for Azure API for FHIR -Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR. +Azure API for FHIR is a fully managed service, based on Fast Healthcare Interoperability Resources (FHIR®). To meet business and compliance requirements you can use the disaster recovery (DR) feature for Azure API for FHIR. The DR feature provides a Recovery Point Objective (RPO) of 15 minutes and a Recovery Time Objective (RTO) of 60 minutes. |
healthcare-apis | Find Identity Object Ids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/find-identity-object-ids.md | |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
healthcare-apis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
healthcare-apis | Smart On Fhir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md | The following tutorials describe steps to enable SMART on FHIR applications with FHIR Se - After registering the application, make note of the applicationId for the client application. - Ensure you have access to the Azure subscription of the FHIR service, to create resources and add role assignments. -## SMART on FHIR using AHDS Samples OSS +## SMART on FHIR using AHDS Samples OSS (SMART on FHIR (Enhanced)) ### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role will be able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes. Follow the steps listed under section [Manage Users: Assign Users to Role](https <summary> Click to expand! </summary> > [!NOTE]-> This is another option to using "SMART on FHIR using AHDS Samples OSS" mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence. +> This is another option to SMART on FHIR (Enhanced) mentioned above. The SMART on FHIR Proxy option only enables the EHR launch sequence. ### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources. |
healthcare-apis | Dicom Cast Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-cast-overview.md | - Title: DICOMcast overview - Azure Health Data Services -description: In this article, you'll learn the concepts of DICOMcast. ---- Previously updated : 06/03/2022----# DICOMcast overview --> [!NOTE] -> On **July 31, 2023** DICOMcast will be retired. DICOMcast will continue to be available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [migration guidance](https://aka.ms/dicomcast-migration). --DICOMcast offers customers the ability to synchronize the data from a DICOM service to a [FHIR service](../../healthcare-apis/fhir/overview.md), which allows healthcare organizations to integrate clinical and imaging data. DICOMcast expands the use cases for health data by supporting both a streamlined view of longitudinal patient data and the ability to effectively create cohorts for medical studies, analytics, and machine learning. --## Architecture --[ ![Architecture diagram of DICOMcast](media/dicom-cast-architecture.png) ](media/dicom-cast-architecture.png#lightbox) ---1. **Poll for batch of changes**: DICOMcast polls for any changes via the [Change Feed](dicom-change-feed-overview.md), which captures any changes that occur in your Medical Imaging Server for DICOM. -1. **Fetch corresponding FHIR resources, if any**: If any DICOM service changes and correspond to FHIR resources, DICOMcast will fetch the related FHIR resources. DICOMcast synchronizes DICOM tags to the FHIR resource types *Patient* and *ImagingStudy*. -1. **Merge FHIR resources and 'PUT' as a bundle in a transaction**: The FHIR resources corresponding to the DICOMcast captured changes will be merged. The FHIR resources will be 'PUT' as a bundle in a transaction into your FHIR service. -1. 
**Persist state and process next batch**: DICOMcast will then persist the current state to prepare for the next batch of changes. --The current implementation of DICOMcast: --- Supports a single-threaded process that reads from the DICOM change feed and writes to a FHIR service.-- Is hosted by Azure Container Instance in our sample template, but can be run elsewhere.-- Synchronizes DICOM tags to *Patient* and *ImagingStudy* FHIR resource types*.-- Is configured to ignore invalid tags when syncing data from the change feed to FHIR resource types.- - If `EnforceValidationOfTagValues` is enabled, then the change feed entry won't be written to the FHIR service unless every tag that's mapped is valid. For more information, see the [Mappings](#mappings) section below. - - If `EnforceValidationOfTagValues` is disabled (default), and if a value is invalid, but it's not required to be mapped, then that particular tag won't be mapped. The rest of the change feed entry will be mapped to FHIR resources. If a required tag is invalid, then the change feed entry won't be written to the FHIR service. For more information about the required tags, see [Patient](#patient) and [Imaging Study](#imagingstudy) -- Logs errors to Azure Table Storage.- - Errors that occur when processing change feed entries are persisted in Azure Table storage in different tables. - - `InvalidDicomTagExceptionTable`: Stores information about tags with invalid values. Entries here don't necessarily mean that the entire change feed entry wasn't stored in the FHIR service, but that the particular value had a validation issue. - - `DicomFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to the FHIR service due to an issue with the change feed entry (such as an invalid required tag). All entries in this table weren't stored to the FHIR service.
- - `FhirFailToStoreExceptionTable`: Stores information about change feed entries that weren't stored to FHIR service due to an issue with the FHIR service (such as conflicting resource already exists). All entries in this table weren't stored to FHIR service. - - `TransientRetryExceptionTable`: Stores information about change feed entries that faced a transient error (such as FHIR service too busy) and are being retried. Entries in this table note how many times they've been retried, but it doesn't necessarily mean that they eventually failed or succeeded to store to FHIR service. - - `TransientFailureExceptionTable`: Stores information about change feed entries that had a transient error, and went through the retry policy and still failed to store to FHIR service. All entries in this table failed to store to FHIR service. --## Mappings --The current implementation of DICOMcast has the following mappings: --### Patient --| Property | Tag ID | Tag Name | Required Tag?| Note | -| :- | :-- | :- | :-- | :-- | -| Patient.identifier.where(system = '') | (0010,0020) | PatientID | Yes | For now, the system will be empty string. We'll add support later for allowing the system to be specified. | -| Patient.name.where(use = 'usual') | (0010,0010) | PatientName | No | PatientName will be split into components and added as HumanName to the Patient resource. | -| Patient.gender | (0010,0040) | PatientSex | No | | -| Patient.birthDate | (0010,0030) | PatientBirthDate | No | PatientBirthDate only contains the date. This implementation assumes that the FHIR and DICOM services have data from the same time zone. | --### Endpoint --| Property | Tag ID | Tag Name | Note | -| :- | :-- | :- | : | -| Endpoint.status ||| The value 'active' will be used when creating the endpoint. | -| Endpoint.connectionType ||| The system 'http://terminology.hl7.org/CodeSystem/endpoint-connection-type' and value 'dicom-wado-rs' will be used when creating the endpoint. 
| -| Endpoint.address ||| The root URL to the DICOMWeb service will be used when creating the endpoint. The rule is described in 'http://hl7.org/fhir/imagingstudy.html#endpoint'. | --### ImagingStudy --| Property | Tag ID | Tag Name | Required | Note | -| :- | :-- | :- | : | : | -| ImagingStudy.identifier.where(system = 'urn:dicom:uid') | (0020,000D) | StudyInstanceUID | Yes | The value will have prefix of `urn:oid:`. | -| ImagingStudy.status | | | No | The value 'available' will be used when creating ImagingStudy. | -| ImagingStudy.modality | (0008,0060) | Modality | No | | -| ImagingStudy.subject | | | No | It will be linked to the [Patient](#mappings). | -| ImagingStudy.started | (0008,0020), (0008,0030), (0008,0201) | StudyDate, StudyTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. | -| ImagingStudy.endpoint | | | | It will be linked to the [Endpoint](#endpoint). | -| ImagingStudy.note | (0008,1030) | StudyDescription | No | | -| ImagingStudy.series.uid | (0020,000E) | SeriesInstanceUID | Yes | | -| ImagingStudy.series.number | (0020,0011) | SeriesNumber | No | | -| ImagingStudy.series.modality | (0008,0060) | Modality | Yes | | -| ImagingStudy.series.description | (0008,103E) | SeriesDescription | No | | -| ImagingStudy.series.started | (0008,0021), (0008,0031), (0008,0201) | SeriesDate, SeriesTime, TimezoneOffsetFromUTC | No | Refer to the section for details about how the [timestamp](#timestamp) is constructed. | -| ImagingStudy.series.instance.uid | (0008,0018) | SOPInstanceUID | Yes | | -| ImagingStudy.series.instance.sopClass | (0008,0016) | SOPClassUID | Yes | | -| ImagingStudy.series.instance.number | (0020,0013) | InstanceNumber | No| | -| ImagingStudy.identifier.where(type.coding.system='http://terminology.hl7.org/CodeSystem/v2-0203' and type.coding.code='ACSN')) | (0008,0050) | Accession Number | No | Refer to http://hl7.org/fhir/imagingstudy.html#notes. 
| --### Timestamp --DICOM has different date-time VR types. Some tags (like Study and Series) have the date, time, and UTC offset stored separately. This means that the date might be partial. This code attempts to translate this into a partial date syntax allowed by the FHIR service. --## Summary --In this article, we reviewed the architecture and mappings of DICOMcast. This feature is available as an open-source component that can be self-hosted. For more information about deploying the DICOMcast service, see the [deployment instructions](https://github.com/microsoft/dicom-server/blob/main/docs/quickstarts/deploy-dicom-cast.md). --> [!IMPORTANT] -> Ensure that you include the **resource IDs** of your DICOM service and FHIR service when you submit a support ticket. -- -## Next steps --To get started using the DICOM service, see -->[!div class="nextstepaction"] ->[Deploy DICOM service to Azure](deploy-dicom-services-in-azure.md) -->[!div class="nextstepaction"] ->[Using DICOMweb™ Standard APIs with DICOM service](dicomweb-standard-apis-with-dicom-services.md) --FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
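The *Patient* mapping table in the DICOMcast overview above can be illustrated in code. The sketch below is hypothetical (it is not the DICOMcast source); it assumes the standard DICOM PN convention of `^`-separated name components (Family^Given^Middle^Prefix^Suffix) and the DICOM DA format `YYYYMMDD` for `PatientBirthDate`:

```python
def patient_from_dicom(tags: dict) -> dict:
    """Sketch: map a few DICOM patient tags to a FHIR Patient resource."""
    # DICOM PN values separate name components with '^'.
    parts = (tags.get("PatientName", "") or "").split("^")
    name = {"use": "usual"}
    if parts[0]:
        name["family"] = parts[0]
    given = [c for c in parts[1:3] if c]
    if given:
        name["given"] = given

    # PatientID is the only required tag in the Patient mapping table;
    # the identifier system is an empty string, per the table's note.
    patient = {
        "resourceType": "Patient",
        "identifier": [{"system": "", "value": tags["PatientID"]}],
        "name": [name],
    }

    sex_map = {"M": "male", "F": "female", "O": "other"}
    if tags.get("PatientSex") in sex_map:
        patient["gender"] = sex_map[tags["PatientSex"]]

    if tags.get("PatientBirthDate"):
        # DICOM DA (YYYYMMDD) -> FHIR date (YYYY-MM-DD); assumes both
        # services share a time zone, as the mapping table notes.
        d = tags["PatientBirthDate"]
        patient["birthDate"] = f"{d[:4]}-{d[4:6]}-{d[6:]}"
    return patient
```

The optional tags are skipped when absent or unmapped, mirroring the default (non-enforcing) validation behavior described above.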
healthcare-apis | References For Dicom Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/references-for-dicom-service.md | This article describes our open-source projects on GitHub that provide source co * [Azure DICOM service with OHIF viewer](https://github.com/microsoft/dicom-ohif): The [OHIF viewer](https://ohif.org/) is an open-source, non-diagnostic DICOM viewer that uses DICOMweb APIs to find and render DICOM images. This project provides the guidance and sample templates for deploying the OHIF viewer and configuring it to integrate with the DICOM service. ### Medical imaging network demo environment-* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-prem radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow. +* [Medical Imaging Network Demo Environment](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/dicom-demo-env#readme): This hands-on lab / demo highlights how an organization with existing on-premises radiology infrastructure can take the first steps to intelligently moving their data to the cloud, without disruptions to the current workflow. ## Next steps For more information about using the DICOM service, see For more information about DICOM cast, see >[!div class="nextstepaction"]->[DICOM cast overview](dicom-cast-overview.md) +>[DICOM cast overview](https://github.com/microsoft/dicom-server/blob/main/docs/concepts/dicom-cast.md) FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Events Consume Logic Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-consume-logic-apps.md | Follow these steps to create a Logic App workflow to consume FHIR events: ## Prerequisites -Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy Events in the Azure portal](events-deploy-portal.md). +Before you begin this tutorial, you need to have deployed a FHIR service and enabled events. For more information about deploying events, see [Deploy events using the Azure portal](events-deploy-portal.md). ## Creating a Logic App You now need to fill out the details of your Logic App. Specify information for :::image type="content" source="media/events-logic-apps/events-logic-tabs.png" alt-text="Screenshot of the five tabs for specifying your Logic App." lightbox="media/events-logic-apps/events-logic-tabs.png"::: -- Tab 1 - Basics-- Tab 2 - Hosting-- Tab 3 - Monitoring-- Tab 4 - Tags-- Tab 5 - Review + Create+- Tab 1 - **Basics** +- Tab 2 - **Hosting** +- Tab 3 - **Monitoring** +- Tab 4 - **Tags** +- Tab 5 - **Review + Create** ### Basics - Tab 1 Enabling your plan makes it zone redundant. ### Hosting - Tab 2 -Continue specifying your Logic App by clicking "Next: Hosting". +Continue specifying your Logic App by selecting **Next: Hosting**. #### Storage Choose the type of storage you want to use and the storage account. You can use ### Monitoring - Tab 3 -Continue specifying your Logic App by clicking "Next: Monitoring". +Continue specifying your Logic App by selecting **Next: Monitoring**. #### Monitoring with Application Insights Enable Azure Monitor Application Insights to automatically monitor your applicat ### Tags - Tab 4 -Continue specifying your Logic App by clicking **Next: Tags**. +Continue specifying your Logic App by selecting **Next: Tags**. 
#### Use tags to categorize resources This example doesn't use tagging. ### Review + create - Tab 5 -Finish specifying your Logic App by clicking **Next: Review + create**. +Finish specifying your Logic App by selecting **Next: Review + create**. #### Review your Logic App If there are no errors, you'll finally see a notification telling you that your #### Your Logic App dashboard -Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by clicking Overview in the Logic App menu. Here's a Logic App dashboard: +Azure creates a dashboard when your Logic App is complete. The dashboard shows you the status of your app. You can return to your dashboard by selecting **Overview** in the Logic App menu. Here's a Logic App dashboard: :::image type="content" source="media/events-logic-apps/events-logic-overview.png" alt-text="Screenshot of your Logic Apps overview dashboard." lightbox="media/events-logic-apps/events-logic-overview.png"::: To set up a new workflow, fill in these details: Specify a new name for your workflow. Indicate whether you want the workflow to be stateful or stateless. Stateful is for business processes and stateless is for processing IoT events. -When you've specified the details, select "Create" to begin designing your workflow. +When you've specified the details, select **Create** to begin designing your workflow. ### Designing the workflow In your new workflow, select the name of the enabled workflow. -You can write code to design a workflow for your application, but for this tutorial, choose the Designer option on the Developer menu. +You can write code to design a workflow for your application, but for this tutorial, choose the **Designer** option on the **Developer** menu. -Next, select "Choose an operation" to display the "Add a Trigger" blade on the right. Then search for "Azure Event Grid" and select the "Azure" tab below. 
The Event Grid isn't a Logic App Built-in. +Next, select **Choose an operation** to display the **Add a Trigger** blade on the right. Then search for "Azure Event Grid" and select the **Azure** tab below. The Event Grid isn't a Logic App Built-in. :::image type="content" source="media/events-logic-apps/events-logic-grid.png" alt-text="Screenshot of the search results for Azure Event Grid." lightbox="media/events-logic-apps/events-logic-grid.png"::: -When you see the "Azure Event Grid" icon, select on it to display the Triggers and Actions available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md). +When you see the "Azure Event Grid" icon, select on it to display the **Triggers and Actions** available from Event Grid. For more information about Event Grid, see [What is Azure Event Grid?](./../../event-grid/overview.md). -Select "When a resource event occurs" to set up a trigger for the Azure Event Grid. +Select **When a resource event occurs** to set up a trigger for the Azure Event Grid. To tell Event Grid how to respond to the trigger, you must specify parameters and add actions. Fill in the details for subscription, resource type, and resource name. Then you - Resource deleted - Resource updated -For more information about event types, see [What FHIR resource events does Events support?](events-faqs.md#what-fhir-resource-events-does-events-support). +For more information about supported event types, see [Frequently asked questions about events](events-faqs.md). ### Adding an HTTP action -Once you've specified the trigger events, you must add more details. Select the "+" below the "When a resource event occurs" button. +Once you've specified the trigger events, you must add more details. Select the **+** below the **When a resource event occurs** button. -You need to add a specific action. Select "Choose an operation" to continue. 
Then, for the operation, search for "HTTP" and select on "Built-in" to select an HTTP operation. The HTTP action will allow you to query the FHIR service. +You need to add a specific action. Select **Choose an operation** to continue. Then, for the operation, search for "HTTP" and select **Built-in** to select an HTTP operation. The HTTP action will allow you to query the FHIR service. The options in this example are: The options in this example are: At this point, you need to give the FHIR Reader access to your app, so it can verify that the event details are correct. Follow these steps to give it access: -1. The first step is to go back to your Logic App and select the Identity menu item. +1. The first step is to go back to your Logic App and select the **Identity** menu item. -2. In the System assigned tab, make sure the Status is "On". +2. In the System assigned tab, make sure the **Status** is "On". -3. Select on Azure role assignments. Select "Add role assignment". +3. Select **Azure role assignments**. Select **Add role assignment**. 4. Specify the following options: At this point, you need to give the FHIR Reader access to your app, so it can ve - Subscription = your subscription - Role = FHIR Data Reader. -When you've specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by clicking the name and then clicking the Select button. Finally, select "Review + assign" to assign the role. +When you've specified the first four steps, add the role assignment by Managed identity, using Subscription, Managed identity (Logic App Standard), and select your Logic App by selecting the name and then selecting the **Select** button. Finally, select **Review + assign** to assign the role. ### Add a condition -After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. 
Then add a condition to determine whether the event is one you want to process. Select the "+" below HTTP to "Choose an operation". On the right, search for the word "condition". Select on "Built-in" to display the Control icon. Next select Actions and choose Condition. +After you have given FHIR Reader access to your app, go back to the Logic App workflow Designer. Then add a condition to determine whether the event is one you want to process. Select the **+** below HTTP to "Choose an operation". On the right, search for the word "condition". Select **Built-in** to display the Control icon. Next select **Actions** and choose **Condition**. When the condition is ready, you can specify what actions happen if the condition is true or false. ### Choosing a condition criteria -In order to specify whether you want to take action for the specific event, begin specifying the criteria by clicking on **Condition** in the workflow. A set of condition choices are then displayed. +In order to specify whether you want to take action for the specific event, begin specifying the criteria by selecting **Condition** in the workflow. A set of condition choices is then displayed. Under the **And** box, add these two conditions: The expression for getting the resourceType is `body('HTTP')?['resourceType']`. #### Event Type -You can select Event Type from the Dynamic Content. +You can select **Event Type** from the Dynamic Content. Here's an example of the Condition criteria: When you've entered the condition criteria, save your workflow. #### Workflow dashboard -To check the status of your workflow, select Overview in the workflow menu. Here's a dashboard for a workflow: +To check the status of your workflow, select **Overview** in the workflow menu. Here's a dashboard for a workflow: :::image type="content" source="media/events-logic-apps/events-logic-dashboard.png" alt-text="Screenshot of the Logic App workflow dashboard." 
lightbox="media/events-logic-apps/events-logic-dashboard.png"::: You can do the following operations from your workflow dashboard: ### Condition testing -Save your workflow by clicking the "Save" button. +Save your workflow by selecting the **Save** button. To test your new workflow, do the following steps: In this tutorial, you learned how to use Logic Apps to process FHIR events. To learn about Events, see > [!div class="nextstepaction"]-> [What are Events?](events-overview.md) +> [What are events?](events-overview.md) To learn about the Events frequently asked questions (FAQs), see > [!div class="nextstepaction"]-> [Frequently asked questions about Events](events-faqs.md) +> [Frequently asked questions about events](events-faqs.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
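The condition the Logic Apps tutorial above builds (match the event type, then check `body('HTTP')?['resourceType']`) can be mirrored in a few lines of ordinary code. This is a hedged sketch: the `should_process` helper is hypothetical, and the event type names follow the Azure Health Data Services events schema — verify them against the event types configured on your subscription:

```python
# Event type names are assumptions based on the Azure Health Data Services
# events schema; confirm them against your Event Grid subscription.
INTERESTING_EVENT_TYPES = {
    "Microsoft.HealthcareApis.FhirResourceCreated",
    "Microsoft.HealthcareApis.FhirResourceUpdated",
    "Microsoft.HealthcareApis.FhirResourceDeleted",
}

def should_process(event: dict, fhir_resource: dict) -> bool:
    """Sketch of the workflow condition: act only on Patient resource events.

    `fhir_resource` stands in for the body returned by the HTTP action,
    i.e. body('HTTP') in the Logic App expression language.
    """
    return (
        event.get("eventType") in INTERESTING_EVENT_TYPES
        and fhir_resource.get("resourceType") == "Patient"
    )
```

The `.get(...)` lookups play the role of the `?` null-propagation operator in `body('HTTP')?['resourceType']`: a missing field makes the condition false rather than raising an error.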
healthcare-apis | Events Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-deploy-portal.md | Title: Deploy Events using the Azure portal - Azure Health Data Services -description: Learn how to deploy the Events feature using the Azure portal. + Title: Deploy events using the Azure portal - Azure Health Data Services +description: Learn how to deploy the events feature using the Azure portal. Last updated 06/23/2022 -# Quickstart: Deploy Events using the Azure portal +# Quickstart: Deploy events using the Azure portal > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -In this quickstart, learn how to deploy the Azure Health Data Services Events feature in the Azure portal to send FHIR and DICOM event messages. +In this quickstart, learn how to deploy the events feature in the Azure portal to send FHIR and DICOM event messages. ## Prerequisites -It's important that you have the following prerequisites completed before you begin the steps of deploying the Events feature in Azure Health Data Services. +It's important that you have the following prerequisites completed before you begin the steps of deploying the events feature. 
* [An active Azure account](https://azure.microsoft.com/free/search/?OCID=AID2100131_SEM_c4b0772dc7df1f075552174a854fd4bc:G:s&ef_id=c4b0772dc7df1f075552174a854fd4bc:G:s&msclkid=c4b0772dc7df1f075552174a854fd4bc) * [Microsoft Azure Event Hubs namespace and an event hub deployed in the Azure portal](../../event-hubs/event-hubs-create.md) It's important that you have the following prerequisites completed before you be * [FHIR service deployed in the workspace](../fhir/fhir-portal-quickstart.md) or [DICOM service deployed in the workspace](../dicom/deploy-dicom-services-in-azure.md) > [!IMPORTANT]-> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the Events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). +> You will also need to make sure that the Microsoft.EventGrid resource provider has been successfully registered with your Azure subscription to deploy the events feature. For more information, see [Azure resource providers and types - Register resource provider](../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider). > [!NOTE]-> For the purposes of this quickstart, we'll be using a basic Events set up and an event hub as the endpoint for Events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md). +> For the purposes of this quickstart, we'll be using a basic events setup and an event hub as the endpoint for events messages. To learn how to deploy Azure Event Hubs, see [Quickstart: Create an event hub using Azure portal](../../event-hubs/event-hubs-create.md). -## Deploy Events +## Deploy events -1. 
Browse to the workspace that contains the FHIR or DICOM service you want to send Events messages from and select the **Events** button on the left hand side of the portal. +1. Browse to the workspace that contains the FHIR or DICOM service you want to send events messages from and select the **Events** button on the left hand side of the portal. :::image type="content" source="media/events-deploy-in-portal/events-workspace-select.png" alt-text="Screenshot of workspace and select Events button." lightbox="media/events-deploy-in-portal/events-workspace-select.png"::: It's important that you have the following prerequisites completed before you be 3. In the **Create Event Subscription** box, enter the following subscription information. - * **Name**: Provide a name for your Events subscription. - * **System Topic Name**: Provide a name for your System Topic. + * **Name**: Provide a name for your events subscription. + * **System Topic Name**: Provide a name for your system topic. > [!NOTE]- > The first time you set up the Events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional Events subscriptions that you create within the workspace. + > The first time you set up the events feature, you will be required to enter a new **System Topic Name**. Once the system topic for the workspace is created, the **System Topic Name** will be used for any additional events subscriptions that you create within the workspace. * **Event types**: Type of FHIR or DICOM events to send messages for (for example: create, updated, and deleted).- * **Endpoint Details**: Endpoint to send Events messages to (for example: an Azure Event Hubs namespace + an event hub). + * **Endpoint Details**: Endpoint to send events messages to (for example: an Azure Event Hubs namespace + an event hub). 
>[!NOTE] > For the purposes of this quickstart, we'll use the **Event Schema** and the **Managed Identity Type** settings at their default values. It's important that you have the following prerequisites completed before you be ## Next steps -In this quickstart, you learned how to deploy Events using the Azure portal. +In this quickstart, you learned how to deploy events using the Azure portal. -To learn how to enable the Events metrics, see +To learn how to enable the events metrics, see > [!div class="nextstepaction"]-> [How to use Events metrics](events-use-metrics.md) +> [How to use events metrics](events-use-metrics.md) To learn how to export Event Grid system diagnostic logs and metrics, see > [!div class="nextstepaction"]-> [How to enable diagnostic settings for Events](events-enable-diagnostic-settings.md) +> [How to enable diagnostic settings for events](events-enable-diagnostic-settings.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
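Once the quickstart above has an event subscription delivering to an event hub, a consumer has to unpack what arrives. As a rough sketch — assuming the default Event Grid event schema, in which events are delivered as a JSON array — the hypothetical `parse_event_batch` helper below flattens the fields the quickstart configures (event type, subject, data):

```python
import json

def parse_event_batch(payload: bytes) -> list:
    """Sketch: flatten a batch of Event Grid events read from an event hub.

    Assumes the default Event Grid schema, where a delivery is a JSON
    array of event objects; a single event may also arrive un-batched.
    """
    events = json.loads(payload)
    if isinstance(events, dict):
        # A lone event delivered as a bare object.
        events = [events]
    return [
        {
            "type": e.get("eventType"),
            "subject": e.get("subject"),
            "data": e.get("data", {}),
        }
        for e in events
    ]
```

In a real consumer this would run inside an Event Hubs receive loop; here it only shows the shape of the messages the subscription produces.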
healthcare-apis | Events Disable Delete Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-disable-delete-workspace.md | Title: How to disable the Events feature and delete Azure Health Data Services workspaces - Azure Health Data Services -description: Learn how to disable the Events feature and delete Azure Health Data Services workspaces. + Title: How to disable the events feature and delete Azure Health Data Services workspaces - Azure Health Data Services +description: Learn how to disable the events feature and delete Azure Health Data Services workspaces. Last updated 07/11/2023 -# How to disable the Events feature and delete Azure Health Data Services workspaces +# How to disable the events feature and delete Azure Health Data Services workspaces > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -In this article, learn how to disable the Events feature and delete Azure Health Data Services workspaces. +In this article, learn how to disable the events feature and delete Azure Health Data Services workspaces. -## Disable Events +## Disable events -To disable Events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted. +To disable events from sending event messages for a single **Event Subscription**, the **Event Subscription** must be deleted. 1. Select the **Event Subscription** to be deleted. In this example, we select an Event Subscription named **fhir-events**. To disable Events from sending event messages for a single **Event Subscription* :::image type="content" source="media/disable-delete-workspaces/events-select-subscription-delete.png" alt-text="Screenshot of events subscriptions and select delete and confirm the event subscription to be deleted." lightbox="media/disable-delete-workspaces/events-select-subscription-delete.png"::: -3. 
To completely disable Events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain. +3. To completely disable events, delete all **Event Subscriptions** so that no **Event Subscriptions** remain. :::image type="content" source="media/disable-delete-workspaces/events-disable-no-subscriptions.png" alt-text="Screenshot of Events subscriptions and delete all event subscriptions to disable events." lightbox="media/disable-delete-workspaces/events-disable-no-subscriptions.png"::: To avoid errors and successfully delete workspaces, follow these steps and in th ## Next steps -In this article, you learned how to disable the Events feature and delete workspaces. +In this article, you learned how to disable the events feature and delete workspaces. -To learn about how to troubleshoot Events, see +To learn about how to troubleshoot events, see > [!div class="nextstepaction"]-> [Troubleshoot Events](events-troubleshooting-guide.md) +> [Troubleshoot events](events-troubleshooting-guide.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Enable Diagnostic Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-enable-diagnostic-settings.md | Title: Enable Events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services -description: Learn how to enable Events diagnostic settings for diagnostic logs and metrics exporting. + Title: Enable events diagnostic settings for diagnostic logs and metrics export - Azure Health Data Services +description: Learn how to enable events diagnostic settings for diagnostic logs and metrics exporting. Last updated 06/23/2022 -# How to enable diagnostic settings for Events +# How to enable diagnostic settings for events > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -In this article, learn how to enable the Events diagnostic settings for Azure Event Grid system topics. +In this article, learn how to enable the events diagnostic settings for Azure Event Grid system topics. ## Resources In this article, learn how to enable the Events diagnostic settings for Azure Ev |More information about how to work with diagnostics logs.|[Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md)| > [!NOTE] -> It might take up to 15 minutes for the first Events diagnostic logs and metrics to display in the destination of your choice. +> It might take up to 15 minutes for the first events diagnostic logs and metrics to display in the destination of your choice. ## Next steps -In this article, you learned how to enable diagnostic settings for Events. +In this article, you learned how to enable diagnostic settings for events. 
-To learn how to use Events metrics using the Azure portal, see +To learn how to use events metrics using the Azure portal, see > [!div class="nextstepaction"]-> [How to use Events metrics](events-use-metrics.md) +> [How to use events metrics](events-use-metrics.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-faqs.md | Title: Frequently asked questions about Events - Azure Health Data Services -description: Learn about the frequently asked questions about Events. + Title: Frequently asked questions about events - Azure Health Data Services +description: Learn about the frequently asked questions about events. Last updated 07/11/2023 -# Frequently asked questions about Events +# Frequently asked questions about events > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. ## Events: The basics -## Can I use Events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service? +## Can I use events with a different FHIR/DICOM service other than the Azure Health Data Services FHIR/DICOM service? -No. The Azure Health Data Services Events feature only currently supports the Azure Health Data Services FHIR and DICOM services. +No. The Azure Health Data Services events feature only currently supports the Azure Health Data Services FHIR and DICOM services. -## What FHIR resource events does Events support? +## What FHIR resource changes does events support? Events are generated from the following FHIR service types: Events are generated from the following FHIR service types: For more information about the FHIR service delete types, see [FHIR REST API capabilities for Azure Health Data Services FHIR service](../../healthcare-apis/fhir/fhir-rest-api-capabilities.md). -## Does Events support FHIR bundles? +## Does events support FHIR bundles? -Yes. The Events feature is designed to emit notifications of data changes at the FHIR resource level. +Yes. The events feature is designed to emit notifications of data changes at the FHIR resource level. 
Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle-type.html) in the following ways: Events support these [FHIR bundle types](http://hl7.org/fhir/R4/valueset-bundle- > [!NOTE] > Events are not sent in the sequence of the data operations in the FHIR bundle. -## What DICOM image events does Events support? +## What DICOM image changes does events support? Events are generated from the following DICOM service types: Events are generated from the following DICOM service types: * **DicomImageUpdated** - The event emitted after a DICOM image gets updated successfully. -## What is the payload of an Events message? +## What is the payload of an events message? -For a detailed description of the Events message structure and both required and nonrequired elements, see [Events troubleshooting guide](events-troubleshooting-guide.md). +For a detailed description of the events message structure and both required and nonrequired elements, see [Events message structures](events-message-structure.md). -## What is the throughput for the Events messages? +## What is the throughput for the events messages? The throughput of the FHIR or DICOM service and the Event Grid govern the throughput of FHIR and DICOM events. When a request made to the FHIR service is successful, it returns a 2xx HTTP status code. It also generates a FHIR resource or DICOM image changing event. The current limitation is 5,000 events/second per workspace for all FHIR or DICOM service instances in the workspace. -## How am I charged for using Events? +## How am I charged for using events? -There are no extra charges for using [Azure Health Data Services Events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription. 
+There are no extra charges for using [Azure Health Data Services events](https://azure.microsoft.com/pricing/details/health-data-services/). However, applicable charges for the [Event Grid](https://azure.microsoft.com/pricing/details/event-grid/) are assessed against your Azure subscription. ## How do I subscribe to multiple FHIR and/or DICOM services in the same workspace separately? Yes. We recommend that you use different subscribers for each individual FHIR or Yes. Event Grid supports customer's Health Insurance Portability and Accountability Act (HIPAA) and Health Information Trust Alliance (HITRUST) obligations. For more information, see [Microsoft Azure Compliance Offerings](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/). -## What is the expected time to receive an Events message? +## What is the expected time to receive an events message? On average, you should receive your event message within one second after a successful HTTP request. 99.99% of the event messages should be delivered within five seconds unless the limitation of either the FHIR service, DICOM service, or [Event Grid](../../event-grid/quotas-limits.md) has been met. -## Is it possible to receive duplicate Events messages? +## Is it possible to receive duplicate events messages? -Yes. The Event Grid guarantees at least one Events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. In this situation, the Event Grid considers that as a delivery failure and resends the Events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md). +Yes. The Event Grid guarantees at least one events message delivery with its push mode. There may be chances that the event delivery request returns with a transient failure status code for random reasons. 
In this situation, the Event Grid considers that as a delivery failure and resends the events message. For more information, see [Azure Event Grid delivery and retry](../../event-grid/delivery-and-retry.md). Generally, we recommend that developers ensure idempotency for the event subscriber. The event ID or the combination of all fields in the `data` property of the message content are unique per each event. The developer can rely on them to deduplicate. Generally, we recommend that developers ensure idempotency for the event subscri [FAQs about the Azure Health Data Services](../healthcare-apis-faqs.md) -[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md) - [FAQs about Azure Health Data Services DICOM service](../dicom/dicom-services-faqs.yml) +[FAQs about Azure Health Data Services FHIR service](../fhir/fhir-faq.md) + [FAQs about Azure Health Data Services MedTech service](../iot/iot-connector-faqs.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
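The FAQ above recommends making the event subscriber idempotent and notes that the event `id` (or the combination of fields in `data`) is unique per event and can be used to deduplicate redeliveries. A minimal sketch of that idea, assuming events arrive as dictionaries in the Event Grid schema and `process` is a hypothetical callback supplied by the subscriber (a production handler would persist seen IDs durably rather than in memory):

```python
# Idempotent subscriber sketch. Assumptions: each event dict carries a unique
# "id" per the Event Grid schema; "process" is a hypothetical callback.
# An in-memory set is used here for brevity; real code needs durable storage.

seen_ids = set()

def handle_event(event: dict, process) -> bool:
    """Process an event exactly once; ignore Event Grid redeliveries."""
    event_id = event["id"]
    if event_id in seen_ids:
        return False  # duplicate delivery of an already-handled event
    process(event)
    seen_ids.add(event_id)  # record only after processing succeeds
    return True
```

Because Event Grid guarantees at-least-once delivery, the same message can arrive more than once after a transient failure; keying on `id` makes the second delivery a no-op.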
healthcare-apis | Events Message Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-message-structure.md | Title: Events message structure - Azure Health Data Services -description: Learn about the Events message structures and required values. +description: Learn about the events message structures and required values. -In this article, learn about the Events message structures, required and nonrequired elements, and see samples of Events message payloads. +In this article, learn about the events message structures, required and nonrequired elements, and see samples of events message payloads. > [!IMPORTANT]-> Events currently supports only the following operations: +> Events currently supports the following operations: > > * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully. > In this article, learn about the Events message structures, required and nonrequ ## Next steps -In this article, you learned about the Events message structures. +In this article, you learned about the events message structures. -To learn how to deploy Events using the Azure portal, see +To learn how to deploy events using the Azure portal, see > [!div class="nextstepaction"]-> [Deploy Events using the Azure portal](events-deploy-portal.md) +> [Deploy events using the Azure portal](events-deploy-portal.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
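The message-structure article above describes the required and nonrequired elements of an events payload. For orientation, here is an illustrative `FhirResourceCreated` message in the Azure Event Grid event schema; every value is a placeholder, and the authoritative field list is the linked article:

```json
{
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.HealthcareApis/workspaces/{workspace}",
  "subject": "{fhir-account}.fhir.azurehealthcareapis.com/Patient/{resource-id}",
  "eventType": "Microsoft.HealthcareApis.FhirResourceCreated",
  "eventTime": "2023-07-11T17:04:42.503Z",
  "data": {
    "resourceType": "Patient",
    "resourceFhirAccount": "{fhir-account}.fhir.azurehealthcareapis.com",
    "resourceFhirId": "{resource-id}",
    "resourceVersionId": 1
  },
  "dataVersion": "1",
  "metadataVersion": "1"
}
```

Note that, as the overview states, the payload identifies the changed resource but doesn't transmit the resource's sensitive content itself.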
healthcare-apis | Events Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-overview.md | Title: What are Events? - Azure Health Data Services -description: Learn about Events, its features, integrations, and next steps. + Title: What are events? - Azure Health Data Services +description: Learn about events, its features, integrations, and next steps. Last updated 07/11/2023 -# What are Events? +# What are events? > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -Events are a notification and subscription feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data. +Events are a subscription and notification feature in the Azure Health Data Services. Events enable customers to utilize and enhance the analysis and workflows of structured and unstructured data like vitals and clinical or progress notes, operations data, health data, and medical imaging data. -When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the Events feature sends notification messages to Events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The Events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace. 
+When FHIR resource changes or Digital Imaging and Communications in Medicine (DICOM) image changes are successfully written to the Azure Health Data Services, the events feature sends notification messages to events subscribers. These event notification occurrences can be sent to multiple endpoints to trigger automation ranging from starting workflows to sending email and text messages to support the changes occurring from the health data it originated from. The events feature integrates with the [Azure Event Grid service](../../event-grid/overview.md) and creates a system topic for the Azure Health Data Services workspace. > [!IMPORTANT]-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past resource changes or when the feature is turned off. +> FHIR resource and DICOM image change data is only written and event messages are sent when the events feature is turned on. The event feature doesn't send messages on past resource changes or when the feature is turned off. > [!TIP] > For more information about the features, configurations, and to learn about the use cases of the Azure Event Grid service, see [Azure Event Grid](../../event-grid/overview.md) > [!IMPORTANT] > Events currently supports the following operations: Events are designed to support growth and changes in healthcare technology needs ## Configurable -Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune Events message delivery options. +Choose the FHIR and DICOM event types that you want to receive messages about. Use the advanced features like filters, dead-lettering, and retry policies to tune events message delivery options. > [!NOTE] > The advanced features come as part of the Event Grid service. 
## Extensible -Use Events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time. +Use events to send FHIR resource and DICOM image change messages to services like [Azure Event Hubs](../../event-hubs/event-hubs-about.md) or [Azure Functions](../../azure-functions/functions-overview.md) to trigger downstream automated workflows to enhance items such as operational data, data analysis, and visibility to the incoming data capturing near real time. ## Secure -Built on a platform that supports protected health information compliance with privacy, safety, and security in mind, the Events messages don't transmit sensitive data as part of the message payload. +Events are built on a platform that supports protected health information compliance with privacy, safety, and security in mind. -Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the Events message receiving endpoints of your choice. +Use [Azure Managed identities](../../active-directory/managed-identities-azure-resources/overview.md) to provide secure access from your Event Grid system topic to the events message receiving endpoints of your choice. 
## Next steps -To learn about deploying Events using the Azure portal, see +To learn about deploying events using the Azure portal, see > [!div class="nextstepaction"]-> [Deploy Events using the Azure portal](./events-deploy-portal.md) +> [Deploy events using the Azure portal](events-deploy-portal.md) -To learn about the frequently asks questions (FAQs) about Events, see - -> [!div class="nextstepaction"] -> [Frequently asked questions about Events](./events-faqs.md) +To learn about troubleshooting events, see -To learn about troubleshooting Events, see +> [!div class="nextstepaction"] +> [Troubleshoot events](events-troubleshooting-guide.md) +To learn about the frequently asks questions (FAQs) about events, see + > [!div class="nextstepaction"]-> [Troubleshoot Events](./events-troubleshooting-guide.md) +> [Frequently asked questions about Events](events-faqs.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-troubleshooting-guide.md | Title: Troubleshoot Events - Azure Health Data Services -description: Learn how to troubleshoot Events. + Title: Troubleshoot events - Azure Health Data Services +description: Learn how to troubleshoot events. -# Troubleshoot Events +# Troubleshoot events > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -This article provides resources for troubleshooting Events. +This article provides resources to troubleshoot events. > [!IMPORTANT]-> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The Event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off. +> FHIR resource and DICOM image change data is only written and event messages are sent when the Events feature is turned on. The event feature doesn't send messages on past FHIR resource or DICOM image changes or when the feature is turned off. ## Resources for troubleshooting > [!IMPORTANT]-> Events currently supports only the following operations: +> Events currently supports the following operations: > > * **FhirResourceCreated** - The event emitted after a FHIR resource gets created successfully. > This article provides resources for troubleshooting Events. 
### Events message structures -Use this resource to learn about the Events message structures, required and nonrequired elements, and see sample Events messages: -* [Events message structures](./events-message-structure.md) +Use this resource to learn about the events message structures, required and nonrequired elements, and see sample Events messages: +* [Events message structures](events-message-structure.md) ### How to's -Use this resource to learn how to deploy Events in the Azure portal: -* [Deploy Events using the Azure portal](./events-deploy-portal.md) +Use this resource to learn how to deploy events in the Azure portal: +* [Deploy events using the Azure portal](events-deploy-portal.md) > [!IMPORTANT] > The Event Subscription requires access to whichever endpoint you chose to send Events messages to. For more information, see [Enable managed identity for a system topic](../../event-grid/enable-identity-system-topics.md). -Use this resource to learn how to use Events metrics: -* [How to use Events metrics](./events-display-metrics.md) +Use this resource to learn how to use events metrics: +* [How to use events metrics](events-display-metrics.md) -Use this resource to learn how to enable diagnostic settings for Events: -* [How to enable diagnostic settings for Events](./events-export-logs-metrics.md) +Use this resource to learn how to enable diagnostic settings for events: +* [How to enable diagnostic settings for events](events-export-logs-metrics.md) ## Contact support If you have a technical question about Events or if you have a support related issue, see [Create a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview) and complete the required fields under the **Problem description** tab. For more information about Azure support options, see [Azure support plans](https://azure.microsoft.com/support/options/#support-plans). 
## Next steps-In this article, you were provided resources for troubleshooting Events. +In this article, you were provided resources for troubleshooting events. -To learn about the frequently asked questions (FAQs) about Events, see +To learn about the frequently asked questions (FAQs) about events, see > [!div class="nextstepaction"]-> [Frequently asked questions about Events](events-faqs.md) +> [Frequently asked questions about events](events-faqs.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Events Use Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/events/events-use-metrics.md | Title: Use Events metrics - Azure Health Data Services -description: Learn how use Events metrics. + Title: Use events metrics - Azure Health Data Services +description: Learn how use events metrics. Last updated 07/11/2023 -# How to use Events metrics +# How to use events metrics > [!NOTE] > [Fast Healthcare Interoperability Resources (FHIR®)](https://www.hl7.org/fhir/) is an open healthcare specification. -In this article, learn how to use Events metrics using the Azure portal. +In this article, learn how to use events metrics using the Azure portal. > [!TIP] > To learn more about Azure Monitor and metrics, see [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md). > [!NOTE]-> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the Events message endpoint. +> For the purposes of this article, an [Azure Event Hubs](../../event-hubs/event-hubs-about.md) was used as the events message endpoint. ## Use metrics In this article, learn how to use Events metrics using the Azure portal. :::image type="content" source="media\events-display-metrics\events-metrics-subscription.png" alt-text="Screenshot of select the metrics button." lightbox="media\events-display-metrics\events-metrics-subscription.png"::: -4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events subscription metrics pages. +4. From this page, notice that the Event Hubs received the incoming message presented in the previous Events Subscription metrics pages. :::image type="content" source="media\events-display-metrics\events-metrics-event-hub.png" alt-text="Screenshot of displaying event hubs metrics." 
lightbox="media\events-display-metrics\events-metrics-event-hub.png"::: ## Next steps -In this tutorial, you learned how to use Events metrics using the Azure portal. +In this tutorial, you learned how to use events metrics using the Azure portal. -To learn how to export Events Azure Event Grid system diagnostic logs and metrics, see +To learn how to enable events diagnostic settings, see > [!div class="nextstepaction"]-> [Enable diagnostic settings for Events](events-enable-diagnostic-settings.md) +> [Enable diagnostic settings for events](events-enable-diagnostic-settings.md) FHIR® is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission. |
healthcare-apis | Configure Import Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md | To specify the Azure Storage account in JSON view, you need to use [REST API](/r Below steps walk through setting configurations for initial and incremental import mode. Choose the right import mode for your use case. -### Step 3.1: Set import configuration for Initial import mode. +### Step 3a: Set import configuration for Initial import mode. Do following changes to JSON: 1. Set enabled in importConfiguration to **true**. 2. Update the integrationDataStore with target storage account name. Do following changes to JSON: After you've completed this final step, you're ready to perform **Initial mode** import using $import. -### Step 3.2: Set import configuration for Incremental import mode. +### Step 3b: Set import configuration for Incremental import mode. Do following changes to JSON: 1. Set enabled in importConfiguration to **true**. |
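Steps 3a and 3b above both edit the FHIR service's JSON to enable `importConfiguration` and point `integrationDataStore` at the target storage account. A sketch of what the resulting fragment might look like for initial import mode — property names follow the article's steps, but the exact placement inside the resource JSON and the flag distinguishing initial from incremental mode are assumptions to verify against the full article:

```json
"importConfiguration": {
  "enabled": true,
  "initialImportMode": true,
  "integrationDataStore": "<target-storage-account-name>"
}
```

For incremental import mode, the same fragment would presumably set `initialImportMode` to `false` while keeping `enabled` as `true`.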
healthcare-apis | Configure Settings Convert Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-settings-convert-data.md | To access and use the default templates for your conversion requests, ensure tha > > The default templates are provided only to help you get started with your data conversion workflow. These default templates are not intended for production and might change when Microsoft releases updates for the FHIR service. To have consistent data conversion behavior across different versions of the FHIR service, you must do the following: >-> 1. Host your own copy of the templates in an Azure Container Registry instance. +> 1. Host your own copy of the templates in an [Azure Container Registry](../../container-registry/container-registry-intro.md) (ACR) instance. > 2. Register the templates to the FHIR service. > 3. Use your registered templates in your API calls. > 4. Verify that the conversion behavior meets your requirements. In the example code, two example custom fields `customfield_message` and `custom ## Host your own templates -We recommend that you host your own copy of templates in an Azure Container Registry (ACR) instance. Hosting your own templates and using them for `$convert-data` operations involves the following six steps: +It's recommended that you host your own copy of templates in an [Azure Container Registry](../../container-registry/container-registry-intro.md) (ACR) instance. ACR can be used to host your custom templates and support with versioning. ++Hosting your own templates and using them for `$convert-data` operations involves the following seven steps: 1. [Create an Azure Container Registry instance](#step-1-create-an-azure-container-registry-instance) 2. [Push the templates to your Azure Container Registry instance](#step-2-push-the-templates-to-your-azure-container-registry-instance)-3. 
[Enable Azure Managed Identity in your FHIR service instance](#step-3-enable-azure-managed-identity-in-your-fhir-service-instance) +3. [Enable Azure Managed identity in your FHIR service instance](#step-3-enable-azure-managed-identity-in-your-fhir-service-instance) 4. [Provide Azure Container Registry access to the FHIR service managed identity](#step-4-provide-azure-container-registry-access-to-the-fhir-service-managed-identity) 5. [Register the Azure Container Registry server in the FHIR service](#step-5-register-the-azure-container-registry-server-in-the-fhir-service) 6. [Configure the Azure Container Registry firewall for secure access](#step-6-configure-the-azure-container-registry-firewall-for-secure-access)+7. [Verify the $convert-data operation](#step-7-verify-the-convert-data-operation) ### Step 1: Create an Azure Container Registry instance -Read the [Introduction to container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own Azure Container Registry instance. We recommend that you place your Azure Container Registry instance in the same resource group as your FHIR service. +Read the [Introduction to container registries in Azure](../../container-registry/container-registry-intro.md) and follow the instructions for creating your own ACR instance. We recommend that you place your ACR instance in the same resource group as your FHIR service. ### Step 2: Push the templates to your Azure Container Registry instance -After you create an Azure Container Registry instance, you can use the **FHIR Converter: Push Templates** command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your Azure Container Registry instance. 
Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose. +After you create an ACR instance, you can use the **FHIR Converter: Push Templates** command in the [FHIR Converter extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-health-fhir-converter) to push your custom templates to your ACR instance. Alternatively, you can use the [Template Management CLI tool](https://github.com/microsoft/FHIR-Converter/blob/main/docs/TemplateManagementCLI.md) for this purpose. -To maintain different versions of custom templates in your ACR, you may push the image containing your custom templates into your ACR instance with different image tags. +To maintain different versions of custom templates in your Azure Container Registry, you may push the image containing your custom templates into your ACR instance with different image tags. * For more information about ACR registries, repositories, and artifacts, see [About registries, repositories, and artifacts](../../container-registry/container-registry-concepts.md). * For more information about image tag best practices, see [Recommendations for tagging and versioning container images](../../container-registry/container-registry-image-tag-version.md). To reference specific template versions in the API, be sure to use the exact image name and tag that contains the versioned template to be used. For the API parameter `templateCollectionReference`, use the appropriate **image name + tag** (for example: `<RegistryServer>/<imageName>:<imageTag>`). -### Step 3: Enable Azure Managed Identity in your FHIR service instance +### Step 3: Enable Azure Managed identity in your FHIR service instance 1. Go to your instance of the FHIR service in the Azure portal, and then select the **Identity** option. -2. Change the status to **On** to enable Managed Identity in the FHIR service. +2. 
Change the **Status** to **On** and select **Save** to enable the system-managed identity in the FHIR service. - ![Screenshot of the FHIR pane for enabling the managed identity feature.](media/convert-data/fhir-mi-enabled.png#lightbox) ### Step 4: Provide Azure Container Registry access to the FHIR service managed identity To reference specific template versions in the API, be sure to use the exact ima 2. Select **Add** > **Add role assignment**. If the **Add role assignment** option is unavailable, ask your Azure administrator to grant you the permissions for performing this task. - ![Screenshot of the "Access control" pane and the "Add role assignment" menu.](../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png) -- :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot of the 'Access control' pane and the 'Add role assignment' menu."::: + :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png" alt-text="Screenshot of the Access control pane and the 'Add role assignment' menu."::: 3. On the **Role** pane, select the [AcrPull](../../role-based-access-control/built-in-roles.md#acrpull) role. - [![Screenshot showing the "Add role assignment" pane.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)](../../../includes/role-based-access-control/media/add-role-assignment-page.png#lightbox) + :::image type="content" source="../../../includes/role-based-access-control/media/add-role-assignment-page.png" alt-text="Screenshot showing the Add role assignment pane." lightbox="../../../includes/role-based-access-control/media/add-role-assignment-page.png"::: 4. On the **Members** tab, select **Managed identity**, and then select **Select members**. 
For more information about assigning roles in the Azure portal, see [Azure built ### Step 5: Register the Azure Container Registry server in the FHIR service -You can register the Azure Container Registry server by using the Azure portal. +You can register the ACR server by using the Azure portal. To use the Azure portal: To use the Azure portal: 3. Select **Add** and then, in the dropdown list, select your registry server. 4. Select **Save**. - ![Screenshot of the Artifacts screen for registering an Azure Container Registry with a FHIR service.](media/convert-data/fhir-acr-add-registry.png#lightbox) + :::image type="content" source="media/convert-data/configure-settings-convert-data/fhir-acr-add-registry.png" alt-text="Screenshot of the Artifacts screen for registering an Azure Container Registry with a FHIR service." lightbox="media/convert-data/configure-settings-convert-data/fhir-acr-add-registry.png"::: -You can register up to 20 Azure Container Registry servers in the FHIR service. +You can register up to 20 ACR servers in the FHIR service. > [!NOTE] > It might take a few minutes for the registration to take effect. ### Step 6: Configure the Azure Container Registry firewall for secure access -1. In the Azure portal, on the left pane, select **Networking** for the Azure Container Registry instance. -- ![Screenshot of the Networking screen for configuring an Azure Container Registry firewall.](media/convert-data/networking-container-registry.png#lightbox) --2. On the **Public access** tab, select **Selected networks**. --3. In the **Firewall** section, specify the IP address in the **Address range** box. +There are many methods for securing ACR using the built-in firewall depending on your particular use case. -Add IP ranges to allow access from the Internet or your on-premises networks. 
+* [Connect privately to an Azure container registry using Azure Private Link](../../container-registry/container-registry-private-link.md)
+* [Configure public IP network rules](../../container-registry/container-registry-access-selected-networks.md)
+* [Azure Container Registry mitigating data exfiltration with dedicated data endpoints](../../container-registry/container-registry-dedicated-data-endpoints.md)
+* [Restrict access to a container registry using a service endpoint in an Azure virtual network](../../container-registry/container-registry-vnet.md)
+* [Allow trusted services to securely access a network-restricted container registry](../../container-registry/allow-access-trusted-services.md)
+* [Configure rules to access an Azure container registry behind a firewall](../../container-registry/container-registry-firewall-access-rules.md)
+* [Azure IP Ranges and Service Tags – Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519)
-The following table lists the IP addresses for the Azure regions where the FHIR service is available:
--| Azure region | Public IP address |
-|:|:|
-| Australia East | 20.53.47.210 |
-| Brazil South | 191.238.72.227 |
-| Canada Central | 20.48.197.161 |
-| Central India | 20.192.47.66 |
-| East US | 20.62.134.242, 20.62.134.244, 20.62.134.245 |
-| East US 2 | 20.62.60.115, 20.62.60.116, 20.62.60.117 |
-| France Central | 51.138.211.19 |
-| Germany North | 51.116.60.240 |
-| Germany West Central | 20.52.88.224 |
-| Japan East | 20.191.167.146 |
-| Japan West | 20.189.228.225 |
-| Korea Central | 20.194.75.193 |
-| North Central US | 52.162.111.130, 20.51.0.209 |
-| North Europe | 52.146.137.179 |
-| Qatar Central | 20.21.36.225 |
-| South Africa North | 102.133.220.199 |
-| South Central US | 20.65.134.83 |
-| Southeast Asia | 20.195.67.208 |
-| Sweden Central | 51.12.28.100 |
-| Switzerland North | 51.107.247.97 |
-| UK South | 51.143.213.211 |
-| UK West | 51.140.210.86 |
-| West Central US | 13.71.199.119 |
-| 
West Europe | 20.61.103.243, 20.61.103.244 | -| West US 2 | 20.51.13.80, 20.51.13.84, 20.51.13.85 | -| West US 3 | 20.150.245.165 | --You can also completely disable public access to your Azure Container Registry instance while still allowing access from your FHIR service. To do so: --1. In the Azure portal container registry, select **Networking**. -2. Select the **Public access** tab, select **Disabled**, and then select **Allow trusted Microsoft services to access this container registry**. --![Screenshot of the "Networking" pane for disabling public network access to an Azure Container Registry instance.](media/convert-data/configure-private-network-container-registry.png#lightbox) +> [!NOTE] +> The FHIR service has been registered as a trusted Microsoft service with Azure Container Registry. -### Verify the $convert-data operation +### Step 7: Verify the $convert-data operation Make a call to the `$convert-data` operation by specifying your template reference in the `templateCollectionReference` parameter: You should receive a `Bundle` response that contains the health data converted i ## Next steps -In this article, you've learned how to configure settings for `$convert-data` for converting health data into FHIR by using the FHIR service in Azure Health Data Services. +In this article, you've learned how to configure the settings for `$convert-data` to begin converting various health data formats into the FHIR format. For an overview of `$convert-data`, see |
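The verification call described above sends a FHIR `Parameters` resource to the `$convert-data` endpoint. The following sketch builds such a payload; the sample message content, registry image reference, and root template name are placeholder assumptions:

```python
import json

def build_convert_data_request(input_data: str,
                               template_collection_reference: str,
                               root_template: str) -> dict:
    """Build the Parameters payload for POST {{fhirUrl}}/$convert-data."""
    return {
        "resourceType": "Parameters",
        "parameter": [
            {"name": "inputData", "valueString": input_data},
            {"name": "inputDataType", "valueString": "Hl7v2"},
            # Use the exact image name and tag that contains the versioned
            # templates, in the form <RegistryServer>/<imageName>:<imageTag>.
            {"name": "templateCollectionReference",
             "valueString": template_collection_reference},
            {"name": "rootTemplate", "valueString": root_template},
        ],
    }

payload = build_convert_data_request(
    "MSH|^~\\&|...",                                # placeholder HL7v2 message
    "myregistry.azurecr.io/customtemplates:v1.0",   # hypothetical image:tag
    "ADT_A01",                                      # hypothetical root template
)
print(json.dumps(payload, indent=2))
```

Pinning an explicit tag, rather than a mutable tag such as `latest`, keeps conversions reproducible as you push new template versions to your registry.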
healthcare-apis | Fhir Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md | For more information, see [Supported FHIR features](fhir-features-supported.md). FHIR service is our implementation of the FHIR specification that sits in the Azure Health Data Services, which allows you to have a FHIR service and a DICOM service within a single workspace. Azure API for FHIR was our initial GA product and is still available as a stand-alone product. The main feature differences are: * FHIR service has a limit of 4 TB, and Azure API for FHIR supports more than 4 TB.-* FHIR service support [transaction bundles](https://www.hl7.org/fhir/http.html#transaction). 
+* FHIR service supports additional capabilities such as:
+** [Transaction bundles](https://www.hl7.org/fhir/http.html#transaction).
+** [Incremental Import](configure-import-data.md).
+** [Autoscaling](fhir-service-autoscale.md) is enabled by default.
* Azure API for FHIR has more platform features (such as customer managed keys, and cross region DR) that aren't yet available in FHIR service in Azure Health Data Services. ### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server? SMART (Substitutable Medical Applications and Reusable Technology) on FHIR is a ### Does the FHIR service support SMART on FHIR? -We have a basic SMART on FHIR proxy as part of the managed service. If this doesn’t meet your needs, you can use the open-source FHIR proxy for more advanced SMART scenarios. +Yes, SMART on FHIR capability is supported using [AHDS samples](https://aka.ms/azure-health-data-services-smart-on-fhir-sample). This is referred to as SMART on FHIR(Enhanced). 
SMART on FHIR(Enhanced) can be considered to meet requirements with [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg). For more information, visit [SMART on FHIR(Enhanced) Documentation](smart-on-fhir.md). + ### Can I create a custom FHIR resource? There are two basic Delete types supported within the FHIR service. These are [D ### Can I perform health checks on FHIR service? -To perform health check on FHIR service , enter `{{fhirurl}}/health/check` in the GET request. You should be able to see Status of FHIR service. HTTP Status code response with 200 and OverallStatus as "Healthy" in response, means your health check is succesful. -In case of errors, you will recieve error response with HTTP status code 404 (Not Found) or status code 500 (Internal Server Error), and detailed information in response body in some scenarios. +To perform a health check on the FHIR service, enter `{{fhirurl}}/health/check` in a GET request. A response with HTTP status code 200 and an OverallStatus of "Healthy" means your health check is successful. +In case of errors, you receive an error response with HTTP status code 404 (Not Found) or 500 (Internal Server Error), and in some scenarios detailed information in the response body. ## Next steps |
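The health-check contract described above (HTTP 200 plus an OverallStatus of "Healthy") can be evaluated mechanically. A minimal sketch; the exact JSON shape of the response body is an assumption for illustration:

```python
import json

def is_healthy(status_code: int, body: str) -> bool:
    """Interpret a GET {{fhirurl}}/health/check response.

    A 200 response whose OverallStatus is "Healthy" means the check
    succeeded; 404 or 500 indicate an error. The body's JSON shape here
    is an assumption, not the documented schema.
    """
    if status_code != 200:
        return False
    return json.loads(body).get("OverallStatus") == "Healthy"

print(is_healthy(200, '{"OverallStatus": "Healthy"}'))   # True
print(is_healthy(500, '{"error": "server error"}'))      # False
```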
healthcare-apis | Frequently Asked Questions Convert Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/frequently-asked-questions-convert-data.md | You can use the `$convert-data` endpoint as a component within an ETL (extract, However, the `$convert-data` operation itself isn't an ETL pipeline. -## How can I persist the data into the FHIR service? +## Where can I find an example of an ETL pipeline that I can reference? ++There's an example published in the [Azure Data Factory Template Gallery](../../data-factory/solution-templates-introduction.md#template-gallery) named **Transform HL7v2 health data to FHIR R4 format and write to ADLS Gen2**. This template transforms HL7v2 messages read from an Azure Data Lake Storage (ADLS) Gen2 or an Azure Blob Storage account into the FHIR R4 format. It then persists the transformed FHIR bundle JSON file into an ADLS Gen2 or a Blob Storage account. Once you're in the Azure Data Factory Template Gallery, you can search for the template. +++> [!IMPORTANT] +> The purpose of this template is to help you get started with an ETL pipeline. Any steps in this pipeline can be removed, added, edited, or customized to fit your needs. +> +> In a scenario with batch processing of HL7v2 messages, this template does not take sequencing into account. Post processing will be needed if sequencing is a requirement. ++## How can I persist the converted data into the FHIR service using Postman? You can use the FHIR service's APIs to persist the converted data into the FHIR service by using `POST {{fhirUrl}}/{{FHIR resource type}}` with the request body containing the FHIR resource to be persisted in JSON format. -* For more information about using Postman with the FHIR service, see [Access the Azure Health Data Services FHIR service using Postman](use-postman.md). 
+For more information about using Postman with the FHIR service, see [Access the Azure Health Data Services FHIR service using Postman](use-postman.md). ## Is there a difference in the experience of the $convert-data endpoint in Azure API for FHIR versus in the Azure Health Data Services? |
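The persistence call described in the answer above (`POST {{fhirUrl}}/{{FHIR resource type}}`) can be sketched as follows. The FHIR service URL is a hypothetical placeholder and the authorization header is omitted from this sketch:

```python
import json
from urllib import request

def build_persist_request(fhir_url: str, resource: dict) -> request.Request:
    """Build (but don't send) the POST {{fhirUrl}}/{{FHIR resource type}}
    request that persists one converted FHIR resource. An Authorization
    header with a bearer token would be required in practice."""
    resource_type = resource["resourceType"]
    return request.Request(
        url=f"{fhir_url}/{resource_type}",
        data=json.dumps(resource).encode("utf-8"),
        headers={"Content-Type": "application/fhir+json"},
        method="POST",
    )

req = build_persist_request(
    "https://myworkspace-myfhir.fhir.azurehealthcareapis.com",  # hypothetical
    {"resourceType": "Patient", "name": [{"family": "Doe"}]},
)
print(req.get_method(), req.full_url)
```

Note that the resource type in the URL path is taken from the resource body itself, which keeps the two from drifting apart.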
healthcare-apis | Selectable Search Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/selectable-search-parameters.md | -# Selectable search parameter capability +# Selectable search parameter capability Searching for resources is fundamental to FHIR. Each resource in FHIR carries information as a set of elements, and search parameters work to query the information in these elements. When the FHIR service in Azure Health Data Services is provisioned, inbuilt search parameters are enabled by default. During the ingestion of data in the FHIR service, specific properties from FHIR resources are extracted and indexed with these search parameters to make searches efficient. The selectable search parameter functionality allows you to enable or disable inbuilt search parameters. By enabling only the search parameters you need, this functionality helps you store more resources in the allocated storage space and improves performance. |
healthcare-apis | Smart On Fhir | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md | The following tutorials provide steps to enable SMART on FHIR applications with FHIR Ser - After registering the application, make note of the applicationId for the client application. - Ensure you have access to the Azure subscription of the FHIR service, to create resources and add role assignments. -## SMART on FHIR Enhanced using Azure Health Data Services Samples +## SMART on FHIR using Azure Health Data Services Samples (SMART on FHIR (Enhanced)) -### Step 1 : Set up FHIR SMART user role +### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role is able to access the FHIR service if their requests comply with the SMART on FHIR Implementation Guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role is then limited by the resources associated with their fhirUser compartment and the restrictions in the clinical scopes. -### Step 2 : FHIR server integration with samples +### Step 2: FHIR server integration with samples For integration with Azure Health Data Services samples, follow the steps in the samples open-source solution. -**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. This steps listed in the document will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more). +**[Click on the link](https://aka.ms/azure-health-data-services-smart-on-fhir-sample)** to navigate to Azure Health Data Service Samples OSS. 
The steps listed in the document enable integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more). > [!NOTE] > Samples are open-source code, and you should review the information and licensing terms on GitHub before using them. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg) compliance, using Azure Active Directory as the identity provider workflow. For integration with Azure Health Data Services samples, you would need to follo <summary> Click to expand! </summary> > [!NOTE]-> This is another option to using "SMART on FHIR Enhanced using AHDS Samples" mentioned above. We suggest you to adopt SMART on FHIR enhanced. SMART on FHIR Proxy option is legacy option. -> SMART on FHIR enhanced version provides added capabilities than SMART on FHIR proxy. SMART on FHIR enhanced capability can be considered to meet requirements with [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg). +> This is an alternative to SMART on FHIR(Enhanced) using AHDS Samples mentioned above. We suggest you adopt SMART on FHIR(Enhanced); the SMART on FHIR proxy is a legacy option. +> SMART on FHIR(Enhanced) provides more capabilities than the SMART on FHIR proxy. 
SMART on FHIR(Enhanced) can be considered to meet requirements with [SMART on FHIR Implementation Guide (v 1.0.0)](https://hl7.org/fhir/smart-app-launch/1.0.0/) and [§170.315(g)(10) Standardized API for patient and population services criterion](https://www.healthit.gov/test-method/standardized-api-patient-and-population-services#ccg). -### Step 1 : Set admin consent for your client application +### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources. Add the reply URL to the public client application that you created earlier for <!![Reply URL configured for the public client](media/tutorial-smart-on-fhir/configure-reply-url.png)> -### Step 3 : Get a test patient +### Step 3: Get a test patient To test the FHIR service and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient. -### Step 4 : Download the SMART on FHIR app launcher +### Step 4: Download the SMART on FHIR app launcher The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup. Use this command to run the application: dotnet run ``` -### Step 5 : Test the SMART on FHIR proxy +### Step 5: Test the SMART on FHIR proxy After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen: |
healthcare-apis | Use Postman | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/use-postman.md | In this article, you learned how to access the FHIR service in Azure Health Data >[What is FHIR service?](overview.md) -For a starter collection of sample Postman queries, please see our [samples repo](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/sample-postman-queries) on Github. +For a starter collection of sample Postman queries, please see our [samples repo](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/sample-postman-queries) on GitHub. FHIR® is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7. |
healthcare-apis | Healthcare Apis Configure Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-configure-private-link.md | Ensure the region for the new private endpoint is the same as the region for you [![Screen image of the Azure portal Basics Tab.](media/private-link/private-link-basics.png)](media/private-link/private-link-basics.png#lightbox) -For the resource type, search and select **Microsoft.HealthcareApis/services** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated. +For the resource type, search and select **Microsoft.HealthcareApis/workspaces** from the drop-down list. For the resource, select the workspace in the resource group. The target subresource, **healthcareworkspace**, is automatically populated. [![Screen image of the Azure portal Resource tab.](media/private-link/private-link-resource.png)](media/private-link/private-link-resource.png#lightbox) |
healthcare-apis | Overview Of Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-samples.md | Title: The MedTech service scenario-based mappings samples - Azure Health Data Services + Title: MedTech service scenario-based mappings samples - Azure Health Data Services description: Learn about the MedTech service scenario-based mappings samples. -The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. Theses samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings. +The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. These samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings. 
## Sample resources Each MedTech service scenario-based sample contains the following resources: ## CalculatedContent -[Conversions using functions](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions) +[Conversions using functions](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings/calculatedcontent/conversions-using-functions) ## IotJsonPathContent -[Single device message into multiple resources](https://github.com/Azure-Samples/azure-health-data-services-samples/tree/main/samples/medtech-service-mappings/iotjsonpathcontent/single-device-message-into-multiple-resources) +[Single device message into multiple resources](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings/iotjsonpathcontent/single-device-message-into-multiple-resources) ## Next steps |
healthcare-apis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
industrial-iot | Tutorial Publisher Deploy Opc Publisher Standalone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md | A typical set of IoT Edge Module Container Create Options for OPC Publisher is: { "Hostname": "opcpublisher", "Cmd": [- "--pf=./pn.json", + "--pf=/appdata/pn.json", "--aa" ], "HostConfig": { |
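The corrected `--pf=/appdata/pn.json` path assumes the published-nodes file is mounted into the container at `/appdata`. A minimal sketch of the full container create options with a host bind; the host directory `/iiotedge` is an assumption:

```json
{
  "Hostname": "opcpublisher",
  "Cmd": [
    "--pf=/appdata/pn.json",
    "--aa"
  ],
  "HostConfig": {
    "Binds": [
      "/iiotedge:/appdata"
    ]
  }
}
```

With this bind in place, a `pn.json` file placed in `/iiotedge` on the IoT Edge host appears as `/appdata/pn.json` inside the module, which is the path the `--pf` option points to.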
industry | Get Sensor Data From Sensor Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/get-sensor-data-from-sensor-partner.md | Title: Get sensor data from the partners description: This article describes how to get sensor data from partners. + Last updated 11/04/2019 |
industry | Ingest Historical Telemetry Data In Azure Farmbeats | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industry/agriculture/ingest-historical-telemetry-data-in-azure-farmbeats.md | |
internet-peering | Peering Service Partner Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/peering-service-partner-overview.md | + + Title: Azure Peering Service partner overview +description: Learn how to become an Azure Peering Service partner. ++++ Last updated : 08/18/2023+++# Azure Peering Service partner overview ++This article helps you understand how to become an Azure Peering Service partner. It also describes the different types of Peering Service connections and the monitoring platform. For more information about Azure Peering Service, see [Azure Peering Service overview](../peering-service/about.md) ++## Peering Service partner requirements ++To become a Peering Service partner, follow these technical requirements: ++- The Peer MUST provide its own Autonomous System Number (ASN), which MUST be public. +- The Peer MUST have redundant Interconnect (PNI) at each interconnect location to ensure local redundancy. +- The Peer MUST supply and advertise their own publicly routable IPv4 address space used by Peer's endpoints. +- The Peer MUST supply detail of what class of traffic and endpoints are housed in each advertised subnet. +- The Peer MUST NOT terminate the peering on a device running a stateful firewall. +- The Peer CANNOT have two local connections configured on the same router, as diversity is required. +- The Peer CANNOT apply rate limiting to their connection. +- The Peer CANNOT configure a local redundant connection as a backup connection. Backup connections must be in a different location than primary connections. +- It's recommended to create Peering Service peerings in multiple locations so geo-redundancy can be achieved. +- Primary, backup, and redundant sessions all must have the same bandwidth. +- All infrastructure prefixes are registered in the Azure portal and advertised with community string 8075:8007. 
+- Microsoft configures all the interconnect links as LAG (link bundles) by default, so the peer MUST support LACP (Link Aggregation Control Protocol) on the interconnect links. ++If you follow all of the requirements listed and would like to become a Peering Service partner, an agreement must be signed. Contact peeringservice@microsoft.com to get started. ++## Types of Peering Service connections ++To become a Peering Service partner, you must request a direct peering interconnect with Microsoft. Interconnects come in three types depending on your use case. ++- **AS8075** - A direct peering interconnect enabled for Peering Service made for Internet Service providers (ISPs) +- **AS8075 (with Voice)** - A direct peering interconnect enabled for Peering Service made for Internet Service providers (ISPs). This type is optimized for communications services (messaging, conferencing, etc.), and allows you to integrate your communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams. +- **AS8075 (with exchange route server)** - A direct peering interconnect enabled for Peering Service and made for Internet Exchange providers (IXPs) who require a route server. ++### Monitoring platform ++Service monitoring is offered to analyze user traffic and routing. Metrics are available in the Azure portal to track the performance and availability of your Peering Service connection. For more information, see [Peering Service monitoring platform](../peering-service/about.md#monitoring-platform). ++In addition, Peering Service partners are able to see received routes reported in the Azure portal. +++## Next steps ++- To establish a Direct interconnect for Peering Service, see [Internet peering for Peering Service walkthrough](walkthrough-peering-service-all.md). 
+- To establish a Direct interconnect for Peering Service Voice, see [Internet peering for Peering Service Voice walkthrough](walkthrough-communications-services-partner.md). +- To establish a Direct interconnect for Communications Exchange with Route Server, see [Internet peering for MAPS Exchange with Route Server walkthrough](walkthrough-exchange-route-server-partner.md). |
internet-peering | Walkthrough Communications Services Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-communications-services-partner.md | Title: Internet peering for Peering Service Voice Services walkthrough + Title: Internet peering for Peering Service Voice walkthrough description: Learn about Internet peering for Peering Service Voice Services, its requirements, the steps to establish direct interconnect, and how to register and activate a prefix. -# Internet peering for Peering Service Voice Services walkthrough +# Internet peering for Peering Service Voice walkthrough -In this article, you learn steps to establish a Peering Service interconnect between a voice services provider and Microsoft. +In this article, you learn how to establish a Peering Service interconnect between a voice services provider and Microsoft. **Voice Services Providers** are the organizations that offer communication services (messaging, conferencing, and other communications services) and want to integrate their communications services infrastructure (SBC, SIP gateways, and other infrastructure devices) with Azure Communication Services and Microsoft Teams. |
internet-peering | Walkthrough Peering Service All | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-peering-service-all.md | To establish direct interconnect for Peering Service, follow these requirements: ## Establish Direct Interconnect for Peering Service +Ensure that you sign a Microsoft Azure Peering Service agreement before proceeding. For more information, see [Azure Peering Service partner overview requirements](./peering-service-partner-overview.md#peering-service-partner-requirements). + To establish a Peering Service interconnect with Microsoft, follow the following steps: ### 1. Associate your public ASN with your Azure subscription |
iot-central | Howto Use Commands | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-commands.md | The following screenshot shows how the successful command response displays in t :::image type="content" source="media/howto-use-commands/simple-command-ui.png" alt-text="Screenshot showing how to view command payload for a standard command." lightbox="media/howto-use-commands/simple-command-ui.png"::: +> [!NOTE] +> For standard commands, there's a timeout of 30 seconds. If a device doesn't respond within 30 seconds, IoT Central assumes that the command failed. This timeout period isn't configurable. + ## Long-running commands In a long-running command, a device doesn't immediately complete the command. Instead, the device acknowledges receipt of the command and then later confirms that the command completed. This approach lets a device complete a long-running operation without keeping the connection to IoT Central open. |
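The long-running pattern described above — acknowledge within the 30-second window, then confirm completion later — can be sketched schematically. This is not the device SDK API: `respond` and `update_property` are hypothetical stand-ins for the SDK's direct-method response and property-update calls, and the `ac`/`av` acknowledgement fields mirror the writable-property convention as an assumption:

```python
def handle_long_running_command(request, respond, update_property):
    """Schematic long-running command handler.

    `respond` and `update_property` are hypothetical stand-ins for the
    device SDK's method-response and property-update calls.
    """
    # 1. Acknowledge receipt inside the 30-second window (202 = accepted).
    respond(request["id"], 202)
    # 2. ...the long-running operation runs here (often on another thread)...
    # 3. Report completion through a property update named after the command.
    update_property(request["name"],
                    {"value": "completed", "ac": 200, "av": request["version"]})

events = []
handle_long_running_command(
    {"id": "42", "name": "firmwareUpdate", "version": 7},
    lambda rid, status: events.append(("ack", rid, status)),
    lambda name, payload: events.append(("complete", name, payload["ac"])),
)
print(events)  # [('ack', '42', 202), ('complete', 'firmwareUpdate', 200)]
```

Acknowledging first lets the device drop its connection while the work runs, which is the point of the long-running pattern.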
iot-develop | Quickstart Devkit Stm B L475e Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-stm-b-l475e-iot-hub.md | Title: Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub quickstart -description: Use Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry. + Title: Quickstart - Connect an STMicroelectronics B-L475E-IOT01A to Azure IoT Hub +description: A quickstart that uses Azure RTOS embedded software to connect an STMicroelectronics B-L475E-IOT01A device to Azure IoT Hub and send telemetry. ms.devlang: c Last updated 06/27/2023+# CustomerIntent: As an embedded device developer, I want to use Azure RTOS to connect my device to Azure IoT Hub, so that I can learn about device connectivity and development. # Quickstart: Connect an STMicroelectronics B-L475E-IOT01A Discovery kit to IoT Hub For debugging the application, see [Debugging with Visual Studio Code](https://g [!INCLUDE [iot-develop-cleanup-resources](../../includes/iot-develop-cleanup-resources.md)] -## Next steps +## Next step In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You connected the STM DevKit to Azure, and carried out tasks such as viewing telemetry and calling a method on the device. |
iot-edge | Debug Module Vs Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/debug-module-vs-code.md | Select **Start Debugging** or select **F5**. Select the process to attach to. In The Docker and Moby engines support SSH connections to containers allowing you to debug in Visual Studio Code connected to a remote device. You need to meet the following prerequisites before you can use this feature. +Remote SSH debugging prerequisites may be different depending on the language you are using. The following sections describe the setup for .NET. For information on other languages, see [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh) for an overview. Details about how to configure remote debugging are included in debugging sections for each language in the Visual Studio Code documentation. + ### Configure Docker SSH tunneling 1. Follow the steps in [Docker SSH tunneling](https://code.visualstudio.com/docs/containers/ssh#_set-up-ssh-tunneling) to configure SSH tunneling on your development computer. SSH tunneling requires public/private key pair authentication and a Docker context defining the remote device endpoint. |
iot-edge | How To Manage Device Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md | Title: Manage IoT Edge certificates+ description: How to install and manage certificates on an Azure IoT Edge device to prepare for production deployment. -> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You do not need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. In many cases, it's actually an intermediate CA certificate. +> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You don't need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. Often, it's actually an intermediate CA certificate. ## Prerequisites Edge Daemon issues module server and identity certificates for use by Edge modul ### Renewal -Server certificates may be issued off the Edge CA certificate or through a DPS-configured CA. Regardless of the issuance method, these certificates must be renewed by the module. +Server certificates may be issued off the Edge CA certificate. Regardless of the issuance method, these certificates must be renewed by the module. If you develop a custom module, you must implement the renewal logic in your module. ++The *edgeHub* module supports a certificate renewal feature. You can configure the *edgeHub* module server certificate renewal using the following environment variables: ++* **ServerCertificateRenewAfterInMs**: Sets the duration in milliseconds when the *edgeHub* server certificate is renewed irrespective of certificate expiry time. 
+* **MaxCheckCertExpiryInMs**: Sets the duration in milliseconds when *edgeHub* service checks the *edgeHub* server certificate expiration. If the variable is set, the check happens irrespective of certificate expiry time. ++For more information about the environment variables, see [EdgeHub and EdgeAgent environment variables](https://github.com/Azure/iotedge/blob/main/doc/EnvironmentVariables.md). ## Changes in 1.2 and later |
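As a sketch of how the renewal variables above might be set, a deployment manifest can pass them to the *edgeHub* system module through its `env` section. The image tag and the millisecond values here are illustrative only:

```json
"edgeHub": {
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "mcr.microsoft.com/azureiotedge-hub:1.4"
  },
  "env": {
    "ServerCertificateRenewAfterInMs": { "value": "5184000000" },
    "MaxCheckCertExpiryInMs": { "value": "3600000" }
  }
}
```

In this example, `5184000000` ms renews the server certificate after 60 days regardless of expiry, and `3600000` ms checks the certificate's expiration hourly.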
iot-edge | Iot Edge Certs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md | Title: Understand how IoT Edge uses certificates for security+ description: How Azure IoT Edge uses certificates to validate devices, modules, and downstream devices, enabling secure connections between them. |
iot-edge | Tutorial Configure Est Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md | Using Device Provisioning Service allows you to automatically issue and renew ce 1. In the [Azure portal](https://portal.azure.com), navigate to your instance of IoT Hub Device Provisioning Service. 1. Under **Settings**, select **Manage enrollments**.-1. Select **Add enrollment group** then complete the following steps to configure the enrollment: +1. Select **Add enrollment group** then complete the following steps to configure the enrollment. +1. On the **Registration + provisioning** tab, choose the following settings: - :::image type="content" source="./media/tutorial-configure-est-server/dps-add-enrollment.png" alt-text="A screenshot adding DPS enrollment group using the Azure portal."::: + :::image type="content" source="./media/tutorial-configure-est-server/device-provisioning-service-add-enrollment-latest.png" alt-text="A screenshot adding DPS enrollment group using the Azure portal."::: |Setting | Value | |--||- |Group name | Provide a friendly name for this group enrollment | - |Attestation Type | Select **Certificate** | - |IoT Edge device | Select **True** | - |Certificate Type | Select **CA Certificate** | + |Attestation mechanism| Select **X.509 certificates uploaded to this Device Provisioning Service instance** | |Primary certificate | Choose your certificate from the dropdown list |+ |Group name | Provide a friendly name for this group enrollment | + |Provisioning status | Select **Enable this enrollment** checkbox | ++1. On the **IoT hubs** tab, choose your IoT Hub from the list. +1. On the **Device settings** tab, select the **Enable IoT Edge on provisioned devices** checkbox. The other settings aren't relevant to the tutorial. You can accept the default settings. -1. Select **Save**. +1. Select **Review + create**. 
Now that an enrollment exists for the device, the IoT Edge runtime can automatically manage device certificates for the linked IoT Hub. |
iot-hub-device-update | Create Update Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update-group.md | An Azure CLI environment: To create a device group, the first step is to add a tag to the target set of devices in IoT Hub. Tags can only be successfully added to your device after it has been connected to Device Update. -Device Update tags use the following format: +Device Update tags use the format in the following example: ```json+"etag": "", +"deviceId": "", +"deviceEtag": "", +"version": <version>, "tags": { "ADUGroup": "<CustomTagValue>" } |
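For example, the `ADUGroup` tag can be applied from the Azure CLI by updating the device twin. The hub name, device ID, and tag value below are placeholders:

```shell
# Add the ADUGroup tag to a connected device's twin; names are hypothetical.
az iot hub device-twin update \
  --hub-name contoso-hub \
  --device-id sensor-01 \
  --tags '{"ADUGroup": "RoomSensors"}'
```

All devices carrying the same `ADUGroup` tag value become members of the same Device Update group.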
iot-hub-device-update | Create Update | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/create-update.md | The [az iot du init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5) * `--update-provider`, `--update-name`, and `--update-version`: These three parameters define the **updateId** object that is a unique identifier for each update. * `--compat`: The **compatibility** object is a set of name-value pairs that describe the properties of a device that this update is compatible with. * The same exact set of compatibility properties can't be used with more than one provider and name combination.-* `--step`: The update handler on the device (for example, `microsoft/script:1`, `microsoft/swupdate:1`, or `microsoft/apt:1`) and its associated properties for this update. +* `--step`: The update **handler** on the device (for example, `microsoft/script:1`, `microsoft/swupdate:1`, or `microsoft/apt:1`) and its associated **properties** for this update. * `--file`: The paths to your update file or files. For more information about these parameters, see [Import schema and API information](import-schema.md). For handler properties, you may need to escape certain characters in your JSON. The `init` command supports advanced scenarios, including the [related files feature](related-files.md) that allows you to define the relationship between different update files. For more examples and a complete list of optional parameters, see [az iot du init v5](/cli/azure/iot/du/update/init#az-iot-du-update-init-v5). -Once you've created your import manifest and saved it as a JSON file, you're ready to [import your update](import-update.md). +Once you've created your import manifest and saved it as a JSON file, you're ready to [import your update](import-update.md). If you are planning to use the Azure portal UI for importing, be sure to name your import manifest in the following format: "\<manifestname\>.importmanifest.json". 
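Putting the parameters above together, a sketch of generating a manifest and saving it with the portal-friendly name might look like the following. The provider, name, version, handler, and file path are all placeholder values:

```shell
# Generate an import manifest and save it as <manifestname>.importmanifest.json
# so the Azure portal import UI recognizes it. All values are hypothetical.
az iot du update init v5 \
  --update-provider Contoso \
  --update-name Toaster \
  --update-version 1.0 \
  --compat manufacturer=Contoso model=Toaster \
  --step handler=microsoft/script:1 \
  --file path=./firmware.bin \
  > ./contoso-toaster-1.0.importmanifest.json
```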
## Create an advanced Device Update import manifest for a proxy update |
iot-hub | Iot Hub Devguide Quotas Throttling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-quotas-throttling.md | IoT Hub enforces other operational limits: | Operation | Limit | | | -- |-| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. The only way to increase this limit is to contact [Microsoft Support](https://azure.microsoft.com/support/options/).| +| Devices | The total number of devices plus modules that can be registered to a single IoT hub is capped at 1,000,000. | | File uploads | 10 concurrent file uploads per device. | | Jobs<sup>1</sup> | Maximum concurrent jobs are 1 (for Free and S1), 5 (for S2), and 10 (for S3). However, the max concurrent [device import/export jobs](iot-hub-bulk-identity-mgmt.md) is 1 for all tiers. <br/>Job history is retained up to 30 days. | | Additional endpoints | Basic and standard SKU hubs may have 10 additional endpoints. Free SKU hubs may have one additional endpoint. | |
iot-hub | Iot Hub Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-managed-identity.md | In IoT Hub, managed identities can be used for egress connectivity from IoT Hub ## System-assigned managed identity -### Add and remove a system-assigned managed identity in Azure portal +### Enable or disable system-assigned managed identity in Azure portal -1. Sign in to the Azure portal and navigate to your desired IoT hub. -2. Navigate to **Identity** in your IoT Hub portal -3. Under **System-assigned** tab, select **On** and click **Save**. -4. To remove system-assigned managed identity from an IoT hub, select **Off** and click **Save**. +1. Sign in to the Azure portal and navigate to your IoT hub. +2. Select **Identity** from the **Security settings** section of the navigation menu. +3. Select the **System-assigned** tab. +4. Set the system-assigned managed identity **Status** to **On** or **Off**, then select **Save**. - :::image type="content" source="./media/iot-hub-managed-identity/system-assigned.png" alt-text="Screenshot showing where to turn on system-assigned managed identity for an I O T hub."::: + >[!NOTE] + >You can't turn off system-assigned managed identity while it's in use. Make sure that no custom endpoints are using system-assigned managed identity authentication before disabling the feature. ++ :::image type="content" source="./media/iot-hub-managed-identity/system-assigned.png" alt-text="Screenshot showing where to turn on system-assigned managed identity for an IoT hub."::: ### Enable system-assigned managed identity at hub creation time using ARM template |
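The same on/off toggle is available from the Azure CLI; a minimal sketch with placeholder hub and resource group names:

```shell
# Enable the system-assigned managed identity on an existing IoT hub.
az iot hub identity assign --name contoso-hub --resource-group contoso-rg --system-assigned

# Disable it again (only after no custom endpoints depend on it).
az iot hub identity remove --name contoso-hub --resource-group contoso-rg --system-assigned
```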
iot-hub | Module Twins Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-cli.md | This article shows you how to create an Azure CLI session in which you: * Make sure that port 8883 is open in your firewall. The samples in this article use MQTT protocol, which communicates over port 8883. This port can be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub). +## Module authentication ++You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example: ++```bash +openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" +``` + ## Prepare the Cloud Shell If you want to use the Azure Cloud Shell, you must first launch and configure it. If you use the CLI locally, skip to the [Prepare a CLI session](#prepare-a-cli-session) section. |
iot-hub | Module Twins Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-dotnet.md | At the end of this article, you have two .NET console apps: * An IoT hub. Create one with the [CLI](iot-hub-create-using-cli.md) or the [Azure portal](iot-hub-create-through-portal.md). +## Module authentication ++You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example: ++```bash +openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" +``` + ## Get the IoT hub connection string [!INCLUDE [iot-hub-howto-module-twin-shared-access-policy-text](../../includes/iot-hub-howto-module-twin-shared-access-policy-text.md)] |
iot-hub | Module Twins Node | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-node.md | At the end of this article, you have two Node.js apps: * Node.js version 10.0.x or later. [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-node/tree/main/doc/node-devbox-setup.md) describes how to install Node.js for this article on either Windows or Linux. +## Module authentication ++You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example: ++```bash +openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" +``` + ## Get the IoT hub connection string [!INCLUDE [iot-hub-howto-module-twin-shared-access-policy-text](../../includes/iot-hub-howto-module-twin-shared-access-policy-text.md)] |
iot-hub | Module Twins Portal Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-portal-dotnet.md | In this article, you will learn how to: * A registered device. Register one in the [Azure portal](iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub). +## Module authentication ++You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example: ++```bash +openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" +``` + ## Create a module identity in the portal Within one device identity, you can create up to 20 module identities. To add an identity, follow these steps: |
iot-hub | Module Twins Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/module-twins-python.md | At the end of this article, you have three Python apps: * [Python version 3.7 or later](https://www.python.org/downloads/) is recommended. Make sure to use the 32-bit or 64-bit installation as required by your setup. When prompted during the installation, make sure to add Python to your platform-specific environment variable. +## Module authentication ++You can use symmetric keys or X.509 certificates to authenticate module identities. For X.509 certificate authentication, the module's certificate *must* have its common name (CN) formatted like `CN=<deviceid>/<moduleid>`. For example: ++```bash +openssl req -new -key d1m1.key.pem -out d1m1.csr -subj "/CN=device01\/module01" +``` + ## Get the IoT hub connection string In this article, you create a back-end service that adds a device in the identity registry and then adds a module to that device. This service requires the **registry write** permission (which also includes **registry read**). You also create a service that adds desired properties to the module twin for the newly created module. This service needs the **service connect** permission. Although there are default shared access policies that grant these permissions individually, in this section, you create a custom shared access policy that contains both of these permissions. |
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
iot-hub | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
iot | Iot Overview Solution Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-solution-extensibility.md | IoT Hub and IoT Central both let you [route device telemetry to different endpoi In addition to device telemetry, both IoT Hub and IoT Central can send property update and device connection status messages to other endpoints. Routing these messages enables you to build integrations with other services that need device status information: -- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md).-- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md).+- [IoT Hub routing](../iot-hub/iot-hub-devguide-messages-d2c.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such as [Azure Event Hubs](../event-hubs/event-hubs-about.md), [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), and [Cosmos DB](../cosmos-db/introduction.md). - [IoT Hub Event Grid integration](../iot-hub/iot-hub-event-grid.md) uses Azure Event Grid to distribute IoT Hub events such as device connectivity, device lifecycle, and telemetry events to other Azure services. 
- [IoT Central rules](../iot-central/core/howto-configure-rules.md) can send device telemetry and property values to webhooks, [Microsoft Power Automate](/power-automate/getting-started/), and [Azure Logic Apps](/azure/logic-apps/logic-apps-overview/).
- [IoT Central data export](../iot-central/core/howto-export-data.md) can send device telemetry, property change events, device connectivity events, and device lifecycle events to destinations such as [Azure Blob Storage](../storage/blobs/storage-blobs-overview.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview/), [Azure Event Hubs](../event-hubs/event-hubs-about.md), and webhooks. |
key-vault | Certificate Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/certificate-scenarios.md | Certificates are composed of three interrelated resources linked together as a K ## Creating your first Key Vault certificate Before a certificate can be created in a Key Vault (KV), prerequisite steps 1 and 2 must be successfully accomplished and a key vault must exist for this user / organization. -**Step 1** - Certificate Authority (CA) Providers +**Step 1:** Certificate Authority (CA) Providers - On-boarding as the IT Admin, PKI Admin or anyone managing accounts with CAs, for a given company (ex. Contoso) is a prerequisite to using Key Vault certificates. The following CAs are the current partnered providers with Key Vault. Learn more [here](./create-certificate.md#partnered-ca-providers) - DigiCert - Key Vault offers OV TLS/SSL certificates with DigiCert. - GlobalSign - Key Vault offers OV TLS/SSL certificates with GlobalSign. -**Step 2** - An account admin for a CA provider creates credentials to be used by Key Vault to enroll, renew, and use TLS/SSL certificates via Key Vault. +**Step 2:** An account admin for a CA provider creates credentials to be used by Key Vault to enroll, renew, and use TLS/SSL certificates via Key Vault. -**Step 3** - A Contoso admin, along with a Contoso employee (Key Vault user) who owns certificates, depending on the CA, can get a certificate from the admin or directly from the account with the CA. +**Step 3a:** A Contoso admin, along with a Contoso employee (Key Vault user) who owns certificates, depending on the CA, can get a certificate from the admin or directly from the account with the CA. - Begin an add credential operation to a key vault by [setting a certificate issuer](/rest/api/keyvault/certificates/set-certificate-issuer/set-certificate-issuer) resource. A certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. 
It is used to provide information about the source of a KV certificate: issuer name, provider, credentials, and other administrative details. - Ex. MyDigiCertIssuer Certificates are composed of three interrelated resources linked together as a K For more information on creating accounts with CA Providers, see the related post on the [Key Vault blog](/archive/blogs/kv/manage-certificates-via-azure-key-vault). -**Step 3.1** - Set up [certificate contacts](/rest/api/keyvault/certificates/set-certificate-contacts/set-certificate-contacts) for notifications. This is the contact for the Key Vault user. Key Vault does not enforce this step. +**Step 3b:** Set up [certificate contacts](/rest/api/keyvault/certificates/set-certificate-contacts/set-certificate-contacts) for notifications. This is the contact for the Key Vault user. Key Vault does not enforce this step. -Note - This process, through step 3.1, is a onetime operation. +Note - This process, through **Step 3b**, is a one-time operation. ## Creating a certificate with a CA partnered with Key Vault ![Create a certificate with a Key Vault partnered certificate authority](../media/certificate-authority-2.png) -**Step 4** - The following descriptions correspond to the green numbered steps in the preceding diagram. +**Step 4:** The following descriptions correspond to the green numbered steps in the preceding diagram. (1) - In the diagram above, your application creates a certificate, which internally begins by creating a key in your key vault. (2) - Key Vault sends a TLS/SSL certificate request to the CA. (3) - Your application polls your Key Vault, in a loop and wait process, for certificate completion. The certificate creation is complete when Key Vault receives the CA's response with the x509 certificate. |
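The issuer and contact setup steps can be sketched with the Azure CLI. The vault name, issuer name, and credential values here are placeholders; the account credentials come from your CA account:

```shell
# Register the CA account as a certificate issuer in the vault.
# Vault, issuer, and credential values are hypothetical.
az keyvault certificate issuer create \
  --vault-name ContosoVault \
  --issuer-name MyDigiCertIssuer \
  --provider-name DigiCert \
  --account-id "<CA account ID>" \
  --password "<CA account API key>"

# Register a contact to receive certificate lifecycle notifications.
az keyvault certificate contact add \
  --vault-name ContosoVault \
  --email admin@contoso.com
```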
key-vault | Assign Access Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/assign-access-policy.md | description: How to use the Azure CLI to assign a Key Vault access policy to a s tags: azure-resource-manager-+ |
key-vault | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/best-practices.md | Azure Key Vault safeguards encryption keys and secrets like certificates, connec ## Use separate key vaults

-Our recommendation is to use a vault per application per environment (development, pre-production, and production), per region. This helps you not share secrets across environments and regions. It will also reduce the threat in case of a breach. +Our recommendation is to use a vault per application, per environment (development, pre-production, and production), per region. Granular isolation helps you avoid sharing secrets across applications, environments, and regions, and it also reduces the impact of a breach.

### Why we recommend separate key vaults Key vaults define security boundaries for stored secrets. Grouping secrets into Encryption keys and secrets like certificates, connection strings, and passwords are sensitive and business critical. You need to secure access to your key vaults by allowing only authorized applications and users. [Azure Key Vault security features](security-features.md) provides an overview of the Key Vault access model. It explains authentication and authorization. It also describes how to secure access to your key vaults. 
-Suggestions for controlling access to your vault are as follows: +Recommendations for controlling access to your vault are as follows: - Lock down access to your subscription, resource group, and key vaults using role-based access control (RBAC).-- Create access policies for every vault.-- Use the principle of least privilege access to grant access.-- Turn on firewall and [virtual network service endpoints](overview-vnet-service-endpoints.md).+ - Assign RBAC roles at Key Vault scope for applications, services, and workloads requiring persistent access to Key Vault + - Assign just-in-time eligible RBAC roles for operators, administrators, and other user accounts requiring privileged access to Key Vault using [Privileged Identity Management (PIM)](../../active-directory/privileged-identity-management/pim-configure.md) + - Require at least one approver + - Enforce multi-factor authentication +- Restrict network access with [Private Link](private-link-service.md) and [firewall and virtual networks](network-security.md) ## Turn on data protection for your vault For more information, see [Azure Key Vault soft-delete overview](soft-delete-ove ## Backup -Purge protection prevents malicious and accidental deletion of vault objects for up to 90 days. In scenarios where purge protection isn't an option, we recommend backing up vault objects that can't be recreated from other sources, like encryption keys generated within the vault. 
For more information about backup, see [Azure Key Vault backup and restore](backup.md) A multitenant solution is built on an architecture where components are used to ## Frequently Asked Questions: ### Can I use Key Vault role-based access control (RBAC) permission model object-scope assignments to provide isolation for application teams within Key Vault?-No. RBAC permission model allows to assign access to individual objects in Key Vault to user or application, but any administrative operations like network access control, monitoring, and objects management require vault level permissions which will then expose secure information to operators across application teams. +No. The RBAC permission model allows you to assign access to individual objects in Key Vault to a user or application, but only for read operations. Any administrative operations, like network access control, monitoring, and object management, require vault-level permissions. Having one Key Vault per application provides secure isolation for operators across application teams. ## Next steps |
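Per the per-application-vault guidance, isolation comes from assigning roles at the scope of a single vault rather than at individual objects. A minimal sketch, with placeholder subscription, resource group, vault, and principal values:

```shell
# Grant an application read access to secrets in one vault only.
# All identifiers below are hypothetical placeholders.
scope="/subscriptions/<subscription-id>/resourceGroups/contoso-rg/providers/Microsoft.KeyVault/vaults/contoso-app1-kv"

az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee "<app-service-principal-object-id>" \
  --scope "$scope"
```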
key-vault | Rbac Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md | 
-> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model, but you can use Azure PowerShell, the Azure CLI, or ARM template deployments. App Service certificate management requires **Key Vault Secrets User** and **Key Vault Reader** role assignments for the App Service global identity, for example 'Microsoft Azure App Service' in the public cloud. Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources. |
key-vault | Hsm Protected Keys Ncipher | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/hsm-protected-keys-ncipher.md | See the following table for a list of prerequisites for bring your own key (BYOK | A subscription to Azure |To create an Azure Key Vault, you need an Azure subscription: [Sign up for free trial](https://azure.microsoft.com/pricing/free-trial/) | | The Azure Key Vault Premium service tier to support HSM-protected keys |For more information about the service tiers and capabilities for Azure Key Vault, see the [Azure Key Vault Pricing](https://azure.microsoft.com/pricing/details/key-vault/) website. | | nCipher nShield HSMs, smartcards, and support software |You must have access to a nCipher Hardware Security Module and basic operational knowledge of nCipher nShield HSMs. See [nCipher nShield Hardware Security Module](https://www.arrow.com/ecs-media/8441/33982ncipher_nshield_family_brochure.pdf) for the list of compatible models, or to purchase an HSM if you do not have one. |-| The following hardware and software:<ol><li>An offline x64 workstation with a minimum Windows operation system of Windows 7 and nCipher nShield software that is at least version 11.50.<br/><br/>If this workstation runs Windows 7, you must [install Microsoft .NET Framework 4.5](https://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe).</li><li>A workstation that is connected to the Internet and has a minimum Windows operating system of Windows 7 and [Azure PowerShell](/powershell/azure/) **minimum version 1.1.0** installed.</li><li>A USB drive or other portable storage device that has at least 16-MB free space.</li></ol> |For security reasons, we recommend that the first workstation is not connected to a network. 
However, this recommendation is not programmatically enforced.<br/><br/>In the instructions that follow, this workstation is referred to as the disconnected workstation.</p></blockquote><br/>In addition, if your tenant key is for a production network, we recommend that you use a second, separate workstation to download the toolset, and upload the tenant key. But for testing purposes, you can use the same workstation as the first one.<br/><br/>In the instructions that follow, this second workstation is referred to as the Internet-connected workstation.</p></blockquote><br/> | +| The following hardware and software:<ol><li>An offline x64 workstation with a minimum Windows operation system of Windows 7 and nCipher nShield software that is at least version 11.50.<br/><br/>If this workstation runs Windows 7, you must [install Microsoft .NET Framework 4.5](https://download.microsoft.com/download/b/a/4/ba4a7e71-2906-4b2d-a0e1-80cf16844f5f/dotnetfx45_full_x86_x64.exe).</li><li>A workstation that is connected to the Internet and has a minimum Windows operating system of Windows 7 and [Azure PowerShell](/powershell/azure/) **minimum version 1.1.0** installed.</li><li>A USB drive or other portable storage device that has at least 16-MB free space.</li></ol> |For security reasons, we recommend that the first workstation is not connected to a network. However, this recommendation is not programmatically enforced.<br/><br/>In the instructions that follow, this workstation is referred to as the disconnected workstation.</p><br/>In addition, if your tenant key is for a production network, we recommend that you use a second, separate workstation to download the toolset, and upload the tenant key. 
But for testing purposes, you can use the same workstation as the first one.<br/><br/>In the instructions that follow, this second workstation is referred to as the Internet-connected workstation.</p><br/> | ## Generate and transfer your key to Azure Key Vault HSM This program creates a **Security World** file at %NFAST_KMDATA%\local\world, wh > If your HSM does not support the newer cypher suite DLf3072s256mRijndael, you can replace `--cipher-suite= DLf3072s256mRijndael` with `--cipher-suite=DLf1024s160mRijndael`. > > Security world created with new-world.exe that ships with nCipher software version 12.50 is not compatible with this BYOK procedure. There are two options available:-> 1) Downgrade nCipher software version to 12.40.2 to create a new security world. -> 2) Contact nCipher support and request them to provide a hotfix for 12.50 software version, which allows you to use 12.40.2 version of new-world.exe that is compatible with this BYOK procedure. +> 1. Downgrade nCipher software version to 12.40.2 to create a new security world. +> 2. Contact nCipher support and request them to provide a hotfix for 12.50 software version, which allows you to use 12.40.2 version of new-world.exe that is compatible with this BYOK procedure. Then: |
key-vault | Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/azure-policy.md | Title: Integrate Azure Managed HSM with Azure Policy description: Learn how to integrate Azure Managed HSM with Azure Policy Previously updated : 03/31/2021 Last updated : 08/23/2023 -# Integrate Azure Managed HSM with Azure Policy (preview) +# Integrate Azure Managed HSM with Azure Policy [Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they're compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they'll be able to see a drill-down of which resources and components are compliant and which aren't. For more information, see the [Overview of the Azure Policy service](../../governance/policy/overview.md). Using RSA keys with smaller key sizes is not a secure design practice. You may b ## Enabling and managing a Managed HSM policy through the Azure CLI -### Register preview feature in your subscription --In the subscription that customer owns, run the following Azure CLI command line as Contributor or Owner role of the subscription, --```azurecli-interactive -az feature register --namespace Microsoft.KeyVault --name MHSMGovernance -``` --If there is an existing HSM pool in this subscription, update will be carried to these pools. Full enablement of the policy may take up to 30 mins. See [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md?tabs=azure-cli). 
- ### Giving permission to scan daily To check the compliance of the pool's inventory keys, the customer must assign the "Managed HSM Crypto Auditor" role to "Azure Key Vault Managed HSM Key Governance Service" (App ID: a1b76039-a76c-499f-a2dd-846b4cc32627) so it can access key metadata. Without this permission, inventory keys aren't reported in the Azure Policy compliance report; only new, updated, imported, and rotated keys are checked for compliance. To do so, a user who has the "Managed HSM Administrator" role on the Managed HSM needs to run the following Azure CLI commands: To check the compliance of the pool's inventory keys, the customer must assign t On Windows: ```azurecli-interactive-az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query objectId +az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query id ``` Copy the `id` that's printed, and paste it into the following command: az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor" On Linux or Windows Subsystem for Linux: ```azurecli-interactive-spId=$(az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query objectId|cut -d "\"" -f2) +spId=$(az ad sp show --id a1b76039-a76c-499f-a2dd-846b4cc32627 --query id|cut -d "\"" -f2) echo $spId az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor" --assignee-object-id $spId --hsm-name <hsm name> ``` az keyvault role assignment create --scope / --role "Managed HSM Crypto Auditor" Policy assignments have concrete values defined for policy definitions' parameters. In the [Azure portal](https://portal.azure.com/?Microsoft_Azure_ManagedHSM_assettypeoptions=%7B%22ManagedHSM%22:%7B%22options%22:%22%22%7D%7D&Microsoft_Azure_ManagedHSM=true&feature.canmodifyextensions=true}), go to "Policy", filter on the "Key Vault" category, and find the four key governance policy definitions. Select one, then select the "Assign" button at the top. Fill in each field. 
If the policy assignment is for request denials, use a clear name for the policy because, when a request is denied, the policy assignment's name appears in the error. Select Next, uncheck "Only show parameters that need input or review", and enter values for the parameters of the policy definition. Skip "Remediation", and create the assignment. The service needs up to 30 minutes to enforce "Deny" assignments. -- [Preview]: Azure Key Vault Managed HSM keys should have an expiration date-- [Preview]: Azure Key Vault Managed HSM keys using RSA cryptography should have a specified minimum key size-- [Preview]: Azure Key Vault Managed HSM Keys should have more than the specified number of days before expiration-- [Preview]: Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names+- Azure Key Vault Managed HSM keys should have an expiration date +- Azure Key Vault Managed HSM keys using RSA cryptography should have a specified minimum key size +- Azure Key Vault Managed HSM Keys should have more than the specified number of days before expiration +- Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names You can also do this operation using the Azure CLI. See [Create a policy assignment to identify non-compliant resources with Azure CLI](../../governance/policy/assign-policy-azurecli.md). |
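The `cut -d "\"" -f2` pipeline in the Linux commands above exists because `az ad sp show --query id` prints the GUID as a JSON-quoted string by default. The following sketch reproduces just that extraction step offline — the `raw` variable mimics the CLI's default output, and no Azure connection or `az` installation is needed:

```shell
# Offline illustration only: `raw` mimics what `az ad sp show --query id`
# prints by default (a JSON-quoted string). No Azure CLI call is made here.
raw='"a1b76039-a76c-499f-a2dd-846b4cc32627"'
# The cut splits on the double quotes; field 2 is the bare GUID.
spId=$(printf '%s' "$raw" | cut -d '"' -f2)
echo "$spId"
```

Passing the Azure CLI's global `--output tsv` option prints the value unquoted, which avoids the `cut` step entirely.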
key-vault | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md | Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
key-vault | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
lab-services | Administrator Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/administrator-guide.md | Title: Administrator guide | Microsoft Docs + Title: Administrator guide description: This guide helps administrators who create and manage lab plans by using Azure Lab Services. Previously updated : 07/04/2022 Last updated : 08/28/2023 |
lab-services | Capacity Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/capacity-limits.md | Title: Capacity limits in Azure Lab Services + Title: Capacity limits description: Learn about VM capacity limits in Azure Lab Services. Previously updated : 07/04/2022 Last updated : 08/28/2023 # Capacity limits in Azure Lab Services -Azure Lab Services has default capacity limits on Azure subscriptions that adhere to Azure Compute quota limitations and to mitigate fraud. All Azure subscriptions will have an initial capacity limit, which can vary based on subscription type, number of standard compute cores, and GPU cores available inside Azure Lab Services. It restricts how many virtual machines you can create inside your lab before you need to request for a limit increase. +Azure Lab Services has default capacity limits on Azure subscriptions that adhere to Azure Compute quota limitations and help mitigate fraud. All Azure subscriptions have an initial capacity limit, which can vary based on subscription type, number of standard compute cores, and GPU cores available inside Azure Lab Services. The capacity limit restricts how many virtual machines you can create inside your lab before you need to request a limit increase. -If you're close to or have reached your subscription's core limit, you'll see messages from Azure Lab Services. Actions that are affected by core limits include: +If you're close to or have reached your subscription's core limit, you see warning messages from Azure Lab Services in the portal. The core limits affect the following actions: - Create a lab - Publish a lab - Increase lab capacity -These actions may be disabled if there no more cores that can be enabled for your subscription. +These actions may be disabled if there are no more cores available for your subscription. 
:::image type="content" source="./media/capacity-limits/warning-message.png" alt-text="Screenshot of core limit warning in Azure Lab Services."::: Before you set up a large number of VMs across your labs, we recommend that you Azure Lab Services enables you to create labs in different Azure regions. The default limit for the total number of regions you can use for creating labs varies by offer category type. For example, the default for Pay-As-You-Go subscriptions is two regions. -If you have reached the Azure regions limit for your subscription, you can only create labs in regions that you're already using. When you create a new lab in another region, the lab creation will fail with an error message. +If you have reached the Azure regions limit for your subscription, you can only create labs in regions that you're already using. When you create a new lab in another region, the lab creation fails with an error message. To overcome the Azure region restriction, you have the following options: You can contact Azure support and create a support ticket to lift the region res ## Best practices for requesting a limit increase [!INCLUDE [lab-services-request-capacity-best-practices](includes/lab-services-request-capacity-best-practices.md)] -## Next steps --See the following articles: +## Related content - As an admin, see [VM sizing](administrator-guide.md#vm-sizing). - As an admin, see [Request a capacity increase](./how-to-request-capacity-increase.md) |
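The capacity-limit behavior described above comes down to simple arithmetic: the cores consumed by a lab are the cores per VM multiplied by the lab capacity, compared against the subscription's core limit. The sketch below illustrates that check; every number in it is a hypothetical placeholder, not a real Azure quota value:

```shell
# Hypothetical numbers for illustration only; real quotas vary by subscription type.
CORES_PER_VM=4      # cores used by the chosen VM size (assumption)
VM_COUNT=30         # planned lab capacity (assumption)
CORE_LIMIT=100      # hypothetical subscription core limit
NEEDED=$((CORES_PER_VM * VM_COUNT))
if [ "$NEEDED" -gt "$CORE_LIMIT" ]; then
  echo "Over the limit: $NEEDED cores needed, limit is $CORE_LIMIT. Request an increase."
else
  echo "Within the limit: $NEEDED of $CORE_LIMIT cores."
fi
```

With these placeholder numbers the planned lab exceeds the limit, which is the situation where lab creation, publish, or capacity increases can be disabled.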
lab-services | How To Configure Firewall Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-configure-firewall-settings.md | Title: Firewall settings for Azure Lab Services description: Learn how to determine the public IP address of VMs in a lab created using a lab plan so information can be added to firewall rules. ms.lab- Previously updated : 08/01/2022 Last updated : 08/28/2023 -# Firewall settings for Azure Lab Services +# Determine firewall settings for Azure Lab Services +This article covers how to find the specific public IP address used by a lab in Azure Lab Services. You can use these IP addresses to configure your firewall settings and specify inbound and outbound rules to enable lab users to connect to their lab virtual machines. -> [!NOTE] -> If using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Firewall settings for labs when using lab accounts](how-to-configure-firewall-settings-1.md). +Each organization or school configures their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration may be needed to allow lab users to access their VM when connecting from the local network. -Each organization or school will configure their own network in a way that best fits their needs. Sometimes that includes setting firewall rules that block Remote Desktop Protocol (RDP) or Secure Shell (SSH) connections to machines outside their own network. Because Azure Lab Services runs in the public cloud, some extra configuration maybe needed to allow students to access their VM when connecting from the campus network. +Each lab uses a single public IP address and multiple ports. All VMs, both the template VM and lab VMs, use this public IP address. 
The public IP address doesn't change for a lab. Each VM is assigned a different port number. The port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect lab creators and lab users to the correct VM. -Each lab uses single public IP address and multiple ports. All VMs, both the template VM and student VMs, will use this public IP address. The public IP address won't change for the life of lab. Each VM will have a different port number. The port ranges for SSH connections are 4980-4989 and 5000-6999. The port ranges for RDP connections are 4990-4999 and 7000-8999. The combination of public IP address and port number is used to connect educators and students to the correct VM. This article will cover how to find the specific public IP address used by a lab. That information can be used to update inbound and outbound firewall rules so students can access their VMs. +If you're using a version of Azure Lab Services prior to the [August 2022 Update](lab-services-whats-new.md), see [Firewall settings for labs when using lab accounts](how-to-configure-firewall-settings-1.md). >[!IMPORTANT]->Each lab will have a different public IP address. +>Each lab has a different public IP address. > [!NOTE]-> If your school needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you will need to use 3rd party software. For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering). +> If your organization needs to perform content filtering, such as for compliance with the [Children's Internet Protection Act (CIPA)](https://www.fcc.gov/consumers/guides/childrens-internet-protection-act), you need to use third-party software. 
For more information, read guidance on [content filtering with Lab Services](./administrator-guide.md#content-filtering). ## Find public IP for a lab -If using a customizable lab, then we can get the public ip anytime after the lab is created. If using a non-customizable lab, the lab must be published and have capacity of at least 1 to be able to get the public IP for the lab. +If you're using a customizable lab, then you can get the public IP address anytime after the lab is created. If you're using a non-customizable lab, the lab must be published and have a capacity of at least 1 to be able to get the public IP address for the lab. -We're going to use the Az.LabServices PowerShell module to get the public IP address for a lab. For more examples using Az.LabServices PowerShell module and how to use it, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](how-to-create-lab-plan-powershell.md) and [Quickstart: Create a lab using PowerShell and the Azure module](how-to-create-lab-powershell.md). For more information about cmdlets available in the Az.LabServices PowerShell module, see [Az.LabServices reference](/powershell/module/az.labservices/) +You can use the `Az.LabServices` PowerShell module to get the public IP address for a lab. ```powershell $ResourceGroupName = "MyResourceGroup" if ($LabPublicIP){ } ``` +For more examples of using the `Az.LabServices` PowerShell module, see [Quickstart: Create a lab plan using PowerShell and the Azure modules](how-to-create-lab-plan-powershell.md) and [Quickstart: Create a lab using PowerShell and the Azure module](how-to-create-lab-powershell.md). For more information about cmdlets available in the Az.LabServices PowerShell module, see [Az.LabServices reference](/powershell/module/az.labservices/). + ## Conclusion -Now we know the public IP address for the lab. 
Inbound and outbound rules can be created for the organization's firewall for the public IP address and the port ranges 4980-4989, 5000-6999, and 7000-8999. Once the rules are updated, students can access their VMs without the network firewall blocking access. +You can now determine the public IP address for a lab. You can create inbound and outbound rules for the organization's firewall for the public IP address and the port ranges 4980-4989, 4990-4999, 5000-6999, and 7000-8999. Once the rules are updated, lab users can then access their VMs without the network firewall blocking access. -## Next steps +## Related content -- As an admin, [enable labs to connect your vnet](how-to-connect-vnet-injection.md).-- As an educator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md).-- As an educator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm).+- As an admin, [enable labs to connect to your virtual network](how-to-connect-vnet-injection.md). +- As a lab creator, work with your admin to [create a lab with a shared resource](how-to-create-a-lab-with-shared-resource.md). +- As a lab creator, [publish your lab](how-to-create-manage-template.md#publish-the-template-vm). |
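Because the port ranges are fixed (SSH: 4980-4989 and 5000-6999; RDP: 4990-4999 and 7000-8999) and only the public IP varies per lab, the firewall rules lend themselves to scripted generation. The sketch below emits one generic allow-rule line per range for a placeholder lab IP; the rule syntax is illustrative and not tied to any particular firewall product:

```shell
# LAB_IP is a placeholder; substitute the lab's public IP found via Az.LabServices.
LAB_IP="203.0.113.10"
RULES=""
# Port ranges are from the Azure Lab Services documentation above.
for entry in "SSH:4980-4989" "SSH:5000-6999" "RDP:4990-4999" "RDP:7000-8999"; do
  proto=${entry%%:*}   # label before the colon
  ports=${entry#*:}    # range after the colon
  RULES="${RULES}allow tcp dst ${LAB_IP} ports ${ports}  # ${proto}
"
done
printf '%s' "$RULES"
```

Translate each emitted line into your firewall product's own rule syntax; one rule per range, inbound and outbound, is enough for all VMs in the lab because they share the single public IP.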
lab-services | How To Create Lab Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-bicep.md | In this article, you learn how to create a lab using a Bicep file. For a detail [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)] + ## Prerequisites [!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)] [!INCLUDE [Create and manage labs](./includes/lab-services-prerequisite-create-lab.md)] [!INCLUDE [Existing lab plan](./includes/lab-services-prerequisite-lab-plan.md)] -## Review the Bicep file +## Review the code ++# [Bicep](#tab/bicep) The Bicep file used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab/). :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.labservices/lab/main.bicep"::: -## Deploy the Bicep file +# [ARM](#tab/arm) ++The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab/). +++One Azure resource is defined in the template: ++- **[Microsoft.LabServices/labs](/azure/templates/microsoft.labservices/labs)**: resource type description. ++More Azure Lab Services template samples can be found in [Azure Quickstart Templates](/samples/browse/?expanded=azure&products=azure-resource-manager&terms=lab%20services). For more information about how to create a lab without a lab plan using automation, see [Create Azure LabServices lab template](/samples/azure/azure-quickstart-templates/lab/). ++++## Deploy the resources ++# [Bicep](#tab/bicep) 1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. The Bicep file used in this article is from [Azure Quickstart Templates](/sample When the deployment finishes, you should see a message indicating the deployment succeeded. +# [ARM](#tab/arm) ++1. 
Select the following link to sign in to Azure and open a template. The template creates a lab. ++ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-using-lab-plan%2fazuredeploy.json"::: ++2. Optionally, change the name of the lab. +3. Select the **resource group** of the lab plan you're going to use. +4. Enter the required values for the template: ++ 1. **adminUser**. The name of the user to add as an administrator for the lab VM. + 2. **adminPassword**. The password for the administrator user for the lab VM. + 3. **labPlanId**. The resource ID for the lab plan to be used. The **Id** is listed in the **Properties** page of the lab plan resource in Azure. ++ :::image type="content" source="./media/how-to-create-lab-template/lab-plan-properties-id.png" alt-text="Screenshot of properties page for lab plan in Azure Lab Services with ID property highlighted."::: ++5. Select **Review + create**. +6. Select **Create**. ++The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn about other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md). +++ ## Review deployed resources Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group. +To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLab** cmdlet. + # [CLI](#tab/CLI) ```azurecli-interactive Remove-AzResourceGroup -Name exampleRG ## Next steps -In this article, you deployed a simple virtual machine using a Bicep file. 
To learn more about Azure virtual machines, continue to the tutorial for Linux VMs. +In this article, you deployed a lab using a Bicep file or ARM template. Next, configure the template VM for the lab. > [!div class="nextstepaction"] > [Configure a template VM](how-to-create-manage-template.md) |
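The template values entered in the portal form above (**adminUser**, **adminPassword**, **labPlanId**) can also be supplied through an ARM parameters file when deploying from the command line. A minimal sketch of generating one follows; the file name and all values are placeholders, and the password is deliberately left out of the file:

```shell
# All values are placeholders; supply real ones before deploying.
# adminPassword is intentionally omitted: pass secrets at deploy time
# (or from a secure store) rather than writing them to disk.
cat > lab.parameters.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUser": { "value": "labadmin" },
    "labPlanId": { "value": "<lab-plan-resource-id>" }
  }
}
EOF
echo "Wrote lab.parameters.json"
```

A parameters file like this can then be referenced from an `az deployment group create` or `New-AzResourceGroupDeployment` invocation instead of filling in the portal form.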
lab-services | How To Create Lab Plan Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-bicep.md | Title: Create a lab plan using Bicep + Title: Create a lab plan using Bicep or ARM -description: Learn how to create an Azure Lab Services lab plan by using Bicep. +description: Learn how to create an Azure Lab Services lab plan by using Bicep or ARM templates. Previously updated : 05/23/2022 Last updated : 08/28/2023 -# Create a lab plan in Azure Lab Services using a Bicep file +# Create a lab plan using a Bicep file or ARM template -In this article, you learn how to create a lab plan using a Bicep file. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). +In this article, you learn how to create a lab plan using a Bicep file or Azure Resource Manager (ARM) template. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). [!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)] + ## Prerequisites [!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)] In this article, you learn how to create a lab plan using a Bicep file. For a d ## Review the Bicep file +# [Bicep](#tab/bicep) + The Bicep file used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab-plan/). :::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.labservices/lab-plan/main.bicep"::: -## Deploy the Bicep file +# [ARM](#tab/arm) ++The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab-plan/). 
+++One Azure resource is defined in the template: ++- **[Microsoft.LabServices/labplans](/azure/templates/microsoft.labservices/labplans)**: The lab plan serves as a collection of configurations and settings that apply to the labs created from it. ++More Azure Lab Services template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Labservices&pageNumber=1&sort=Popular). ++++## Deploy the resources ++# [Bicep](#tab/bicep) 1. Save the Bicep file as **main.bicep** to your local computer. 1. Deploy the Bicep file using either Azure CLI or Azure PowerShell. The Bicep file used in this article is from [Azure Quickstart Templates](/sample When the deployment finishes, you should see a message indicating the deployment succeeded. +# [ARM](#tab/arm) ++1. Select the following link to sign in to Azure and open a template. The template creates a lab plan. ++ :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-plan%2fazuredeploy.json"::: ++1. Optionally, change the name of the lab plan. +1. Select the **Resource group**. +1. Select **Review + create**. +1. Select **Create**. ++The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn about other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md). +++ ## Review deployed resources Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group. +To use Azure PowerShell, first verify the `Az.LabServices` module is installed. Then use the `Get-AzLabServicesLabPlan` cmdlet. 
+ # [CLI](#tab/CLI) ```azurecli-interactive Remove-AzResourceGroup -Name exampleRG -## Next steps +## Next step -In this article, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs. +In this article, you deployed a lab plan using a Bicep file or ARM template. Next, learn how to manage labs. > [!div class="nextstepaction"] > [Managing Labs](how-to-manage-labs.md) |
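The lab plan created here is later referenced by its ARM resource ID (the **Id** shown on the lab plan's **Properties** page in Azure). A quick offline sanity check of the ID's shape can catch copy-paste mistakes before a deployment fails; the ID below is a placeholder with a zeroed subscription GUID and example names:

```shell
# Placeholder ID; the GUID and names are not real resources.
LAB_PLAN_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/exampleRG/providers/Microsoft.LabServices/labPlans/examplePlan"
# Case-insensitive match, since ARM resource IDs are compared case-insensitively.
if printf '%s\n' "$LAB_PLAN_ID" | grep -Eiq '^/subscriptions/[^/]+/resourcegroups/[^/]+/providers/microsoft\.labservices/labplans/[^/]+$'; then
  echo "ID shape looks valid"
else
  echo "unexpected ID shape"
fi
```

This is only a format check: a well-shaped ID can still point at a nonexistent resource, so a deployment is the ultimate validation.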
lab-services | How To Create Lab Plan Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-plan-template.md | - Title: Create a lab plan by using Azure Resource Manager template (ARM template)- -description: Learn how to create an Azure Lab Services lab plan by using Azure Resource Manager template (ARM template). --- Previously updated : 06/04/2022---# Create a lab plan in Azure Lab Services using an ARM template --In this article, you learn how to use an Azure Resource Manager (ARM) template to create a lab plan. Lab plans are used when creating labs for Azure Lab Services. For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). ---If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. ---## Prerequisites ---## Review the template --The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab-plan/). ---One Azure resource is defined in the template: --- **[Microsoft.LabServices/labplans](/azure/templates/microsoft.labservices/labplans)**: The lab plan serves as a collection of configurations and settings that apply to the labs created from it.--More Azure Lab Services template samples can be found in [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Labservices&pageNumber=1&sort=Popular). --## Deploy the template --1. Select the following link to sign in to Azure and open a template. The template creates a lab plan. -- :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." 
link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-plan%2fazuredeploy.json"::: --1. Optionally, change the name of the lab plan. -1. Select the **Resource group**. -1. Select **Review + create**. -1. Select **Create**. --The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md). --## Review deployed resources --You can either use the Azure portal to check the lab plan, or use the Azure PowerShell script to list the lab plan created. --To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLabPlan** cmdlet. --```azurepowershell-interactive -Import-Module Az.LabServices --$labplanName = Read-Host -Prompt "Enter your lab plan name" -Get-AzLabServicesLabPlan -Name $labplanName --Write-Host "Press [ENTER] to continue..." -``` --## Clean up resources --When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group -), which deletes the lab plan. --```azurepowershell-interactive -$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name" -Remove-AzResourceGroup -Name $resourceGroupName --Write-Host "Press [ENTER] to continue..." -``` --## Next steps --For a step-by-step tutorial that guides you through the process of creating a lab, see: --> [!div class="nextstepaction"] -> [Create a lab using an ARM template](how-to-create-lab-template.md) |
lab-services | How To Create Lab Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-lab-template.md | - Title: Create a lab by using Azure Resource Manager template (ARM template)- -description: Learn how to create an Azure Lab Services lab by using Azure Resource Manager template (ARM template). --- Previously updated : 05/10/2022---# Create a lab in Azure Lab Services using an ARM template --In this article, you learn how to use an Azure Resource Manager (ARM) template to create a lab. You learn how to create a lab with Windows 11 Pro image. Once a lab is created, an educator [configures the template](how-to-create-manage-template.md), [adds lab users](how-to-manage-lab-users.md), and [publishes the lab](tutorial-setup-lab.md#publish-lab). For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md). ---If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal. ---## Prerequisites ---## Review the template --The template used in this article is from [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/lab/). ---One Azure resource is defined in the template: --- **[Microsoft.LabServices/labs](/azure/templates/microsoft.labservices/labs)**: resource type description.--More Azure Lab Services template samples can be found in [Azure Quickstart Templates](/samples/browse/?expanded=azure&products=azure-resource-manager&terms=lab%20services). For more information how to create a lab without a lab plan using automation, see [Create Azure LabServices lab template](/samples/azure/azure-quickstart-templates/lab/). --## Deploy the template --1. Select the following link to sign in to Azure and open a template. The template creates a lab. 
-- :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Screenshot of the Deploy to Azure button to deploy resources with a template." link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3a%2f%2fraw.githubusercontent.com%2fAzure%2fazure-quickstart-templates%2fmaster%2fquickstarts%2fmicrosoft.labservices%2flab-using-lab-plan%2fazuredeploy.json"::: --2. Optionally, change the name of the lab. -3. Select the **resource group** the lab plan you're going to use. -4. Enter the required values for the template: -- 1. **adminUser**. The name of the user that will be added as an administrator for the lab VM. - 2. **adminPassword**. The password for the administrator user for the lab VM. - 3. **labPlanId**. The resource ID for the lab plan to be used. The **Id** is listed in the **Properties** page of the lab plan resource in Azure. -- :::image type="content" source="./media/how-to-create-lab-template/lab-plan-properties-id.png" alt-text="Screenshot of properties page for lab plan in Azure Lab Services with I D property highlighted."::: --5. Select **Review + create**. -6. Select **Create**. --The Azure portal is used here to deploy the template. You can also use Azure PowerShell, Azure CLI, or the REST API. To learn other deployment methods, see [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md). --## Review deployed resources --You can either use the Azure portal to check the lab, or use the Azure PowerShell script to list the lab resource created. --To use Azure PowerShell, first verify the Az.LabServices module is installed. Then use the **Get-AzLabServicesLab** cmdlet. --```azurepowershell-interactive -Import-Module Az.LabServices --$lab = Read-Host -Prompt "Enter your lab name" -Get-AzLabServicesLab -Name $lab --Write-Host "Press [ENTER] to continue..." 
-``` --To verify educators can use the lab, navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about managing labs, see [View all labs](./how-to-manage-labs.md). --## Clean up resources --When no longer needed, [delete the resource group](../azure-resource-manager/management/delete-resource-group.md?tabs=azure-portal#delete-resource-group), which deletes the lab and other resources in the same group. --```azurepowershell-interactive -$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name" -Remove-AzResourceGroup -Name $resourceGroupName --Write-Host "Press [ENTER] to continue..." -``` --Alternatively, an educator may delete a lab from the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). For more information about deleting labs, see [Delete a lab](how-to-manage-labs.md#delete-a-lab). --## Next steps --For a step-by-step tutorial that guides you through the process of creating a template, see: --> [!div class="nextstepaction"] -> [Tutorial: Create and deploy your first ARM template](../azure-resource-manager/templates/template-tutorial-create-first-template.md) |
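The deployment steps above require three values: `adminUser`, `adminPassword`, and `labPlanId`. When deploying the same quickstart template with Azure PowerShell or the Azure CLI instead of the portal, those values can be supplied in a parameters file. The following is a minimal sketch, assuming the template's parameter names match the labels shown in the portal steps; all values are placeholders:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUser": { "value": "labadmin" },
    "adminPassword": { "value": "<password-for-the-lab-VM-administrator>" },
    "labPlanId": { "value": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.LabServices/labPlans/<lab-plan-name>" }
  }
}
```

In practice, avoid storing the administrator password in plain text in the file; pass it as a secure value at deployment time instead.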
lab-services | How To Create Manage Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-create-manage-template.md | Title: Manage a template of a lab in Azure Lab Services | Microsoft Docs -description: Learn how to create and manage a lab template in Azure Lab Services. + Title: Manage a lab template +description: Learn how to create and manage a lab template in Azure Lab Services. You can use a template to customize the base VM image for lab VMs. Previously updated : 07/04/2022 Last updated : 08/28/2023 -# Create and manage a template in Azure Lab Services +# Create and manage a lab template in Azure Lab Services -A template in a lab is a base VM image from which all users' virtual machines are created. Modify the template VM so that it's configured with exactly what you want to provide to the lab users. You can provide a name and description of the template that the lab users see. Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab using the template. The number of VMs created during publish equals lab capacity. If using [Teams integration](lab-services-within-teams-overview.md), or [Canvas integration](lab-services-within-canvas-overview.md), the number of VMs created during publish equals the number of users in the lab. All virtual machines have the same configuration as the template. +A lab template is a base VM image from which all lab users' virtual machines are created. You can use a template to customize the base VM image for lab VMs. For example, you might install extra software components, such as Visual Studio, or configure the operating system to disable the web server process. In this article, you learn how to create and manage a lab template. -When you create a lab, the template VM is created but it's not started. 
You can start it, connect to it, and install any pre-requisite software for the lab, and then publish it. When you publish the template VM, it's automatically shut down for you if you haven't done so. This article describes how to manage a template VM of a lab. +When you [publish a lab](./tutorial-setup-lab.md#publish-lab), Azure Lab Services creates the lab VMs, based on the template VM image. If you modify the template VM at a later stage, when you republish the template VM, all lab VMs are updated to match the new template. When you republish a template VM, Azure Lab Services reimages the lab VMs and removes all changes and data on the VM. ++When you create a lab, the template VM is created but it's not started. You can start it, connect to it, and install any prerequisite software for the lab, and then publish it. When you publish the template VM, it's automatically shut down for you if you haven't done so. ++The number of VMs created during publish equals lab capacity. If you're using [Teams integration](lab-services-within-teams-overview.md), or [Canvas integration](lab-services-within-canvas-overview.md), the number of VMs created during publish equals the number of users in the lab. > [!NOTE] > Template VMs incur cost when running, so ensure that the template VM is shutdown when you aren't using it. ## Set or update template title and description -Use the following steps to set title and description for the lab. Educators and students will see the title and description on the tiles of the [My Virtual Machines](instructor-access-virtual-machines.md) page. +Lab creators and lab users can see the title and description on the tiles of the [My Virtual Machines](instructor-access-virtual-machines.md) page. ++Use the following steps to set title and description for the lab: ++1. On the **Template** page, enter the new **title** for the lab. -1. On the **Template** page, enter the new **title** for the lab. 2. Enter the new **description** for the template. 
When you move the focus out of the text box, it's automatically saved. - ![Template name and description](./media/how-to-create-manage-template/template-name-description.png) + :::image type="content" source="./media/how-to-create-manage-template/template-name-description.png" alt-text="Screenshot that shows the Template page in the Lab Services portal, allowing users to edit the template title and description."::: ## Update a template VM -Use the following steps to update a template VM. +Use the following steps to update a template VM: 1. On the **Template** page for the lab, select **Start template** on the toolbar.-1. Wait until the template VM is started, and then select **Connect to template** on the toolbar to connect to the template VM. Depending on the setting for the lab, you'll connect using Remote Desktop Protocol (RDP) or Secure Shell (SSH). -1. Once you connect to the template and make changes, it will no longer have the same setup as the virtual machines last published to your users. Template changes won't be reflected on your students' existing virtual machines until after you publish again. - ![Connect to the template VM](./media/how-to-create-manage-template/connect-template-vm.png) +1. Wait until the template VM is started, and then select **Connect to template** on the toolbar to connect to the template VM. ++ Depending on the setting for the lab, you connect using Remote Desktop Protocol (RDP) or Secure Shell (SSH). ++ :::image type="content" source="./media/how-to-create-manage-template/connect-template-vm.png" alt-text="Screenshot that shows the Template page in the Lab Service portal, highlighting the Connect to template button."::: 1. Install any software that's required for students to do the lab (for example, Visual Studio, Azure Storage Explorer, etc.).+ 1. Disconnect (close your remote desktop session) from the template VM.+ 1. **Stop** the template VM by selecting **Stop template**.-1. 
Follow steps in the next section to **Publish** the updated template VM. ++> [!NOTE] +> Template changes are not available on lab users' existing virtual machines until after you publish the lab template again. Follow steps in the next section to publish the updated template VM. ## Publish the template VM In this step, you publish the template VM. When you publish the template VM, Azure Lab Services creates VMs in the lab by using the template. All virtual machines have the same configuration as the template. +> [!CAUTION] +> When you republish a template VM, Azure Lab Services reimages the lab VMs and removes all changes and data on the VM. + 1. On the **Template** page, select **Publish** on the toolbar. - ![Publish template button](./media/how-to-create-manage-template/template-page-publish-button.png) + Publishing is a permanent action and can't be undone. ++1. On the **Publish template** page, enter the number of virtual machines you want to create in the lab, and then select **Publish**. - > [!WARNING] - > Publishing is a permanent action. It can't be undone. + :::image type="content" source="./media/how-to-create-manage-template/publish-template-number-vms.png" alt-text="Screenshot that shows the Publish template window, allowing you to specify the lab capacity (number of lab VMs in the lab)."::: -2. On the **Publish template** page, enter the number of virtual machines you want to create in the lab, and then select **Publish**. + You can track the publishing status on the template. If you're using [lab plans](lab-services-whats-new.md), publishing can take up to 20 minutes. - ![Publish template - number of VMs](./media/how-to-create-manage-template/publish-template-number-vms.png) -3. You see the **status of publishing** the template on page. If using [Azure Lab Services August 2022 Update](lab-services-whats-new.md), publishing can take up to 20 minutes. +1. 
Wait until the publishing is complete and then switch to the **Virtual machine pool** page by selecting **Virtual machines** on the left menu or by selecting **Virtual machines** tile. - ![Publish template - progress](./media/how-to-create-manage-template/publish-template-progress.png) -4. Wait until the publishing is complete and then switch to the **Virtual machines pool** page by selecting **Virtual machines** on the left menu or by selecting **Virtual machines** tile. Confirm that you see virtual machines that are in **Unassigned** state. These VMs aren't assigned to students yet. They should be in **Stopped** state. You can start a student VM, connect to the VM, stop the VM, and delete the VM on this page. You can start them in this page or let your students start the VMs. + Confirm that you see virtual machines that are marked **Unassigned**, which indicates the lab VMs aren't assigned to lab users yet. The lab VMs should be in **Stopped** state. You can start a lab VM, connect to the VM, stop the VM, and delete the VM on this page. ![Virtual machines in stopped state](./media/how-to-create-manage-template/virtual-machines-stopped.png)+ :::image type="content" source="./media/how-to-create-manage-template/virtual-machines-stopped.png" alt-text="Screenshot that shows the Virtual machine pool page in the Lab Services portal, showing the list of unassigned lab VMs."::: ## Known issues -When you create a new lab from an exported lab VM image, you're unable to login with the credentials you used for creating the lab. Follow these steps to [troubleshoot the login problem](./troubleshoot-access-lab-vm.md#unable-to-login-with-the-credentials-you-used-for-creating-the-lab). --## Next steps +When you create a new lab from an exported lab VM image, you're unable to sign in with the credentials you used for creating the lab. 
Follow these steps to [troubleshoot the sign-in problem](./troubleshoot-access-lab-vm.md#unable-to-login-with-the-credentials-you-used-for-creating-the-lab). -See the following articles: +## Related content - [As an admin, create and manage lab plans](how-to-manage-lab-plans.md) - [As a lab owner, create and manage labs](how-to-manage-labs.md) |
lab-services | How To Prepare Windows Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-prepare-windows-template.md | Title: Prepare Windows lab template description: Prepare a Windows-based lab template in Azure Lab Services. Configure commonly used software and OS settings, such as Windows Update, OneDrive, and Microsoft 365. + Install other apps commonly used for teaching through the Windows Store app. Sug ## Next steps -- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md)+- Learn how to manage cost by [controlling Windows shutdown behavior](how-to-windows-shutdown.md) |
lab-services | How To Use Restrict Allowed Virtual Machine Sku Sizes Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy.md | Title: How to restrict the virtual machine sizes allowed for labs + Title: Restrict allowed lab VM sizes description: Learn how to use the Lab Services should restrict allowed virtual machine SKU sizes Azure Policy to restrict educators to specified virtual machine sizes for their labs. Previously updated : 08/23/2022 Last updated : 08/28/2023 -# How to restrict the virtual machine sizes allowed for labs +# Restrict allowed virtual machine sizes for labs -In this how to, you'll learn how to use the *Lab Services should restrict allowed virtual machine SKU sizes* Azure policy to control the SKUs available to educators when they're creating labs. In this example, you'll see how a lab administrator can allow only non-GPU SKUs, so educators can create only non-GPU SKU labs. +In this article, you learn how to restrict the list of allowed lab virtual machine sizes for creating new labs by using an Azure policy. As a platform administrator, you can use policies to lay out guardrails for teams to manage their own resources. [Azure Policy](/azure/governance/policy/) helps audit and govern resource state. [!INCLUDE [lab plans only note](./includes/lab-services-new-update-focused-article.md)] ## Configure the policy -1. In the [Azure portal](https://portal.azure.com), go to your subscription. +1. Sign in to the [Azure portal](https://portal.azure.com), and then go to your subscription. 1. From the left menu, under **Settings**, select **Policies**. -1. Under **Authoring**, select **Assignments**. +1. Under **Compliance**, select **Assign Policy**. -1. Select **Assign Policy**. 
:::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy.png" alt-text="Screenshot showing the Policy Compliance dashboard with Assign policy highlighted."::: 1. Select the **Scope** which you would like to assign the policy to, and then select **Select**. - You can also select a resource group if you need the policy to apply more granularly. ++ Select the subscription to apply the policy to all resources. You can also select a resource group if you need the policy to apply more granularly. + :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-scope.png" alt-text="Screenshot showing the Scope pane with subscription highlighted."::: -1. Select Policy Definition. In Available definitions, search for *Lab Services*, select **Lab Services should restrict allowed virtual machine SKU sizes** and then select **Select**. +1. Select **Policy definition**. ++1. In **Available Definitions**, search for *Lab Services*, select **Lab Services should restrict allowed virtual machine SKU sizes**, and then select **Add**. + :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-definitions.png" alt-text="Screenshot showing the Available definitions pane with Lab Services should restrict allowed virtual machine SKU sizes highlighted. "::: -1. On the Basics tab, select **Next**. +1. On the **Basics** tab, select **Next**. -1. On the Parameters tab, clear **Only show parameters that need input or review** to show all parameters. - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters.png" alt-text="Screenshot showing the Parameters tab with Only show parameters that need input or review highlighted. "::: +1. On the **Parameters** tab, clear **Only show parameters that need input or review** to show all parameters. -1. 
The **Allowed SKU names** parameter shows the SKUs allowed when the policy is applied. By default all the available SKUs are allowed. You must clear the check boxes for any SKU that you don't wish to allow educators to use to create labs. In this example, only the following non-GPU SKUs are allowed: - - CLASSIC_FSV2_2_4GB_128_S_SSD - - CLASSIC_FSV2_4_8GB_128_S_SSD - - CLASSIC_FSV2_8_16GB_128_S_SSD - - CLASSIC_DSV4_4_16GB_128_P_SSD - - CLASSIC_DSV4_8_32GB_128_P_SSD + :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters.png" alt-text="Screenshot showing the Parameters tab with Only show parameters that need input or review highlighted. "::: - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters-vms.png" alt-text="Screenshot showing the Allowed SKUs."::: +1. In **Allowed SKU names**, clear the check boxes for any SKU that you don't allow for creating labs. - Use the table below to determine which SKU names to apply. + By default all the available SKUs are allowed. Use the following table to determine which SKU names you want to allow. |SKU Name|VM Size|VM Size Details| |--|--|--| In this how to, you'll learn how to use the *Lab Services should restrict allowe |CLASSIC_NVV4_8_28GB_128_S_SSD| Small GPU (Visualization) |8vCPUs, 28 GB RAM, 128 GB, Standard SSD |CLASSIC_NVV3_12_112GB_128_S_SSD| Medium GPU (Visualization) |12vCPUs, 112 GB RAM, 128 GB, Standard SSD -1. In **Effect**, select **Deny**. Selecting deny will prevent a lab from being created if an educator tries to use a GPU SKU. +1. In **Effect**, select **Deny** to prevent a lab from being created when a VM SKU isn't allowed. + :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-parameters-effect.png" alt-text="Screenshot showing the effect list."::: -1. Select **Next**. - -1. 
On the Remediation tab, select **Next**. - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-remediation.png" alt-text="Screenshot showing the Remediation tab with Next highlighted."::: - -1. On the Non-compliance tab, in **Non-compliance messages**, enter a non-compliance message of your choice like "Selected SKU is not allowed", and then select **Next**. - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-message.png" alt-text="Screenshot showing the Non-compliance tab with an example non-compliance message."::: +1. Optionally, on the **Non-compliance messages** tab, enter a noncompliance message. -1. On the Review + Create tab, select **Create** to create the policy assignment. - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-review-create.png" alt-text="Screenshot showing the Review and Create tab."::: + :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-message.png" alt-text="Screenshot showing the Non-compliance tab with an example noncompliance message."::: -You've created a policy assignment for *Lab Services should restrict allowed virtual machine SKU sizes* and allowed only the use of non-GPU SKUs for labs. Attempting to create a lab with any other SKU will fail. +1. On the **Review + Create** tab, select **Create** to create the policy assignment. ++You've created a policy assignment to allow only specific virtual machine sizes for creating labs. If a lab creator attempts to create a lab with any other SKU, the creation fails. > [!NOTE] > New policy assignments can take up to 30 minutes to take effect. ## Exclude resources -When applying a built-in policy, you can choose to exclude certain resources, with the exception of lab plans. 
For example, if the scope of your policy assignment is a subscription, you can exclude resources in a specified resource group. Exclusions are configured using the Exclusions property on the Basics tab when creating a policy definition. +When applying a built-in policy, you can choose to exclude certain resources, except for lab plans. For example, if the scope of your policy assignment is a subscription, you can exclude resources in a specified resource group. ++You can configure exclusions when creating a policy definition by specifying the **Exclusions** property on the **Basics** tab. :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-basics-exclusions.png" alt-text="Screenshot showing the Basics tab with Exclusions highlighted."::: ## Exclude a lab plan -Lab plans cannot be excluded using the Exclusions property on the Basics tab. To exclude a lab plan from a policy assignment, you first need to get the lab plan resource ID, and then use it to specify the lab plan you want to exclude on the Parameters tab. +You can exclude a lab plan from a policy assignment by specifying the lab plan ID in the policy definition. ++1. To get the lab plan ID: ++ 1. In the [Azure portal](https://portal.azure.com), select your lab plan. + 1. Under **Settings**, select **Properties**, and then copy the **Id**. ++ :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/resource-id.png" alt-text="Screenshot showing the lab plan properties with Id highlighted."::: ++1. To exclude the lab plan from the policy assignment: -### Locate and copy lab plan resource ID -Use the following steps to locate and copy the resource ID so that you can paste it into the exclusion configuration. -1. In the [Azure portal](https://portal.azure.com), go to the lab plan you want to exclude. 
On the **Parameters** tab, clear **Only show parameters that need input or review**. + 1. For **Lab Plan Id to exclude**, enter the lab plan ID you copied earlier. -1. Under Settings, select Properties, and then copy the **Resource ID**. - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/resource-id.png" alt-text="Screenshot showing the lab plan properties with resource ID highlighted."::: + :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-exclude-lab-plan-id.png" alt-text="Screenshot showing the Parameter tab with Lab Plan ID to exclude highlighted."::: -### Enter the lab plan to exclude in the policy -Now you have a lab plan resource ID, you can use it to exclude the lab plan as you assign the policy. -1. On the Parameters tab, clear **Only show parameters that need input or review**. -1. For **Lab Plan ID to exclude**, enter the lab plan resource ID you copied earlier. - :::image type="content" source="./media/how-to-use-restrict-allowed-virtual-machine-sku-sizes-policy/assign-policy-exclude-lab-plan-id.png" alt-text="Screenshot showing the Parameter tab with Lab Plan ID to exclude highlighted."::: +## Related content +- [Use Azure Policy to audit and manage Azure Lab Services](./azure-polices-for-lab-services.md) -## Next steps -See the following articles: -- [What's new with Azure Policy for Lab Services?](azure-polices-for-lab-services.md)-- [Built-in Policies](../governance/policy/samples/built-in-policies.md#lab-services)-- [What is Azure policy?](../governance/policy/overview.md)+- [What is Azure policy?](/azure/governance/policy/overview) |
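The portal steps in this article can also be automated by creating the policy assignment with an ARM template or the Azure CLI and passing the policy parameters as JSON. The sketch below is hypothetical: the parameter names `allowedSkuNames` and `effect` are inferred from the portal labels and may not match the built-in policy definition exactly; the SKU list is the non-GPU example set used in this article:

```json
{
  "allowedSkuNames": {
    "value": [
      "CLASSIC_FSV2_2_4GB_128_S_SSD",
      "CLASSIC_FSV2_4_8GB_128_S_SSD",
      "CLASSIC_FSV2_8_16GB_128_S_SSD",
      "CLASSIC_DSV4_4_16GB_128_P_SSD",
      "CLASSIC_DSV4_8_32GB_128_P_SSD"
    ]
  },
  "effect": {
    "value": "Deny"
  }
}
```

Before using a parameter file like this, check the actual parameter names in the policy definition JSON (visible on the policy definition page in the Azure portal).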
lab-services | Lab Account Owner Support Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-account-owner-support-information.md | Title: Set up support information (lab account owner) description: Describes how a lab account owner can set support contact information. Lab creators and lab users can view and use it to get help. Previously updated : 04/25/2022 Last updated : 08/28/2023 The support information includes: 1. Select **Save** on the toolbar. :::image type="content" source="./media/lab-account-owner-support-information/lab-account-internal-support-page.png" alt-text="Screenshot of the Internal support page.":::--## Next steps --See the following articles: --- [View contact information (lab creator)](lab-creator-support-information.md)-- [View contact information (lab user)](lab-user-support-information.md) |
lab-services | Lab Creator Support Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-creator-support-information.md | - Title: View support information (lab creator) -description: This article explains how lab creators can view support information that they can use to get help. Previously updated : 04/25/2022----# View support information (lab creator in Azure Lab Services) --This article explains how you (as a lab creator) can view the following support information: --- URL-- Email-- Phone-- Additional instructions--You can use this information to get help when you run into any technical issues while creating a lab in a lab plan. --## View support information --1. Sign in to Azure Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). -2. Select question mark (**?**) at the top-right corner of the page. -3. Confirm that you see the links to the **view support website**, **email support**, and **support phone number**. -- :::image type="content" source="./media/lab-creator-support-information/support-information.png" alt-text="Screenshot that shows the links to the support information."::: --## Next steps --See the following article to learn about how a lab user views the support contact information: --- [View contact information (lab user)](lab-user-support-information.md) |
lab-services | Lab Services Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-services-whats-new.md | Title: What's New in Azure Lab Services | Microsoft Docs + Title: What's New in Azure Lab Services description: Learn what's new in the Azure Lab Services August 2022 Updates. Previously updated : 07/04/2022 Last updated : 08/28/2023 |
lab-services | Lab User Support Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/lab-user-support-information.md | - Title: View support information (lab user) -description: This article explains how a lab user or educator can view support information that they can use to get help. Previously updated : 06/26/2020----# View support information (lab user in Azure Lab Services) -This article explains how you (as a lab user) can view the following support information: --- URL-- Email-- Phone-- Additional instructions--You can use this information to get help when you run into any technical issues while using a lab in a lab account. -- -## View support information -1. Sign in to Lab Services web portal: [https://labs.azure.com](https://labs.azure.com). -2. Select the **lab or virtual machine** for which you need help, and select **?** at the top-right corner of the page. -3. Confirm that you see links to the **view support website**, **email support**, and **support phone number**. -- ![View support information](./media/lab-user-support-information/support-information.png) -4. You can view support contact information for another lab by switching to that lab in the drop-down list. -- ![Switch to another lab](./media/lab-user-support-information/switch-another-lab.png) -5. Now, you see the support contact information for the other lab. -- ![Other lab's support information](./media/lab-user-support-information/second-lab-support-information.png) --## Next steps -See the following article to learn about how a lab user views the support contact information: --- [How a lab account owner can set support contact information](lab-account-owner-support-information.md)-- [How a lab creator can view support contact information](lab-creator-support-information.md) |
lab-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md | Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
lab-services | Tutorial Create Lab With Advanced Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-create-lab-with-advanced-networking.md | Title: Use advanced networking in Azure Lab Services | Microsoft Docs description: Create an Azure Lab Services lab plan with advanced networking. Create two labs and verify they share same virtual network when published. Previously updated : 07/27/2022 Last updated : 08/28/2023 -Azure Lab Services provides a feature called advanced networking. Advanced networking enables you to control the network for labs created using lab plans. You can use advanced networking to implement various scenarios including [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using [hub-spoke model for Azure Networking](/azure/architecture/reference-architectures/hybrid-networking/), or lab to lab communication. Learn more about the [supported networking scenarios in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md). +Azure Lab Services advanced networking enables you to control the network for labs created using lab plans. You can use advanced networking to implement various scenarios including [connecting to licensing servers](how-to-create-a-lab-with-shared-resource.md), using [hub-spoke model for Azure Networking](/azure/architecture/reference-architectures/hybrid-networking/), or lab to lab communication. In this tutorial, you set up lab-to-lab communication for a web development class. -Let's focus on the lab to lab communication scenario. For our example, we'll create labs for a web development class. Each student will need access to both a server VM and a client VM. The server and client VMs must be able to communicate with each other. We'll test communication by configuring Internet Control Message Protocol (ICMP) for each VM and allowing the VMs to ping each other. 
+After you complete this tutorial, you'll have a lab with two lab virtual machines that are able to communicate with each other: a server VM and a client VM. :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/architecture-two-labs-with-advanced-networking.png" alt-text="Architecture diagram showing two labs that use the same subnet of a virtual network."::: +Learn more about the [supported networking scenarios in Azure Lab Services](./concept-lab-services-supported-networking-scenarios.md). + In this tutorial, you learn how to: > [!div class="checklist"] In this tutorial, you learn how to: ## Prerequisites [!INCLUDE [Azure subscription](./includes/lab-services-prerequisite-subscription.md)]+ [!INCLUDE [Azure manage resources](./includes/lab-services-prerequisite-manage-resources.md)] ## Create a resource group [!INCLUDE [resource group definition](../../includes/resource-group.md)] -The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, we'll put all resources for this tutorial in the same resource group. +The following steps show how to use the Azure portal to [create a resource group](../azure-resource-manager/management/manage-resource-groups-portal.md). For simplicity, you create all resources for this tutorial in the same resource group. 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Resource groups**. The following steps show how to use the Azure portal to create a virtual network 1. Select **Next: IP Addresses**. :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-virtual-network-basics-page.png" alt-text="Screenshot of Basics tab of Create virtual network page in the Azure portal.":::-1. On the **IP Addresses** tab, create a subnet that will be used by the labs. +1. On the **IP Addresses** tab, create a subnet that is used by the labs. 1. 
Select **+ Add subnet** 1. For **Subnet name**, enter **labservices-subnet**.- 1. For **Subnet address range**, enter a range in CIDR notation. For example, 10.0.1.0/24 will have enough IP addresses for 251 lab VMs. (Five IP addresses are reserved by Azure for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](../virtual-network/virtual-network-manage-subnet.md). + 1. For **Subnet address range**, enter a range in CIDR notation. For example, 10.0.1.0/24 has enough IP addresses for 251 lab VMs. (Azure reserves five IP addresses for every subnet.) To create a subnet with more available IP addresses for VMs, use a different CIDR prefix length. For example, 10.0.0.0/20 would have room for over 4000 IP addresses for lab VMs. For more information about adding subnets, see [Add a subnet](../virtual-network/virtual-network-manage-subnet.md). 1. Select **OK**. 1. Select **Review + Create**. + :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/create-virtual-network-ip-addresses-page.png" alt-text="Screenshot of IP addresses tab of the Create virtual network page in the Azure portal."::: 1. Once validation passes, select **Create**. ## Delegate subnet to Azure Lab Services -In this section, we'll configure the subnet to be used with Azure Lab Services. To tell Azure Lab Services that a subnet may be used, the subnet must be [delegated to the service](../virtual-network/manage-subnet-delegation.md). +Next, you configure the subnet to be used with Azure Lab Services. To use a subnet with Azure Lab Services, the subnet must be [delegated to the service](../virtual-network/manage-subnet-delegation.md). 1. Open the **MyVirtualNetwork** resource. 1. Select the **Subnets** item on the left menu. 
In this section, we'll configure the subnet to be used with Azure Lab Services. [!INCLUDE [nsg intro](../../includes/virtual-networks-create-nsg-intro-include.md)] -An NSG is required when using advanced networking in Azure Lab Services. In this section, we'll create the NSG. In the following section, we'll add some inbound rules needed to access lab VMs. +An NSG is required when using advanced networking in Azure Lab Services. To create an NSG, complete the following steps: To create an NSG, complete the following steps: ## Update the network security group inbound rules -To ensure that students can RDP to the lab VMs, we need to create an **Allow** security rule. When using Linux, we need to adapt the rule for SSH. Let's create a rule that allows both RDP and SSH traffic. We'll use the subnet range defined in the previous section. +To ensure that lab users can use remote desktop to connect to the lab VMs, you need to create a security rule to allow this type of traffic. When you use Linux, you need to adapt the rule for SSH. ++To create a rule that allows both RDP and SSH traffic for the subnet you created previously: 1. Open **MyNsg**. 1. Select **Inbound security rules** on the left menu. To ensure that students can RDP to the lab VMs, we need to create an **Allow** s 1. Select **Add**. :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/nsg-add-inbound-rule.png" alt-text="Screenshot of Add inbound rule window for Network security group.":::+ 1. Wait for the rule to be created.-1. Select **Refresh** on the menu bar. Our new rule will now show in the list of rules. +1. Select **Refresh** on the menu bar. The new rule now shows in the list of rules. ## Associate network security group to virtual network -We now have an NSG with an inbound security rule to allow lab VMs to connect to the virtual network. Let's associate the NSG with the virtual network we created earlier. 
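The inbound rule created here follows standard NSG semantics: rules are evaluated in priority order (lower number first), the first match decides, and unmatched inbound traffic is denied by the built-in DenyAllInBound rule. A hedged Python sketch of that evaluation model, simplified to destination ports only (the class and function names are illustrative, not an Azure API):

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    priority: int  # lower number = evaluated first
    ports: set     # destination ports the rule matches
    allow: bool

def is_allowed(rules, port):
    """First rule (by priority) matching the port decides; otherwise
    deny, standing in for the default DenyAllInBound rule."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if port in rule.ports:
            return rule.allow
    return False

# One rule covering both RDP (3389) and SSH (22), as in this tutorial.
rules = [Rule("AllowRdpSshInbound", 100, {22, 3389}, True)]
print(is_allowed(rules, 3389))  # True: RDP reaches the lab VM
print(is_allowed(rules, 8080))  # False: no matching allow rule
```

This is why a single allow rule listing both ports is enough for Windows and Linux labs on the same subnet.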
+You now have an NSG with an inbound security rule to allow lab VMs to connect to the virtual network. ++To associate the NSG with the virtual network you created earlier: 1. Open **MyVirtualNetwork**. 1. Select **Subnets** on the left menu. We now have an NSG with an inbound security rule to allow lab VMs to connect to :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/associate-nsg-with-subnet.png" alt-text="Screenshot of Associate subnet page in the Azure portal."::: > [!WARNING]-> Connecting the network security group to the subnet is a **required step**. Students will not be able to connect to their VMs if there is no network security group associated with the subnet. +> Connecting the network security group to the subnet is a **required step**. Lab users are not able to connect to their lab VMs if there is no network security group associated with the subnet. ## Create a lab plan using advanced networking -Now that we have the network created and configured, we can create the lab plan. +Now that the virtual network is created and configured, you can create the lab plan: 1. Select **Create a resource** in the upper left-hand corner of the Azure portal. 1. Search for **lab plan**. Now that we have the network created and configured, we can create the lab plan. ## Create two labs -Next, let's create two labs that are using advanced networking. These labs will use the **labservices-subnet** we associated with Azure Lab Services. Any lab VMs created using **MyLabPlan** will be able to communicate with each other. Communication can be restricted by using NSGs, firewalls, etc. +Next, create two labs that use advanced networking. These labs use the **labservices-subnet** that's associated with Azure Lab Services. Any lab VMs created using **MyLabPlan** can communicate with each other. Communication can be restricted by using NSGs, firewalls, and more. -To create a lab, see the following steps. We'll run the steps twice. 
Once to create the lab with the server VMs and once to create the lab with the client VMs. +Perform the following steps to create both labs. Repeat these steps once for the lab with the server VMs and once for the lab with the client VMs. -1. Navigate to Lab Services web site: [https://labs.azure.com](https://labs.azure.com). +1. Navigate to the Azure Lab Services website: [https://labs.azure.com](https://labs.azure.com). 1. Select **Sign in** and enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts. 1. Select **MyResourceGroup** from the dropdown on the menu bar. 1. Select **New lab**. :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-button.png" alt-text="Screenshot of Azure Lab Services portal. New lab button is highlighted."::: 1. In the **New Lab** window, do the following actions:- 1. Specify a **name**. The name should be easily identifiable. We'll use **MyServerLab** for the lab with the server VMs and **MyClientLab** for the lab with the client VMs. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). - 1. Choose a **virtual machine image**. For simplicity we'll use **Windows 11 Pro**, but you can choose another available image if you want. For more information about enabling virtual machine images, see [Specify Marketplace images available to lab creators](specify-marketplace-images.md). + 1. Specify a **name**. The name should be easily identifiable. Use **MyServerLab** for the lab with the server VMs and **MyClientLab** for the lab with the client VMs. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). + 1. Choose a **virtual machine image**. For this tutorial, use **Windows 11 Pro**, but you can choose another available image if you want.
For more information about enabling virtual machine images, see [Specify Marketplace images available to lab creators](specify-marketplace-images.md). 1. For **size**, select **Medium**.- 1. **Region** will only have one region. When a lab uses advanced networking, the lab must be in the same region as the associated subnet. + 1. **Region** has only one available region. When a lab uses advanced networking, the lab must be in the same region as the associated subnet. 1. Select **Next**. :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-window.png" alt-text="Screenshot of the New lab window for Azure Lab Services."::: -1. On the **Virtual machine credentials** page, specify default administrator credentials for all VMs in the lab. Specify the **name** and **password** for the administrator. By default all the student VMs will have the same password as the one specified here. Select **Next**. +1. On the **Virtual machine credentials** page, specify default administrator credentials for all VMs in the lab. Specify the **name** and **password** for the administrator. By default, all the lab VMs have the same password as the one specified here. Select **Next**. :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/new-lab-credentials.png" alt-text="Screenshot that shows the Virtual machine credentials window when creating a new Azure Lab Services lab."::: To create a lab, see the following steps. We'll run the steps twice. Once to c ## Enable ICMP on the lab templates -Once the labs have been created, we'll enable ICMP (ping). Using ping is a simple example to show the template and lab VMs from different labs may communicate with each other. First, we'll enable ICMP on the template VMs for both labs. Enabling ICMP on the template VM will also enable it on the lab VMs. Once the labs are published, the lab VMs will be able to ping each other.
+Once the labs are created, enable ICMP (ping) for testing communication between the lab VMs. First, enable ICMP on the template VMs for both labs. Enabling ICMP on the template VM also enables it on the lab VMs. Once the labs are published, the lab VMs are able to ping each other. To enable ICMP, complete the following steps for each template VM in each lab. To enable ICMP, complete the following steps for each template VM in each lab. :::image type="content" source="./media/tutorial-create-lab-with-advanced-networking/lab-connect-to-template.png" alt-text="Screenshot of Azure Lab Services template page. The Connect to template menu button is highlighted."::: -Now that were logged on to the template VM, let's modify the firewall rules on the VM to allow ICMP. Since we're using Windows 11, we can use PowerShell and the [Enable-NetFilewallRule](/powershell/module/netsecurity/enable-netfirewallrule) cmdlet. To open a PowerShell window: +When you're logged in to the template VM, modify the firewall rules on the VM to allow ICMP. Because you're using Windows 11, you can use PowerShell and the [Enable-NetFirewallRule](/powershell/module/netsecurity/enable-netfirewallrule) cmdlet. To open a PowerShell window: 1. Select the Start button. 1. Type "PowerShell" In this step, you publish the lab. When you publish the template VM, Azure Lab S ## Test communication between lab VMs -In this section we'll, wrap up by showing that the two student virtual machines in different labs are able to communicate with each other. +In this section, confirm that the two lab virtual machines in different labs are able to communicate with each other. -First, let's start and connect to a lab VM from each lab. Complete the following steps for each lab. +First, start and connect to a lab VM from each lab. Complete the following steps for each lab. 1. Open the lab in the [Azure Lab Services website](https://labs.azure.com). 1. Select **Virtual machine pool** on the left menu. 1.
Select a single VM listed in the virtual machine pool.-1. Take note of the **Private IP Address** for the VM. We'll need the private IP addresses of both the server lab and client lab VMs later. +1. Take note of the **Private IP Address** for the VM. You need the private IP addresses of both the server lab and client lab VMs later. 1. Select the **State** slider to change the state from **Stopped** to **Starting**. > [!NOTE]- > When an educator turns on a student VM, quota for the student isn't affected. Quota for a user specifies the number of lab hours available to a student outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users). + > When a lab educator starts a lab VM, quota for the lab user isn't affected. Quota for a user specifies the number of lab hours available to a lab user outside of the scheduled class time. For more information on quotas, see [Set quotas for users](how-to-manage-lab-users.md?#set-quotas-for-users). 1. Once the **State** is **Running**, select the connect icon for the running VM. Open the downloaded RDP file to connect to the VM. For more information about connection experiences on different operating systems, see [Connect to a lab VM](connect-virtual-machine.md). :::image type="content" source="media/tutorial-create-lab-with-advanced-networking/virtual-machine-pool-running-vm.png" alt-text="Screen shot of virtual machine pool page for Azure Lab Services lab."::: -Now we can use the ping utility to test cross-lab communication. From the lab VM in the server lab, open a command prompt. Use `ping {ip-address}`. The `{ip-address}` is the **Private IP Address** of the client VM, that we noted previously. Test can also be done from the VM from the client lab to the lab VM in the server lab. +Now, use the ping utility to test cross-lab communication. From the lab VM in the server lab, open a command prompt. Use `ping {ip-address}`.
The `{ip-address}` is the **Private IP Address** of the client VM that you noted previously. This test can also be done from the lab VM in the client lab to the lab VM in the server lab. :::image type="content" source="medi.png" alt-text="Screen shot command window with the ping command executed."::: If you're not going to continue to use this application, delete the virtual netw ## Next steps >[!div class="nextstepaction"]->[Add lab users to the labs](how-to-manage-lab-users.md) |
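If ICMP is blocked somewhere along the path, a TCP connect test against a port you know is open (for example, RDP on 3389) is an alternative to ping. This isn't part of the tutorial, just a minimal standard-library sketch with a placeholder IP address:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical private IP of the client lab VM noted earlier:
# can_reach("10.0.1.5", 3389)
```

Unlike ping, this also confirms that the NSG rule for that port is actually in effect.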
lighthouse | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md | Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
load-balancer | Load Balancer Test Frontend Reachability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md | Based on the current health probe state of your backend instances, you receive d ## Usage considerations - ICMP pings can't be disabled and are allowed by default on Standard Public Load Balancers.+- ICMP pings with packet sizes larger than 64 bytes will be dropped, leading to timeouts. > [!NOTE] > ICMP ping requests are not sent to the backend instances; they are handled by the Load Balancer. |
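Because the frontend drops pings larger than 64 bytes, it's worth being explicit about packet size when testing reachability. A hedged helper that builds the platform-appropriate command (Windows uses `ping -l` for size, Linux and macOS use `ping -s`); the function name and the example IP are illustrative:

```python
import platform

def build_ping_cmd(host: str, payload_bytes: int = 32) -> list:
    """Build a ping command with an explicit size; keep it small,
    since the frontend drops pings larger than 64 bytes."""
    if payload_bytes > 64:
        raise ValueError("pings larger than 64 bytes are dropped")
    if platform.system() == "Windows":
        return ["ping", "-n", "4", "-l", str(payload_bytes), host]
    return ["ping", "-c", "4", "-s", str(payload_bytes), host]

print(build_ping_cmd("203.0.113.10"))
```

Pass the resulting list to `subprocess.run` to execute the probe.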
load-balancer | Quickstart Load Balancer Standard Internal Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md | Create a virtual network by using [az network vnet create](/cli/azure/network/vn In this example, you create an Azure Bastion host. The Azure Bastion host is used later in this article to securely manage the virtual machines and test the load balancer deployment. > [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] - > - ### Create a bastion public IP address Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public IP address for the Azure Bastion host. |
load-balancer | Quickstart Load Balancer Standard Internal Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md | -Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer for a backend pool with two virtual machines. Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets. +Get started with Azure Load Balancer by using the Azure portal to create an internal load balancer for a backend pool with two virtual machines. Other resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets. :::image type="content" source="media/quickstart-load-balancer-standard-internal-portal/internal-load-balancer-resources.png" alt-text="Diagram of resources deployed for internal load balancer."::: +> [!NOTE] +> In this example you'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md) +> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md) ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). Get started with Azure Load Balancer by using the Azure portal to create an inte Sign in to the [Azure portal](https://portal.azure.com). +## Create NAT gateway ++All outbound internet traffic traverses the NAT gateway to the internet. Use the following example to create a NAT gateway for the hub and spoke network. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. 
In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results. ++1. Select **+ Create**. ++1. In the **Basics** tab of **Create network address translation (NAT) gateway** enter or select the following information: ++ | Setting | Value | + | - | -- | + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **Create new**. </br> Enter **CreateIntLBQS-rg** in Name. </br> Select **OK**. | + | **Instance details** | | + | NAT gateway name | Enter **myNATgateway**. | + | Region | Select **East US**. | + | Availability zone | Select **None**. | + | Idle timeout (minutes) | Enter **15**. | ++1. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page. ++1. Select **Create a new public IP address** under **Public IP addresses**. ++1. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**. ++1. Select **OK**. ++1. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab. ++1. Select **Create**. + ## Create the virtual network When you create an internal load balancer, a virtual network is configured as the network for the load balancer. A private IP address in the virtual network is configured as the frontend for th An Azure Bastion host is created to securely manage the virtual machines and install IIS. -In this section, you'll create a virtual network, subnet, and Azure Bastion host. +In this section, you create a virtual network, subnet, and Azure Bastion host. 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results. -2. In **Virtual networks**, select **+ Create**. +1. In **Virtual networks**, select **+ Create**. -3. In **Create virtual network**, enter or select this information in the **Basics** tab: +1. 
In **Create virtual network**, enter or select this information in the **Basics** tab: | **Setting** | **Value** | ||--| | **Project Details** | | | Subscription | Select your Azure subscription |- | Resource Group | Select **Create new**. </br> In **Name** enter **CreateIntLBQS-rg**. </br> Select **OK**. | + | Resource Group | Select **CreateIntLBQS-rg**. | | **Instance details** | | | Name | Enter **myVNet** |- | Region | Select **West US 3** | + | Region | Select **East US** | -4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page. +1. Select the **IP Addresses** tab or select the **Next** button at the bottom of the page. -5. In the **IP Addresses** tab, enter this information: +1. In the **IP Addresses** tab, enter this information: | Setting | Value | |--|-| | IPv4 address space | Enter **10.1.0.0/16** | -6. Under **Subnet name**, select the word **default**. +1. Under **Subnets**, select the word **default**. -7. In **Edit subnet**, enter this information: +1. In **Edit subnet**, enter this information: | Setting | Value | |--|-| | Subnet name | Enter **myBackendSubnet** | | Subnet address range | Enter **10.1.0.0/24** |+ | **Security** | | + | NAT Gateway | Select **myNATgateway**. | -8. Select **Save**. +1. Select **Add**. -9. Select the **Security** tab. +1. Select the **Security** tab. -10. Under **BastionHost**, select **Enable**. Enter this information: +1. Under **BastionHost**, select **Enable**. Enter this information: | Setting | Value | |--|-| | Bastion name | Enter **myBastionHost** |- | AzureBastionSubnet address space | Enter **10.1.1.0/27** | - | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. | + | AzureBastionSubnet address space | Enter **10.1.1.0/26** | + | Public IP Address | Select **Create new**. </br> Enter **myBastionIP** in Name. </br> Select **OK**. 
| > [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] - > --11. Select the **Review + create** tab or select the **Review + create** button. +1. Select the **Review + create** tab or select the **Review + create** button. -12. Select **Create**. +1. Select **Create**. > [!NOTE] In this section, you'll create a virtual network, subnet, and Azure Bastion host In this section, you create a load balancer that load balances virtual machines. -During the creation of the load balancer, you'll configure: +During the creation of the load balancer, you configure: -* Frontend IP address -* Backend pool -* Inbound load-balancing rules +- Frontend IP address +- Backend pool +- Inbound load-balancing rules 1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. -2. In the **Load balancer** page, select **Create**. +1. In the **Load balancer** page, select **Create**. -3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information: +1. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information: - | Setting | Value | - | | | + | Setting | Value | + | | | | **Project details** | | | Subscription | Select your subscription. | | Resource group | Select **CreateIntLBQS-rg**. | | **Instance details** | |- | Name | Enter **myLoadBalancer** | - | Region | Select **West US 3**. | + | Name | Enter **myLoadBalancer** | + | Region | Select **East US**. | | SKU | Leave the default **Standard**. |- | Type | Select **Internal**. | + | Type | Select **Internal**. | | Tier | Leave the default of **Regional**. | - :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/create-standard-internal-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true"::: -4. Select **Next: Frontend IP configuration** at the bottom of the page. --5. 
In **Frontend IP configuration**, select **+ Add a frontend IP configuration**. --6. Enter **myFrontend** in **Name**. --7. Select **myBackendSubnet** in **Subnet**. --8. Select **Dynamic** for **Assignment**. --9. Select **Zone-redundant** in **Availability zone**. --10. Select **Add**. --11. Select **Next: Backend pools** at the bottom of the page. --12. In the **Backend pools** tab, select **+ Add a backend pool**. --13. Enter **myBackendPool** for **Name** in **Add backend pool**. +1. Select **Next: Frontend IP configuration** at the bottom of the page. -14. Select **NIC** or **IP Address** for **Backend Pool Configuration**. +1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter or select the following information: -15. Select **IPv4** or **IPv6** for **IP version**. --16. Select **Add**. --17. Select the **Next: Inbound rules** button at the bottom of the page. --18. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**. --19. In **Add load balancing rule**, enter or select the following information: + | Setting | Value | + | - | -- | + | Name | Enter **myFrontend** | + | Private IP address version | Select **IPv4** or **IPv6** depending on your requirements. | | Setting | Value | | - | -- |+ | Name | Enter **myFrontend** | + | Virtual network | Select **myVNet** | + | Subnet | Select **myBackendSubnet** | + | Assignment | Select **Dynamic** | + | Availability zone | Select **Zone-redundant** | ++1. Select **Add**. +1. Select **Next: Backend pools** at the bottom of the page. +1. In the **Backend pools** tab, select **+ Add a backend pool**. +1. Enter **myBackendPool** for **Name** in **Add backend pool**. +1. Select **IP Address** for **Backend Pool Configuration**. +1. Select **Save**. +1. Select the **Next: Inbound rules** button at the bottom of the page. +1. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**. +1. 
In **Add load balancing rule**, enter or select the following information: ++ | **Setting** | **Value** | + | -- | | | Name | Enter **myHTTPRule** | | IP Version | Select **IPv4** or **IPv6** depending on your requirements. | | Frontend IP address | Select **myFrontend**. | During the creation of the load balancer, you'll configure: | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. |- | TCP reset | Select **Enabled**. | - | Floating IP | Select **Disabled**. | --20. Select **Add**. --21. Select the blue **Review + create** button at the bottom of the page. --22. Select **Create**. -- > [!NOTE] - > In this example you'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md) - > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md) --## Create NAT gateway --In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network. --1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results. --2. In **NAT gateways**, select **+ Create**. + | Enable TCP reset | Select **checkbox** . | + | Enable Floating IP | Leave the default of unselected. | -3. In **Create network address translation (NAT) gateway**, enter or select the following information: +1. Select **Save**. - | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. 
| - | Resource group | Select **CreateIntLBQS-rg**. | - | **Instance details** | | - | NAT gateway name | Enter **myNATgateway**. | - | Region | Select **West US 3**. | - | Availability zone | Select **None**. | - | Idle timeout (minutes) | Enter **15**. | --4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page. +1. Select the blue **Review + create** button at the bottom of the page. -5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**. --6. Enter **myNATgatewayIP** in **Name** in **Add a public IP address**. --7. Select **OK**. --8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page. --9. In **Virtual network**, select **myVNet**. --10. Select **myBackendSubnet** under **Subnet name**. --11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab. --12. Select **Create**. +1. Select **Create**. ## Create virtual machines -In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**). +In this section, you create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**). These VMs are added to the backend pool of the load balancer that was created earlier. These VMs are added to the backend pool of the load balancer that was created ea | Resource Group | Select **CreateIntLBQS-rg** | | **Instance details** | | | Virtual machine name | Enter **myVM1** |- | Region | Select **(US) West US 3** | + | Region | Select **(US) East US** | | Availability Options | Select **Availability zones** | | Availability zone | Select **1** | | Security type | Select **Standard**. | These VMs are added to the backend pool of the load balancer that was created ea ## Create test virtual machine -In this section, you'll create a VM named **myTestVM**. This VM will be used to test the load balancer configuration. 
+In this section, you create a VM named **myTestVM**. This VM is used to test the load balancer configuration. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. In this section, you'll create a VM named **myTestVM**. This VM will be used to | Resource Group | Select **CreateIntLBQS-rg** | | **Instance details** | | | Virtual machine name | Enter **myTestVM** |- | Region | Select **(US) West US 3** | + | Region | Select **(US) East US** | | Availability Options | Select **No infrastructure redundancy required** | | Security type | Select **Standard**. |- | Image | Select **Windows Server 2019 Datacenter - Gen2** | + | Image | Select **Windows Server 2022 Datacenter - x64 Gen2** | | Azure Spot instance | Leave the default of unselected. | | Size | Choose VM size or take default setting | | **Administrator account** | | In this section, you'll create a VM named **myTestVM**. This VM will be used to ## Test the load balancer -In this section, you'll test the load balancer by connecting to the **myTestVM** and verifying the webpage. +In this section, you test the load balancer by connecting to the **myTestVM** and verifying the webpage. 1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. In this section, you'll test the load balancer by connecting to the **myTestVM** 7. Enter the username and password entered during VM creation. -8. Open **Internet Explorer** on **myTestVM**. +8. Open **Microsoft Edge** on **myTestVM**. 9. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser. In this example, it's **10.1.0.4**. :::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot shows a browser window displaying the customized page, as expected." 
border="true"::: -To see the load balancer distribute traffic across both VMs, you can force-refresh your web browser from the client machine. +1. To see the load balancer distribute traffic across both VMs, navigate to the VM shown in the browser message, and stop the VM. +1. Refresh the browser window. The browser should still display the customized page. The load balancer is now only sending traffic to the remaining VM. ## Clean up resources When no longer needed, delete the resource group, load balancer, and all related In this quickstart, you: -* Created an internal Azure Load Balancer +- Created an internal Azure Load Balancer -* Attached 2 VMs to the load balancer +- Attached 2 VMs to the load balancer -* Configured the load balancer traffic rule, health probe, and then tested the load balancer +- Configured the load balancer traffic rule, health probe, and then tested the load balancer To learn more about Azure Load Balancer, continue to: > [!div class="nextstepaction"] |
load-balancer | Quickstart Load Balancer Standard Internal Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-powershell.md | $gwpublicip = New-AzPublicIpAddress @gwpublicip * Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network - > [!IMPORTANT] -- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] -- > +> [!IMPORTANT] +> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] ```azurepowershell-interactive |
load-balancer | Quickstart Load Balancer Standard Public Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md | Create a network security group rule using [az network nsg rule create](/cli/azu In this section, you'll create the resources for Azure Bastion. Azure Bastion is used to securely manage the virtual machines in the backend pool of the load balancer. > [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] -> - ### Create a public IP address Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a public ip address for the bastion host. The public IP is used by the bastion host for secure access to the virtual machine resources. |
load-balancer | Quickstart Load Balancer Standard Public Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-powershell.md | Use a NAT gateway to provide outbound internet access to resources in the backen * Use [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) to associate the NAT gateway to the subnet of the virtual network > [!IMPORTANT]- > [!INCLUDE [Pricing](../../includes/bastion-pricing.md)] -> - ```azurepowershell-interactive ## Create public IP address for NAT gateway ## $ip = @{ |
load-balancer | Troubleshoot Outbound Connection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-outbound-connection.md | Title: Troubleshoot common outbound connectivity issues with Azure Load Balancer - -description: In this article, learn to troubleshoot for common problems with outbound connectivity with Azure Load Balancer. This includes most common issues of SNAT exhaustion and connection timeouts. + Title: Troubleshoot Azure Load Balancer outbound connectivity issues +description: Learn troubleshooting guidance for outbound connections in Azure Load Balancer. This includes issues of SNAT exhaustion and connection timeouts. Previously updated : 05/22/2023 Last updated : 08/24/2023 -# Troubleshoot common outbound connectivity issues with Azure Load Balancer +# Troubleshoot Azure Load Balancer outbound connectivity issues -This article provides troubleshooting guidance for common problems that can occur with outbound connections from an Azure Load Balancer. Most problems with outbound connectivity that customers experience is due to source network address translation (SNAT) port exhaustion and connection timeouts leading to dropped packets. +Learn troubleshooting guidance for outbound connections in Azure Load Balancer. This includes understanding source network address translation (SNAT) and its impact on connections, using individual public IPs on VMs, and designing applications for connection efficiency to avoid SNAT port exhaustion. Most problems with outbound connectivity that customers experience are due to SNAT port exhaustion and connection timeouts leading to dropped packets. To learn more about SNAT ports, see [Source Network Address Translation for outbound connections](load-balancer-outbound-connections.md). Follow [Standard load balancer diagnostics with metrics, alerts, and resource he It's important to optimize your Azure deployments for outbound connectivity. 
Optimization can prevent or alleviate issues with outbound connectivity. -### Use a NAT gateway for outbound connectivity to the Internet +### Deploy NAT gateway for outbound Internet connectivity Azure NAT Gateway is a highly resilient and scalable Azure service that provides outbound connectivity to the internet from your virtual network. A NAT gateway's unique method of consuming SNAT ports helps resolve common SNAT exhaustion and connection issues. For more information about Azure NAT Gateway, see [What is Azure NAT Gateway?](../virtual-network/nat-gateway/nat-overview.md). To learn more about default outbound access and default port allocation, see [So To increase the number of available SNAT ports per VM, configure outbound rules with manual port allocation on your load balancer. For example, if you know you have a maximum of 10 VMs in your backend pool, you can allocate up to 6,400 SNAT ports per VM rather than the default 1,024. If you need more SNAT ports, you can add multiple frontend IP addresses for outbound connections to multiply the number of SNAT ports available. Make sure you understand why you're exhausting SNAT ports before adding more frontend IP addresses. -For detailed guidance, see [Design your applications to use connections efficiently](#design-your-applications-to-use-connections-efficiently) later in this article. To add more IP addresses for outbound connections, create a frontend IP configuration for each new IP. When outbound rules are configured, you're able to select multiple frontend IP configurations for a backend pool. It's recommended to use different IP addresses for inbound and outbound connectivity. Different IP addresses isolate traffic for improved monitoring and troubleshooting. +For detailed guidance, see [Design your applications to use connections efficiently](#design-connection-efficient-applications) later in this article. 
To add more IP addresses for outbound connections, create a frontend IP configuration for each new IP. When outbound rules are configured, you're able to select multiple frontend IP configurations for a backend pool. It's recommended to use different IP addresses for inbound and outbound connectivity. Different IP addresses isolate traffic for improved monitoring and troubleshooting. ### Configure an individual public IP on VM We highly recommend considering utilizing NAT gateway instead, as assigning indi > >Private Link is the recommended option over service endpoints for private access to Azure hosted services. For more information on the difference between Private Link and service endpoints, see [Compare Private Endpoints and Service Endpoints](../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). -## Design your applications to use connections efficiently +## Design connection-efficient applications When you design your applications, ensure they use connections efficiently. Connection efficiency can reduce or eliminate SNAT port exhaustion in your deployed applications. |
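The per-VM SNAT port arithmetic in the outbound-rule guidance above can be sketched in a few lines. This is a minimal illustration, assuming the documented pool of 64,000 SNAT ports per frontend public IP address; the helper name is hypothetical, not part of any Azure tooling:

```python
PORTS_PER_FRONTEND_IP = 64_000  # SNAT port pool each frontend public IP provides

def max_snat_ports_per_vm(frontend_ips: int, backend_vms: int) -> int:
    """Upper bound for manual SNAT port allocation per backend VM."""
    return (PORTS_PER_FRONTEND_IP * frontend_ips) // backend_vms

# The article's example: 10 VMs behind one frontend IP can each be
# allocated up to 6,400 ports instead of the default 1,024.
print(max_snat_ports_per_vm(1, 10))  # 6400
# Adding a second frontend IP for outbound multiplies the available pool.
print(max_snat_ports_per_vm(2, 10))  # 12800
```

This also shows why adding frontend IPs only postpones exhaustion: the pool grows linearly, so understanding why ports run out matters more than adding IPs.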
load-testing | Concept Azure Load Testing Vnet Injection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-azure-load-testing-vnet-injection.md | -In this article, you'll learn about the scenarios for deploying Azure Load Testing in a virtual network (VNET). This deployment is sometimes called VNET injection. +In this article, you learn about the scenarios for deploying Azure Load Testing in a virtual network (VNET). This deployment is sometimes called VNET injection. This functionality enables the following usage scenarios: |
load-testing | How To Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-customer-managed-keys.md | Azure Load Testing uses the customer-managed key to encrypt the following data i - Once customer-managed key encryption is enabled on a resource, it can't be disabled. +- If the customer-managed key is stored in an Azure Key Vault behind a firewall, public access should be enabled on the firewall to allow Azure Load Testing to access the key. + ## Configure your Azure key vault To use customer-managed encryption keys with Azure Load Testing, you need to store the key in Azure Key Vault. You can use an existing key vault or create a new one. The load testing resource and key vault may be in different regions or subscriptions in the same tenant. |
load-testing | How To High Scale Load | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-high-scale-load.md | Title: Configure Azure Load Testing for high-scale load tests + Title: Configure high-scale load tests -description: Learn how to configure Azure Load Testing to run high-scale load tests by simulating large amounts of virtual users. +description: Learn how to configure test engine instances in Azure Load Testing to run high-scale load tests. Monitor engine health metrics to find an optimal configuration for your load test. Previously updated : 07/18/2022 Last updated : 08/22/2023 # Configure Azure Load Testing for high-scale load -In this article, learn how to set up a load test for high-scale load with Azure Load Testing. --Configure multiple test engine instances to scale out the number of virtual users for your load test and simulate a high number of requests per second. To achieve an optimal load distribution, you can monitor the test instance health metrics in the Azure Load Testing dashboard. +In this article, you learn how to configure your load test for high-scale with Azure Load Testing. Configure multiple test engine instances to scale out the number of virtual users for your load test and simulate a high number of requests per second. To achieve an optimal load distribution, you can monitor the test instance health metrics in the Azure Load Testing dashboard. ## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -- An existing Azure Load Testing resource. To create an Azure Load Testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md).+- An existing Azure load testing resource. To create an Azure load testing resource, see the quickstart [Create and run a load test](./quickstart-create-and-run-load-test.md). 
## Determine requests per second To achieve a target number of requests per second, configure the total number of ## Test engine instances and virtual users -In the Apache JMeter script, you can specify the number of parallel threads. Each thread represents a virtual user that accesses the application endpoint in parallel. We recommend that you keep the number of threads in a script below a maximum of 250. +In the Apache JMeter script, you can specify the number of parallel threads. Each thread represents a virtual user that accesses the application endpoint. We recommend that you keep the number of threads in a script below a maximum of 250. -In Azure Load Testing, *test engine* instances are responsible for running the Apache JMeter script. You can configure the number of instances for a load test. All test engine instances run in parallel. +In Azure Load Testing, *test engine* instances are responsible for running the Apache JMeter script. All test engine instances run in parallel. You can configure the number of instances for a load test. The total number of virtual users for a load test is then: VUs = (# threads) * (# test engine instances). For example, to simulate 1,000 virtual users, set the number of threads in the A The location of the Azure Load Testing resource determines the location of the test engine instances. All test engine instances within a Load Testing resource are hosted in the same Azure region. -## Configure your test plan +## Configure test engine instances ++You can specify the number of test engine instances for each test. Your test script runs in parallel across each of these instances to simulate load to your application. ++To configure the number of instances for a test: -In this section, you configure the scaling settings of your load test. +# [Azure portal](#tab/portal) 1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription. 
In this section, you configure the scaling settings of your load test. 1. Select **Apply** to modify the test and use the new configuration when you rerun it. +# [Azure Pipelines / GitHub Actions](#tab/pipelines+github) ++For CI/CD workflows, you configure the number of engine instances in the [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository. ++1. Open the YAML test configuration file for your load test in your editor of choice. ++1. Configure the number of test engine instances in the `engineInstances` setting. ++ The following example configures a load test that runs across 10 parallel test engine instances. ++ ```yaml + version: v0.1 + testId: SampleTestCICD + displayName: Sample test from CI/CD + testPlan: SampleTest.jmx + description: Load test website home page + engineInstances: 10 + ``` ++1. Save the YAML configuration file, and commit the changes to source control. +++ ## Monitor engine instance metrics -To make sure that the test engine instances themselves aren't a performance bottleneck, you can monitor resource metrics of the test engine instance. A high resource usage for a test instance might negatively influence the results of the load test. +To make sure that the test engine instances themselves aren't a performance bottleneck, you can monitor resource metrics of the test engine instance. A high resource usage for a test instance might negatively influence the results of the load test. Azure Load Testing reports four resource metrics for each instance: To view the engine resource metrics: ### Troubleshoot unhealthy engine instances -If one or multiple instances show a high resource usage, it could impact the test results. To resolve the issue, try one or more of the following steps: +If one or multiple instances show a high resource usage, it could affect the test results. 
To resolve the issue, try one or more of the following steps: - Reduce the number of threads (virtual users) per test engine. To achieve a target number of virtual users, you might increase the number of engine instances for the load test. - Ensure that your script is effective, with no redundant code. -- If the engine health status is unknown, re-run the test.+- If the engine health status is unknown, rerun the test. ## Next steps |
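The sizing rule in the high-scale article above (total virtual users = threads × test engine instances, with a recommended ceiling of 250 threads per engine) can be sketched as a quick calculation. This is an illustrative helper under those stated assumptions, not part of the Azure Load Testing tooling:

```python
import math

MAX_THREADS_PER_ENGINE = 250  # recommended ceiling for JMeter threads per engine

def engines_needed(target_vus: int, threads: int = MAX_THREADS_PER_ENGINE) -> int:
    """Smallest engine count that reaches target_vus without exceeding the thread cap."""
    return math.ceil(target_vus / threads)

# The article's example: simulate 1,000 virtual users.
engines = engines_needed(1000)
print(engines)                           # 4 engine instances
print(engines * MAX_THREADS_PER_ENGINE)  # 1000 total virtual users
```

If an engine shows high resource usage, lowering `threads` and recomputing the engine count is exactly the first remediation step the troubleshooting list describes.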
load-testing | How To Test Private Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-test-private-endpoint.md | Follow these steps to [update the subnet settings](/azure/virtual-network/virtua ### Starting the load test fails with `Management Lock is enabled on Resource Group of VNET (ALTVNET015)` If there is a lock on the resource group that contains the virtual network, the service can't inject the test engine virtual machines in your virtual network. Remove the management lock before running the load test. Learn how to [configure locks in the Azure portal](/azure/azure-resource-manager/management/lock-resources?tabs=json#configure-locks).- ++### Starting the load test fails with `Insufficient public IP address quota in VNET subscription (ALTVNET016)` ++When you start the load test, Azure Load Testing injects the following Azure resources in the virtual network that contains the application endpoint: ++- The test engine virtual machines. These VMs invoke your application endpoint during the load test. +- A public IP address. +- A network security group (NSG). +- An Azure Load Balancer. ++Ensure that you have quota for at least one public IP address available in your subscription to use in the load test. ++### Starting the load test fails with `Subnet with name "AzureFirewallSubnet" cannot be used for load testing (ALTVNET017)` ++The subnet *AzureFirewallSubnet* is reserved and you can't use it for Azure Load Testing. Select another subnet for your load test. + ## Next steps - Learn more about the [scenarios for deploying Azure Load Testing in a virtual network](./concept-azure-load-testing-vnet-injection.md). |
logic-apps | Add Artifacts Integration Service Environment Ise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/add-artifacts-integration-service-environment-ise.md | After you create an [integration service environment (ISE)](../logic-apps/connec * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* The ISE that you created to run your logic apps. If you don't have an ISE, [create an ISE first](../logic-apps/connect-virtual-network-vnet-isolated-environment.md). +* The ISE that you created to run your Consumption logic app workflows. * To create, add, or update resources that are deployed to an ISE, you need to be assigned the Owner or Contributor role on that ISE, or have permissions inherited through the Azure subscription or Azure resource group associated with the ISE. People who don't have owner, contributor, or inherited permissions can be assigned the Integration Service Environment Contributor role or Integration Service Environment Developer role. For more information, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)? |
logic-apps | Connect Virtual Network Vnet Isolated Environment Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md | Last updated 11/04/2022 > On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), > which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard > logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) +> Logic Apps and provide the same capabilities plus more. For example, Standard workflows support using private +> endpoints for inbound traffic so that your workflows can communicate privately and securely with virtual +> networks.
Standard workflows also support virtual network integration for outbound traffic. For more information, +> review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). -Sometimes, your logic app workflows need access to protected resources, such as virtual machines (VMs) and other systems or services, that are inside or connected to an Azure virtual network. To directly access these resources from workflows that usually run in multi-tenant Azure Logic Apps, you can create and run your logic apps in an *integration service environment* (ISE) instead. An ISE is actually an instance of Azure Logic Apps that runs separately on dedicated resources, apart from the global multi-tenant Azure environment, and doesn't [store, process, or replicate data outside the region where you deploy the ISE](https://azure.microsoft.com/global-infrastructure/data-residency#select-geography). +Since November 1, 2022, the capability to create new ISE resources is no longer available, which also means that capability to set up your own encryption keys, known as "Bring Your Own Key" (BYOK), during ISE creation using the Logic Apps REST API is also no longer available. However, ISE resources existing before this date are supported through August 31, 2024. -For example, some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md)) for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database, partner services, or customer services that are hosted on Azure. 
If your logic app workflows require access to virtual networks that use private endpoints, you have these options: +For more information, see the following resources: -* If you want to develop workflows using the **Logic App (Consumption)** resource type, and your workflows need to use private endpoints, you *must* create, deploy, and run your logic apps in an ISE. For more information, review [Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment.md). +- [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) +- [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) +- [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) +- [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) +- [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) +- [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) -* If you want to develop workflows using the **Logic App (Standard)** resource type, and your workflows need to use private endpoints, you don't need an ISE. Instead, your workflows can communicate privately and securely with virtual networks by using private endpoints for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). 
+This overview provides more information about [how an ISE works with a virtual network](#how-ise-works), the [benefits of using an ISE](#benefits), the [differences between the dedicated and multi-tenant Logic Apps service](#difference), and how you can directly access resources that are inside or connected to your Azure virtual network. -For more information, review the [differences between multi-tenant Azure Logic Apps and integration service environments](logic-apps-overview.md#resource-environment-differences). <a name="how-ise-works"></a> ## How an ISE works with a virtual network -When you create an ISE, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location for those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/). --![Select integration service environment](./media/connect-virtual-network-vnet-isolated-environment-overview/select-logic-app-integration-service-environment.png) --For more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". For more information, review [Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps](../logic-apps/customer-managed-keys-integration-service-environment.md). +At ISE creation, you select the Azure virtual network where you want Azure to *inject* or deploy your ISE. 
When you create logic apps and integration accounts that need access to this virtual network, you can select your ISE as the host location for those logic apps and integration accounts. Inside the ISE, logic apps run on dedicated resources separately from others in the multi-tenant Azure Logic Apps environment. Data in an ISE stays in the [same region where you create and deploy that ISE](https://azure.microsoft.com/global-infrastructure/data-residency/). -This overview provides more information about [why you'd want to use an ISE](#benefits), the [differences between the dedicated and multi-tenant Logic Apps service](#difference), and how you can directly access resources that are inside or connected your Azure virtual network. +![Screenshot shows Azure portal with integration service environment selected.](./media/connect-virtual-network-vnet-isolated-environment-overview/select-logic-app-integration-service-environment.png) <a name="benefits"></a> ## Why use an ISE -Running logic apps in your own separate dedicated instance helps reduce the impact that other Azure tenants might have on your apps' performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). An ISE also provides these benefits: +Running logic app workflows in your own separate dedicated instance helps reduce the impact that other Azure tenants might have on your apps' performance, also known as the ["noisy neighbors" effect](https://en.wikipedia.org/wiki/Cloud_computing_issues#Performance_interference_and_noisy_neighbors). 
An ISE also provides these benefits: * Direct access to resources that are inside or connected to your virtual network When you create and run logic apps in an ISE, you get the same user experiences ## Access to on-premises systems -Logic apps that run inside an ISE can directly access on-premises systems and data sources that are inside or connected to an Azure virtual network by using these items:<p> +Logic app workflows that run inside an ISE can directly access on-premises systems and data sources that are inside or connected to an Azure virtual network by using these items:<p> * The HTTP trigger or action, which displays the **CORE** label Logic apps that run inside an ISE can directly access on-premises systems and da To access on-premises systems and data sources that don't have ISE connectors, are outside your virtual network, or aren't connected to your virtual network, you still have to use the on-premises data gateway. Logic apps within an ISE can continue using connectors that don't have the **CORE** or **ISE** label. Those connectors run in the multi-tenant Logic Apps service, rather than in your ISE. +<a name="data-at-rest"></a> ++## Encrypted data at rest ++By default, Azure Storage uses Microsoft-managed keys to encrypt your data. Azure Logic Apps relies on Azure Storage to store and automatically [encrypt data at rest](../storage/common/storage-service-encryption.md). This encryption protects your data and helps you meet your organizational security and compliance commitments. For more information about how Azure Storage encryption works, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md) and [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md). ++For more control over the encryption keys used by Azure Storage, ISE supports using and managing your own key using [Azure Key Vault](../key-vault/general/overview.md). 
This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". However, this capability is available *only when you create your ISE*, not afterwards. You can't disable this key after your ISE is created. Currently, no support exists for rotating a customer-managed key for an ISE. ++* Customer-managed key support for an ISE is available only in the following regions: ++ * Azure: West US 2, East US, and South Central US. ++ * Azure Government: Arizona, Virginia, and Texas. ++* The key vault that stores your customer-managed key must exist in the same Azure region as your ISE. ++* To support customer-managed keys, your ISE requires that you enable either the [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials. ++* You must give your key vault access to your ISE's managed identity, but the timing depends on which managed identity that you use. ++ * **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE. Otherwise, ISE creation fails, and you get a permissions error. ++ * **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE + <a name="ise-level"></a> ## ISE SKUs When you create your ISE, you can select the Developer SKU or Premium SKU. This > [!IMPORTANT] > This SKU has no service-level agreement (SLA), scale up capability, - > or redundancy during recycling, which means that you might experience delays or downtime. Backend updates might intermittently interrupt service. + > or redundancy during recycling, which means that you might experience + > delays or downtime. 
Backend updates might intermittently interrupt service. For capacity and limits information, see [ISE limits in Azure Logic Apps](logic-apps-limits-and-config.md#integration-service-environment-ise). To learn how billing works for ISEs, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). When you create your ISE, you can select the Developer SKU or Premium SKU. This ## ISE endpoint access -When you create your ISE, you can choose to use either internal or external access endpoints. Your selection determines whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. These endpoints also affect the way that you can access the inputs and outputs from your logic apps' runs history. +During ISE creation, you can choose to use either internal or external access endpoints. Your selection determines whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. These endpoints also affect the way that you can access the inputs and outputs from your logic apps' runs history. > [!IMPORTANT] > You can select the access endpoint only during ISE creation and can't change this option later. -* **Internal**: Private endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic apps' runs history *only from inside your virtual network*. +* **Internal**: Private endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic app workflow run history *only from inside your virtual network*. 
> [!IMPORTANT] > If you need to use these webhook-based triggers, and the service is outside your virtual network and When you create your ISE, you can choose to use either internal or external acce > * SAP (multi-tenant version) > > Also, make sure that you have network connectivity between the private endpoints and the computer from - > where you want to access the run history. Otherwise, when you try to view your logic app's run history, + > where you want to access the run history. Otherwise, when you try to view your workflow's run history, > you get an error that says "Unexpected error. Failed to fetch". >- > ![Azure Storage action error resulting from inability to send traffic through firewall](./media/connect-virtual-network-vnet-isolated-environment-overview/integration-service-environment-error.png) + > ![Screenshot shows Azure portal and Azure Storage action error resulting from inability to send traffic through firewall.](./media/connect-virtual-network-vnet-isolated-environment-overview/integration-service-environment-error.png) > > For example, your client computer can exist inside the ISE's virtual network or inside a virtual network that's connected to the ISE's virtual network through peering or a virtual private network. -* **External**: Public endpoints permit calls to logic apps in your ISE where you can view and access inputs and outputs from logic apps' runs history *from outside your virtual network*. If you use network security groups (NSGs), make sure they're set up with inbound rules to allow access to the run history's inputs and outputs. For more information, see [Enable access for ISE](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#enable-access). +* **External**: Public endpoints permit calls to logic app workflows in your ISE where you can view and access inputs and outputs from logic apps' runs history *from outside your virtual network*. 
If you use network security groups (NSGs), make sure they're set up with inbound rules to allow access to the run history's inputs and outputs. To determine whether your ISE uses an internal or external access endpoint, on your ISE's menu, under **Settings**, select **Properties**, and find the **Access endpoint** property: -![Find ISE access endpoint](./media/connect-virtual-network-vnet-isolated-environment-overview/find-ise-access-endpoint.png) +![Screenshot shows Azure portal, ISE menu, with the options selected for Settings, Properties, and Access endpoint.](./media/connect-virtual-network-vnet-isolated-environment-overview/find-ise-access-endpoint.png) <a name="pricing-model"></a> ## Pricing model -Logic apps, built-in triggers, built-in actions, and connectors that run in your ISE use a fixed pricing plan that differs from the consumption-based pricing plan. For more information, see [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). +Logic apps, built-in triggers, built-in actions, and connectors that run in your ISE use a fixed pricing plan that differs from the Consumption pricing plan. For more information, see [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). <a name="create-integration-account-environment"></a> ## Integration accounts with ISE -You can use integration accounts with logic apps inside an integration service environment (ISE). However, those integration accounts must use the *same ISE* as the linked logic apps. Logic apps in an ISE can reference only those integration accounts that are in the same ISE. When you create an integration account, you can select your ISE as the location for your integration account. 
To learn how pricing and billing work for integration accounts with an ISE, see the [Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For limits information, see [Integration account limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). +You can use integration accounts with logic apps inside an integration service environment (ISE). However, those integration accounts must use the *same ISE* as the linked logic apps. Logic apps in an ISE can reference only those integration accounts that are in the same ISE. When you create an integration account, you can select your ISE as the location for your integration account. To learn how pricing and billing work for integration accounts with an ISE, see the [Azure Logic Apps pricing model](../logic-apps/logic-apps-pricing.md#ise-pricing). For pricing rates, see [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). For limits information, see [Integration account limits](../logic-apps/logic-apps-limits-and-config.md#integration-account-limits). ## Next steps -* [Connect to Azure virtual networks from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment.md) -* Learn more about [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) -* Learn about [virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md) +* [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) |
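The overview above notes that an ISE with external endpoints needs NSG inbound rules before clients outside the virtual network can reach run history inputs and outputs. As a rough illustration only (not Azure tooling — the helper name and prefixes here are hypothetical), the containment check that such an inbound rule performs can be sketched with Python's standard `ipaddress` module:

```python
import ipaddress

def rule_allows(source_prefixes, client_ip):
    """Return True if client_ip falls inside any allowed source prefix,
    mirroring the containment check an NSG inbound rule performs."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in ipaddress.ip_network(prefix) for prefix in source_prefixes)

# Hypothetical inbound rule allowing a corporate range to read run history (port 443).
allowed = ["203.0.113.0/24", "198.51.100.0/25"]
print(rule_allows(allowed, "203.0.113.45"))   # True: covered by the first prefix
print(rule_allows(allowed, "192.0.2.10"))     # False: no rule covers this client
```

A client outside every source prefix is the situation that produces the "Unexpected error. Failed to fetch" behavior described above when viewing run history.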
logic-apps | Connect Virtual Network Vnet Isolated Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connect-virtual-network-vnet-isolated-environment.md | - Title: Connect to Azure virtual networks with an ISE -description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) from Azure Logic Apps. --- Previously updated : 11/04/2022---# Connect to Azure virtual networks from Azure Logic Apps using an integration service environment (ISE) --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. 
For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --For scenarios where Consumption logic app resources and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), create an [*integration service environment* (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). An ISE is an environment that uses dedicated storage and other resources that are kept separate from the "global" multi-tenant Azure Logic Apps. This separation also reduces any impact that other Azure tenants might have on your apps' performance. An ISE also provides you with your own static IP addresses. These IP addresses are separate from the static IP addresses that are shared by the logic apps in the public, multi-tenant service. --When you create an ISE, Azure *injects* that ISE into your Azure virtual network, which then deploys Azure Logic Apps into your virtual network. When you create a logic app or integration account, select your ISE as their location. 
Your logic app or integration account can then directly access resources, such as virtual machines (VMs), servers, systems, and services, in your virtual network. --![Select integration service environment](./media/connect-virtual-network-vnet-isolated-environment/select-logic-app-integration-service-environment.png) --> [!IMPORTANT] -> For logic apps and integration accounts to work together in an ISE, both must use the *same ISE* as their location. --An ISE has increased limits on: --* Run duration -* Storage retention -* Throughput -* HTTP request and response timeouts -* Message sizes -* Custom connector requests --For more information, review [Limits and configuration for Azure Logic Apps](logic-apps-limits-and-config.md). To learn more about ISEs, review [Access to Azure Virtual Network resources from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment-overview.md). --This article shows you how to complete these tasks by using the Azure portal: --* Enable access for your ISE. -* Create your ISE. -* Add extra capacity to your ISE. --You can also create an ISE by using the [sample Azure Resource Manager quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/integration-service-environment) or by using the Azure Logic Apps REST API, including setting up customer-managed keys: --* [Create an integration service environment (ISE) by using the Azure Logic Apps REST API](create-integration-service-environment-rest-api.md) -* [Set up customer-managed keys to encrypt data at rest for ISEs](customer-managed-keys-integration-service-environment.md) --## Prerequisites --* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). 
-- > [!IMPORTANT] - > Logic app workflows, built-in triggers, built-in actions, and connectors that run in your ISE use a pricing plan - > that differs from the Consumption pricing plan. To learn how pricing and billing work for ISEs, review the - > [Azure Logic Apps pricing model](logic-apps-pricing.md#ise-pricing). - > For pricing rates, review [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/). --* An [Azure virtual network](../virtual-network/virtual-networks-overview.md) that has four *empty* subnets, which are required for creating and deploying resources in your ISE and are used by these internal and hidden components: -- * Azure Logic Apps Compute - * Internal App Service Environment (connectors) - * Internal API Management (connectors) - * Internal Redis for caching and performance - - You can create the subnets in advance, or you can create them at the same time as your ISE. However, before you create your subnets, make sure that you review the [subnet requirements](#create-subnet). -- * The Developer ISE SKU uses three subnets, but you still have to create four subnets. The fourth subnet doesn't incur any extra cost. -- * Make sure that your virtual network [enables access for your ISE](#enable-access) so that your ISE can work correctly and stay accessible. -- * If you use a [network virtual appliance (NVA)](../virtual-network/virtual-networks-udr-overview.md#user-defined), make sure that you don't enable TLS/SSL termination or change the outbound TLS/SSL traffic. Also, make sure that you don't enable inspection for traffic that originates from your ISE's subnet. For more information, review [Virtual network traffic routing](../virtual-network/virtual-networks-udr-overview.md). 
-- * If you want to use custom Domain Name System (DNS) servers for your Azure virtual network, [set up those servers by following these steps](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md) before you deploy your ISE to your virtual network. For more information about managing DNS server settings, review [Create, change, or delete a virtual network](../virtual-network/manage-virtual-network.md#change-dns-servers). -- > [!NOTE] - > If you change your DNS server or DNS server settings, you must restart your ISE so that the ISE can pick up those changes. For more information, review [Restart your ISE](ise-manage-integration-service-environment.md#restart-ISE). --<a name="enable-access"></a> --## Enable access for ISE --When you use an ISE with an Azure virtual network, a common setup problem is having one or more blocked ports. The connectors that you use for creating connections between your ISE and destination systems might also have their own port requirements. For example, if you communicate with an FTP system by using the FTP connector, the port that you use on your FTP system needs to be available, for example, port 21 for sending commands. --To make sure that your ISE is accessible and that the logic apps in that ISE can communicate across each subnet in your virtual network, [open the ports described in this table for each subnet](#network-ports-for-ise). If any required ports are unavailable, your ISE won't work correctly. --* If you have multiple ISE instances that need access to other endpoints that have IP restrictions, deploy an [Azure Firewall](../firewall/overview.md) or a [network virtual appliance](../virtual-network/virtual-networks-overview.md#filter-network-traffic) into your virtual network and route outbound traffic through that firewall or network virtual appliance. 
You can then [set up a single, outbound, public, static, and predictable IP address](connect-virtual-network-vnet-set-up-single-ip-address.md) that all the ISE instances in your virtual network can use to communicate with destination systems. That way, you don't have to set up extra firewall openings at those destination systems for each ISE. -- > [!NOTE] - > You can use this approach for a single ISE when your scenario requires limiting the - > number of IP addresses that need access. Consider whether the extra costs for - > the firewall or virtual network appliance make sense for your scenario. Learn more about - > [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/). --* If you created a new Azure virtual network and subnets without any constraints, you don't need to set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) in your virtual network to control traffic across subnets. --* For an existing virtual network, you can *optionally* set up [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md#network-security-groups) to [filter network traffic across subnets](../virtual-network/tutorial-filter-network-traffic.md). If you want to go this route, or if you're already using NSGs, make sure that you [open the ports described in this table](#network-ports-for-ise) for those NSGs. -- When you set up [NSG security rules](../virtual-network/network-security-groups-overview.md#security-rules), you need to use *both* the **TCP** and **UDP** protocols, or you can select **Any** instead so you don't have to create separate rules for each protocol. NSG security rules describe the ports that you must open for the IP addresses that need access to those ports. Make sure that any firewalls, routers, or other items that exist between these endpoints also keep those ports accessible to those IP addresses. 
--* If you set up forced tunneling through your firewall to redirect Internet-bound traffic, review the [forced tunneling requirements](#forced-tunneling). --<a name="network-ports-for-ise"></a> --### Network ports used by your ISE --This table describes the ports that your ISE requires to be accessible and the purpose for those ports. To help reduce complexity when you set up security rules, the table uses [service tags](../virtual-network/service-tags-overview.md) that represent groups of IP address prefixes for a specific Azure service. Where noted, *internal ISE* and *external ISE* refer to the [access endpoint that's selected during ISE creation](connect-virtual-network-vnet-isolated-environment.md#create-environment). For more information, review [Endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). --> [!IMPORTANT] -> -> For all rules, make sure that you set source ports to `*` because source ports are ephemeral. --#### Inbound security rules --| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes | -|--|-||--||-| -| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network. | Required for traffic to flow *between* the subnets in your virtual network. <br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. 
| -| * | 443 | Internal ISE: <br>**VirtualNetwork** <br><br>External ISE: **Internet** or see **Notes** | **VirtualNetwork** | - Communication to your logic app <br><br>- Runs history for your logic app | Rather than use the **Internet** service tag, you can specify the source IP address for these items: <br><br>- The computer or service that calls any request triggers or webhooks in your logic app <br><br>- The computer or service from where you want to access logic app runs history <br><br>**Important**: Closing or blocking this port prevents calls to logic apps that have request triggers or webhooks. You're also prevented from accessing inputs and outputs for each step in runs history. However, you're not prevented from accessing logic app runs history. | -| * | 454 | **LogicAppsManagement** |**VirtualNetwork** | Azure Logic Apps designer - dynamic properties| Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicAppsManagement** service tag won't work. Instead, you have to provide the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) for Azure Government. | -| * | 454 | **LogicApps** | **VirtualNetwork** | Network health check | Requests come from the Azure Logic Apps access endpoint's [inbound IP addresses](logic-apps-limits-and-config.md#inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#outbound) for that region. <br><br>**Important**: If you're working with Azure Government cloud, the **LogicApps** service tag won't work. Instead, you have to provide both the Azure Logic Apps [inbound IP addresses](logic-apps-limits-and-config.md#azure-government-inbound) and [outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. 
| -| * | 454 | **AzureConnectors** | **VirtualNetwork** | Connector deployment | Required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. <br><br>**Important**: If you're working with Azure Government cloud, the **AzureConnectors** service tag won't work. Instead, you have to provide the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#azure-government-outbound) for Azure Government. | -| * | 454, 455 | **AppServiceManagement** | **VirtualNetwork** | App Service Management dependency || -| * | Internal ISE: 454 <br><br>External ISE: 443 | **AzureTrafficManager** | **VirtualNetwork** | Communication from Azure Traffic Manager || -| * | 3443 | **APIManagement** | **VirtualNetwork** | Connector policy deployment <br><br>API Management - management endpoint | For connector policy deployment, port access is required to deploy and update connectors. Closing or blocking this port causes ISE deployments to fail and prevents connector updates and fixes. | -| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). | --#### Outbound security rules --| Source ports | Destination ports | Source service tag or IP addresses | Destination service tag or IP addresses | Purpose | Notes | -|--|-||--||-| -| * | * | Address space for the virtual network with ISE subnets | Address space for the virtual network with ISE subnets | Intersubnet communication within virtual network | Required for traffic to flow *between* the subnets in your virtual network. 
<br><br>**Important**: For traffic to flow between the *components* in each subnet, make sure that you open all the ports within each subnet. | -| * | 443, 80 | **VirtualNetwork** | Internet | Communication from your logic app | This rule is required for Secure Socket Layer (SSL) certificate verification. This check is for various internal and external sites, which is the reason that the Internet is required as the destination. | -| * | Varies based on destination | **VirtualNetwork** | Varies based on destination | Communication from your logic app | Destination ports vary based on the endpoints for the external services with which your logic app needs to communicate. <br><br>For example, the destination port is port 25 for an SMTP service, port 22 for an SFTP service, and so on. | -| * | 80, 443 | **VirtualNetwork** | **AzureActiveDirectory** | Azure Active Directory || -| * | 80, 443, 445 | **VirtualNetwork** | **Storage** | Azure Storage dependency || -| * | 443 | **VirtualNetwork** | **AppService** | Connection management || -| * | 443 | **VirtualNetwork** | **AzureMonitor** | Publish diagnostic logs & metrics || -| * | 1433 | **VirtualNetwork** | **SQL** | Azure SQL dependency || -| * | 1886 | **VirtualNetwork** | **AzureMonitor** | Azure Resource Health | Required for publishing health status to Resource Health. | -| * | 5672 | **VirtualNetwork** | **EventHub** | Dependency from Log to Event Hubs policy and monitoring agent || -| * | 6379 - 6383, plus see **Notes** | **VirtualNetwork** | **VirtualNetwork** | Access Azure Cache for Redis Instances between Role Instances | For ISE to work with Azure Cache for Redis, you must open these [outbound and inbound ports described by the Azure Cache for Redis FAQ](../azure-cache-for-redis/cache-how-to-premium-vnet.md#outbound-port-requirements). 
| -| * | 53 | **VirtualNetwork** | IP addresses for any custom Domain Name System (DNS) servers on your virtual network | DNS name resolution | Required only when you use custom DNS servers on your virtual network | --In addition, you need to add outbound rules for [App Service Environment (ASE)](../app-service/environment/intro.md): --* If you use Azure Firewall, you need to set up your firewall with the App Service Environment (ASE) [fully qualified domain name (FQDN) tag](../firewall/fqdn-tags.md#current-fqdn-tags), which permits outbound access to ASE platform traffic. --* If you use a firewall appliance other than Azure Firewall, you need to set up your firewall with *all* the rules listed in the [firewall integration dependencies](../app-service/environment/firewall-integration.md#dependencies) that are required for App Service Environment. --<a name="forced-tunneling"></a> --#### Forced tunneling requirements --If you set up or use [forced tunneling](../firewall/forced-tunneling.md) through your firewall, you have to permit extra external dependencies for your ISE. Forced tunneling lets you redirect Internet-bound traffic to a designated next hop, such as your virtual private network (VPN) or a virtual appliance, rather than to the Internet, so that you can inspect and audit outbound network traffic. --If you don't permit access for these dependencies, your ISE deployment fails and your deployed ISE stops working. --* User-defined routes -- To prevent asymmetric routing, you must define a route for every IP address that's listed below with **Internet** as the next hop. 
-- * [Azure Logic Apps inbound and outbound addresses for the ISE region](logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags) - * [Azure IP addresses for connectors in the ISE region, available in this download file](https://www.microsoft.com/download/details.aspx?id=56519) - * [App Service Environment management addresses](../app-service/environment/management-addresses.md) - * [Azure Traffic Manager management addresses](https://azuretrafficmanagerdata.blob.core.windows.net/probes/azure/probe-ip-ranges.json) - * [Azure API Management Control Plane IP addresses](../api-management/virtual-network-reference.md#control-plane-ip-addresses) --* Service endpoints -- You need to enable service endpoints for Azure SQL, Storage, Service Bus, KeyVault, and Event Hubs because you can't send traffic through a firewall to these services. --* Other inbound and outbound dependencies -- Your firewall *must* allow the following inbound and outbound dependencies: -- * [Azure App Service Dependencies](../app-service/environment/firewall-integration.md#deploying-your-ase-behind-a-firewall) - * [Azure Cache Service Dependencies](../azure-cache-for-redis/cache-how-to-premium-vnet.md#what-are-some-common-misconfiguration-issues-with-azure-cache-for-redis-and-virtual-networks) - * [Azure API Management Dependencies](../api-management/virtual-network-reference.md) --<a name="create-environment"></a> --## Create your ISE --1. In the [Azure portal](https://portal.azure.com), in the main Azure search box, enter `integration service environments` as your filter, and select **Integration Service Environments**. -- ![Find and select "Integration Service Environments"](./media/connect-virtual-network-vnet-isolated-environment/find-integration-service-environment.png) --1. On the **Integration Service Environments** pane, select **Add**. 
-- ![Select "Add" to create integration service environment](./media/connect-virtual-network-vnet-isolated-environment/add-integration-service-environment.png) --1. Provide these details for your environment, and then select **Review + create**, for example: -- ![Provide environment details](./media/connect-virtual-network-vnet-isolated-environment/integration-service-environment-details.png) -- | Property | Required | Value | Description | - |-|-|-|-| - | **Subscription** | Yes | <*Azure-subscription-name*> | The Azure subscription to use for your environment | - | **Resource group** | Yes | <*Azure-resource-group-name*> | A new or existing Azure resource group where you want to create your environment | - | **Integration service environment name** | Yes | <*environment-name*> | Your ISE name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), and periods (`.`). | - | **Location** | Yes | <*Azure-datacenter-region*> | The Azure datacenter region where to deploy your environment | - | **SKU** | Yes | **Premium** or **Developer (No SLA)** | The ISE SKU to create and use. For differences between these SKUs, review [ISE SKUs](connect-virtual-network-vnet-isolated-environment-overview.md#ise-level). <p><p>**Important**: This option is available only at ISE creation and can't be changed later. | - | **Additional capacity** | Premium: <br>Yes <p><p>Developer: <br>Not applicable | Premium: <br>0 to 10 <p><p>Developer: <br>Not applicable | The number of extra processing units to use for this ISE resource. To add capacity after creation, review [Add ISE capacity](ise-manage-integration-service-environment.md#add-capacity). | - | **Access endpoint** | Yes | **Internal** or **External** | The type of access endpoints to use for your ISE. These endpoints determine whether request or webhook triggers on logic apps in your ISE can receive calls from outside your virtual network. 
<p><p>For example, if you want to use the following webhook-based triggers, make sure that you select **External**: <p><p>- Azure DevOps <br>- Azure Event Grid <br>- Common Data Service <br>- Office 365 <br>- SAP (ISE version) <p><p>Your selection also affects the way that you can view and access inputs and outputs in your logic app runs history. For more information, review [ISE endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access). <p><p>**Important**: You can select the access endpoint only during ISE creation and can't change this option later. | - | **Virtual network** | Yes | <*Azure-virtual-network-name*> | The Azure virtual network where you want to inject your environment so logic apps in that environment can access your virtual network. If you don't have a network, [create an Azure virtual network first](../virtual-network/quick-create-portal.md). <p><p>**Important**: You can *only* perform this injection when you create your ISE. | - | **Subnets** | Yes | <*subnet-resource-list*> | Regardless of whether you use ISE Premium or Developer, your virtual network requires four *empty* subnets for creating and deploying resources in your ISE. These subnets are used by internal Azure Logic Apps components, such as connectors and caching for performance. <p>**Important**: Make sure that you [review the subnet requirements before continuing with these steps to create your subnets](#create-subnet). | - ||||| -- <a name="create-subnet"></a> -- **Create subnets** -- Whether you plan to use ISE Premium or Developer, make sure that your virtual network has four *empty* subnets. These subnets are used for creating and deploying resources in your ISE and are used by internal Azure Logic Apps components, such as connectors and caching for performance. You *can't* change these subnet addresses after you create your environment. 
If you create and deploy your ISE through the Azure portal, make sure that you don't delegate these subnets to any Azure services. However, if you create and deploy your ISE through the REST API, Azure PowerShell, or an Azure Resource Manager template, you need to [delegate](../virtual-network/manage-subnet-delegation.md) one empty subnet to `Microsoft.integrationServiceEnvironment`. For more information, review [Add a subnet delegation](../virtual-network/manage-subnet-delegation.md). -- Each subnet needs to meet these requirements: -- * Uses a name that starts with either an alphabetic character or an underscore (no numbers), and doesn't use these characters: `<`, `>`, `%`, `&`, `\\`, `?`, `/`. -- * Uses the [Classless Inter-Domain Routing (CIDR) format](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing). -- > [!IMPORTANT] - > - > Don't use the following IP address spaces for your virtual network or subnets because they aren't resolvable by Azure Logic Apps:<p> - > - > * 0.0.0.0/8 - > * 100.64.0.0/10 - > * 127.0.0.0/8 - > * 168.63.129.16/32 - > * 169.254.169.254/32 -- * Uses a `/27` in the address space because each subnet requires 32 addresses. For example, `10.0.0.0/27` has 32 addresses because 2<sup>(32-27)</sup> is 2<sup>5</sup> or 32. More addresses won't provide extra benefits. To learn more about calculating addresses, review [IPv4 CIDR blocks](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#IPv4_CIDR_blocks). -- * If you use [ExpressRoute](../expressroute/expressroute-introduction.md), you have to [create a route table](../virtual-network/manage-route-table.md) that has the following route and link that table with each subnet that's used by your ISE: -- **Name**: <*route-name*><br> - **Address prefix**: 0.0.0.0/0<br> - **Next hop**: Internet -- 1. Under the **Subnets** list, select **Manage subnet configuration**. 
-- ![Manage subnet configuration](./media/connect-virtual-network-vnet-isolated-environment/manage-subnet-configuration.png) -- 1. On the **Subnets** pane, select **Subnet**. -- ![Add four empty subnets](./media/connect-virtual-network-vnet-isolated-environment/add-empty-subnets.png) -- 1. On the **Add subnet** pane, provide this information. -- * **Name**: The name for your subnet - * **Address range (CIDR block)**: Your subnet's range in your virtual network and in CIDR format -- ![Add subnet details](./media/connect-virtual-network-vnet-isolated-environment/provide-subnet-details.png) -- 1. When you're done, select **OK**. -- 1. Repeat these steps for three more subnets. -- > [!NOTE] - > If the subnets you try to create aren't valid, the Azure portal shows a message, - > but doesn't block your progress. -- For more information about creating subnets, review [Add a virtual network subnet](../virtual-network/virtual-network-manage-subnet.md). --1. After Azure successfully validates your ISE information, select **Create**, for example: -- ![After successful validation, select "Create"](./media/connect-virtual-network-vnet-isolated-environment/ise-validation-success.png) -- Azure starts deploying your environment, which usually finishes within two hours. Occasionally, deployment might take up to four hours. To check deployment status, on your Azure toolbar, select the notifications icon, which opens the notifications pane. -- ![Check deployment status](./media/connect-virtual-network-vnet-isolated-environment/environment-deployment-status.png) -- If deployment finishes successfully, Azure shows this notification: -- ![Deployment succeeded](./media/connect-virtual-network-vnet-isolated-environment/deployment-success-message.png) -- Otherwise, follow the Azure portal instructions for troubleshooting deployment. 
-- -- > [!NOTE] - > If deployment fails or you delete your ISE, Azure might take up to an hour, - > or possibly longer in rare cases, before releasing your subnets. So, you might - > have to wait before you can reuse those subnets in another ISE. - > - > If you delete your virtual network, Azure generally takes up to two hours - > before releasing your subnets, but this operation might take longer. - > When deleting virtual networks, make sure that no resources are still connected. - > For more information, review [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network). --1. To view your environment, select **Go to resource** if Azure doesn't automatically go to your environment after deployment finishes. --1. For an ISE that has *external* endpoint access, you need to create a network security group (NSG), if you don't have one already. You need to add an inbound security rule to the NSG to allow traffic from managed connector outbound IP addresses. To set up this rule, follow these steps: -- 1. On your ISE menu, under **Settings**, select **Properties**. -- 1. Under **Connector outgoing IP addresses**, copy the public IP address ranges, which also appear in this article, [Limits and configuration - Outbound IP addresses](logic-apps-limits-and-config.md#outbound). -- 1. Create a network security group, if you don't have one already. - - 1. Based on the following information, add an inbound security rule for the public outbound IP addresses that you copied. For more information, review [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md#create-a-network-security-group).
-- | Purpose | Source service tag or IP addresses | Source ports | Destination service tag or IP addresses | Destination ports | Notes | - |||--|--|-|-| - | Permit traffic from connector outbound IP addresses | <*connector-public-outbound-IP-addresses*> | * | Address space for the virtual network with ISE subnets | * | | - ||||||| --1. To check the network health for your ISE, review [Manage your integration service environment](ise-manage-integration-service-environment.md#check-network-health). -- > [!CAUTION] - > If your ISE's network becomes unhealthy, the internal App Service Environment (ASE) that's used by your ISE can also become unhealthy. - > If the ASE is unhealthy for more than seven days, the ASE is suspended. To resolve this state, check your virtual network setup. - > Resolve any problems that you find, and then restart your ISE. Otherwise, after 90 days, the suspended ASE is deleted, and your - > ISE becomes unusable. So, make sure that you keep your ISE healthy to permit the necessary traffic. - > - > For more information, review these topics: - > - > * [Azure App Service diagnostics overview](../app-service/overview-diagnostics.md) - > * [Message logging for Azure App Service Environment](../app-service/environment/using-an-ase.md#logging) --1. To start creating logic apps and other artifacts in your ISE, review [Add resources to integration service environments](add-artifacts-integration-service-environment-ise.md). -- > [!IMPORTANT] - > After you create your ISE, managed ISE connectors become available for you to use, but they don't automatically appear - > in the connector picker on the Logic App Designer. Before you can use these ISE connectors, you have to manually - > [add and deploy these connectors to your ISE](add-artifacts-integration-service-environment-ise.md#add-ise-connectors-environment) - > so that they appear in the Logic App Designer. 
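The inbound security rule described in the table earlier can also be drafted as an ARM-style `securityRules` entry. The following Python sketch only assembles the JSON; the IP ranges and rule name are hypothetical placeholders — substitute the connector outbound addresses that you copied from your ISE's properties and your own virtual network address space:

```python
import json

# Hypothetical placeholders - use the ranges copied from your ISE's
# "Connector outgoing IP addresses" and your virtual network's address space.
connector_outbound_ips = ["203.0.113.0/26", "198.51.100.0/26"]
ise_vnet_address_space = "10.0.0.0/22"

# ARM-style security rule mirroring the row in the earlier table:
# permit all ports and protocols from the connector outbound IPs
# into the virtual network that contains the ISE subnets.
inbound_rule = {
    "name": "Allow-Connector-Outbound-IPs",  # hypothetical rule name
    "properties": {
        "direction": "Inbound",
        "access": "Allow",
        "priority": 100,
        "protocol": "*",
        "sourceAddressPrefixes": connector_outbound_ips,
        "sourcePortRange": "*",
        "destinationAddressPrefix": ise_vnet_address_space,
        "destinationPortRange": "*",
    },
}
print(json.dumps(inbound_rule, indent=2))
```

This fragment fits under a `Microsoft.Network/networkSecurityGroups` resource's `securityRules` array if you manage the NSG through a Resource Manager template instead of the portal steps above.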
--## Next steps --* [Add resources to integration service environments](add-artifacts-integration-service-environment-ise.md) -* [Manage integration service environments](ise-manage-integration-service-environment.md#check-network-health) -* Learn more about [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) -* Learn about [virtual network integration for Azure services](../virtual-network/virtual-network-for-azure-services.md) |
logic-apps | Create Integration Service Environment Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-integration-service-environment-rest-api.md | - Title: Create integration service environments (ISEs) with Logic Apps REST API -description: Create an integration service environment (ISE) to access Azure virtual networks (VNETs) using the Azure Logic Apps REST API. --- Previously updated : 11/04/2022---# Create an integration service environment (ISE) by using the Logic Apps REST API --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. 
For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --For scenarios where your logic apps and integration accounts need access to an [Azure virtual network](../virtual-network/virtual-networks-overview.md), you can create an [*integration service environment* (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) by using the Logic Apps REST API. To learn more about ISEs, see [Access to Azure Virtual Network resources from Azure Logic Apps](connect-virtual-network-vnet-isolated-environment-overview.md). --This article shows you how to create an ISE by using the Logic Apps REST API in general. Optionally, you can also enable a [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) on your ISE, but only by using the Logic Apps REST API at this time. This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials. 
--For more information about other ways to create an ISE, see these articles: --* [Create an ISE by using the Azure portal](../logic-apps/connect-virtual-network-vnet-isolated-environment.md) -* [Create an ISE by using the sample Azure Resource Manager quickstart template](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.logic/integration-service-environment) -* [Create an ISE that supports using customer-managed keys for encrypting data at rest](customer-managed-keys-integration-service-environment.md) --## Prerequisites --* The same [prerequisites](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#prerequisites) and [access requirements](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#enable-access) as when you create an ISE in the Azure portal --* Any additional resources that you want to use with your ISE so that you can include their information in the ISE definition, for example: -- * To enable self-signed certificate support, you need to include information about that certificate in the ISE definition. -- * To enable the user-assigned managed identity, you need to create that identity in advance and include the `objectId`, `principalId` and `clientId` properties and their values in the ISE definition. For more information, see [Create a user-assigned managed identity in the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity). --* A tool that you can use to create your ISE by calling the Logic Apps REST API with an HTTPS PUT request. For example, you can use [Postman](https://www.getpostman.com/downloads/), or you can build a logic app that performs this task. 
-- -- ## Create the ISE -- To create your ISE by calling the Logic Apps REST API, make this HTTPS PUT request: -- `PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` -- > [!IMPORTANT] -> The Logic Apps REST API 2019-05-01 version requires that you make your own HTTPS PUT request for ISE connectors. --Deployment usually takes up to two hours to finish. Occasionally, deployment might take up to four hours. To check deployment status, in the [Azure portal](https://portal.azure.com), on your Azure toolbar, select the notifications icon, which opens the notifications pane. --> [!NOTE] -> If deployment fails or you delete your ISE, Azure might take up to an hour before releasing your subnets. -> This delay means you might have to wait before reusing those subnets in another ISE. -> -> If you delete your virtual network, Azure generally takes up to two hours -> before releasing your subnets, but this operation might take longer. -> When deleting virtual networks, make sure that no resources are still connected. -> See [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network). --## Request header --In the request header, include these properties: --* `Content-type`: Set this property value to `application/json`. --* `Authorization`: Set this property value to the bearer token for the customer who has access to the Azure subscription or resource group that you want to use.
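For illustration, the PUT request and headers above can be assembled with Python's standard library. This sketch only constructs the request and doesn't send it; the subscription ID and bearer token are placeholders, and the resource group and ISE names reuse the Fabrikam sample values from this article:

```python
import urllib.request

# Placeholder values - substitute your own.
subscription_id = "00000000-0000-0000-0000-000000000000"
resource_group = "Fabrikam-RG"
ise_name = "Fabrikam-ISE"
bearer_token = "{access-token}"  # an Azure AD access token for your subscription

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    "/providers/Microsoft.Logic/integrationServiceEnvironments"
    f"/{ise_name}?api-version=2019-05-01"
)

request = urllib.request.Request(
    url,
    method="PUT",
    headers={
        "Content-type": "application/json",
        "Authorization": f"Bearer {bearer_token}",
    },
    data=b"{}",  # replace with the ISE resource definition from the request body section
)
print(request.get_method(), request.full_url)
```

Calling `urllib.request.urlopen(request)` would then submit the deployment; any HTTP client, including Postman as noted in the prerequisites, works the same way.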
-- -- <a name="request-body"></a> --## Request body --In the request body, provide the resource definition to use for creating your ISE, including information for additional capabilities that you want to enable on your ISE, for example: --* To create an ISE that permits using a self-signed certificate and a certificate issued by an Enterprise Certificate Authority that's installed at the `TrustedRoot` location, include the `certificates` object inside the ISE definition's `properties` section, as this article later describes. --* To create an ISE that uses a system-assigned or user-assigned managed identity, include the `identity` object with the managed identity type and other required information in the ISE definition, as this article later describes. --* To create an ISE that uses customer-managed keys and Azure Key Vault to encrypt data at rest, include the [information that enables customer-managed key support](customer-managed-keys-integration-service-environment.md). You can set up customer-managed keys *only at creation*, not afterwards.
--### Request body syntax --Here is the request body syntax, which describes the properties to use when you create your ISE: --```json -{ - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}", - "name": "{ISE-name}", - "type": "Microsoft.Logic/integrationServiceEnvironments", - "location": "{Azure-region}", - "sku": { - "name": "Premium", - "capacity": 1 - }, - // Include the `identity` object to enable the system-assigned identity or user-assigned identity - "identity": { - "type": <"SystemAssigned" | "UserAssigned">, - // When type is "UserAssigned", include the following "userAssignedIdentities" object: - "userAssignedIdentities": { - "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{user-assigned-managed-identity-object-ID}": { - "principalId": "{principal-ID}", - "clientId": "{client-ID}" - } - } - }, - "properties": { - "networkConfiguration": { - "accessEndpoint": { - // Your ISE can use the "External" or "Internal" endpoint. This example uses "External". 
- "type": "External" - }, - "subnets": [ - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-1}", - }, - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-2}", - }, - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-3}", - }, - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-4}", - } - ] - }, - // Include `certificates` object to enable self-signed certificate and the certificate issued by Enterprise Certificate Authority - "certificates": { - "testCertificate": { - "publicCertificate": "{base64-encoded-certificate}", - "kind": "TrustedRoot" - } - } - } -} -``` --### Request body example --This example request body shows the sample values: --```json -{ - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Logic/integrationServiceEnvironments/Fabrikam-ISE", - "name": "Fabrikam-ISE", - "type": "Microsoft.Logic/integrationServiceEnvironments", - "location": "WestUS2", - "identity": { - "type": "UserAssigned", - "userAssignedIdentities": { - "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/*********************************": { - "principalId": "*********************************", - "clientId": "*********************************" - } - } - }, - "sku": { - "name": "Premium", - "capacity": 1 - }, - "properties": { - "networkConfiguration": { - "accessEndpoint": { - // Your ISE can use the "External" or "Internal" endpoint. This example uses "External". 
- "type": "External" - }, - "subnets": [ - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-1", - }, - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-2", - }, - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-3", - }, - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-4", - } - ] - }, - "certificates": { - "testCertificate": { - "publicCertificate": "LS0tLS1CRUdJTiBDRV...", - "kind": "TrustedRoot" - } - } - } -} -``` --## Add custom root certificates --You often use an ISE to connect to custom services on your virtual network or on premises. These custom services are often protected by a certificate that's issued by custom root certificate authority, such as an Enterprise Certificate Authority or a self-signed certificate. For more information about using self-signed certificates, see [Secure access and data - Access for outbound calls to other services and systems](../logic-apps/logic-apps-securing-a-logic-app.md#secure-outbound-requests). For your ISE to successfully connect to these services through Transport Layer Security (TLS), your ISE needs access to these root certificates. --#### Considerations for adding custom root certificates --Before you update your ISE with a custom trusted root certificate, review these considerations: --* Make sure that you upload the root certificate *and* all the intermediate certificates. The maximum number of certificates is 20. --* The subject name on the certificate must match the host name for the target endpoint that you want to call from Azure Logic Apps. 
--* Uploading root certificates is a replacement operation where the latest upload overwrites previous uploads. For example, if you send a request that uploads one certificate, and then send another request to upload another certificate, your ISE uses only the second certificate. If you need to use both certificates, add them together in the same request. --* Uploading root certificates is an asynchronous operation that might take some time. To check the status or result, you can send a `GET` request by using the same URI. The response message has a `provisioningState` field that returns the `InProgress` value when the upload operation is still working. When `provisioningState` value is `Succeeded`, the upload operation is complete. --#### Request syntax --To update your ISE with a custom trusted root certificate, send the following HTTPS PATCH request to the [Azure Resource Manager URL, which differs based on your Azure environment](../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane), for example: --| Environment | Azure Resource Manager URL | -|-|-| -| Azure global (multi-tenant) | `PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` | -| Azure Government | `PATCH https://management.usgovcloudapi.net/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` | -| Microsoft Azure operated by 21Vianet | `PATCH https://management.chinacloudapi.cn/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` | -||| --#### Request body syntax for adding custom root certificates --Here is the request body syntax, which describes the 
properties to use when you add root certificates: --```json -{ - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}", - "name": "{ISE-name}", - "type": "Microsoft.Logic/integrationServiceEnvironments", - "location": "{Azure-region}", - "properties": { - "certificates": { - "testCertificate1": { - "publicCertificate": "{base64-encoded-certificate}", - "kind": "TrustedRoot" - }, - "testCertificate2": { - "publicCertificate": "{base64-encoded-certificate}", - "kind": "TrustedRoot" - } - } - } -} -``` --## Next steps --* [Add resources to integration service environments](../logic-apps/add-artifacts-integration-service-environment-ise.md) -* [Manage integration service environments](../logic-apps/ise-manage-integration-service-environment.md#check-network-health) |
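In the request bodies above, the `publicCertificate` value is the base64-encoded contents of the certificate file. A minimal Python sketch (the PEM content shown is a stand-in, not a real certificate):

```python
import base64

def to_public_certificate(cert_file_bytes: bytes) -> str:
    """Base64-encode a certificate file's bytes for the `publicCertificate` field."""
    return base64.b64encode(cert_file_bytes).decode("ascii")

# Encoding a PEM file this way yields a value that starts with "LS0tLS1CRUdJTi",
# which matches the "LS0tLS1CRUdJTiBDRV..." sample shown in the example body.
pem_stand_in = b"-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
print(to_public_certificate(pem_stand_in)[:18])  # LS0tLS1CRUdJTiBDRV
```

Reading your exported certificate file in binary mode and passing its bytes to this helper produces the string to paste into the `publicCertificate` property.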
logic-apps | Customer Managed Keys Integration Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/customer-managed-keys-integration-service-environment.md | - Title: Set up customer-managed keys to encrypt data at rest in ISEs -description: Create and manage your own encryption keys to secure data at rest for integration service environments (ISEs) in Azure Logic Apps. --- Previously updated : 11/04/2022---# Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps --> [!IMPORTANT] -> -> On August 31, 2024, the ISE resource will retire, due to its dependency on Azure Cloud Services (classic), -> which retires at the same time. Before the retirement date, export any logic apps from your ISE to Standard -> logic apps so that you can avoid service disruption. Standard logic app workflows run in single-tenant Azure -> Logic Apps and provide the same capabilities plus more. -> -> Starting November 1, 2022, you can no longer create new ISE resources. However, ISE resources existing -> before this date are supported through August 31, 2024. 
For more information, see the following resources: -> -> - [ISE Retirement - what you need to know](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/ise-retirement-what-you-need-to-know/ba-p/3645220) -> - [Single-tenant versus multi-tenant and integration service environment for Azure Logic Apps](single-tenant-overview-compare.md) -> - [Azure Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) -> - [Export ISE workflows to a Standard logic app](export-from-ise-to-standard-logic-app.md) -> - [Integration Services Environment will be retired on 31 August 2024 - transition to Logic Apps Standard](https://azure.microsoft.com/updates/integration-services-environment-will-be-retired-on-31-august-2024-transition-to-logic-apps-standard/) -> - [Cloud Services (classic) deployment model is retiring on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/) --Azure Logic Apps relies on Azure Storage to store and automatically [encrypt data at rest](../storage/common/storage-service-encryption.md). This encryption protects your data and helps you meet your organizational security and compliance commitments. By default, Azure Storage uses Microsoft-managed keys to encrypt your data. For more information about how Azure Storage encryption works, see [Azure Storage encryption for data at rest](../storage/common/storage-service-encryption.md) and [Azure Data Encryption-at-Rest](../security/fundamentals/encryption-atrest.md). --When you create an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) for hosting your logic apps, and you want more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". 
With this capability, Azure Storage automatically enables [double encryption or *infrastructure encryption* using platform-managed keys](../security/fundamentals/double-encryption.md) for your key. To learn more, see [Doubly encrypt data with infrastructure encryption](../storage/common/storage-service-encryption.md#doubly-encrypt-data-with-infrastructure-encryption). --This topic shows how to set up and specify your own encryption key to use when you create your ISE by using the Logic Apps REST API. For the general steps to create an ISE through Logic Apps REST API, see [Create an integration service environment (ISE) by using the Logic Apps REST API](../logic-apps/create-integration-service-environment-rest-api.md). --## Considerations --* At this time, customer-managed key support for an ISE is available only in the following regions: -- * Azure: West US 2, East US, and South Central US. -- * Azure Government: Arizona, Virginia, and Texas. --* You can specify a customer-managed key *only when you create your ISE*, not afterwards. You can't disable this key after your ISE is created. Currently, no support exists for rotating a customer-managed key for an ISE. --* The key vault that stores your customer-managed key must exist in the same Azure region as your ISE. --* To support customer-managed keys, your ISE requires that you enable either the [system-assigned or user-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). This identity lets your ISE authenticate access to secured resources, such as virtual machines and other systems or services, that are in or connected to an Azure virtual network. That way, you don't have to sign in with your credentials. --* Currently, to create an ISE that supports customer-managed keys and has either managed identity type enabled, you have to call the Logic Apps REST API by using an HTTPS PUT request. 
-- -- * You must [give key vault access to your ISE's managed identity](#identity-access-to-key-vault), but the timing depends on which managed identity you use. -- * **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE, you must [give key vault access to your ISE's managed identity](#identity-access-to-key-vault). Otherwise, ISE creation fails, and you get a permissions error. -- * **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE, [give key vault access to your ISE's managed identity](#identity-access-to-key-vault). --## Prerequisites --* The same [prerequisites](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#prerequisites) and [requirements to enable access for your ISE](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#enable-access) as when you create an ISE in the Azure portal --* An Azure key vault that has the **Soft Delete** and **Do Not Purge** properties enabled -- For more information about enabling these properties, see [Azure Key Vault soft-delete overview](../key-vault/general/soft-delete-overview.md) and [Configure customer-managed keys with Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md). If you're new to [Azure Key Vault](../key-vault/general/overview.md), learn how to create a key vault using [Azure portal](../key-vault/general/quick-create-portal.md), [Azure CLI](../key-vault/general/quick-create-cli.md), or [Azure PowerShell](../key-vault/general/quick-create-powershell.md).
-- -- * In your key vault, a key that's created with these property values: -- | Property | Value | - |-|-| - | **Key Type** | RSA | - | **RSA Key Size** | 2048 | - | **Enabled** | Yes | - ||| -- ![Create your customer-managed encryption key](./media/customer-managed-keys-integration-service-environment/create-customer-managed-key-for-encryption.png) -- For more information, see [Configure customer-managed keys with Azure Key Vault](../storage/common/customer-managed-keys-configure-key-vault.md) or the Azure PowerShell command, [Add-AzKeyVaultKey](/powershell/module/az.keyvault/add-azkeyvaultkey). --* A tool that you can use to create your ISE by calling the Logic Apps REST API with an HTTPS PUT request. For example, you can use [Postman](https://www.getpostman.com/downloads/), or you can build a logic app that performs this task. --<a name="enable-support-key-managed-identity"></a> --## Create ISE with key vault and managed identity support --To create your ISE by calling the Logic Apps REST API, make this HTTPS PUT request: --`PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Logic/integrationServiceEnvironments/{integrationServiceEnvironmentName}?api-version=2019-05-01` --> [!IMPORTANT] -> The Logic Apps REST API 2019-05-01 version requires that you make your own HTTPS PUT request for ISE connectors. --Deployment usually takes up to two hours to finish. Occasionally, deployment might take up to four hours. To check deployment status, in the [Azure portal](https://portal.azure.com), on your Azure toolbar, select the notifications icon, which opens the notifications pane. --> [!NOTE] -> If deployment fails or you delete your ISE, Azure might take up to an hour -> before releasing your subnets. This delay means you might have to wait -> before reusing those subnets in another ISE.
-> -> If you delete your virtual network, Azure generally takes up to two hours -> before releasing your subnets, but this operation might take longer. -> When deleting virtual networks, make sure that no resources are still connected. -> See [Delete virtual network](../virtual-network/manage-virtual-network.md#delete-a-virtual-network). --### Request header --In the request header, include these properties: --* `Content-type`: Set this property value to `application/json`. --* `Authorization`: Set this property value to the bearer token for the customer who has access to the Azure subscription or resource group that you want to use. --### Request body --In the request body, enable support for these additional items by providing their information in your ISE definition: --* The managed identity that your ISE uses to access your key vault -* Your key vault and the customer-managed key that you want to use --#### Request body syntax --Here is the request body syntax, which describes the properties to use when you create your ISE: --```json -{ - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Logic/integrationServiceEnvironments/{ISE-name}", - "name": "{ISE-name}", - "type": "Microsoft.Logic/integrationServiceEnvironments", - "location": "{Azure-region}", - "sku": { - "name": "Premium", - "capacity": 1 - }, - "identity": { - "type": <"SystemAssigned" | "UserAssigned">, - // When type is "UserAssigned", include the following "userAssignedIdentities" object: - "userAssignedIdentities": { - "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{user-assigned-managed-identity-object-ID}": { - "principalId": "{principal-ID}", - "clientId": "{client-ID}" - } - } - }, - "properties": { - "networkConfiguration": { - "accessEndpoint": { - // Your ISE can use the "External" or "Internal" endpoint. This example uses "External".
- "type": "External" - }, - "subnets": [ - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-1}", - }, - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-2}", - }, - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-3}", - }, - { - "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.Network/virtualNetworks/{virtual-network-name}/subnets/{subnet-4}", - } - ] - }, - "encryptionConfiguration": { - "encryptionKeyReference": { - "keyVault": { - "id": "subscriptions/{Azure-subscription-ID}/resourceGroups/{Azure-resource-group}/providers/Microsoft.KeyVault/vaults/{key-vault-name}", - }, - "keyName": "{customer-managed-key-name}", - "keyVersion": "{key-version-number}" - } - } - } -} -``` --#### Request body example --This example request body shows the sample values: --```json -{ - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Logic/integrationServiceEnvironments/Fabrikam-ISE", - "name": "Fabrikam-ISE", - "type": "Microsoft.Logic/integrationServiceEnvironments", - "location": "WestUS2", - "identity": { - "type": "UserAssigned", - "userAssignedIdentities": { - "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/*********************************": { - "principalId": "*********************************", - "clientId": "*********************************" - } - } - }, - "sku": { - "name": "Premium", - "capacity": 1 - }, - "properties": { - "networkConfiguration": { - "accessEndpoint": { - // Your ISE can use the "External" or "Internal" endpoint. 
This example uses "External". - "type": "External" - }, - "subnets": [ - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-1", - }, - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-2", - }, - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-3", - }, - { - "id": "/subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.Network/virtualNetworks/Fabrikam-VNET/subnets/subnet-4", - } - ] - }, - "encryptionConfiguration": { - "encryptionKeyReference": { - "keyVault": { - "id": "subscriptions/********************/resourceGroups/Fabrikam-RG/providers/Microsoft.KeyVault/vaults/FabrikamKeyVault", - }, - "keyName": "Fabrikam-Encryption-Key", - "keyVersion": "********************" - } - } - } -} -``` --<a name="identity-access-to-key-vault"></a> --## Grant access to your key vault --Although the timing differs based on the managed identity that you use, you must [give key vault access to your ISE's managed identity](#identity-access-to-key-vault). --* **System-assigned managed identity**: Within *30 minutes after* you send the HTTPS PUT request that creates your ISE, you must add an access policy to your key vault for your ISE's system-assigned managed identity. Otherwise, creation for your ISE fails, and you get a permissions error. --* **User-assigned managed identity**: Before you send the HTTPS PUT request that creates your ISE, add an access policy to your key vault for your ISE's user-assigned managed identity. --For this task, you can use either the Azure PowerShell [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) command, or you can follow these steps in the Azure portal: --1. 
In the [Azure portal](https://portal.azure.com), open your Azure key vault. --1. On your key vault menu, select **Access policies** > **Add Access Policy**, for example: -- ![Add access policy for system-assigned managed identity](./media/customer-managed-keys-integration-service-environment/add-ise-access-policy-key-vault.png) --1. After the **Add access policy** pane opens, follow these steps: -- 1. Select these options: -- | Setting | Values | - ||--| - | **Configure from template (optional) list** | Key Management | - | **Key permissions** | - **Key Management Operations**: Get, List <p><p>- **Cryptographic Operations**: Unwrap Key, Wrap Key | - ||| -- ![Select "Key Management" > "Key permissions"](./media/customer-managed-keys-integration-service-environment/select-key-permissions.png) -- 1. For **Select principal**, select **None selected**. After the **Principal** pane opens, in the search box, find and select your ISE. When you're done, choose **Select** > **Add**. -- ![Select your ISE to use as the principal](./media/customer-managed-keys-integration-service-environment/select-service-principal-ise.png) -- 1. When you're finished with the **Access policies** pane, select **Save**. --For more information, see [How to authenticate to Key Vault](../key-vault/general/authentication.md) and [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). --## Next steps --* Learn more about [Azure Key Vault](../key-vault/general/overview.md) |
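As a quick illustration of the request-body schema documented in this entry, the following Python sketch assembles the ISE definition for the system-assigned identity variant before you send the HTTPS PUT request. This is an illustrative sketch only: the function name and all resource names in the usage call are placeholders, and the code only builds the JSON payload — sending the request and granting the key vault access policy still follow the steps described above.

```python
import json

def build_ise_request_body(sub_id, rg, ise_name, region, vnet_name,
                           subnet_names, key_vault_name, key_name, key_version):
    """Assemble the JSON body for the HTTPS PUT request that creates an ISE
    with a customer-managed key (system-assigned identity variant)."""
    providers = f"/subscriptions/{sub_id}/resourceGroups/{rg}/providers"
    return {
        "id": f"{providers}/Microsoft.Logic/integrationServiceEnvironments/{ise_name}",
        "name": ise_name,
        "type": "Microsoft.Logic/integrationServiceEnvironments",
        "location": region,
        "sku": {"name": "Premium", "capacity": 1},
        "identity": {"type": "SystemAssigned"},
        "properties": {
            "networkConfiguration": {
                # The access endpoint can be "External" or "Internal".
                "accessEndpoint": {"type": "External"},
                "subnets": [
                    {"id": f"{providers}/Microsoft.Network/virtualNetworks/{vnet_name}/subnets/{name}"}
                    for name in subnet_names
                ],
            },
            "encryptionConfiguration": {
                "encryptionKeyReference": {
                    "keyVault": {
                        "id": f"subscriptions/{sub_id}/resourceGroups/{rg}"
                              f"/providers/Microsoft.KeyVault/vaults/{key_vault_name}"
                    },
                    "keyName": key_name,
                    "keyVersion": key_version,
                }
            },
        },
    }

# Placeholder values mirroring the Fabrikam example above.
body = build_ise_request_body(
    "00000000-0000-0000-0000-000000000000", "Fabrikam-RG", "Fabrikam-ISE",
    "WestUS2", "Fabrikam-VNET", ["subnet-1", "subnet-2", "subnet-3", "subnet-4"],
    "FabrikamKeyVault", "Fabrikam-Encryption-Key", "0123456789abcdef")
print(json.dumps(body, indent=2)[:80])
```

Building the body in code makes it easy to validate the four-subnet requirement and the key reference before the PUT request is sent.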
logic-apps | Edit App Settings Host Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md | The following settings work only for workflows that start with a recurrence-base | Setting | Default value | Description | |||-|-| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. The minimum value for this setting is 7 days. | +| `Runtime.Backend.FlowRunTimeout` | `90.00:00:00` <br>(90 days) | Sets the amount of time a workflow can continue running before forcing a timeout. The minimum value for this setting is 7 days. <br><br>**Important**: Make sure this value is less than or equal to the value for the app setting named `Workflows.RuntimeConfiguration.RetentionInDays`. Otherwise, run histories can get deleted before the associated jobs are complete. | | `Runtime.FlowMaintenanceJob.RetentionCooldownInterval` | `7.00:00:00` <br>(7 days) | Sets the amount of time in days as the interval between when to check for and delete run history that you no longer want to keep. | <a name="run-actions"></a> |
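The constraint called out for `Runtime.Backend.FlowRunTimeout` (a minimum of 7 days, and no larger than `Workflows.RuntimeConfiguration.RetentionInDays`) can be sanity-checked before deployment. The following Python sketch is illustrative only: the parser covers just the `d.hh:mm:ss` timespan form shown in the table, and the function names are invented for this example.

```python
import re
from datetime import timedelta

def parse_timespan(value):
    """Parse a .NET-style timespan string such as '90.00:00:00' (d.hh:mm:ss)
    into a timedelta. Covers only the d.hh:mm:ss form used in these settings."""
    m = re.fullmatch(r"(?:(\d+)\.)?(\d{2}):(\d{2}):(\d{2})", value)
    if not m:
        raise ValueError(f"unrecognized timespan: {value}")
    days, hours, minutes, seconds = (int(g or 0) for g in m.groups())
    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

def validate_flow_run_timeout(timeout, retention_in_days):
    """Check the documented constraints: the timeout must be at least 7 days
    and no longer than the run-history retention period."""
    span = parse_timespan(timeout)
    return timedelta(days=7) <= span <= timedelta(days=retention_in_days)

print(validate_flow_run_timeout("90.00:00:00", 90))  # True
print(validate_flow_run_timeout("90.00:00:00", 30))  # False: exceeds retention
```

A check like this catches the misconfiguration the table warns about, where run histories could be deleted before the associated jobs complete.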
logic-apps | Ise Manage Integration Service Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/ise-manage-integration-service-environment.md | If you change your DNS server or DNS server settings, you have to restart your I 1. On the ISE menu, select **Overview**. On the Overview toolbar, select **Restart**. - ![Restart integration service environment](./media/connect-virtual-network-vnet-isolated-environment/restart-integration-service-environment.png) - <a name="delete-ise"></a> ## Delete ISE |
logic-apps | Logic Apps Control Flow Conditional Statement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-control-flow-conditional-statement.md | Title: Add conditions to workflows -description: Create conditions that control actions in workflows in Azure Logic Apps. +description: Create conditions that control workflow actions in Azure Logic Apps. ms.suite: integration Last updated 08/08/2023 [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)] -To specify a condition that returns either true or false and have your workflow run either one action path or another based on the result, add the **Control** action named **Condition** to your workflow. You can nest conditions inside each other. +When you want to set up a condition that returns true or false and have the result determine whether your workflow runs one path of actions or another, add the **Control** action named **Condition** to your workflow. You can also nest conditions inside each other. For example, suppose you have a workflow that sends too many emails when new items appear on a website's RSS feed. You can add the **Condition** action to send email only when the new item includes a specific word. > [!NOTE] >-> To specify more than two paths from which your workflow can choose or if the condition criteria isn't restricted -> to only true or false, use a [*switch statement* instead](logic-apps-control-flow-switch-statement.md). +> If you want to specify more than two paths from which your workflow can choose +> or condition criteria that's not restricted to only true or false, use a +> [*switch action* instead](logic-apps-control-flow-switch-statement.md). -This how-to guide shows how to add a condition to your workflow and use the result to help your workflow choose from two action paths. 
+This guide shows how to add a condition to your workflow and use the result to help your workflow choose between two action paths. ## Prerequisites This how-to guide shows how to add a condition to your workflow and use the resu ### [Consumption](#tab/consumption) -1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. 1. [Follow these general steps to add the **Condition** action to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). -1. In the **Condition** action, follow these steps build your condition: +1. In the **Condition** action, follow these steps to build your condition: - 1. In the left **Choose a value** box, specify the first value or field that you want to compare. + 1. In the left-side box named **Choose a value**, enter the first value or field that you want to compare. - When you select inside the left box, the dynamic content list opens so that you can select outputs from previous steps in your workflow. + When you select inside the **Choose a value** box, the dynamic content list opens automatically. From this list, you can select outputs from previous steps in your workflow. This example selects the RSS trigger output named **Feed summary**. ![Screenshot shows Azure portal, Consumption workflow designer, RSS trigger, and Condition action with criteria construction.](./media/logic-apps-control-flow-conditional-statement/edit-condition-consumption.png) - 1. From the middle list, select the operation to perform. + 1. Open the middle list, and select the operation to perform. This example selects **contains**. - 1. In the right **Choose a value** box, specify the value or field that you want to compare with the first. + 1. In the right-side box named **Choose a value**, enter the value or field that you want to compare with the first.
This example specifies the following string: **Microsoft** - The following example shows the complete condition: + The complete condition now looks like the following example: ![Screenshot shows the Consumption workflow and the complete condition criteria.](./media/logic-apps-control-flow-conditional-statement/complete-condition-consumption.png) - - To add another row to your condition, open the **Add** menu, and select **Add row**. + - To add another row to your condition, from the **Add** menu, select **Add row**. - - To add a group with subconditions, open the **Add** menu, and select **Add group**. + - To add a group with subconditions, from the **Add** menu, select **Add group**. - To group existing rows, select the checkboxes for those rows, select the ellipses (...) button for any row, and then select **Make group**. -1. In the **True** and **False** action paths, add the actions to run based on whether the condition is true or false, respectively, for example: +1. In the **True** and **False** action paths, add the actions that you want to run, based on whether the condition is true or false respectively, for example: ![Screenshot shows the Consumption workflow and the condition with true and false paths.](./media/logic-apps-control-flow-conditional-statement/condition-true-false-path-consumption.png) This how-to guide shows how to add a condition to your workflow and use the resu ### [Standard](#tab/standard) -1. In the [Azure portal](https://portal.azure.com), open your Standard logic app workflow in the designer. +1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. 1. [Follow these general steps to add the **Condition** action to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). -1. On the designer, select the **Condition** action to open the information pane and follow these steps build your condition: +1. 
On the designer, select the **Condition** action to open the information pane. Follow these steps to build your condition: ++ 1. In the left-side box named **Choose a value**, enter the first value or field that you want to compare. ++ After you select inside the **Choose a value** box, the options to open the dynamic content list (lightning icon) or expression editor (formula icon) appear. ++ 1. Select the lightning icon to open the dynamic content list. - 1. In the left **Choose a value** box, specify the first value or field that you want to compare. When you select inside the left box, select the lightning button that appears to open the dynamic content list so that you can select outputs from previous steps in your workflow. + From this list, you can select outputs from previous steps in your workflow. ![Screenshot shows Azure portal, Standard workflow designer, RSS trigger, and Condition action with information pane open, and dynamic content button selected.](./media/logic-apps-control-flow-conditional-statement/open-dynamic-content-standard.png) This how-to guide shows how to add a condition to your workflow and use the resu This example selects **contains**. - 1. In the right **Choose a value** box, specify the value or field that you want to compare with the first. + 1. In the right-side box named **Choose a value**, enter the value or field that you want to compare with the first. This example specifies the following string: **Microsoft** This how-to guide shows how to add a condition to your workflow and use the resu ![Screenshot shows the Standard workflow and the complete condition criteria.](./media/logic-apps-control-flow-conditional-statement/complete-condition-standard.png) - - To add another row to your condition, open the **New item** menu, and select **Add Row**. + - To add another row to your condition, from the **New item** menu, select **Add Row**. - - To add a group with subconditions, open the **New item** menu, and select **Add Group**. 
+ - To add a group with subconditions, from the **New item** menu, select **Add Group**. - To group existing rows, select the checkboxes for those rows, select the ellipses (...) button for any row, and then select **Make Group**. -1. In the **True** and **False** action paths, add the actions to run based on whether the condition is true or false, respectively, for example: +1. In the **True** and **False** action paths, add the actions to run, based on whether the condition is true or false respectively, for example: ![Screenshot shows the Standard workflow and the condition with true and false paths.](./media/logic-apps-control-flow-conditional-statement/condition-true-false-path-standard.png) This workflow now sends mail only when the new items in the RSS feed meet your c ## JSON definition -The following shows the high-level code definition behind the **Condition** action, but for the full definition, see [If action - Schema reference guide for trigger and action types in Azure Logic Apps](logic-apps-workflow-actions-triggers.md#if-action). +The following code shows the high-level JSON definition for the **Condition** action. For the full definition, see [If action - Schema reference guide for trigger and action types in Azure Logic Apps](logic-apps-workflow-actions-triggers.md#if-action). ``` json "actions": { The following shows the high-level code definition behind the **Condition** acti * [Run steps based on different values (switch actions)](logic-apps-control-flow-switch-statement.md) * [Run and repeat steps (loops)](logic-apps-control-flow-loops.md) * [Run or merge parallel steps (branches)](logic-apps-control-flow-branches.md)-* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md) +* [Run steps based on grouped action status (scopes)](logic-apps-control-flow-run-steps-group-scopes.md) |
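To make the true/false branching concrete, here's a small Python sketch that evaluates a condition expression tree shaped like the one in the **Condition** action's JSON definition: `and`/`or` groups over comparisons such as `contains` and `equals`. This is a toy evaluator for illustration, not the Azure Logic Apps runtime, and the `@feedSummary` token is a stand-in for a real dynamic content reference.

```python
def evaluate(expr, outputs):
    """Evaluate a minimal condition expression tree (illustrative sketch).

    expr is a one-key dict: {"and": [...]}, {"or": [...]}, or a comparison
    such as {"contains": [left, right]}. Operands are resolved against the
    outputs dict, standing in for dynamic content from previous steps.
    """
    op, args = next(iter(expr.items()))
    if op == "and":
        return all(evaluate(sub, outputs) for sub in args)
    if op == "or":
        return any(evaluate(sub, outputs) for sub in args)
    if op == "contains":
        left, right = (outputs.get(a, a) for a in args)
        return right in left
    if op == "equals":
        left, right = (outputs.get(a, a) for a in args)
        return left == right
    raise ValueError(f"unsupported operation: {op}")

# Example: send email only when the feed summary mentions "Microsoft".
condition = {"and": [{"contains": ["@feedSummary", "Microsoft"]}]}
outputs = {"@feedSummary": "Microsoft announces a new Azure feature"}
print(evaluate(condition, outputs))  # True -> the workflow takes the True path
```

The `and`/`or` nesting corresponds to the rows and groups you build in the designer: each row is a comparison, and **Make group** wraps rows in a nested group.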
logic-apps | Logic Apps Enterprise Integration As2 Mdn Acknowledgment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-mdn-acknowledgment.md | Title: AS2 MDN acknowledgments description: Learn about Message Disposition Notification (MDN) acknowledgments for AS2 messages in Azure Logic Apps. ms.suite: integration-- Previously updated : 08/23/2022 Last updated : 08/15/2023 # MDN acknowledgments for AS2 messages in Azure Logic Apps |
logic-apps | Logic Apps Enterprise Integration As2 Message Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2-message-settings.md | Last updated 08/23/2022 This reference describes the properties that you can set in an AS2 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you. -<a name="AS2-incoming-messages"></a> +<a name="as2-inbound-messages"></a> ## AS2 Receive settings -![Select "Receive Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png) +![Screenshot shows Azure portal and AS2 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/receive-settings.png) | Property | Required | Description | |-|-|-| | **Override message properties** | No | Overrides the properties on incoming messages with your property settings. |-| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). | -| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). | +| **Message should be signed** | No | Specifies whether all incoming messages must be digitally signed. 
If you require signing, from the **Certificate** list, select an existing guest partner public certificate for validating the signature on the messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). | +| **Message should be encrypted** | No | Specifies whether all incoming messages must be encrypted. Non-encrypted messages are rejected. If you require encryption, from the **Certificate** list, select an existing host partner private certificate for decrypting incoming messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). | | **Message should be compressed** | No | Specifies whether all incoming messages must be compressed. Non-compressed messages are rejected. | | **Disallow Message ID duplicates** | No | Specifies whether to allow messages with duplicate IDs. If you disallow duplicate IDs, select the number of days between checks. You can also choose whether to suspend duplicates. | | **MDN Text** | No | Specifies the default message disposition notification (MDN) that you want sent to the message sender. |-| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. | +| **Send MDN** | No | Specifies whether to send synchronous MDNs for received messages. | | **Send signed MDN** | No | Specifies whether to send signed MDNs for received messages. If you require signing, from the **MIC Algorithm** list, select the algorithm to use for signing messages. | | **Send asynchronous MDN** | No | Specifies whether to send MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. 
|-|||| -<a name="AS2-outgoing-messages"></a> +<a name="as2-outbound-messages"></a> ## AS2 Send settings -![Select "Send Settings"](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png) +![Screenshot shows Azure portal and AS2 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-as2-message-settings/send-settings.png) | Property | Required | Description | |-|-|-|-| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <p>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). | -| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <p>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](../logic-apps/logic-apps-enterprise-integration-certificates.md). | +| **Enable message signing** | No | Specifies whether all outgoing messages must be digitally signed. If you require signing, select these values: <br><br>- From the **Signing Algorithm** list, select the algorithm to use for signing messages. <br>- From the **Certificate** list, select an existing host partner private certificate for signing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). 
| +| **Enable message encryption** | No | Specifies whether all outgoing messages must be encrypted. If you require encryption, select these values: <br><br>- From the **Encryption Algorithm** list, select the guest partner public certificate algorithm to use for encrypting messages. <br>- From the **Certificate** list, select an existing guest partner public certificate for encrypting outgoing messages. If you don't have a certificate, learn more about [adding certificates](logic-apps-enterprise-integration-certificates.md). | | **Enable message compression** | No | Specifies whether all outgoing messages must be compressed. | | **Unfold HTTP headers** | No | Puts the HTTP `content-type` header onto a single line. | | **Transmit file name in MIME header** | No | Specifies whether to include the file name in the MIME header. | This reference describes the properties that you can set in an AS2 agreement for | **Request asynchronous MDN** | No | Specifies whether to receive MDNs asynchronously. If you select asynchronous MDNs, in the **URL** box, specify the URL for where to send the MDNs. | | **Enable NRR** | No | Specifies whether to require non-repudiation receipt (NRR). This communication attribute provides evidence that the data was received as addressed. | | **SHA2 Algorithm format** | No | Specifies the MIC algorithm format to use for signing in the headers for the outgoing AS2 messages or MDN |-|||| ## Next steps -[Exchange AS2 messages](../logic-apps/logic-apps-enterprise-integration-as2.md) +[Exchange AS2 messages](logic-apps-enterprise-integration-as2.md) |
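Several of the settings above (signed MDNs, the **MIC Algorithm** and **SHA2 Algorithm format** options) revolve around the message integrity check that AS2 reports back in an MDN. As a rough illustration of what a MIC is, the following Python sketch computes a digest over the message content and base64-encodes it the way it appears in a `Received-Content-MIC` MDN field. This is illustrative only: a real AS2 stack computes the MIC over the exact canonicalized MIME content, which this sketch doesn't attempt.

```python
import base64
import hashlib

def compute_mic(payload: bytes, algorithm: str = "sha256") -> str:
    """Compute a base64-encoded digest of the message content, similar in
    shape to the MIC value an AS2 MDN reports (illustrative sketch only)."""
    digest = hashlib.new(algorithm, payload).digest()
    return base64.b64encode(digest).decode("ascii")

mic = compute_mic(b"sample EDI payload")
print(f"Received-Content-MIC: {mic}, sha256")
```

The receiver recomputes the digest over what it received; a matching MIC in the signed MDN is what gives the sender non-repudiation evidence that the content arrived intact.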
logic-apps | Logic Apps Enterprise Integration As2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md | To send and receive AS2 messages in workflows that you create using Azure Logic Except for tracking capabilities, the **AS2 (v2)** connector provides the same capabilities as the original **AS2** connector, runs natively with the Azure Logic Apps runtime, and offers significant performance improvements in message size, throughput, and latency. Unlike the original **AS2** connector, the **AS2 (v2)** connector doesn't require that you create a connection to your integration account. Instead, as described in the prerequisites, make sure that you link your integration account to the logic app resource where you plan to use the connector. -This article shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this article use the [Request](../connectors/connectors-native-reqres.md) trigger. +This how-to guide shows how to add the AS2 encoding and decoding actions to an existing logic app workflow. The **AS2 (v2)** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md). ## Connector technical reference The **AS2 (v2)** connector has no triggers. The following table describes the ac * An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) to define and store artifacts for use in enterprise integration and B2B workflows. - > [!IMPORTANT] - > - > To work together, both your integration account and logic app resource must exist in the same Azure subscription and Azure region. 
+ * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region. -* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario. + * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the AS2 operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **AS2Identity** for this scenario. -* An [AS2 agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. + * Defines an [AS2 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [AS2 message settings](logic-apps-enterprise-integration-as2-message-settings.md). * Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account: Select the tab for either Consumption or Standard logic app workflows: 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**. +1. 
In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). -1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**. --1. From the actions list, select the action named **AS2 Encode**. -- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-consumption.png) --1. In the action information box, provide the following information. +1. In the action information box, provide the following information: | Property | Required | Description | |-|-|-| Select the tab for either Consumption or Standard logic app workflows: 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**. --1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 encode**. --1. From the actions list, select the action named **Encode to AS2 message**. -- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-consumption.png) +1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). 1. When prompted to create a connection to your integration account, provide the following information: Select the tab for either Consumption or Standard logic app workflows: 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. 
On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **Insert a new step** (plus sign), and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 encode**. --1. From the actions list, select the action named **AS2 Encode**. -- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Encode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-encode-built-in-standard.png) +1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). 1. In the action information pane, provide the following information: Select the tab for either Consumption or Standard logic app workflows: 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the **AS2** action, select **Insert a new step** (plus sign), and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 encode**. --1. From the actions list, select the action named **Encode to AS2 message**. -- ![Screenshot showing the Azure portal, workflow designer for Standard, and "Encode to AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-encode-as2-message-managed-standard.png) +1. In the designer, [follow these general steps to add the **AS2** action named **Encode to AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). 1. When prompted to create a connection to your integration account, provide the following information: Select the tab for either Consumption or Standard logic app workflows: 1. 
In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the **AS2 (v2)** action, select **New step**. --1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2**. --1. From the actions list, select the action named **AS2 Decode**. -- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "AS2 Decode" action selected.](media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-consumption.png) +1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). 1. In the action information box, provide the following information: Select the tab for either Consumption or Standard logic app workflows: 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the **AS2** action, select **New step**. --1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **as2 decode**. --1. From the actions list, select the action named **Decode AS2 message**. -- ![Screenshot showing the Azure portal, designer for Consumption workflow, and "Decode AS2 message" action selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-consumption.png) +1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). 1. When prompted to create a connection to your integration account, provide the following information: Select the tab for either Consumption or Standard logic app workflows: 1. 
In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **as2 decode**. --1. From the actions list, select the action named **AS2 Decode**. -- ![Screenshot showing the Azure portal, designer for Standard workflow, and "AS2 Decode" action selected.](./media/logic-apps-enterprise-integration-as2/select-as2-v2-decode-built-in-standard.png) +1. In the designer, [follow these general steps to add the **AS2 (v2)** action named **AS2 Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). 1. In the action information pane, provide the following information: Select the tab for either Consumption or Standard logic app workflows: 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the AS2 action, select **Insert a new step** (plus sign), and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **as2 decode**. --1. From the actions list, select the action named **Decode AS2 message**. -- ![Screenshot showing the Azure portal, designer for Standard workflow, and "Decode AS2 message" operation selected.](./media/logic-apps-enterprise-integration-as2/select-decode-as2-message-managed-standard.png) +1. In the designer, [follow these general steps to add the **AS2** action named **Decode AS2 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). 1. When prompted to create a connection to your integration account, provide the following information: |
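The AS2 Encode and Decode actions above package and unpack AS2 messages, and AS2 receipts (MDNs) report a message integrity check (MIC): the base64-encoded digest of the message payload. A minimal Python sketch of how such a value is formed (illustrative only — `compute_mic` is a hypothetical helper, not a Logic Apps or connector API):

```python
import base64
import hashlib

def compute_mic(payload: bytes, algorithm: str = "sha1") -> str:
    """Return an AS2-style message integrity check (MIC): the
    base64-encoded digest of the message payload. AS2 receipts (MDNs)
    carry this value so the sender can verify what the receiver got."""
    digest = hashlib.new(algorithm, payload).digest()
    return base64.b64encode(digest).decode("ascii")

# A SHA-1 digest is 20 bytes, so the base64-encoded MIC is 28 characters.
mic = compute_mic(b"sample AS2 payload")
```

The connector computes and verifies this value for you; the sketch only shows why a MIC mismatch indicates the payload changed in transit.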
logic-apps | Logic Apps Enterprise Integration Edifact Contrl Acknowledgment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-contrl-acknowledgment.md | |
logic-apps | Logic Apps Enterprise Integration Edifact Message Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact-message-settings.md | Last updated 08/20/2022 This reference describes the properties that you can set in an EDIFACT agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you. -<a name="EDIFACT-inbound-messages"></a> +<a name="edifact-inbound-messages"></a> -## EDIFACT Receive Settings +## EDIFACT Receive settings -![Screenshot showing Azure portal, EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png) +![Screenshot showing Azure portal and EDIFACT agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-receive-settings.png) ### Identifiers This reference describes the properties that you can set in an EDIFACT agreement |-|-| | **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. | | **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. |-||| ### Acknowledgments | Property | Description | |-|-| | **Receipt of Message (CONTRL)** | Return a technical (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send Settings. |-| **Acknowledgement (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. | -||| +| **Acknowledgment (CONTRL)** | Return a functional (CONTRL) acknowledgment to the interchange sender, based on the agreement's Send settings. 
| <a name="receive-settings-schemas"></a> This reference describes the properties that you can set in an EDIFACT agreement | **UNH2.5 (Associated Assigned Code)** | The assigned code that is alphanumeric and is 1-6 characters. | | **UNG2.1 (App Sender ID)** |Enter an alphanumeric value with a minimum of one character and a maximum of 35 characters. | | **UNG2.2 (App Sender Code Qualifier)** |Enter an alphanumeric value, with a maximum of four characters. |-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource | -||| +| **Schema** | The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource | ### Control Numbers This reference describes the properties that you can set in an EDIFACT agreement | **Check for duplicate UNB5 every (days)** | If you chose to disallow duplicate interchange control numbers, you can specify the number of days between running the check. | | **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers (UNG5). | | **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers (UNH1). |-| **EDIFACT Acknowledgement Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix. | -||| +| **EDIFACT Acknowledgment Control Number** | Assign the transaction set reference numbers to use in an acknowledgment by entering a value for the prefix, a range of reference numbers, and a suffix.
| ### Validation After you finish setting up a validation row, the next row automatically appears | **Extended Validation** | If the data type isn't EDI, validation runs on the data element requirement and allowed repetition, enumerations, and data element length validation (min and max). | | **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. | | **Trim Leading/Trailing Zeroes** | Remove the leading or trailing zero and space characters. |-| **Trailing Separator Policy** | Generate trailing separators. <p> - **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The received interchange must have trailing delimiters and separators. | -||| +| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the received interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The received interchange must have trailing delimiters and separators. | ### Internal Settings After you finish setting up a validation row, the next row automatically appears | **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. | | **Preserve Interchange - suspend transaction sets on error** | Keep the interchange intact, create an XML document for the entire batched interchange. 
Suspend only the transaction sets that fail validation, while continuing to process all other transaction sets. | | **Preserve Interchange - suspend interchange on error** | Keep the interchange intact, create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. |-||| -<a name="EDIFACT-outbound-messages"></a> +<a name="edifact-outbound-messages"></a> -## EDIFACT Send Settings +## EDIFACT Send settings -![Screenshot showing Azure portal, EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png) +![Screenshot showing Azure portal and EDIFACT agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-edifact-message-settings/edifact-send-settings.png) ### Identifiers After you finish setting up a validation row, the next row automatically appears | **UNB6.1 (Recipient Reference Password)** | An alphanumeric value that is 1-14 characters. | | **UNB6.2 (Recipient Reference Qualifier)** | An alphanumeric value that is 1-2 characters. | | **UNB7 (Application Reference ID)** | An alphanumeric value that is 1-14 characters. |-||| ### Acknowledgment | Property | Description | |-|-| | **Receipt of Message (CONTRL)** | The host partner that sends the message requests a technical (CONTRL) acknowledgment from the guest partner. |-| **Acknowledgement (CONTRL)** | The host partner that sends the message expects requests a functional (CONTRL) acknowledgment from the guest partner. | +| **Acknowledgment (CONTRL)** | The host partner that sends the message requests a functional (CONTRL) acknowledgment from the guest partner. | | **Generate SG1/SG4 loop for accepted transaction sets** | If you chose to request a functional acknowledgment, this setting forces the generation of SG1/SG4 loops in the functional acknowledgments for accepted transaction sets.
|-||| ### Schemas After you finish setting up a validation row, the next row automatically appears | **UNH2.1 (Type)** | The transaction set type. | | **UNH2.2 (Version)** | The message version number. | | **UNH2.3 (Release)** | The message release number. |-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource | -||| +| **Schema** | The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br>- Standard: Your logic app resource | ### Envelopes After you finish setting up an envelope row, the next row automatically appears. | **UNB10 (Communication Agreement)** | An alphanumeric value that is 1-40 characters. | | **UNB11 (Test Indicator)** | Indicate that the generated interchange is test data. | | **Apply UNA Segment (Service String Advice)** | Generate a UNA segment for the interchange to send. |-| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <p>- **Schema**: The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>- **UNG1**: An alphanumeric value that is 1-6 characters. <p>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <p>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <p>- **UNG6**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <p>- **UNG7.2**: An alphanumeric value that is 1-3 characters.
<p>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <p>- **UNG8**: An alphanumeric value that is 1-14 characters. | -||| +| **Apply UNG Segments (Function Group Header)** | Create grouping segments in the functional group header for messages sent to the guest partner. The following values are used to create the UNG segments: <br><br>- **Schema**: The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource <br><br>- **UNG1**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG2.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG2.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG3.1**: An alphanumeric value that is 1-35 characters. <br><br>- **UNG3.2**: An alphanumeric value that is 1-4 characters. <br><br>- **UNG6**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.1**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.2**: An alphanumeric value that is 1-3 characters. <br><br>- **UNG7.3**: An alphanumeric value that is 1-6 characters. <br><br>- **UNG8**: An alphanumeric value that is 1-14 characters. | ### Character Sets and Separators Other than the character set, you can specify a different set of delimiters to u | Property | Description | |-|-| | **UNB1.1 (System Identifier)** | The EDIFACT character set to apply to the outbound interchange. |-| **Schema** | The previously uploaded schema that you want to use in from either resource type: <p>- Consumption: An integration account linked to your logic app. <p>- Standard: Your logic app resource <p>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears.
| +| **Schema** | The previously uploaded schema that you want to use from either resource type: <br><br>- Consumption: An integration account linked to your logic app. <br><br>- Standard: Your logic app resource <br><br>For the selected schema, select the separators set that you want to use, based on the following separator descriptions. After you finish setting up a schema row, the next row automatically appears. | | **Input Type** | The input type for the message. | | **Component Separator** | A single character to use for separating composite data elements. | | **Data Element Separator** | A single character to use for separating simple data elements within composite data elements. | Other than the character set, you can specify a different set of delimiters to u | **UNA5 (Repetition Separator)** | A value to use for the repetition separator that separates segments that repeat within a transaction set. | | **Segment Terminator** | A single character that indicates the end in an EDI segment. | | **Suffix** | The character to use with the segment identifier. If you designate a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you have to designate a suffix. |-||| ### Control Numbers Other than the character set, you can specify a different set of delimiters to u | **UNB5 (Interchange Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate an outbound interchange. The control number is required, but the prefix and suffix are optional. The control number is incremented for each new message, while the prefix and suffix stay the same. | | **UNG5 (Group Control Number)** | A prefix, a range of values to use as the interchange control number, and a suffix. These values are used to generate the group control number. The control number is required, but the prefix and suffix are optional.
The control number is incremented for each new message until the maximum value is reached, while the prefix and suffix stay the same. | | **UNH1 (Message Header Reference Number)** | A prefix, a range of values for the interchange control number, and a suffix. These values are used to generate the message header reference number. The reference number is required, but the prefix and suffix are optional. The reference number is incremented for each new message, while the prefix and suffix stay the same. |-||| ### Validation After you finish setting up a validation row, the next row automatically appears | **Extended Validation** | If the data type isn't EDI, run validation on the data element requirement and allowed repetition, enumerations, and data element length validation (min/max). | | **Allow Leading/Trailing Zeroes** | Keep any extra leading or trailing zero and space characters. Don't remove these characters. | | **Trim Leading/Trailing Zeroes** | Remove leading or trailing zero characters. |-| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The sent interchange must have trailing delimiters and separators. | -||| +| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the sent interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The sent interchange must have trailing delimiters and separators.
| ## Next steps -[Exchange EDIFACT messages](../logic-apps/logic-apps-enterprise-integration-edifact.md) +[Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md) |
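The control-number settings described above (UNB5, UNG5, UNH1, and the acknowledgment control number) each combine a prefix, a range of values, and a suffix, where only the numeric part increments for each new message. A minimal Python sketch of that numbering behavior (illustrative only, not connector code):

```python
def control_numbers(prefix: str, start: int, stop: int, suffix: str):
    """Yield control-number values the way the agreement settings describe:
    the numeric part increments for each new message until the maximum
    value in the range is reached, while prefix and suffix stay the same."""
    for number in range(start, stop + 1):
        yield f"{prefix}{number}{suffix}"

# UNB5-style interchange control numbers with prefix "ICN" and suffix "A":
batch = list(control_numbers("ICN", 1, 3, "A"))  # ICN1A, ICN2A, ICN3A
```

This also shows why the duplicate-number checks matter: once the range is exhausted and numbering restarts, previously used values would repeat unless duplicates are disallowed.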
logic-apps | Logic Apps Enterprise Integration Edifact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact.md | -To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides triggers and actions that support and manage EDIFACT communication. +To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides operations that support and manage EDIFACT communication. -This article shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. Although you can use any trigger to start your workflow, the examples use the [Request](../connectors/connectors-native-reqres.md) trigger. For more information about the **EDIFACT** connector's triggers, actions, and limits version, review the [connector's reference page](/connectors/edifact/) as documented by the connector's Swagger file. +This how-to guide shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. The **EDIFACT** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md). -![Overview screenshot showing the "Decode EDIFACT message" operation with the message decoding properties.](./media/logic-apps-enterprise-integration-edifact/overview-edifact-message-consumption.png) +## Connector technical reference -## EDIFACT encoding and decoding +The **EDIFACT** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). 
For technical information about the **EDIFACT** connector, see the following documentation: -The following sections describe the tasks that you can complete using the EDIFACT encoding and decoding actions. +* [Connector reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file ++* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) ++ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits). ++The following sections provide more information about the tasks that you can complete using the EDIFACT encoding and decoding actions. ### Encode to EDIFACT message action +This action performs the following tasks: + * Resolve the agreement by matching the sender qualifier & identifier and receiver qualifier and identifier. * Serialize the Electronic Data Interchange (EDI), which converts XML-encoded messages into EDI transaction sets in the interchange. The following sections describe the tasks that you can complete using the EDIFAC ### Decode EDIFACT message action +This action performs the following tasks: + * Validate the envelope against the trading partner agreement. * Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier. The following sections describe the tasks that you can complete using the EDIFAC * A functional acknowledgment that acknowledges the acceptance or rejection for the received interchange or group. -## Connector reference --For technical information about the **EDIFACT** connector, review the [connector's reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file. 
Also, review the [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) for workflows running in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, or the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits). - ## Prerequisites * An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements: - * Is associated with the same Azure subscription as your logic app resource. -- * Exists in the same location or Azure region as your logic app resource. + * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region. - * When you use the [**Logic App (Consumption)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your logic app resource doesn't need a link to your integration account. However, you still need this account to store artifacts, such as partners, agreements, and certificates, along with using the EDIFACT, [X12](logic-apps-enterprise-integration-x12.md), or [AS2](logic-apps-enterprise-integration-as2.md) operations. Your integration account still has to meet other requirements, such as using the same Azure subscription and existing in the same location as your logic app resource. 
+ * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **EDIFACT** operation used in your workflow. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario. - * When you use the [**Logic App (Standard)** resource type](logic-apps-overview.md#resource-environment-differences) and the **EDIFACT** operations, your workflow requires a connection to your integration account that you create directly from your workflow when you add the AS2 operation. + * Defines an [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). -* At least two [trading partners](logic-apps-enterprise-integration-partners.md) in your integration account. The definitions for both partners must use the same *business identity* qualifier, which is **ZZZ - Mutually Defined** for this scenario. + > [!IMPORTANT] + > + > The EDIFACT connector supports only UTF-8 characters. If your output contains + > unexpected characters, check that your EDIFACT messages use the UTF-8 character set. -* An [EDIFACT agreement](logic-apps-enterprise-integration-agreements.md) in your integration account between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. 
+* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account: - > [!IMPORTANT] - > The EDIFACT connector supports only UTF-8 characters. If your output contains - > unexpected characters, check that your EDIFACT messages use the UTF-8 character set. + | Logic app workflow | Link required? | + |--|-| + | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. | + | Standard | Connection to integration account required, but no link required. You can create the connection when you add the **EDIFACT** operation to your workflow. | * The logic app resource and workflow where you want to use the EDIFACT operations. For technical information about the **EDIFACT** connector, review the [connector 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**. --1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. For this example, select the action named **Encode to EDIFACT message by agreement name**. -- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" action selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-consumption.png) +1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). 
> [!NOTE]- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to - > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by - > your EDIFACT agreement. You also have to specify the **XML message to encode**, which can be the output from - > the trigger or a preceding action. + > + > If you want to use the **Encode to EDIFACT message by identities** action instead, + > you later have to provide different values, such as the **Sender identifier** + > and **Receiver identifier** that are specified by your EDIFACT agreement. + > You also have to specify the **XML message to encode**, which can be the output + > from the trigger or a preceding action. -1. When prompted to create a connection to your integration account, provide the following information: +1. When prompted, provide the following connection information for your integration account: | Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |- |||| For example: For technical information about the **EDIFACT** connector, review the [connector 1. When you're done, select **Create**. -1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation: +1. In the EDIFACT action information box, provide the following property values: | Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use.
| | **XML message to encode** | Yes | The business identifier for the message sender as specified by your EDIFACT agreement | | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |- |||| For example, the XML message payload can be the **Body** content output from the Request trigger: For technical information about the **EDIFACT** connector, review the [connector 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Encode to EDIFACT message by agreement name**. -- ![Screenshot showing the Azure portal, workflow designer, and "Encode to EDIFACT message by agreement name" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-encode-edifact-message-standard.png) +1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). > [!NOTE]- > You can choose to select the **Encode to EDIFACT message by identities** action instead, but you later have to - > provide different values, such as the **Sender identifier** and **Receiver identifier** that's specified by - > your EDIFACT agreement. 
You also have to specify the **XML message to encode**, which can be the output from - > the trigger or a preceding action. + > + > If you want to use the **Encode to EDIFACT message by identities** action instead, + > you later have to provide different values, such as the **Sender identifier** + > and **Receiver identifier** that are specified by your EDIFACT agreement. + > You also have to specify the **XML message to encode**, which can be the output + > from the trigger or a preceding action. -1. When prompted to create a connection to your integration account, provide the following information: +1. When prompted, provide the following connection information for your integration account: | Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |- |||| For example: For technical information about the **EDIFACT** connector, review the [connector 1. When you're done, select **Create**. -1. After the EDIFACT details pane appears on the designer, provide information for the following properties: +1. In the EDIFACT action information box, provide the following property values: | Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. | | **XML message to encode** | Yes | The business identifier for the message sender as specified by your EDIFACT agreement | | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md).
|- |||| For example, the message payload is the **Body** content output from the Request trigger: For technical information about the **EDIFACT** connector, review the [connector 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **New step**. +1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). -1. Under the **Choose an operation** search box, select **All**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**. --1. When prompted to create a connection to your integration account, provide the following information: +1. When prompted, provide the following connection information for your integration account: | Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |- |||| For example: For technical information about the **EDIFACT** connector, review the [connector 1. When you're done, select **Create**. -1. After the EDIFACT operation appears on the designer, provide information for the following properties specific to this operation: +1. In the EDIFACT action information box, provide the following property values: | Property | Required | Description | |-|-|-| | **EDIFACT flat file message to decode** | Yes | The XML flat file message to decode. 
| | Other parameters | No | This operation includes the following other parameters: <p>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |- |||| For example, the XML message payload to decode can be the **Body** content output from the Request trigger: For technical information about the **EDIFACT** connector, review the [connector 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -1. On the designer, under the trigger or action where you want to add the EDIFACT action, select **Insert a new step** (plus sign), and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter `edifact encode`. Select the action named **Decode EDIFACT message**. -- ![Screenshot showing the Azure portal, workflow designer, and "Decode EDIFACT message" operation selected.](./media/logic-apps-enterprise-integration-edifact/select-decode-edifact-message-standard.png) +1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). -1. When prompted to create a connection to your integration account, provide the following information: +1. 
When prompted, provide the following connection information for your integration account: | Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection | | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |- |||| For example: For technical information about the **EDIFACT** connector, review the [connector 1. When you're done, select **Create**. -1. After the EDIFACT details pane appears on the designer, provide information for the following properties: +1. In the EDIFACT action information box, provide the following property values: | Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. | | **XML message to encode** | Yes | The XML message to encode, which can be the output from the trigger or a preceding action | | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |- |||| For example, the message payload is the **Body** content output from the Request trigger: For technical information about the **EDIFACT** connector, review the [connector ## Handle UNH2.5 segments in EDIFACT documents -In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for used for schema lookup. +In an EDIFACT document, the [UNH2.5 segment](logic-apps-enterprise-integration-edifact-message-settings.md#receive-settings-schemas) is used for schema lookup. 
For example, in this sample EDIFACT message, the UNH field is `EAN008`: `UNH+SSDD1+ORDERS:D:03B:UN:EAN008` To handle an EDIFACT document or process an EDIFACT message that has a UNH2.5 seg For example, suppose the schema root name for the sample UNH field is `EFACT_D03B_ORDERS_EAN008`. For each `D03B_ORDERS` that has a different UNH2.5 segment, you have to deploy an individual schema. -1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, which is based on whether you're working with the **Logic App (Consumption)** or **Logic App (Standard)** resource type respectively. +1. In the [Azure portal](https://portal.azure.com), add the schema to your integration account resource or logic app resource, based on whether you have a Consumption or Standard logic app workflow respectively. 1. Whether you're using the EDIFACT decoding or encoding action, upload your schema and set up the schema settings in your EDIFACT agreement's **Receive Settings** or **Send Settings** sections respectively. |
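The schema lookup described above can be sketched in code. The following Python sketch is illustrative only (it is not part of Azure Logic Apps) and assumes the default EDIFACT separators, `+` for data elements and `:` for components, rather than reading them from a UNA service string:

```python
def edifact_schema_root(unh_segment: str) -> str:
    """Build the schema root name (for example, EFACT_D03B_ORDERS_EAN008)
    from a UNH segment, using the UNH2.5 value when present.
    Illustrative sketch; assumes default '+' and ':' separators."""
    elements = unh_segment.split("+")          # '+' = data element separator (assumed)
    components = elements[2].split(":")        # UNH2: message identifier components
    msg_type, version, release = components[0], components[1], components[2]
    root = f"EFACT_{version}{release}_{msg_type}"
    if len(components) > 4 and components[4]:  # UNH2.5: association assigned code
        root += f"_{components[4]}"
    return root

print(edifact_schema_root("UNH+SSDD1+ORDERS:D:03B:UN:EAN008"))  # EFACT_D03B_ORDERS_EAN008
```

As the steps above note, each distinct UNH2.5 value maps to its own schema root name, which is why an individual schema must be deployed per value.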
logic-apps | Logic Apps Enterprise Integration Liquid Transform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md | For more information, review the following documentation: <a name="create-template"></a> -## Step 1 - Create the template +## Step 1: Create the template Before you can perform a Liquid transformation in your logic app workflow, you must first create a Liquid template that defines the mapping that you want. Before you can perform a Liquid transformation in your logic app workflow, you m <a name="upload-template"></a> -## Step 2 - Upload Liquid template +## Step 2: Upload Liquid template After you create your Liquid template, you now have to upload the template based on the following scenario: After you create your Liquid template, you now have to upload the template based After your map file finishes uploading, the map appears in the **Maps** list. On your integration account's **Overview** page, under **Artifacts**, your uploaded map also appears. -## Step 3 - Add the Liquid transformation action +## Step 3: Add the Liquid transformation action The following steps show how to add a Liquid transformation action for Consumption and Standard logic app workflows. |
logic-apps | Logic Apps Enterprise Integration X12 997 Acknowledgment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-997-acknowledgment.md | The optional AK3 segment reports errors in a data segment and identifies the loc ||-| | AK301 | Mandatory, identifies the segment in error with the X12 segment ID, for example, NM1. | | AK302 | Mandatory, identifies the segment count of the segment in error. The ST segment is `1`, and each segment increments the segment count by one. |-| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by an Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. | +| AK303 | Mandatory, identifies a bounded loop, which is a loop surrounded by a Loop Start (LS) segment and a Loop End (LE) segment. AK303 contains the values of the LS and LE segments that bound the segment in error. | | AK304 | Optional, specifies the code for the error in the data segment. Although AK304 is optional, the element is required when an error exists for the identified segment. For AK304 error codes, review [997 ACK error codes - Data Segment Note](#997-ack-error-codes). | ||| |
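The AK3 element layout in the table above lends itself to a small parser. A hedged Python sketch, for illustration only: it assumes the common `*` data element separator, and the `AK3*NM1*8**8` sample segment is hypothetical rather than taken from the article:

```python
def parse_ak3(segment: str) -> dict:
    """Split an AK3 data-segment-note segment into its elements.
    Sketch only; assumes '*' as the data element separator."""
    parts = segment.split("*") + [""] * 3    # pad so optional elements are safe to read
    return {
        "segment_id": parts[1],              # AK301: segment in error, e.g. NM1
        "segment_count": int(parts[2]),      # AK302: ST counts as 1, +1 per segment
        "loop_id": parts[3] or None,         # AK303: bounded LS/LE loop, if any
        "error_code": parts[4] or None,      # AK304: required only when an error exists
    }

print(parse_ak3("AK3*NM1*8**8"))
```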
logic-apps | Logic Apps Enterprise Integration X12 Decode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-decode.md | - Title: Decode X12 messages -description: Validate EDI and generate acknowledgements with X12 message decoder in Azure Logic Apps with Enterprise Integration Pack. ----- Previously updated : 01/27/2017---# Decode X12 messages in Azure Logic Apps with Enterprise Integration Pack --With the Decode X12 message connector, you can validate the envelope against a trading partner agreement, validate EDI and partner-specific properties, split interchanges into transactions sets or preserve entire interchanges, and generate acknowledgments for processed transactions. -To use this connector, you must add the connector to an existing trigger in your logic app. --## Before you start --Here's the items you need: --* An Azure account; you can create a [free account](https://azure.microsoft.com/free) -* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) -that's already defined and associated with your Azure subscription. -You must have an integration account to use the Decode X12 message connector. -* At least two [partners](logic-apps-enterprise-integration-partners.md) -that are already defined in your integration account -* An [X12 agreement](logic-apps-enterprise-integration-x12.md) -that's already defined in your integration account --## Decode X12 messages --1. Create a logic app workflow. For more information, see the following documentation: -- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md) -- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md) --2. The Decode X12 message connector doesn't have triggers, -so you must add a trigger for starting your logic app, like a Request trigger. 
-In the Logic App Designer, add a trigger, and then add an action to your logic app. --3. In the search box, enter "x12" for your filter. -Select **X12 - Decode X12 message**. - - ![Search for "x12"](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage1.png) --3. If you didn't previously create any connections to your integration account, -you're prompted to create that connection now. Name your connection, -and select the integration account that you want to connect. -- ![Provide integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage4.png) -- Properties with an asterisk are required. -- | Property | Details | - | | | - | Connection Name * |Enter any name for your connection. | - | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. | --5. When you're done, your connection details should look similar to this example. -To finish creating your connection, choose **Create**. - - ![integration account connection details](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage5.png) --6. After your connection is created, as shown in this example, -select the X12 flat file message to decode. -- ![integration account connection created](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage6.png) -- For example: -- ![Select X12 flat file message for decoding](media/logic-apps-enterprise-integration-x12-decode/x12decodeimage7.png) -- > [!NOTE] - > The actual message content or payload for the message array, good or bad, - > is base64 encoded. So, you must specify an expression that processes this content. - > Here is an example that processes the content as XML that you can - > enter in code view - > or by using expression builder in the designer. 
- > ``` json - > "content": "@xml(base64ToBinary(item()?['Payload']))" - > ``` - > ![Content example](media/logic-apps-enterprise-integration-x12-decode/content-example.png) - > ---## X12 Decode details --The X12 Decode connector performs these tasks: --* Validates the envelope against trading partner agreement -* Validates EDI and partner-specific properties - * EDI structural validation, and extended schema validation - * Validation of the structure of the interchange envelope. - * Schema validation of the envelope against the control schema. - * Schema validation of the transaction-set data elements against the message schema. - * EDI validation performed on transaction-set data elements -* Verifies that the interchange, group, and transaction set control numbers are not duplicates - * Checks the interchange control number against previously received interchanges. - * Checks the group control number against other group control numbers in the interchange. - * Checks the transaction set control number against other transaction set control numbers in that group. -* Splits the interchange into transaction sets, or preserves the entire interchange: - * Split Interchange as transaction sets - suspend transaction sets on error: - Splits interchange into transaction sets and parses each transaction set. - The X12 Decode action outputs only those transaction sets - that fail validation to `badMessages`, and outputs the remaining transactions sets to `goodMessages`. - * Split Interchange as transaction sets - suspend interchange on error: - Splits interchange into transaction sets and parses each transaction set. - If one or more transaction sets in the interchange fail validation, - the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`. - * Preserve Interchange - suspend transaction sets on error: - Preserve the interchange and process the entire batched interchange. 
- The X12 Decode action outputs only those transaction sets that fail validation to `badMessages`, - and outputs the remaining transactions sets to `goodMessages`. - * Preserve Interchange - suspend interchange on error: - Preserve the interchange and process the entire batched interchange. - If one or more transaction sets in the interchange fail validation, - the X12 Decode action outputs all the transaction sets in that interchange to `badMessages`. -* Generates a Technical and/or Functional acknowledgment (if configured). - * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver. - * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document --## View the swagger -See the [swagger details](/connectors/x12/). --## Next steps -[Learn more about the Enterprise Integration Pack](../logic-apps/logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack") - |
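The note above points out that the message content in the `goodMessages` and `badMessages` arrays is base64 encoded, and shows the workflow expression `@xml(base64ToBinary(item()?['Payload']))`. The same decoding step can be mirrored outside the designer; this Python sketch is illustrative only, and the sample item is hypothetical:

```python
import base64
import xml.etree.ElementTree as ET

def decode_payload(item: dict) -> ET.Element:
    """Mirror of @xml(base64ToBinary(item()?['Payload'])):
    base64-decode the message payload, then parse it as XML."""
    return ET.fromstring(base64.b64decode(item["Payload"]))

# Hypothetical goodMessages/badMessages item with a base64-encoded payload.
item = {"Payload": base64.b64encode(b"<TransactionSet><ST>850</ST></TransactionSet>").decode()}
root = decode_payload(item)
print(root.tag)  # TransactionSet
```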
logic-apps | Logic Apps Enterprise Integration X12 Encode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-encode.md | - Title: Encode X12 messages -description: Validate EDI and convert XML-encoded messages with X12 message encoder in Azure Logic Apps with Enterprise Integration Pack. ----- Previously updated : 01/27/2017---# Encode X12 messages in Azure Logic Apps with Enterprise Integration Pack --With the Encode X12 message connector, you can validate EDI and partner-specific properties, -convert XML-encoded messages into EDI transaction sets in the interchange, -and request a Technical Acknowledgement, Functional Acknowledgment, or both. -To use this connector, you must add the connector to an existing trigger in your logic app. --## Before you start --Here's the items you need: --* An Azure account; you can create a [free account](https://azure.microsoft.com/free) -* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) -that's already defined and associated with your Azure subscription. -You must have an integration account to use the Encode X12 message connector. -* At least two [partners](logic-apps-enterprise-integration-partners.md) -that are already defined in your integration account -* An [X12 agreement](logic-apps-enterprise-integration-x12.md) -that's already defined in your integration account --## Encode X12 messages --1. Create a logic app workflow. For more information, see the following documentation: -- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md) -- * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md) --2. The Encode X12 message connector doesn't have triggers, -so you must add a trigger for starting your logic app, like a Request trigger. 
-In the Logic App Designer, add a trigger, and then add an action to your logic app. --3. In the search box, enter "x12" for your filter. -Select either **X12 - Encode to X12 message by agreement name** -or **X12 - Encode to X12 message by identities**. - - ![Search for "x12"](./media/logic-apps-enterprise-integration-x12-encode/x12decodeimage1.png) --3. If you didn't previously create any connections to your integration account, -you're prompted to create that connection now. Name your connection, -and select the integration account that you want to connect. - - ![integration account connection](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage1.png) -- Properties with an asterisk are required. -- | Property | Details | - | | | - | Connection Name * |Enter any name for your connection. | - | Integration Account * |Enter a name for your integration account. Make sure that your integration account and logic app are in the same Azure location. | --5. When you're done, your connection details should look similar to this example. -To finish creating your connection, choose **Create**. -- ![integration account connection created](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage2.png) -- Your connection is now created. -- ![integration account connection details](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage3.png) --#### Encode X12 messages by agreement name --If you chose to encode X12 messages by agreement name, -open the **Name of X12 agreement** list, -enter or select your existing X12 agreement. Enter the XML message to encode. --![Enter X12 agreement name and XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage4.png) --#### Encode X12 messages by identities --If you choose to encode X12 messages by identities, enter the sender identifier, -sender qualifier, receiver identifier, and receiver qualifier as -configured in your X12 agreement. 
Select the XML message to encode. - -![Provide identities for sender and receiver, select XML message to encode](./media/logic-apps-enterprise-integration-x12-encode/x12encodeimage5.png) --## X12 Encode details --The X12 Encode connector performs these tasks: --* Agreement resolution by matching sender and receiver context properties. -* Serializes the EDI interchange, converting XML-encoded messages into EDI transaction sets in the interchange. -* Applies transaction set header and trailer segments -* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange -* Replaces separators in the payload data -* Validates EDI and partner-specific properties - * Schema validation of the transaction-set data elements against the message Schema - * EDI validation performed on transaction-set data elements. - * Extended validation performed on transaction-set data elements -* Requests a Technical and/or Functional acknowledgment (if configured). - * A Technical Acknowledgment generates as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver - * A Functional Acknowledgment generates as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document --## View the swagger -See the [swagger details](/connectors/x12/). --## Next steps -[Learn more about the Enterprise Integration Pack](logic-apps-enterprise-integration-overview.md "Learn about Enterprise Integration Pack") - |
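The encoder's control-number generation described above (interchange, group, and transaction set control numbers for each outgoing interchange) amounts to simple bookkeeping within the ranges given later in the X12 message settings reference (1 through 999999999). A hedged Python sketch of that wraparound logic, not the service's actual implementation:

```python
def next_control_number(current: int, minimum: int = 1, maximum: int = 999999999) -> int:
    """Advance a control number (ISA13, GS06, or ST02) within its configured
    range, wrapping back to the minimum after the maximum is reached.
    Illustrative only; defaults match the documented 1-999999999 range."""
    return minimum if current >= maximum else current + 1

print(next_control_number(41))         # 42
print(next_control_number(999999999))  # wraps back to 1
```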
logic-apps | Logic Apps Enterprise Integration X12 Message Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12-message-settings.md | + + Title: X12 message settings +description: Reference guide for X12 message settings in agreements for Azure Logic Apps with Enterprise Integration Pack. ++ms.suite: integration ++++ Last updated : 08/15/2023+++# Reference for X12 message settings in agreements for Azure Logic Apps +++This reference describes the properties that you can set in an X12 agreement for specifying how to handle messages between [trading partners](logic-apps-enterprise-integration-partners.md). Set up these properties based on your agreement with the partner that exchanges messages with you. ++<a name="x12-inbound-messages"></a> ++## X12 Receive Settings ++![Screenshot showing Azure portal and X12 agreement settings for inbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-receive-settings.png) ++<a name="inbound-identifiers"></a> ++### Identifiers ++| Property | Description | +|-|-| +| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. | +| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | +| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. | +| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. 
This property value must have a minimum of one alphanumeric character and a maximum of 10. | ++<a name="inbound-acknowledgment"></a> ++### Acknowledgment ++| Property | Description | +|-|-| +| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. | +| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <br><br>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <br><br>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. | ++<a name="inbound-schemas"></a> ++### Schemas ++For this section, select a [schema](logic-apps-enterprise-integration-schemas.md) from your [integration account](logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears. ++| Property | Description | +|-|-| +| **Version** | The X12 version for the schema | +| **Transaction Type (ST01)** | The transaction type | +| **Sender Application (GS02)** | The sender application | +| **Schema** | The schema file that you want to use | ++<a name="inbound-envelopes"></a> ++### Envelopes ++| Property | Description | +|-|-| +| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the caret (^) is used as the repetition separator. 
For HIPAA schemas, you can only use the caret. | ++<a name="inbound-control-numbers"></a> ++### Control Numbers ++| Property | Description | +|-|-| +| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the received interchange control number (ISA13) against previously received interchange control numbers. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <br><br>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. | +| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. | +| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. | ++<a name="inbound-validations"></a> ++### Validations ++The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears. ++| Property | Description | +|-|-| +| **Message Type** | The EDI message type | +| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. | +| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). | +| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. | +| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. | +| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. 
If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. | ++<a name="inbound-internal-settings"></a> ++### Internal Settings ++| Property | Description | +|-|-| +| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. | +| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. | +| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. | +| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. | +| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. | +| **Preserve Interchange - suspend interchange on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. 
| ++<a name="x12-outbound-settings"></a> ++## X12 Send settings ++![Screenshot showing Azure portal and X12 agreement settings for outbound messages.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-send-settings.png) ++<a name="outbound-identifiers"></a> ++### Identifiers ++| Property | Description | +|-|-| +| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA2** property. | +| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | +| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <br><br>**Note**: If you select other values, specify a value for the **ISA4** property. | +| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | ++<a name="outbound-acknowledgment"></a> ++### Acknowledgment ++| Property | Description | +|-|-| +| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. | +| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. 
<br><br>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. | ++<a name="outbound-schemas"></a> ++### Schemas ++For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears. ++| Property | Description | +|-|-| +| **Version** | The X12 version for the schema | +| **Transaction Type (ST01)** | The transaction type for the schema | +| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. | ++<a name="outbound-envelopes"></a> ++### Envelopes ++| Property | Description | +|-|-| +| **ISA11 Usage** | The separator to use in a transaction set: <br><br>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <br><br>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the caret (^) is used as the repetition separator. For HIPAA schemas, you can only use the caret. | ++<a name="outbound-control-version-number"></a> ++#### Control Version Number ++For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears. 
++| Property | Description | +|-|-| +| **Control Version Number (ISA12)** | The version of the X12 standard | +| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data | +| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. | +| **GS1** | Optional, select the functional code. | +| **GS2** | Optional, specify the application sender. | +| **GS3** | Optional, specify the application receiver. | +| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. | +| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. | +| **GS7** | Optional, select a value for the responsible agency. | +| **GS8** | Optional, specify the schema document version. | ++<a name="outbound-control-numbers"></a> ++### Control Numbers ++| Property | Description | +|-|-| +| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum of value 1 and a maximum value of 999999999 | +| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 | +| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <br><br>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value | ++<a name="outbound-character-sets-separators"></a> ++### Character Sets and Separators ++The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears. 
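The delimiters configured in this section appear literally in the wire format of an X12 interchange, so an easy way to see what each one does is to read them back out of a raw ISA segment, which is fixed-length by the X12 standard. The following sketch is independent of Azure Logic Apps and only illustrates where each separator lives; the function name and the sample interchange values are fabricated for illustration.

```python
# Illustrative sketch only: X12 separators are positional in the ISA segment.
# The ISA segment is fixed-length (106 characters including the terminator),
# so each delimiter sits at a known offset.

def detect_separators(interchange: str) -> dict:
    """Read the delimiters directly from the ISA segment."""
    if not interchange.startswith("ISA") or len(interchange) < 106:
        raise ValueError("Not a complete X12 interchange header")
    element_sep = interchange[3]           # character right after "ISA"
    component_sep = interchange[104]       # ISA16
    segment_terminator = interchange[105]  # immediately follows ISA16
    # ISA11 holds the repetition separator in version 00501 and later;
    # in 00401 the same position holds the standard identifier "U".
    repetition_sep = interchange.split(element_sep)[11]
    return {
        "element": element_sep,
        "component": component_sep,
        "segment": segment_terminator,
        "repetition": repetition_sep,
    }

# A minimal, syntactically valid 00501 ISA segment (all values fabricated):
sample = (
    "ISA*00*          *00*          *ZZ*SENDER         "
    "*ZZ*RECEIVER       *230101*1200*^*00501*000000001*0*T*:~"
)
print(detect_separators(sample))
```

The characters this sketch extracts correspond to the **Data Element Separator**, **Component Separator**, and **Segment Terminator** properties described in this section.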
++> [!TIP] +> +> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character. ++| Property | Description | +|-|-| +| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. | +| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. | +| **Input Type** | The input type for the character set | +| **Component Separator** | A single character that separates composite data elements | +| **Data Element Separator** | A single character that separates simple data elements within composite data | +| **Replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message | +| **Segment Terminator** | A single character that indicates the end of an EDI segment | +| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. | ++<a name="outbound-validation"></a> ++### Validation ++The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want to set the rule to **true**. After you complete each row, a new empty row automatically appears. ++| Property | Description | +|-|-| +| **Message Type** | The EDI message type | +| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. | +| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). 
| +| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. | +| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. | +| **Trailing Separator Policy** | Generate trailing separators. <br><br>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <br><br>- **Optional**: Send interchanges with or without trailing delimiters and separators. <br><br>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. | ++<a name="hipaa-schemas"></a> ++## HIPAA schemas and message types ++When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. 
If you don't reference the correct message type in your schema and in your agreement, you get this error message: ++`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."` ++This table lists the affected message types, any variants, and the document version numbers that map to those message types: ++| Message type or variant | Description | Document version number (GS8) | +|-|--|-| +| 277 | Health Care Information Status Notification | 005010X212 | +| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 | +| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 | +| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 | ++You also need to disable EDI validation when you use these document version numbers because they result in an error that the character length is invalid. ++To specify these document version numbers and message types, follow these steps: ++1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use. ++ For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead. ++ To update your schema, follow these steps: ++ 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema). ++ 1. In your agreement's message settings, select the revised schema. ++1. In your agreement's `schemaReferences` object, add another entry that specifies the variant message type that matches your document version number. 
++ For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values: ++ ```json + "schemaReferences": [ + { + "messageId": "837", + "schemaVersion": "00501", + "schemaName": "X12_00501_837" + } + ] + ``` ++ In this `schemaReferences` section, add another entry that has these values: ++ * `"messageId": "837_P"` + * `"schemaVersion": "00501"` + * `"schemaName": "X12_00501_837_P"` ++ When you're done, your `schemaReferences` section looks like this: ++ ```json + "schemaReferences": [ + { + "messageId": "837", + "schemaVersion": "00501", + "schemaName": "X12_00501_837" + }, + { + "messageId": "837_P", + "schemaVersion": "00501", + "schemaName": "X12_00501_837_P" + } + ] + ``` ++1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values. ++ ![Screenshot shows X12 agreement settings to disable validation for all message types or each message type.](./media/logic-apps-enterprise-integration-x12-message-settings/x12-disable-validation.png) ++## Next steps ++[Exchange X12 messages](logic-apps-enterprise-integration-x12.md) |
logic-apps | Logic Apps Enterprise Integration X12 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-x12.md | Title: Exchange X12 messages for B2B integration -description: Send, receive, and process X12 messages when building B2B enterprise integration solutions with Azure Logic Apps and the Enterprise Integration Pack. + Title: Exchange X12 messages in B2B workflows +description: Exchange X12 messages between partners by creating workflows with Azure Logic Apps and Enterprise Integration Pack. ms.suite: integration -+ Previously updated : 08/20/2022 Last updated : 08/15/2023 -# Exchange X12 messages for B2B enterprise integration using Azure Logic Apps and Enterprise Integration Pack +# Exchange X12 messages using workflows in Azure Logic Apps [!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)] -In Azure Logic Apps, you can create workflows that work with X12 messages by using **X12** operations. These operations include triggers and actions that you can use in your workflow to handle X12 communication. You can add X12 triggers and actions in the same way as any other trigger and action in a workflow, but you need to meet extra prerequisites before you can use X12 operations. +To send and receive X12 messages in workflows that you create using Azure Logic Apps, use the **X12** connector, which provides operations that support and manage X12 communication. -This article describes the requirements and settings for using X12 triggers and actions in your workflow. If you're looking for EDIFACT messages instead, review [Exchange EDIFACT messages](logic-apps-enterprise-integration-edifact.md). If you're new to logic apps, see [What is Azure Logic Apps](logic-apps-overview.md) and the following documentation: +This how-to guide shows how to add the X12 encoding and decoding actions to an existing logic app workflow. 
The **X12** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md). -* [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md) +## Connector technical reference -* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md) +The **X12** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **X12** connector, see the following documentation: ++* [Connector reference page](/connectors/x12/), which describes the triggers, actions, and limits as documented by the connector's Swagger file ++* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) ++ For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits). ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* A logic app resource and workflow where you want to use an X12 trigger or action. To use an X12 trigger, you need a blank workflow. To use an X12 action, you need a workflow that has an existing trigger. +* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. 
This resource has to meet the following requirements: -* An [integration account](logic-apps-enterprise-integration-create-integration-account.md) that's linked to your logic app resource. Both your logic app and integration account have to use the same Azure subscription and exist in the same Azure region or location. + * Both your integration account and logic app resource must exist in the same Azure subscription and Azure region. - Your integration account also need to include the following B2B artifacts: + * Defines at least two [trading partners](logic-apps-enterprise-integration-partners.md) that participate in the **X12** operation used in your workflow. The definitions for both partners must use the same X12 business identity qualifier. - * At least two [trading partners](logic-apps-enterprise-integration-partners.md) that use the X12 identity qualifier. -- * An X12 [agreement](logic-apps-enterprise-integration-agreements.md) defined between your trading partners. For information about settings to use when receiving and sending messages, review [Receive Settings](#receive-settings) and [Send Settings](#send-settings). + * Defines an [X12 agreement](logic-apps-enterprise-integration-agreements.md) between the trading partners that participate in your workflow. Each agreement requires a host partner and a guest partner. The content in the messages between you and the other partner must match the agreement type. For information about agreement settings to use when receiving and sending messages, see [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). > [!IMPORTANT]+ > > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, you have to add a - > `schemaReferences` section to your agreement. For more information, review [HIPAA schemas and message types](#hipaa-schemas). + > `schemaReferences` section to your agreement. 
For more information, see [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas). - * The [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation. + * Defines the [schemas](logic-apps-enterprise-integration-schemas.md) to use for XML validation. > [!IMPORTANT]- > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](#hipaa-schemas). --## Connector reference --For more technical information about this connector, such as triggers, actions, and limits as described by the connector's Swagger file, see the [connector's reference page](/connectors/x12/). --> [!NOTE] -> For logic apps in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), -> this connector's ISE-labeled version uses the [B2B message limits for ISE](../logic-apps/logic-apps-limits-and-config.md#b2b-protocol-limits). --<a name="receive-settings"></a> --## Receive Settings --After you set the properties in your trading partner agreement, you can configure how this agreement identifies and handles inbound messages that you receive from your partner through this agreement. --1. Under **Add**, select **Receive Settings**. --1. Based on the agreement with the partner that exchanges messages with you, set the properties in the **Receive Settings** pane, which is organized into the following sections: -- * [Identifiers](#inbound-identifiers) - * [Acknowledgement](#inbound-acknowledgement) - * [Schemas](#inbound-schemas) - * [Envelopes](#inbound-envelopes) - * [Control Numbers](#inbound-control-numbers) - * [Validations](#inbound-validations) - * [Internal Settings](#inbound-internal-settings) --1. When you're done, make sure to save your settings by selecting **OK**. 
--<a name="inbound-identifiers"></a> --### Receive Settings - Identifiers --![Identifier properties for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-identifiers.png) --| Property | Description | -|-|-| -| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. | -| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | -| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. | -| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | -||| + > + > If you're working with Health Insurance Portability and Accountability Act (HIPAA) schemas, make sure to review [HIPAA schemas and message types](logic-apps-enterprise-integration-x12-message-settings.md#hipaa-schemas). -<a name="inbound-acknowledgement"></a> +* Based on whether you're working on a Consumption or Standard logic app workflow, your logic app resource might require a link to your integration account: -### Receive Settings - Acknowledgement + | Logic app workflow | Link required? | + |--|-| + | Consumption | Connection to integration account required, but no link required. You can create the connection when you add the **X12** operation to your workflow. | + | Standard | Connection to integration account required, but no link required. 
You can create the connection when you add the **X12** operation to your workflow. | -![Acknowledgement for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-acknowledgement.png) +* The logic app resource and workflow where you want to use the X12 operations. -| Property | Description | -|-|-| -| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. | -| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. <p>For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <p>To enable generation of AK2 loops in functional acknowledgments for accepted transaction sets, select **Include AK2 / IK2 Loop**. | + For more information, see the following documentation: -<a name="inbound-schemas"></a> + * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md) -### Receive Settings - Schemas + * [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md) -![Schemas for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-schemas.png) +<a name="encode"></a> -For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01) and Sender Application (GS02). The EDI Receive Pipeline disassembles the incoming message by matching the values and schema that you set in this section with the values for ST01 and GS02 in the incoming message and with the schema of the incoming message. After you complete each row, a new empty row automatically appears. 
+## Encode X12 messages -| Property | Description | -|-|-| -| **Version** | The X12 version for the schema | -| **Transaction Type (ST01)** | The transaction type | -| **Sender Application (GS02)** | The sender application | -| **Schema** | The schema file that you want to use | -||| +The **Encode to X12 message** operation performs the following tasks: -<a name="inbound-envelopes"></a> +* Resolves the agreement by matching sender and receiver context properties. +* Serializes the EDI interchange and converts XML-encoded messages into EDI transaction sets in the interchange. +* Applies transaction set header and trailer segments. +* Generates an interchange control number, a group control number, and a transaction set control number for each outgoing interchange. +* Replaces separators in the payload data. +* Validates EDI and partner-specific properties. +  * Schema validation of transaction-set data elements against the message schema. +  * EDI validation on transaction-set data elements. +  * Extended validation on transaction-set data elements. +* Requests a Technical and Functional Acknowledgment, if configured. +  * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver. +  * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document. -### Receive Settings - Envelopes +### [Consumption](#tab/consumption) -![Separators to use in transaction sets for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-envelopes.png) +1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -| Property | Description | -|-|-| -| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.)
for decimal notation, rather than the decimal notation of the incoming document in the EDI Receive Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the carat (^) is used as the repetition separator. For HIPAA schemas, you can only use the carat. | -||| +1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). -<a name="inbound-control-numbers"></a> + > [!NOTE] + > + > If you want to use **Encode to X12 message by identities** action instead, + > you later have to provide different values, such as the **Sender identifier** + > and **Receiver identifier** that's specified by your X12 agreement. + > You also have to specify the **XML message to encode**, which can be the output + > from the trigger or a preceding action. -### Receive Settings - Control Numbers +1. When prompted, provide the following connection information for your integration account: -![Handling control number duplicates for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-control-numbers.png) + | Property | Required | Description | + |-|-|-| + | **Connection name** | Yes | A name for the connection | + | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. | -| Property | Description | -|-|-| -| **Disallow Interchange control number duplicates** | Block duplicate interchanges. Check the interchange control number (ISA13) for the received interchange control number. If a match is detected, the EDI Receive Pipeline doesn't process the interchange. <p><p>To specify the number of days to perform the check, enter a value for the **Check for duplicate ISA13 every (days)** property. 
| -| **Disallow Group control number duplicates** | Block interchanges that have duplicate group control numbers. | -| **Disallow Transaction set control number duplicates** | Block interchanges that have duplicate transaction set control numbers. | -||| + For example: -<a name="inbound-validations"></a> + ![Screenshot showing Consumption workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-consumption.png) -### Receive Settings - Validations +1. When you're done, select **Create**. -![Validations for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-validations.png) +1. In the X12 action information box, provide the following property values: -The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears. + | Property | Required | Description | + |-|-|-| + | **Name of X12 agreement** | Yes | The X12 agreement to use. | + | **XML message to encode** | Yes | The XML message to encode | + | Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). | -| Property | Description | -|-|-| -| **Message Type** | The EDI message type | -| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. 
| -| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). | -| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. | -| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. | -| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the inbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Accept interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The inbound interchange must have trailing delimiters and separators. | -||| + For example, you can use the **Body** content output from the Request trigger as the XML message payload: -<a name="inbound-internal-settings"></a> + ![Screenshot showing Consumption workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-consumption.png) -### Receive Settings - Internal Settings +### [Standard](#tab/standard) -![Internal settings for inbound messages](./media/logic-apps-enterprise-integration-x12/x12-receive-settings-internal-settings.png) +1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -| Property | Description | -|-|-| -| **Convert implied decimal format Nn to a base 10 numeric value** | Convert an EDI number that is specified with the format "Nn" into a base-10 numeric value. | -| **Create empty XML tags if trailing separators are allowed** | Have the interchange sender include empty XML tags for trailing separators. 
| -| **Split Interchange as transaction sets - suspend transaction sets on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope to the transaction set. Suspend only the transactions where the validation fails. | -| **Split Interchange as transaction sets - suspend interchange on error** | Parse each transaction set that's in an interchange into a separate XML document by applying the appropriate envelope. Suspend the entire interchange when one or more transaction sets in the interchange fail validation. | -| **Preserve Interchange - suspend transaction sets on error** | Leave the interchange intact and create an XML document for the entire batched interchange. Suspend only the transaction sets that fail validation, but continue to process all other transaction sets. | -| **Preserve Interchange - suspend interchange on error** |Leaves the interchange intact, creates an XML document for the entire batched interchange. Suspends the entire interchange when one or more transaction sets in the interchange fail validation. | -||| +1. In the designer, [follow these general steps to add the **X12** action named **Encode to X12 message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). -<a name="send-settings"></a> + > [!NOTE] + > + > If you want to use **Encode to X12 message by identities** action instead, + > you later have to provide different values, such as the **Sender identifier** + > and **Receiver identifier** that's specified by your X12 agreement. + > You also have to specify the **XML message to encode**, which can be the output + > from the trigger or a preceding action. -## Send Settings +1. 
When prompted, provide the following connection information for your integration account: -After you set the agreement properties, you can configure how this agreement identifies and handles outbound messages that you send to your partner through this agreement. + | Property | Required | Description | + |-|-|-| + | **Connection Name** | Yes | A name for the connection | + | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. | + | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>sp=<permissions>sv=<SAS-version>sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. 
| + | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios | -1. Under **Add**, select **Send Settings**. + For example: -1. Configure these properties based on your agreement with the partner that exchanges messages with you. For property descriptions, see the tables in this section. + ![Screenshot showing Standard workflow and connection information for action named Encode to X12 message by agreement name.](./media/logic-apps-enterprise-integration-x12/create-x12-encode-connection-standard.png) - The **Send Settings** are organized into these sections: +1. When you're done, select **Create**. - * [Identifiers](#outbound-identifiers) - * [Acknowledgement](#outbound-acknowledgement) - * [Schemas](#outbound-schemas) - * [Envelopes](#outbound-envelopes) - * [Control Version Number](#outbound-control-version-number) - * [Control Numbers](#outbound-control-numbers) - * [Character Sets and Separators](#outbound-character-sets-separators) - * [Validation](#outbound-validation) +1. In the X12 action information box, provide the following property values: -1. When you're done, make sure to save your settings by selecting **OK**. + | Property | Required | Description | + |-|-|-| + | **Name Of X12 Agreement** | Yes | The X12 agreement to use. | + | **XML Message To Encode** | Yes | The XML message to encode | + | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Component separator** <br>- **Replacement character** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Control Version Number** <br>- **Application Sender Identifier/Code GS02** <br>- **Application Receiver Identifier/Code GS03** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). 
| -<a name="outbound-identifiers"></a> + For example, you can use the **Body** content output from the Request trigger as the XML message payload: -### Send Settings - Identifiers + ![Screenshot showing Standard workflow, action named Encode to X12 message by agreement name, and action properties.](./media/logic-apps-enterprise-integration-x12/encode-x12-message-agreement-standard.png) -![Identifier properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-identifiers.png) --| Property | Description | -|-|-| -| **ISA1 (Authorization Qualifier)** | The Authorization Qualifier value that you want to use. The default value is **00 - No Authorization Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA2** property. | -| **ISA2** | The Authorization Information value to use when the **ISA1** property is not **00 - No Authorization Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | -| **ISA3 (Security Qualifier)** | The Security Qualifier value that you want to use. The default value is **00 - No Security Information Present**. <p>**Note**: If you select other values, specify a value for the **ISA4** property. | -| **ISA4** | The Security Information value to use when the **ISA3** property is not **00 - No Security Information Present**. This property value must have a minimum of one alphanumeric character and a maximum of 10. | -||| --<a name="outbound-acknowledgement"></a> --### Send Settings - Acknowledgement --![Acknowledgement properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-acknowledgement.png) --| Property | Description | -|-|-| -| **TA1 Expected** | Return a technical acknowledgment (TA1) to the interchange sender. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. 
These acknowledgments are expected by the host partner based on the agreement's Receive Settings. | -| **FA Expected** | Return a functional acknowledgment (FA) to the interchange sender. For the **FA Version** property, based on the schema version, select the 997 or 999 acknowledgments. <p>This setting specifies that the host partner, who is sending the message, requests an acknowledgment from the guest partner in the agreement. These acknowledgments are expected by the host partner based on the agreement's Receive Settings. | -||| --<a name="outbound-schemas"></a> --### Send Settings - Schemas --![Schemas for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-schemas.png) --For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each transaction type (ST01). After you complete each row, a new empty row automatically appears. --| Property | Description | -|-|-| -| **Version** | The X12 version for the schema | -| **Transaction Type (ST01)** | The transaction type for the schema | -| **Schema** | The schema file that you want to use. If you select the schema first, the version and transaction type are automatically set. | -||| --<a name="outbound-envelopes"></a> --### Send Settings - Envelopes --![Separators in a transaction set to use for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-envelopes.png) --| Property | Description | -|-|-| -| **ISA11 Usage** | The separator to use in a transaction set: <p>- **Standard Identifier**: Use a period (.) for decimal notation, rather than the decimal notation of the outbound document in the EDI Send Pipeline. <p>- **Repetition Separator**: Specify the separator for repeated occurrences of a simple data element or a repeated data structure. For example, usually the caret (^) is used as the repetition separator. 
For HIPAA schemas, you can only use the caret. | -||| --<a name="outbound-control-version-number"></a> --### Send Settings - Control Version Number + -![Control version number for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-version-number.png) +<a name="decode"></a> -For this section, select a [schema](../logic-apps/logic-apps-enterprise-integration-schemas.md) from your [integration account](./logic-apps-enterprise-integration-create-integration-account.md) for each interchange. After you complete each row, a new empty row automatically appears. +## Decode X12 messages -| Property | Description | -|-|-| -| **Control Version Number (ISA12)** | The version of the X12 standard | -| **Usage Indicator (ISA15)** | The context of an interchange, which is either **Test** data, **Information** data, or **Production** data | -| **Schema** | The schema to use for generating the GS and ST segments for an X12-encoded interchange that's sent to the EDI Send Pipeline. | -| **GS1** | Optional, select the functional code. | -| **GS2** | Optional, specify the application sender. | -| **GS3** | Optional, specify the application receiver. | -| **GS4** | Optional, select **CCYYMMDD** or **YYMMDD**. | -| **GS5** | Optional, select **HHMM**, **HHMMSS**, or **HHMMSSdd**. | -| **GS7** | Optional, select a value for the responsible agency. | -| **GS8** | Optional, specify the schema document version. | -||| +The **Decode X12 message** operation performs the following tasks: -<a name="outbound-control-numbers"></a> +* Validates the envelope against the trading partner agreement. -### Send Settings - Control Numbers +* Validates EDI and partner-specific properties. 
-![Control numbers for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-control-numbers.png) + * EDI structural validation and extended schema validation + * Interchange envelope structural validation + * Schema validation of the envelope against the control schema + * Schema validation of the transaction set data elements against the message schema + * EDI validation on transaction-set data elements -| Property | Description | -|-|-| -| **Interchange Control Number (ISA13)** | The range of values for the interchange control number, which can have a minimum of value 1 and a maximum value of 999999999 | -| **Group Control Number (GS06)** | The range of values for the group control number, which can have a minimum value of 1 and a maximum value of 999999999 | -| **Transaction Set Control Number (ST02)** | The range of values for the transaction set control number, which can have a minimum value of 1 and a maximum value of 999999999 <p>- **Prefix**: Optional, an alphanumeric value <br>- **Suffix**: Optional, an alphanumeric value | -||| +* Verifies that the interchange, group, and transaction set control numbers aren't duplicates. -<a name="outbound-character-sets-separators"></a> + * Checks the interchange control number against previously received interchanges. + * Checks the group control number against other group control numbers in the interchange. + * Checks the transaction set control number against other transaction set control numbers in that group. -### Send Settings - Character Sets and Separators +* Splits an interchange into transaction sets, or preserves the entire interchange: -![Delimiters for message types in outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-character-sets-separators.png) + * Split the interchange into transaction sets or suspend transaction sets on error: Parse each transaction set. 
The X12 decode action outputs only those transaction sets failing validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`. -The **Default** row shows the character set that's used as delimiters for a message schema. If you don't want to use the **Default** character set, you can enter a different set of delimiters for each message type. After you complete each row, a new empty row automatically appears. + * Split the interchange into transaction sets or suspend interchange on error: Parse each transaction set. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`. -> [!TIP] -> To provide special character values, edit the agreement as JSON and provide the ASCII value for the special character. + * Preserve the interchange or suspend transaction sets on error: Preserve the interchange and process the entire batched interchange. The X12 decode action outputs only those transaction sets failing validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`. -| Property | Description | -|-|-| -| **Character Set to be used** | The X12 character set, which is either **Basic**, **Extended**, or **UTF8**. | -| **Schema** | The schema that you want to use. After you select the schema, select the character set that you want to use, based on the separator descriptions below. 
| -| **Input Type** | The input type for the character set | -| **Component Separator** | A single character that separates composite data elements | -| **Data Element Separator** | A single character that separates simple data elements within composite data | -| **Replacement Character Separator** | A replacement character that replaces all separator characters in the payload data when generating the outbound X12 message | -| **Segment Terminator** | A single character that indicates the end of an EDI segment | -| **Suffix** | The character to use with the segment identifier. If you specify a suffix, the segment terminator data element can be empty. If the segment terminator is left empty, you must designate a suffix. | -||| + * Preserve the interchange or suspend interchange on error: Preserve the interchange and process the entire batched interchange. If one or more transaction sets in the interchange fail validation, the X12 decode action outputs all the transaction sets in that interchange to `badMessages`. -<a name="outbound-validation"></a> +* Generates a Technical and Functional Acknowledgment, if configured. -### Send Settings - Validation + * Generates a Technical Acknowledgment as a result of header validation. The technical acknowledgment reports the status of the processing of an interchange header and trailer by the address receiver. + * Generates a Functional Acknowledgment as a result of body validation. The functional acknowledgment reports each error encountered while processing the received document. -![Validation properties for outbound messages](./media/logic-apps-enterprise-integration-x12/x12-send-settings-validation.png) +### [Consumption](#tab/consumption) -The **Default** row shows the validation rules that are used for an EDI message type. If you want to define different rules, select each box where you want the rule set to **true**. After you complete each row, a new empty row automatically appears. +1. 
In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. -| Property | Description | -|-|-| -| **Message Type** | The EDI message type | -| **EDI Validation** | Perform EDI validation on data types as defined by the schema's EDI properties, length restrictions, empty data elements, and trailing separators. | -| **Extended Validation** | If the data type isn't EDI, validation is on the data element requirement and allowed repetition, enumerations, and data element length validation (min or max). | -| **Allow Leading/Trailing Zeroes** | Keep any additional leading or trailing zero and space characters. Don't remove these characters. | -| **Trim Leading/Trailing Zeroes** | Remove any leading or trailing zero and space characters. | -| **Trailing Separator Policy** | Generate trailing separators. <p>- **Not Allowed**: Prohibit trailing delimiters and separators in the outbound interchange. If the interchange has trailing delimiters and separators, the interchange is declared not valid. <p>- **Optional**: Send interchanges with or without trailing delimiters and separators. <p>- **Mandatory**: The outbound interchange must have trailing delimiters and separators. | -||| +1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action). -<a name="hipaa-schemas"></a> +1. When prompted, provide the following connection information for your integration account: -## HIPAA schemas and message types + | Property | Required | Description | + |-|-|-| + | **Connection name** | Yes | A name for the connection | + | **Integration Account** | Yes | From the list of available integration accounts, select the account to use. | -When you work with HIPAA schemas and the 277 or 837 message types, you need to perform a few extra steps. 
The [document version numbers (GS8)](#outbound-control-version-number) for these message types have more than nine characters, for example, "005010X222A1". Also, some document version numbers map to variant message types. If you don't reference the correct message type in your schema and in your agreement, you get this error message: + For example: -`"The message has an unknown document type and did not resolve to any of the existing schemas configured in the agreement."` + ![Screenshot showing Consumption workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-consumption.png) -This table lists the affected message types, any variants, and the document version numbers that map to those message types: +1. When you're done, select **Create**. -| Message type or variant | Description | Document version number (GS8) | -|-|--|-| -| 277 | Health Care Information Status Notification | 005010X212 | -| 837_I | Health Care Claim Institutional | 004010X096A1 <br>005010X223A1 <br>005010X223A2 | -| 837_D | Health Care Claim Dental | 004010X097A1 <br>005010X224A1 <br>005010X224A2 | -| 837_P | Health Care Claim Professional | 004010X098A1 <br>005010X222 <br>005010X222A1 | -||| +1. In the X12 action information box, provide the following property values: -You also need to disable EDI validation when you use these document version numbers because they result in an error that the character length is invalid. + | Property | Required | Description | + |-|-|-| + | **X12 flat file message to decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. 
For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** | + | Other parameters | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). | -To specify these document version numbers and message types, follow these steps: + For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression: -1. In your HIPAA schema, replace the current message type with the variant message type for the document version number that you want to use. + ![Screenshot showing Consumption workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-consumption.png) - For example, suppose you want to use document version number `005010X222A1` with the `837` message type. In your schema, replace each `"X12_00501_837"` value with the `"X12_00501_837_P"` value instead. +### [Standard](#tab/standard) - To update your schema, follow these steps: +1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. - 1. In the Azure portal, go to your integration account. Find and download your schema. Replace the message type and rename the schema file, and upload your revised schema to your integration account. For more information, see [Edit a schema](logic-apps-enterprise-integration-schemas.md#edit-schema). +1. In the designer, [follow these general steps to add the **X12** action named **Decode X12 message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action). - 1. In your agreement's message settings, select the revised schema. +1. 
When prompted, provide the following connection information for your integration account: + | Property | Required | Description | + |-|-|-| + | **Connection Name** | Yes | A name for the connection | + | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. | + | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>?sp=<permissions>&sv=<SAS-version>&sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br><br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. 
| + | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios | - For example, suppose you want to use document version number `005010X222A1` for the `837` message type. Your agreement has a `schemaReferences` section with these properties and values: + For example: - ```json - "schemaReferences": [ - { - "messageId": "837", - "schemaVersion": "00501", - "schemaName": "X12_00501_837" - } - ] - ``` + ![Screenshot showing Standard workflow and connection information for action named Decode X12 message.](./media/logic-apps-enterprise-integration-x12/create-x12-decode-connection-standard.png) - In this `schemaReferences` section, add another entry that has these values: +1. When you're done, select **Create**. - * `"messageId": "837_P"` - * `"schemaVersion": "00501"` - * `"schemaName": "X12_00501_837_P"` +1. In the X12 action information box, provide the following property values: - When you're done, your `schemaReferences` section looks like this: + | Property | Required | Description | + |-|-|-| + | **X12 Flat File Message To Decode** | Yes | The X12 message in flat file format to decode <br><br>**Note**: The XML message payload or content for the message array, good or bad, is base64 encoded. So, you must use an expression that processes this content. For example, the following expression processes the message content as XML: <br><br>**`xml(base64ToBinary(item()?['Body']))`** | + | **Advanced parameters** | No | This operation includes the following other parameters: <br><br>- **Preserve Interchange** <br>- **Suspend Interchange on Error** <br><br>For more information, review [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md). 
| - ```json - "schemaReferences": [ - { - "messageId": "837", - "schemaVersion": "00501", - "schemaName": "X12_00501_837" - }, - { - "messageId": "837_P", - "schemaVersion": "00501", - "schemaName": "X12_00501_837_P" - } - ] - ``` + For example, you can use the **Body** content output from the Request trigger as the XML message payload, but you must first preprocess this content using an expression: -1. In your agreement's message settings, disable EDI validation by clearing the **EDI Validation** checkbox either for each message type or for all message types if you're using the **Default** values. + ![Screenshot showing Standard workflow, action named Decode X12 message, and action properties.](./media/logic-apps-enterprise-integration-x12/decode-x12-message-standard.png) - ![Disable validation for all message types or each message type](./media/logic-apps-enterprise-integration-x12/x12-disable-validation.png) + ## Next steps * [X12 TA1 technical acknowledgments and error codes](logic-apps-enterprise-integration-x12-ta1-acknowledgment.md) * [X12 997 functional acknowledgments and error codes](logic-apps-enterprise-integration-x12-997-acknowledgment.md)-* [Managed connectors for Azure Logic Apps](../connectors/managed.md) -* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md) +* [X12 message settings](logic-apps-enterprise-integration-x12-message-settings.md) |
logic-apps | Logic Apps Limits And Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md | Before you set up your firewall with IP addresses, review these considerations: * [Firewall permissions for single tenant logic apps - Azure portal](create-single-tenant-workflows-azure-portal.md#firewall-setup) * [Firewall permissions for single tenant logic apps - Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md#firewall-setup) -* For Consumption logic app workflows that run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), make sure that you [open these ports too](../logic-apps/connect-virtual-network-vnet-isolated-environment.md#network-ports-for-ise). - * If your logic apps have problems accessing Azure storage accounts that use [firewalls and firewall rules](../storage/common/storage-network-security.md), you have [various other options to enable access](../connectors/connectors-create-api-azureblobstorage.md#access-storage-accounts-behind-firewalls). For example, logic apps can't directly access storage accounts that use firewall rules and exist in the same region. However, if you permit the [outbound IP addresses for managed connectors in your region](/connectors/common/outbound-ip-addresses), your logic apps can access storage accounts that are in a different region except when you use the Azure Table Storage or Azure Queue Storage connectors. To access your Table Storage or Queue Storage, you can use the HTTP trigger and actions instead. For other options, see [Access storage accounts behind firewalls](../connectors/connectors-create-api-azureblobstorage.md#access-storage-accounts-behind-firewalls). |
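The storage-firewall workaround above hinges on allow-listing the outbound IP address ranges for managed connectors in your region. A minimal sketch of that allow-list check, using illustrative placeholder CIDR blocks rather than the real published ranges:

```python
import ipaddress

# Illustrative placeholder CIDR blocks; look up the real outbound IP ranges
# for managed connectors in your region before configuring a firewall rule.
allowed_ranges = [ipaddress.ip_network(cidr) for cidr in ("13.70.0.0/16", "40.112.0.0/13")]

def is_allowed(source_ip):
    # A firewall rule admits traffic when the source address falls in any allowed range.
    address = ipaddress.ip_address(source_ip)
    return any(address in network for network in allowed_ranges)

print(is_allowed("13.70.12.34"))   # True
print(is_allowed("192.168.1.10"))  # False
```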
logic-apps | Logic Apps Securing A Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md | In a Standard logic app workflow that starts with the Request trigger (but not a * An inbound call to the request endpoint can use only one authorization scheme, either Azure AD OAuth or [Shared Access Signature (SAS)](#sas). Although using one scheme doesn't disable the other scheme, using both schemes at the same time causes an error because Azure Logic Apps doesn't know which scheme to choose. - To enable Azure AD OAuth so that this option is the only way to call the request endpoint, use the following steps: -- 1. To enable the capability to check the OAuth access token, [follow the steps to include 'Authorization' header in the Request or HTTP webhook trigger outputs](#include-auth-header). -- > [!NOTE] - > - > This step makes the `Authorization` header visible in the workflow's run history - > and in the trigger's outputs. -- 1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer. -- 1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**. -- 1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter the following expression, and select **Done**. -- `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')` -- > [!NOTE] - > If you call the trigger endpoint without the correct authorization, - > the run history just shows the trigger as `Skipped` without any - > message that the trigger condition has failed. --* Only [Bearer-type](../active-directory/develop/active-directory-v2-protocols.md#tokens) authorization schemes are supported for Azure AD OAuth access tokens, which means that the `Authorization` header for the access token must specify the `Bearer` type. 
+* Azure Logic Apps supports either the [bearer type](../active-directory/develop/active-directory-v2-protocols.md#tokens) or [proof-of-possession type (Consumption logic app only)](/entra/msal/dotnet/advanced/proof-of-possession-tokens) authorization scheme for Azure AD OAuth access tokens. The `Authorization` header for the access token must specify either the `Bearer` type or the `PoP` type. For more information about how to get and use a PoP token, see [Get a Proof of Possession (PoP) token](#get-pop). * Your logic app resource is limited to a maximum number of authorization policies. Each authorization policy also has a maximum number of [claims](../active-directory/develop/developer-glossary.md#claim). For more information, review [Limits and configuration for Azure Logic Apps](../logic-apps/logic-apps-limits-and-config.md#authentication-limits). In a Standard logic app workflow that starts with the Request trigger (but not a } ``` +#### Enable Azure AD OAuth as the only option to call a request endpoint ++1. Set up your Request or HTTP webhook trigger with the capability to check the OAuth access token by [following the steps to include the 'Authorization' header in the Request or HTTP webhook trigger outputs](#include-auth-header). ++ > [!NOTE] + > + > This step makes the `Authorization` header visible in the + > workflow's run history and in the trigger's outputs. ++1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app workflow in the designer. ++1. On the trigger, in the upper right corner, select the ellipses (**...**) button, and then select **Settings**. ++1. Under **Trigger Conditions**, select **Add**. In the trigger condition box, enter either of the following expressions, based on the token type you want to use, and select **Done**. 
++ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'Bearer')` ++ -or- ++ `@startsWith(triggerOutputs()?['headers']?['Authorization'], 'PoP')` ++If you call the trigger endpoint without the correct authorization, the run history just shows the trigger as `Skipped` without any message that the trigger condition has failed. ++<a name="get-pop"></a> ++#### Get a Proof-of-Possession (PoP) token ++The Microsoft Authentication Library (MSAL) libraries provide PoP tokens for you to use. If the logic app workflow that you want to call requires a PoP token, you can get this token using the MSAL libraries. The following samples show how to acquire PoP tokens: ++* [A .NET Core daemon console application calling a protected Web API with its own identity](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/tree/master/2-Call-OwnApi) ++* [SignedHttpRequest aka PoP (Proof of Possession)](https://github.com/AzureAD/azure-activedirectory-identitymodel-extensions-for-dotnet/wiki/SignedHttpRequest-aka-PoP-(Proof-of-Possession)) ++To use the PoP token with your Consumption logic app, follow the next section to [set up Azure AD OAuth](#enable-azure-ad-inbound). + <a name="enable-azure-ad-inbound"></a> #### Enable Azure AD OAuth for your Consumption logic app resource In the [Azure portal](https://portal.azure.com), add one or more authorization p 1. On the logic app menu, under **Settings**, select **Authorization**. After the Authorization pane opens, select **Add policy**. - ![Select "Authorization" > "Add policy"](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png) + ![Screenshot that shows Azure portal, Consumption logic app menu, Authorization page, and selected button to add policy.](./media/logic-apps-securing-a-logic-app/add-azure-active-directory-authorization-policies.png) 1. 
Provide information about the authorization policy by specifying the [claim types](../active-directory/develop/developer-glossary.md#claim) and values that your logic app expects in the access token presented by each inbound call to the Request trigger: - ![Provide information for authorization policy](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png) + ![Screenshot that shows Azure portal, Consumption logic app Authorization page, and information for authorization policy.](./media/logic-apps-securing-a-logic-app/set-up-authorization-policy.png) | Property | Required | Type | Description | |-|-||-| | **Policy name** | Yes | String | The name that you want to use for the authorization policy |- | **Claims** | Yes | String | The claim types and values that your workflow accepts from inbound calls. Here are the available claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. | + | **Policy type** | Yes | String | Either **AAD** for bearer type tokens or **AADPOP** for Proof-of-Possession type tokens. 
| + | **Claims** | Yes | String | A key-value pair that specifies the claim type and value that the workflow's Request trigger expects in the access token presented by each inbound call to the trigger. You can add any standard claim you want by selecting **Add standard claim**. To add a claim that's specific to a PoP token, select **Add custom claim**. <br><br>Available standard claim types: <br><br>- **Issuer** <br>- **Audience** <br>- **Subject** <br>- **JWT ID** (JSON Web Token identifier) <br><br>Requirements: <br><br>- At a minimum, the **Claims** list must include the **Issuer** claim, which has a value that starts with `https://sts.windows.net/` or `https://login.microsoftonline.com/` as the Azure AD issuer ID. <br><br>- Each claim must be a single string value, not an array of values. For example, you can have a claim with **Role** as the type and **Developer** as the value. You can't have a claim that has **Role** as the type and the values set to **Developer** and **Program Manager**. <br><br>- The claim value is limited to a [maximum number of characters](logic-apps-limits-and-config.md#authentication-limits). <br><br>For more information about these claim types, review [Claims in Azure AD security tokens](../active-directory/develop/security-tokens.md#json-web-tokens-and-claims). You can also specify your own claim type and value. | ++ The following example shows the information for a PoP token: ++ ![Screenshot that shows Azure portal, Consumption logic app Authorization page, and information for a proof-of-possession policy.](./media/logic-apps-securing-a-logic-app/pop-policy-example.png) 1. To add another claim, select from these options: In the [Azure portal](https://portal.azure.com), add one or more authorization p 1. To include the `Authorization` header from the access token in the request-based trigger outputs, review [Include 'Authorization' header in request and HTTP webhook trigger outputs](#include-auth-header). 
-Workflow properties such as policies don't appear in your logic app's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource Manager +Workflow properties such as policies don't appear in your workflow's code view in the Azure portal. To access your policies programmatically, call the following API through Azure Resource Manager <a name="define-authorization-policy-template"></a> This list includes information about TLS/SSL self-signed certificates: If you want to use client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type instead, you still have to complete a few extra steps for this authentication type. Otherwise, the call fails. For more information, review [Client certificate or Azure Active Directory Open Authentication (Azure AD OAuth) with the "Certificate" credential type for single-tenant Azure Logic Apps](../connectors/connectors-native-http.md#client-certificate-authentication). -* For logic app workflows in an [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md), the HTTP connector permits self-signed certificates for TLS/SSL handshakes. However, you must first [enable self-signed certificate support](../logic-apps/create-integration-service-environment-rest-api.md#request-body) for an existing ISE or new ISE by using the Azure Logic Apps REST API, and install the public certificate at the `TrustedRoot` location. - Here are more ways that you can help secure endpoints that handle calls sent from your logic app workflows: * [Add authentication to outbound requests](#add-authentication-outbound).
You can use Azure Logic Apps in [Azure Government](../azure-government/documenta * [Virtual machine isolation in Azure](../virtual-machines/isolation.md) * [Deploy dedicated Azure services into virtual networks](../virtual-network/virtual-network-for-azure-services.md) -* Based on whether you have Consumption or Standard logic apps, you have these options: +* Based on whether you have Consumption or Standard logic app workflows, you have these options: - * For Standard logic apps, you can privately and securely communicate between logic app workflows and an Azure virtual network by setting up private endpoints for inbound traffic and use virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). + * Standard logic app workflows can privately and securely communicate with an Azure virtual network through private endpoints that you set up for inbound traffic and virtual network integration for outbound traffic. For more information, review [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). - * For Consumption logic apps, you can create and deploy those logic apps in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md). That way, your logic apps run on dedicated resources and can access resources protected by an Azure virtual network. For more control over the encryption keys used by Azure Storage, you can set up, use, and manage your own key by using [Azure Key Vault](../key-vault/general/overview.md). This capability is also known as "Bring Your Own Key" (BYOK), and your key is called a "customer-managed key". 
For more information, review [Set up customer-managed keys to encrypt data at rest for integration service environments (ISEs) in Azure Logic Apps](../logic-apps/customer-managed-keys-integration-service-environment.md). + * Consumption logic app workflows can run in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md) where they can use dedicated resources and access resources protected by an Azure virtual network. However, the ISE resource retires on August 31, 2024, due to its dependency on Azure Cloud Services (classic), which retires at the same time. - > [!IMPORTANT] - > Some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md)) - > for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database, - > partner services, or customer services that are hosted on Azure. - > - > If your workflows need access to virtual networks that use private endpoints, and you want to develop those workflows - > using the **Logic App (Consumption)** resource type, you *must create and run your logic apps in an ISE*. However, - > if you want to develop those workflows using the **Logic App (Standard)** resource type, *you don't need an ISE*. - > Instead, your workflows can communicate privately and securely with virtual networks by using private endpoints - > for inbound traffic and virtual network integration for outbound traffic. For more information, review + > [!IMPORTANT] + > Some Azure virtual networks use private endpoints ([Azure Private Link](../private-link/private-link-overview.md)) + > for providing access to Azure PaaS services, such as Azure Storage, Azure Cosmos DB, or Azure SQL Database, + > partner services, or customer services that are hosted on Azure. 
+ > + > If you want to create Consumption logic app workflows that need access to virtual networks with private endpoints, + > you *must create and run your Consumption workflows in an ISE*. Or, you can create Standard workflows, + > which don't need an ISE. Instead, your workflows can communicate privately and securely with virtual networks + > by using private endpoints for inbound traffic and virtual network integration for outbound traffic. For more information, see > [Secure traffic between virtual networks and single-tenant Azure Logic Apps using private endpoints](secure-single-tenant-workflow-virtual-network-private-endpoint.md). For more information about isolation, review the following documentation: |
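As a sketch of the Standard workflow option described above, inbound traffic can be secured with a private endpoint declared in a Resource Manager template along these lines. The names, resource IDs, and API version are placeholders to adapt; the `sites` group ID applies because Standard logic apps run on the App Service platform:

```json
{
  "type": "Microsoft.Network/privateEndpoints",
  "apiVersion": "2022-07-01",
  "name": "<endpoint-name>",
  "location": "<region>",
  "properties": {
    "subnet": { "id": "<subnet-resource-id>" },
    "privateLinkServiceConnections": [
      {
        "name": "<connection-name>",
        "properties": {
          "privateLinkServiceId": "<standard-logic-app-resource-id>",
          "groupIds": [ "sites" ]
        }
      }
    ]
  }
}
```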
logic-apps | Logic Apps Using File Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-file-connector.md | - Title: Connect to on-premises file systems -description: Connect to file systems on premises from workflows in Azure Logic Apps using the File System connector. --- Previously updated : 11/08/2022---# Connect to on-premises file systems from workflows in Azure Logic Apps ---This how-to guide shows how to access an on-premises file share from a workflow in Azure Logic Apps by using the File System connector. You can then create automated workflows that run when triggered by events in your file share or in other systems and run actions to manage your files. The connector provides the following capabilities: --- Create, get, append, update, and delete files.-- List files in folders or root folders.-- Get file content and metadata.--In this article, the example scenarios demonstrate the following tasks: --- Trigger a workflow when a file is created or added to a file share, and then send an email.-- Trigger a workflow when copying a file from a Dropbox account to a file share, and then send an email.--## Connector technical reference --The File System connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences). --| Logic app | Environment | Connector version | -|--|-|-| -| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Standard** label. 
For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | -| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Standard** label, and the ISE version, which has different message limits than the Standard class. For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | -| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector differs in the following ways: <br><br>- The built-in connector supports only Standard logic apps that run in an App Service Environment v3 with Windows plans only. <br><br>- The built-in version can connect directly to a file share and access Azure virtual networks by using a connection string without an on-premises data gateway. 
<br><br>For more information, review the following documentation: <br><br>- [File System managed connector reference](/connectors/filesystem/) <br>- [File System built-in connector reference](/azure/logic-apps/connectors/built-in/reference/filesystem/) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) | --## General limitations --- The File System connector currently supports only Windows file systems on Windows operating systems.-- Mapped network drives aren't supported.--## Prerequisites --* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). --* To connect to your file share, different requirements apply, based on your logic app and the hosting environment: -- - Consumption logic app workflows -- - In multi-tenant Azure Logic Apps, you need to meet the following requirements, if you haven't already: - - 1. [Install the on-premises data gateway on a local computer](logic-apps-gateway-install.md). -- The File System managed connector requires that your gateway installation and file system server must exist in the same Windows domain. -- 1. [Create an on-premises data gateway resource in Azure](logic-apps-gateway-connection.md). -- 1. After you add a File System managed connector trigger or action to your workflow, select the data gateway resource that you previously created so you can connect to your file system. -- - In an ISE, you don't need the on-premises data gateway. Instead, you can use the ISE-versioned File System connector. -- - Standard logic app workflows -- You can use the File System built-in connector or managed connector. -- * To use the File System managed connector, follow the same requirements as a Consumption logic app workflow in multi-tenant Azure Logic Apps. 
-- * To use the File System built-in connector, your Standard logic app workflow must run in App Service Environment v3, but doesn't require the data gateway resource. --* Access to the computer that has the file system you want to use. For example, if you install the data gateway on the same computer as your file system, you need the account credentials for that computer. --* To follow the example scenario in this how-to guide, you need an email account from a provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review other supported email connectors](/connectors/connector-reference/connector-reference-logicapps-connectors). This example uses the Office 365 Outlook connector with a work or school account. If you use another email account, the overall steps are the same, but your UI might slightly differ. -- > [!IMPORTANT] - > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic apps. - > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can - > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application). - > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md). --* For the example File System action scenario, you need a [Dropbox account](https://www.dropbox.com/), which you can sign up for free. --* The logic app workflow where you want to access your file share. To start your workflow with a File System trigger, you have to start with a blank workflow. To add a File System action, start your workflow with any trigger. --<a name="add-file-system-trigger"></a> --## Add a File System trigger --### [Consumption](#tab/consumption) --1. 
In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. --1. On the designer, under the search box, select **Standard**. In the search box, enter **file system**. --1. From the triggers list, select the [File System trigger](/connectors/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is created**. -- ![Screenshot showing Azure portal, designer for Consumption logic app workflow, search box with "file system", and File System trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-consumption.png) --1. In the connection information box, provide the following information as required: -- | Property | Required | Value | Description | - |-|-|-|-| - | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | - | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | - | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | - | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | - | **Password** | Yes | <*password*> | The password for the computer where you have your file system | - | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | -- The following example shows the connection information for the File System managed connector trigger: -- ![Screenshot showing Consumption workflow designer and connection information for File System managed connector trigger.](media/logic-apps-using-file-connector/file-system-connection-consumption.png) -- The following example shows the connection information for the File System ISE-based trigger: -- ![Screenshot showing Consumption workflow designer and connection information for File System ISE-based connector trigger.](media/logic-apps-using-file-connector/file-system-connection-ise.png) --1. When you're done, select **Create**. -- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. --1. Continue building your workflow. -- 1. Provide the required information for your trigger. -- For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check. -- ![Screenshot showing Consumption workflow designer and the "When a file is created" trigger.](media/logic-apps-using-file-connector/trigger-file-system-when-file-created-consumption.png) -- 1. 
To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address. -- ![Screenshot showing Consumption workflow designer, managed connector "When a file is created" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-file-system-send-email-consumption.png) -- > [!TIP] - > - > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes. - > When the dynamic content list appears, select from the available outputs. --1. Save your logic app. Test your workflow by uploading a file and triggering the workflow. -- If successful, your workflow sends an email about the new file. --### [Standard](#tab/standard) --#### Built-in connector trigger --These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans only. --1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. --1. On the designer, under the search box, select **Built-in**. In the search box, enter **file system**. --1. From the triggers list, select the [File System trigger](/azure/logic-apps/connectors/built-in/reference/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is added**. -- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and "When a file is added" selected.](media/logic-apps-using-file-connector/select-file-system-trigger-built-in-standard.png) --1.
In the connection information box, provide the following information as required: -- | Property | Required | Value | Description | - |-|-|-|-| - | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | - | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | - | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** | - | **Password** | Yes | <*password*> | The password for the computer where you have your file system | -- The following example shows the connection information for the File System built-in connector trigger: -- ![Screenshot showing Standard workflow designer and connection information for File System built-in connector trigger.](media/logic-apps-using-file-connector/trigger-file-system-connection-built-in-standard.png) --1. When you're done, select **Create**. -- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. --1. Continue building your workflow. -- 1. Provide the required information for your trigger. -- For this example, select the folder path on your file system server to check for a newly added file. Specify how often you want to check. 
-- ![Screenshot showing Standard workflow designer and "When a file is added" trigger information.](media/logic-apps-using-file-connector/trigger-when-file-added-built-in-standard.png) -- 1. To test your workflow, add an Outlook action that sends you an email when a file is added to the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address. -- ![Screenshot showing Standard workflow designer, managed connector "When a file is added" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-send-email-built-in-standard.png) -- > [!TIP] - > - > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes. - > When the dynamic content list appears, select from the available outputs. --1. Save your logic app. Test your workflow by uploading a file and triggering the workflow. -- If successful, your workflow sends an email about the new file. --#### Managed connector trigger --1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer. --1. On the designer, under the search box, select **Azure**. In the search box, enter **file system**. --1. From the triggers list, select the [File System trigger](/connectors/filesystem/#triggers) that you want. This example continues with the trigger named **When a file is created**. -- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and the "When a file is created" trigger selected.](media/logic-apps-using-file-connector/select-file-system-trigger-managed-standard.png) --1.
In the connection information box, provide the following information as required: -- | Property | Required | Value | Description | - |-|-|-|-| - | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | - | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | - | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | - | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | - | **Password** | Yes | <*password*> | The password for the computer where you have your file system | - | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | -- The following example shows the connection information for the File System managed connector trigger: -- ![Screenshot showing Standard workflow designer and connection information for File System managed connector trigger.](media/logic-apps-using-file-connector/trigger-file-system-connection-managed-standard.png) --1. When you're done, select **Create**. -- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected trigger. --1. Continue building your workflow. -- 1. Provide the required information for your trigger. -- For this example, select the folder path on your file system server to check for a newly created file. Specify the number of files to return and how often you want to check. -- ![Screenshot showing Standard workflow designer and "When a file is created" trigger information.](media/logic-apps-using-file-connector/trigger-when-file-created-managed-standard.png) -- 1. To test your workflow, add an Outlook action that sends you an email when a file is created on the file system in the specified folder. Enter the email recipients, subject, and body. For testing, you can use your own email address.
-- ![Screenshot showing Standard workflow designer, managed connector "When a file is created" trigger, and "Send an email" action.](media/logic-apps-using-file-connector/trigger-send-email-managed-standard.png) -- > [!TIP] - > - > To add outputs from previous steps in the workflow, click inside the trigger's edit boxes. - > When the dynamic content list appears, select from the available outputs. --1. Save your logic app. Test your workflow by uploading a file and triggering the workflow. -- If successful, your workflow sends an email about the new file. ----<a name="add-file-system-action"></a> --## Add a File System action --The example logic app workflow starts with the [Dropbox trigger](/connectors/dropbox/#triggers), but you can use any trigger that you want. --### [Consumption](#tab/consumption) --1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. --1. Find and select the [File System action](/connectors/filesystem/#actions) that you want to use. This example continues with the action named **Create file**. -- 1. Under the trigger or action where you want to add the File System action, select **New step**. -- Or, to add an action between existing steps, move your pointer over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**. --1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **file system**. --1. From the actions list, select the File System action named **Create file**. -- ![Screenshot showing Azure portal, designer for Consumption logic app workflow, search box with "file system", and "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-consumption.png) --1. 
In the connection information box, provide the following information as required: -- | Property | Required | Value | Description | - |-|-|-|-| - | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | - | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | - | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | - | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | - | **Password** | Yes | <*password*> | The password for the computer where you have your file system | - | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | -- The following example shows the connection information for the File System managed connector action: -- ![Screenshot showing connection information for File System managed connector action.](media/logic-apps-using-file-connector/file-system-connection-consumption.png) -- The following example shows the connection information for the File System ISE-based connector action: -- ![Screenshot showing connection information for File System ISE-based connector action.](media/logic-apps-using-file-connector/file-system-connection-ise.png) --1. When you're done, select **Create**. -- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. --1. Continue building your workflow. -- 1. Provide the required information for your action. -- For this example, select the folder path on your file system server to use, which is the root folder here. Enter the file name and content, based on the file uploaded to Dropbox. -- ![Screenshot showing Consumption workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-consumption.png) -- > [!TIP] - > - > To add outputs from previous steps in the workflow, click inside the action's edit boxes. 
- > When the dynamic content list appears, select from the available outputs. -- 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address. -- ![Screenshot showing Consumption workflow designer, managed connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-consumption.png) --1. Save your logic app. Test your workflow by uploading a file to Dropbox. -- If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. --### [Standard](#tab/standard) --#### Built-in connector action --These steps apply only to Standard logic apps in an App Service Environment v3 with Windows plans only. --1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. --1. Find and select the [File System action](/azure/logic-apps/connectors/built-in/reference/filesystem/#actions) that you want to use. This example continues with the action named **Create file**. -- 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**. -- Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**. -- 1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **file system**. -- 1. From the actions list, select the File System action named **Create file**. -- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and built-in connector "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-built-in-standard.png) --1.
In the connection information box, provide the following information as required: -- | Property | Required | Value | Description | - |-|-|-|-| - | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | - | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | - | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** | - | **Password** | Yes | <*password*> | The password for the computer where you have your file system | -- The following example shows the connection information for the File System built-in connector action: -- ![Screenshot showing Standard workflow designer and connection information for File System built-in connector action.](media/logic-apps-using-file-connector/action-file-system-connection-built-in-standard.png) -- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. --1. Continue building your workflow. -- 1. Provide the required information for your action. For this example, follow these steps: -- 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder. -- 1.
To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**. -- 1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**. -- ![Screenshot showing Standard workflow designer and the File System built-in connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-built-in-standard.png) -- 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file. Enter the email recipients, subject, and body. For testing, you can use your own email address. -- ![Screenshot showing Standard workflow designer, built-in connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-built-in-standard.png) --1. Save your logic app. Test your workflow by uploading a file to Dropbox. -- If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. --#### Managed connector action --1. In the [Azure portal](https://portal.azure.com), open your logic app workflow in the designer. --1. Find and select the [File System action](/connectors/filesystem/#actions) that you want to use. This example continues with the action named **Create file**. -- 1. Under the trigger or action where you want to add the action, select the plus sign (**+**), and then select **Add an action**. -- Or, to add an action between existing steps, select the plus sign (**+**) on the connecting arrow, and then select **Add an action**. -- 1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **file system**. -- 1. From the actions list, select the File System action named **Create file**.
-- ![Screenshot showing Azure portal, designer for Standard logic app workflow, search box with "file system", and managed connector "Create file" action selected.](media/logic-apps-using-file-connector/select-file-system-action-managed-standard.png) --1. In the connection information box, provide the following information as required: -- | Property | Required | Value | Description | - |-|-|-|-| - | **Connection name** | Yes | <*connection-name*> | The name to use for your connection | - | **Root folder** | Yes | <*root-folder-name*> | The root folder for your file system, which is usually the main parent folder and is the folder used for the relative paths with all triggers that work on files. <br><br>For example, if you installed the on-premises data gateway, use the local folder on the computer with the data gateway installation. Or, use the folder for the network share where the computer can access that folder, for example, **`\\PublicShare\\MyFileSystem`**. | - | **Authentication Type** | No | <*auth-type*> | The type of authentication that your file system server uses, which is **Windows** | - | **Username** | Yes | <*domain-and-username*> | The domain and username for the computer where you have your file system. <br><br>For the managed File System connector, use one of the following values with the backslash (**`\`**): <br><br>- **<*domain*>\\<*username*>** <br>- **<*local-computer*>\\<*username*>** <br><br>For example, if your file system folder is on the same computer as the on-premises data gateway installation, you can use **<*local-computer*>\\<*username*>**. 
<br><br>- For the ISE-based File System connector, use the forward slash instead (**`/`**): <br><br>- **<*domain*>/<*username*>** <br>- **<*local-computer*>/<*username*>** | - | **Password** | Yes | <*password*> | The password for the computer where you have your file system | - | **gateway** | No | - <*Azure-subscription*> <br>- <*gateway-resource-name*> | This section applies only to the managed File System connector: <br><br>- **Subscription**: The Azure subscription associated with the data gateway resource <br>- **Connection Gateway**: The data gateway resource | -- The following example shows the connection information for the File System managed connector action: -- ![Screenshot showing connection information for File System managed connector action.](media/logic-apps-using-file-connector/action-file-system-connection-managed-standard.png) -- Azure Logic Apps creates and tests your connection, making sure that the connection works properly. If the connection is set up correctly, the setup options appear for your selected action. --1. Continue building your workflow. -- 1. Provide the required information for your action. For this example, follow these steps: -- 1. Enter the path and name for the file that you want to create, including the file name extension. Make sure the path is relative to the root folder. -- 1. To specify the content from the file created on Dropbox, from the **Add a parameter** list, select **File content**. -- 1. Click inside the **File Content** parameter box. From the dynamic content list that appears, in the **When a file is created** section, select **File Content**. -- ![Screenshot showing Standard workflow designer and the File System managed connector "Create file" action.](media/logic-apps-using-file-connector/action-file-system-create-file-managed-standard.png) -- 1. To test your workflow, add an Outlook action that sends you an email when the File System action creates a file.
Enter the email recipients, subject, and body. For testing, you can use your own email address. -- ![Screenshot showing Standard workflow designer, managed connector "Create file" action, and "Send an email" action.](media/logic-apps-using-file-connector/action-file-system-send-email-managed-standard.png) --1. Save your logic app. Test your workflow by uploading a file to Dropbox. -- If successful, your workflow creates a file on your file system server, based on the uploaded file in Dropbox, and sends an email about the created file. ----## Next steps --* [Managed connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) -* [Built-in connectors for Azure Logic Apps](../connectors/built-in.md) |
logic-apps | Logic Apps Using Sap Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-sap-connector.md | tags: connectors This multipart how-to guide shows how to access your SAP server from a workflow in Azure Logic Apps using the SAP connector. You can use the SAP connector's operations to create automated workflows that run when triggered by events in your SAP server or in other systems and run actions to manage resources on your SAP server. -Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps, but this connector is currently in preview and subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference). +Both Standard and Consumption logic app workflows offer the SAP *managed* connector that's hosted and run in multi-tenant Azure. Standard workflows also offer the SAP *built-in* connector that's hosted and run in single-tenant Azure Logic Apps. If you create and host a Consumption workflow in an integration service environment (ISE), you can also use the SAP connector's ISE-native version. For more information, see [Connector technical reference](#connector-technical-reference). ## SAP compatibility The SAP connector has different versions, based on [logic app type and host envi |--|-|-| | **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. 
For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) | | **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label, and the ISE-native version, which appears in the designer with the **ISE** label and has different message limits than the managed connector. <br><br>**Note**: Make sure to use the ISE-native version, not the managed version. <br><br>For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [ISE message limits](../logic-apps/logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector (preview), which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. 
For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) | +| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the connector gallery under **Runtime** > **Shared**, and the built-in connector, which appears in the connector gallery under **Runtime** > **In-App** and is [service provider-based](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). The built-in connector can directly access Azure virtual networks with a connection string without an on-premises data gateway. For more information, review the following documentation: <br><br>- [SAP managed connector reference](/connectors/sap/) <br>- [SAP built-in connector reference](/azure/logic-apps/connectors/built-in/reference/sap/) <br><br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) | ## Connector differences The SAP built-in connector significantly differs from the SAP managed connector The SAP built-in connector doesn't use the shared or global connector infrastructure, which means timeouts are longer at 5 minutes compared to the SAP managed connector (two minutes) and the SAP ISE-versioned connector (four minutes). Long-running requests work without you having to implement the [long-running webhook-based request action pattern](logic-apps-scenario-function-sb-trigger.md). -* By default, the preview SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md). 
+* By default, the SAP built-in connector operations are *stateless*. However, you can [enable stateful mode (affinity) for these operations](../connectors/enable-stateful-affinity-built-in-connectors.md). In stateful mode, the SAP built-in connector supports high availability and horizontal scale-out configurations. By comparison, the SAP managed connector has restrictions regarding the on-premises data gateway limited to a single instance for triggers and to clusters only in failover mode for actions. For more information, see [SAP managed connector - Known issues and limitations](#known-issues-limitations). Along with simple string and number inputs, the SAP connector accepts the follow 1. In the action named **\[BAPI] Call method in SAP**, disable the auto-commit feature. 1. Call the action named **\[BAPI] Commit transaction** instead. -### SAP built-in connector --The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code: - - ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). The preview SAP built-in connector trigger named **Register SAP RFC server for t > When you use a Premium-level ISE, use the ISE-native SAP connector, not the SAP managed connector, > which doesn't natively run in an ISE. For more information, review the [ISE prerequisites](#ise-prerequisites). 
-* By default, the preview SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). +* By default, the SAP built-in connector operations are *stateless*. To run these operations in stateful mode, see [Enable stateful mode for stateless built-in connectors](../connectors/enable-stateful-affinity-built-in-connectors.md). * To use either the SAP managed connector trigger named **When a message is received from SAP** or the SAP built-in trigger named **Register SAP RFC server for trigger**, complete the following tasks: The preview SAP built-in connector trigger named **Register SAP RFC server for t > In Standard workflows, the SAP built-in trigger named **Register SAP RFC server for trigger** uses the Azure > Functions trigger instead, and shows only the actual callbacks from SAP. + * For the SAP built-in connector trigger named **Register SAP RFC server for trigger**, you have to enable virtual network integration and private ports by following the article at [Enabling Service Bus and SAP built-in connectors for stateful Logic Apps in Standard](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/enabling-service-bus-and-sap-built-in-connectors-for-stateful/ba-p/3820381). You can also run the workflow in Visual Studio Code to fire the trigger locally. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code: + + - **WEBSITE_PRIVATE_IP**: Set this environment variable value to **127.0.0.1** as the localhost address. 
+ - **WEBSITE_PRIVATE_PORTS**: Set this environment variable value to two free and usable ports on your local computer, separating the values with a comma (**,**), for example, **8080,8088**. + * The message content to send to your SAP server, such as a sample IDoc file. This content must be in XML format and include the namespace of the [SAP action](/connectors/sap/#actions) that you want to use. You can [send IDocs with a flat file schema by wrapping them in an XML envelope](sap-create-example-scenario-workflows.md#send-flat-file-idocs). <a name="network-prerequisites"></a> For a Consumption workflow in multi-tenant Azure Logic Apps, the SAP managed con <a name="single-tenant-prerequisites"></a> -For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway. +For a Standard workflow in single-tenant Azure Logic Apps, use the SAP *built-in* connector to directly access resources that are protected by an Azure virtual network. You can also use other built-in connectors that let workflows directly access on-premises resources without having to use the on-premises data gateway. For additional requirements regarding the SAP built-in connector trigger named **Register SAP RFC server for trigger**, see [Prerequisites](#prerequisites). 1. To use the SAP connector, you need to download the following files and have them ready to upload to your Standard logic app resource. For more information, see [SAP NCo client library prerequisites](#sap-client-library-prerequisites): For a Standard workflow in single-tenant Azure Logic Apps, use the preview SAP * 1. In the **net472** folder, upload the assembly files larger than 4 MB.
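As a quick sanity check before debugging the trigger locally, a short script can confirm that both environment variables described above are set the way the article requires. This is purely an illustrative sketch, not part of Azure Logic Apps or the SAP connector; the default values written here are the article's examples, and the ports should be two free ports on your own machine.

```python
import os

# Example values from the article (hypothetical defaults for this sketch);
# adjust the ports to two free, usable ports on your local computer.
os.environ.setdefault("WEBSITE_PRIVATE_IP", "127.0.0.1")
os.environ.setdefault("WEBSITE_PRIVATE_PORTS", "8080,8088")

def check_local_trigger_env() -> list:
    """Verify the two variables the SAP built-in trigger expects when the
    workflow runs locally in Visual Studio Code."""
    ip = os.environ["WEBSITE_PRIVATE_IP"]
    if ip != "127.0.0.1":
        raise ValueError(f"WEBSITE_PRIVATE_IP should be the localhost address, got {ip!r}")
    ports = [int(p) for p in os.environ["WEBSITE_PRIVATE_PORTS"].split(",")]
    if len(ports) != 2:
        raise ValueError("WEBSITE_PRIVATE_PORTS must list exactly two ports, comma-separated")
    return ports

print(check_local_trigger_env())
```

Running the check before starting the local debug session surfaces a misconfigured variable immediately, instead of the trigger silently failing to receive calls.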
-#### SAP trigger requirements --The preview SAP built-in connector trigger named **Register SAP RFC server for trigger** is available in the Azure portal, but the trigger currently can't receive calls from SAP when deployed in Azure. To fire the trigger, you can run the workflow locally in Visual Studio Code. For Visual Studio Code setup requirements and more information, see [Create a Standard logic app workflow in single-tenant Azure Logic Apps using Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). You must also set up the following environment variables on the computer where you install Visual Studio Code: - - ### [ISE](#tab/ise) <a name="ise-prerequisites"></a> |
logic-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md | Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 ms.suite: integration |
logic-apps | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
machine-learning | Concept Automated Ml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automated-ml.md | See the [AutoML package](/python/api/azure-ai-ml/azure.ai.ml.automl) for changin With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](concept-onnx.md). -See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms). +See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX](how-to-configure-auto-train.md#supported-algorithms). The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](./how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html). |
machine-learning | Concept Automl Forecasting Calendar Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-calendar-features.md | This article focuses on the calendar-based features that AutoML creates to incre As a part of feature engineering, AutoML transforms datetime type columns provided in the training data into new columns of calendar-based features. These features can help regression models learn seasonal patterns at several cadences. AutoML can always create calendar features from the time index of the time series since this is a required column in the training data. Calendar features are also made from other columns with datetime type, if any are present. See the [how AutoML uses your data](./concept-automl-forecasting-methods.md#how-automl-uses-your-data) guide for more information on data requirements. -AutoML considers two categories of calendar features: standard features that are based entirely on date and time values and holiday features which are specific to a country or region of the world. We'll go over these features in the remainder of the article. +AutoML considers two categories of calendar features: standard features that are based entirely on date and time values and holiday features which are specific to a country or region of the world. We go over these features in the remainder of the article. ## Standard calendar features The following table shows the full set of AutoML's standard calendar features alo | | -- | -- | |`year`|Numeric feature representing the calendar year |2011| |`year_iso`|Represents ISO year as defined in ISO 8601. ISO years start on the first week of year that has a Thursday. For example, if January 1 is a Friday, the ISO year begins on January 4. ISO years may differ from calendar years.|2010|-|`half`| Feature indicating whether the date is in the first or second half of the year. It is 1 if the date is prior to July 1 and 2 otherwise.
+|`half`| Feature indicating whether the date is in the first or second half of the year. It's 1 if the date is prior to July 1 and 2 otherwise. |`quarter`|Numeric feature representing the quarter of the given date. It takes values 1, 2, 3, or 4 representing first, second, third, fourth quarter of calendar year.|1| |`month`|Numeric feature representing the calendar month. It takes values 1 through 12.|1| |`month_lbl`|String feature representing the name of month.|'January'| |`day`|Numeric feature representing the day of the month. It takes values from 1 through 31.|1| |`hour`|Numeric feature representing the hour of the day. It takes values 0 through 23.|0| |`minute`|Numeric feature representing the minute within the hour. It takes values 0 through 59.|25|-|`second`|Numeric feature representing the second of the given datetime. In the case where only date format is provided, then it is assumed as 0. It takes values 0 through 59.|30| -|`am_pm`|Numeric feature indicating whether the time is in the morning or evening. It is 0 for times before 12PM and 1 for times after 12PM. |0| +|`second`|Numeric feature representing the second of the given datetime. In the case where only date format is provided, then it's assumed as 0. It takes values 0 through 59.|30| +|`am_pm`|Numeric feature indicating whether the time is in the morning or evening. It's 0 for times before 12PM and 1 for times after 12PM. |0| |`am_pm_lbl`|String feature indicating whether the time is in the morning or evening.|'am'| |`hour12`|Numeric feature representing the hour of the day on a 12 hour clock. It takes values 0 through 12 for first half of the day and 1 through 11 for second half.|0| |`wday`|Numeric feature representing the day of the week. It takes values 0 through 6, where 0 corresponds to Monday. |5| Other datetime column | A reduced set consisting of `Year`, `Month`, `Day`, ## Holiday features -AutoML can optionally create features representing holidays from a specific country or region. 
These features are configured in AutoML using the `country_or_region_for_holidays` parameter which accepts an [ISO country code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes). +AutoML can optionally create features representing holidays from a specific country or region. These features are configured in AutoML using the `country_or_region_for_holidays` parameter, which accepts an [ISO country code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes). > [!NOTE] > Holiday features can only be made for time series with daily frequency. forecasting_job.set_forecast_settings( country_or_region_for_holidays='US' ) ```-The generated holiday features look like the following: +The generated holiday features look like the following output: <a name='output'><img src='./media/concept-automl-forecasting-calendar-features/sample_dataset_holiday_feature_generated.png' alt='sample_data_output' width=75%></img></a> |
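To make the standard-feature definitions above concrete, the sketch below derives a few of them for the timestamp implied by the table's example column. This is a plain-Python illustration of the documented definitions, not AutoML's actual featurization code; the feature names follow the table.

```python
from datetime import datetime

def standard_calendar_features(ts: datetime) -> dict:
    """Derive a few of the standard calendar features from a datetime,
    following the definitions in the table above."""
    return {
        "year": ts.year,
        "year_iso": ts.isocalendar()[0],   # ISO 8601 year; may differ from calendar year
        "half": 1 if ts.month < 7 else 2,  # 1 if prior to July 1, 2 otherwise
        "quarter": (ts.month - 1) // 3 + 1,
        "month": ts.month,
        "am_pm": 0 if ts.hour < 12 else 1,
        "wday": ts.weekday(),              # 0 corresponds to Monday
    }

# January 1, 2011 falls in ISO week 52 of 2010, so year_iso differs from year.
print(standard_calendar_features(datetime(2011, 1, 1, 0, 25, 30)))
```

Note how `year_iso` comes out as 2010 for a calendar-year-2011 date, matching the ISO 8601 behavior the table describes.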
machine-learning | Concept Automl Forecasting Methods | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md | Each Series in Own Group (1:1) | All Series in Single Group (N:1) -| -- Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster -More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb). +More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb). ## Next steps |
machine-learning | Concept Endpoints Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md | Visual Studio Code enables you to interactively debug endpoints. Optionally, you can secure communication with a managed online endpoint by using private endpoints. -You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created per deployment. +You can configure security for inbound scoring requests and outbound communications with the workspace and other services separately. Inbound communications use the private endpoint of the Azure Machine Learning workspace. Outbound communications use private endpoints created for the workspace's managed virtual network (preview). -For more information, see [Secure online endpoints](how-to-secure-online-endpoint.md). +For more information, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md). 
## Managed online endpoints vs Kubernetes online endpoints The following table highlights the key differences between managed online endpoi | **Cluster sizing (scaling)** | [Managed manual and autoscale](how-to-autoscale-endpoints.md), supporting additional nodes provisioning | [Manual and autoscale](how-to-kubernetes-inference-routing-azureml-fe.md#autoscaling), supporting scaling the number of replicas within fixed cluster boundaries | | **Compute type** | Managed by the service | Customer-managed Kubernetes cluster (Kubernetes) | | **Managed identity** | [Supported](how-to-access-resources-from-endpoints-managed-identities.md) | Supported |-| **Virtual Network (VNET)** | [Supported via managed network isolation](how-to-secure-online-endpoint.md) | User responsibility | +| **Virtual Network** | [Supported via managed network isolation](concept-secure-online-endpoint.md) | User responsibility | | **Out-of-box monitoring & logging** | [Azure Monitor and Log Analytics powered](how-to-monitor-online-endpoints.md) (includes key metrics and log tables for endpoints and deployments) | User responsibility | | **Logging with Application Insights (legacy)** | Supported | Supported | | **View costs** | [Detailed to endpoint / deployment level](how-to-view-online-endpoints-costs.md) | Cluster level | |
machine-learning | Concept Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md | The following table shows a summary of the different features available to onlin | Swagger support | Yes | No | | Authentication | Key and token | Azure AD | | Private network support | Yes | Yes |-| Managed network isolation<sup>1</sup> | Yes | No | +| Managed network isolation | Yes | No | | Customer-managed keys | Yes | No | | Cost basis | None | None | -<sup>1</sup> [*Managed network isolation*](how-to-secure-online-endpoint.md) allows you to manage the networking configuration of the endpoint independently of the configuration of the Azure Machine Learning workspace. - #### Deployments The following table shows a summary of the different features available to online and batch endpoints at the deployment level. These concepts apply to each deployment under the endpoint. |
machine-learning | Concept Enterprise Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-enterprise-security.md | Last updated 08/26/2022 # Enterprise security and governance for Azure Machine Learning -In this article, you'll learn about security and governance features available for Azure Machine Learning. These features are useful for administrators, DevOps, and MLOps who want to create a secure configuration that is compliant with your companies policies. With Azure Machine Learning and the Azure platform, you can: +In this article, you learn about security and governance features available for Azure Machine Learning. These features are useful for administrators, DevOps, and MLOps who want to create a secure configuration that is compliant with your company's policies. With Azure Machine Learning and the Azure platform, you can: * Restrict access to resources and operations by user account or groups * Restrict incoming and outgoing network communications Each workspace has an associated system-assigned [managed identity](../active-di | Azure Container Registry | Contributor | | Resource group that contains the workspace | Contributor | -The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. The identity token is not accessible to users and cannot be used by them to gain access to these resources. Users can only access the resources through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md), if they have sufficient RBAC permissions. +The system-assigned managed identity is used for internal service-to-service authentication between Azure Machine Learning and other Azure resources. The identity token isn't accessible to users and they can't use it to gain access to these resources. 
Users can only access the resources through [Azure Machine Learning control and data plane APIs](how-to-assign-roles.md) if they have sufficient RBAC permissions. We don't recommend that admins revoke the access of the managed identity to the resources mentioned in the preceding table. You can restore access by using the [resync keys operation](how-to-change-storage-access-key.md). For more information, see the following articles: ## Network security and isolation -To restrict network access to Azure Machine Learning resources, you can use [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md) and [Azure Machine Learning managed virtual network (preview)](how-to-managed-network.md). Using a virtual network reduces the attack surface for your solution, as well as the chances of data exfiltration. +To restrict network access to Azure Machine Learning resources, you can use an [Azure Machine Learning managed virtual network](how-to-managed-network.md) (preview) or [Azure Virtual Network (VNet)](../virtual-network/virtual-networks-overview.md). Using a virtual network reduces the attack surface for your solution and the chances of data exfiltration. You don't have to pick one or the other. For example, you can use a managed virtual network to secure managed compute resources and an Azure Virtual Network for your unmanaged resources or to secure client access to the workspace. -* __Azure Machine Learning managed virtual network__ (preview) provides a fully managed solution that enables network isolation for your workspace and managed compute resources. 
You can use private endpoints to secure communication with other Azure services, and can restrict outbound communications. The following managed compute resources are secured with a managed network: - [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] + * Serverless compute (including Spark serverless) + * Compute cluster + * Compute instance + * Managed online endpoints + * Batch online endpoints - For more information, see [Azure Machine Learning managed virtual network (preview)](how-to-managed-network.md). + For more information, see [Azure Machine Learning managed virtual network](how-to-managed-network.md) (preview). * __Azure Virtual Network__ provides a more customizable virtual network offering. However, you're responsible for configuration and management. You may need to use network security groups, user-defined routing, or a firewall to restrict outbound communication. You don't have to pick one or the other. For example, you can use a managed virt ## Data encryption -Azure Machine Learning uses a variety of compute resources and data stores on the Azure platform. To learn more about how each of these supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md). +Azure Machine Learning uses various compute resources and data stores on the Azure platform. To learn more about how each of these resources supports data encryption at rest and in transit, see [Data encryption with Azure Machine Learning](concept-data-encryption.md). ## Data exfiltration prevention |
machine-learning | Concept Secure Network Traffic Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md | description: Learn how network traffic flows between components when your Azure -+ If you use Visual Studio Code on a compute instance, you must allow other outbou :::moniker range="azureml-api-2" ## Scenario: Use online endpoints -__Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` restricts the online endpoint to receiving traffic only from the virtual network. For secure inbound communications, the Azure Machine Learning workspace's private endpoint is used. +Security for inbound and outbound communication is configured separately for managed online endpoints. -__Outbound__ communication from a deployment can be secured on a per-deployment basis by using the `egress_public_network_access` flag. Outbound communication in this case is from the deployment to Azure Container Registry, storage blob, and workspace. Setting the flag to `true` will restrict communication with these resources to the virtual network. +#### Inbound communication -> [!NOTE] -> For secure outbound communication, a private endpoint is created for each deployment where `egress_public_network_access` is set to `disabled`. +__Inbound__ communication with the scoring URL of the online endpoint can be secured using the `public_network_access` flag on the endpoint. Setting the flag to `disabled` ensures that the online endpoint receives traffic only from a client's virtual network through the Azure Machine Learning workspace's private endpoint. ++The `public_network_access` flag of the Azure Machine Learning workspace also governs the visibility of the online endpoint. 
If this flag is `disabled`, then the scoring endpoints can only be accessed from virtual networks that contain a private endpoint for the workspace. If it is `enabled`, then the scoring endpoint can be accessed from the virtual network and public networks. -Visibility of the endpoint is also governed by the `public_network_access` flag of the Azure Machine Learning workspace. If this flag is `disabled`, then the scoring endpoints can only be accessed from virtual networks that contain a private endpoint for the workspace. If it is `enabled`, then the scoring endpoint can be accessed from the virtual network and public networks. +#### Outbound communication -### Supported configurations +__Outbound__ communication from a deployment can be secured at the workspace level by enabling managed virtual network isolation for your Azure Machine Learning workspace (preview). Enabling this setting causes Azure Machine Learning to create a managed virtual network for the workspace. Any deployments in the workspace's managed virtual network can use the virtual network's private endpoints for outbound communication. ++The [legacy network isolation method for securing outbound communication](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) worked by disabling a deployment's `egress_public_network_access` flag. We strongly recommend that you secure outbound communication for deployments by using a [workspace managed virtual network](concept-secure-online-endpoint.md) instead. Unlike the legacy approach, the `egress_public_network_access` flag for the deployment no longer applies when you use a workspace managed virtual network with your deployment (preview). Instead, outbound communication is controlled by the rules set for the workspace's managed virtual network. -| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? 
| -| -- | -- | | | -| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes | -| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled | Yes | -| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes | -| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled | Yes | :::moniker-end+ ## Scenario: Use Azure Kubernetes Service For information on the outbound configuration required for Azure Kubernetes Service, see the connectivity requirements section of [How to secure inference](how-to-secure-inferencing-vnet.md). |
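The inbound visibility rules quoted in this entry (the endpoint's `public_network_access` flag plus the workspace's `public_network_access` flag) can be condensed into a small decision sketch. This is illustrative pseudologic under the flag semantics described above, a hypothetical helper rather than an Azure Machine Learning API:

```python
def scoring_request_allowed(endpoint_pna: str, workspace_pna: str,
                            via_workspace_private_endpoint: bool) -> bool:
    """Sketch of the visibility rules: both `public_network_access` flags
    must be "enabled" for a scoring request from public networks to succeed."""
    if via_workspace_private_endpoint:
        # Traffic arriving through the workspace's private endpoint is accepted.
        return True
    # Public traffic requires both the endpoint and the workspace to allow it.
    return endpoint_pna == "enabled" and workspace_pna == "enabled"
```

For example, `scoring_request_allowed("disabled", "enabled", False)` is `False`: disabling the endpoint flag restricts scoring to the workspace's private endpoint.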
machine-learning | Concept Secure Online Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-online-endpoint.md | + + Title: Network isolation with managed online endpoints ++description: Learn how private endpoints provide network isolation for Azure Machine Learning managed online endpoints. ++++++++reviewer: msakande + Last updated : 08/15/2023+++# Network isolation with managed online endpoints +++When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md). In this article, you'll learn how a private endpoint can be used to secure inbound communication to a managed online endpoint. You'll also learn how a workspace managed virtual network can be used to provide secure communication between deployments and resources. +++You can secure inbound scoring requests from clients to an _online endpoint_ and secure outbound communications between a _deployment_, the Azure resources it uses, and private resources. Security for inbound and outbound communication is configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints-online.md). ++The following architecture diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from a client's virtual network flow through the workspace's private endpoint to the managed online endpoint. Outbound communications from deployments to services are handled through private endpoints from the workspace's managed virtual network to those service instances. +++> [!NOTE] +> This article focuses on network isolation using the workspace's managed virtual network. 
For a description of the legacy method for network isolation, in which Azure Machine Learning creates a managed virtual network for each deployment in an endpoint, see the [Appendix](#appendix). ++## Limitations +++## Secure inbound scoring requests ++Secure inbound communication from a client to a managed online endpoint is possible by using a [private endpoint for the Azure Machine Learning workspace](./how-to-configure-private-link.md). This private endpoint on the client's virtual network communicates with the workspace of the managed online endpoint and is the means by which the managed online endpoint can receive incoming scoring requests from the client. ++To secure scoring requests to the online endpoint, so that a client can access it only through the workspace's private endpoint, set the `public_network_access` flag for the endpoint to `disabled`. After you've created the endpoint, you can update this setting to enable public network access if desired. ++Set the endpoint's `public_network_access` flag to `disabled`: ++# [Azure CLI](#tab/cli) ++```azurecli +az ml online-endpoint create -f endpoint.yml --set public_network_access=disabled +``` ++# [Python](#tab/python) ++```python +from azure.ai.ml.entities import ManagedOnlineEndpoint ++endpoint = ManagedOnlineEndpoint(name='my-online-endpoint', + description='this is a sample online endpoint', + tags={'foo': 'bar'}, + auth_mode="key", + public_network_access="disabled" + # public_network_access="enabled" +) +``` ++# [Studio](#tab/azure-studio) ++1. Go to the [Azure Machine Learning studio](https://ml.azure.com). +1. Select the **Workspaces** page from the left navigation bar. +1. Enter a workspace by clicking its name. +1. Select the **Endpoints** page from the left navigation bar. +1. Select **+ Create** to open the **Create deployment** setup wizard. +1. Disable the **Public network access** flag at the **Create endpoint** step. 
++ :::image type="content" source="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png" alt-text="A screenshot of how to disable public network access for an endpoint." lightbox="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png"::: ++++When `public_network_access` is `disabled`, inbound scoring requests are received using the workspace's private endpoint, and the endpoint can't be reached from public networks. ++Alternatively, if you set the `public_network_access` to `enabled`, the endpoint can receive inbound scoring requests from the internet. ++## Secure outbound access with workspace managed virtual network ++To secure outbound communication from a deployment to services, you need to enable managed virtual network isolation for your Azure Machine Learning workspace so that Azure Machine Learning can create a managed virtual network for the workspace. +All managed online endpoints in the workspace (and managed compute resources for the workspace, such as compute clusters and compute instances) automatically use this workspace managed virtual network, and the deployments under the endpoints share the managed virtual network's private endpoints for communication with the workspace's resources. ++When you secure your workspace with a managed virtual network, the `egress_public_network_access` flag for managed online deployments no longer applies. Avoid setting this flag when creating the managed online deployment. ++For outbound communication with a workspace managed virtual network, Azure Machine Learning: ++- Creates private endpoints for the managed virtual network to use for communication with Azure resources that are used by the workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry. +- Allows deployments to access the Microsoft Container Registry (MCR), which can be useful when you want to use curated environments or MLflow no-code deployment. 
+- Allows users to configure private endpoint outbound rules to private resources and configure outbound rules (service tag or FQDN) for public resources. For more information on how to manage outbound rules, see [Manage outbound rules](how-to-managed-network.md#manage-outbound-rules). ++Furthermore, you can configure two isolation modes for outbound traffic from the workspace managed virtual network, namely: ++- **Allow internet outbound**, to allow all internet outbound traffic from the managed virtual network +- **Allow only approved outbound**, to control outbound traffic using private endpoints, FQDN outbound rules, and service tag outbound rules. ++For example, if your workspace's managed virtual network contains two deployments under a managed online endpoint, both deployments can use the workspace's private endpoints to communicate with: ++- The Azure Machine Learning workspace +- The Azure Storage blob that is associated with the workspace +- The Azure Container Registry for the workspace +- The Azure Key Vault +- (Optional) additional private resources that support private endpoints. ++To learn more about configurations for the workspace managed virtual network, see [Managed virtual network architecture](how-to-managed-network.md#managed-virtual-network-architecture). ++## Scenarios for network isolation configuration ++Suppose a managed online endpoint has a deployment that uses an AI model, and you want to use an app to send scoring requests to the endpoint. You can decide what network isolation configuration to use for the managed online endpoint as follows: ++**For inbound communication**: ++If the app is publicly available on the internet, then you need to **enable** `public_network_access` for the endpoint so that it can receive inbound scoring requests from the app. ++However, say the app is private, such as an internal app within your organization. 
In this scenario, you want the AI model to be used only within your organization rather than expose it to the internet. Therefore, you need to **disable** the endpoint's `public_network_access` so that it can receive inbound scoring requests only through its workspace's private endpoint. ++**For outbound communication (deployment)**: ++Suppose your deployment needs to access private Azure resources (such as the Azure Storage blob, ACR, and Azure Key Vault), or it's unacceptable for the deployment to access the internet. In this case, you need to **enable** the _workspace's managed virtual network_ with the **allow only approved outbound** isolation mode. This isolation mode allows outbound communication from the deployment to approved destinations only, thereby protecting against data exfiltration. Furthermore, you can add outbound rules for the workspace, to allow access to more private or public resources. For more information, see [Configure a managed virtual network to allow only approved outbound](how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-only-approved-outbound). ++However, if you want your deployment to access the internet, you can use the workspace's managed virtual network with the **allow internet outbound** isolation mode. Apart from being able to access the internet, you'll be able to use the private endpoints of the managed virtual network to access private Azure resources that you need. ++Finally, if your deployment doesn't need to access private Azure resources and you don't need to control access to the internet, then you don't need to use a workspace managed virtual network. ++## Appendix ++### Secure outbound access with legacy network isolation method ++For managed online endpoints, you can also secure outbound communication between deployments and resources by using an Azure Machine Learning managed virtual network for each deployment in the endpoint. 
The secure outbound communication is also handled by using private endpoints to those service instances. ++> [!NOTE] +> We strongly recommend that you use the approach described in [Secure outbound access with workspace managed virtual network](#secure-outbound-access-with-workspace-managed-virtual-network) instead of this legacy method. ++To restrict communication between a deployment and external resources, including the Azure resources it uses, you should ensure that: ++- The deployment's `egress_public_network_access` flag is `disabled`. This flag ensures that the download of the model, code, and images needed by the deployment is secured with a private endpoint. Once you've created the deployment, you can't update (enable or disable) the `egress_public_network_access` flag. Attempting to change the flag while updating the deployment fails with an error. ++- The workspace has a private link that allows access to Azure resources via a private endpoint. ++- The workspace has a `public_network_access` flag that can be enabled or disabled. If you plan on using a managed online deployment that uses __public outbound__, then you must also [configure the workspace to allow public access](how-to-configure-private-link.md#enable-public-access). This is because outbound communication from the online deployment is to the _workspace API_. When the deployment is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access). ++When you have multiple deployments, and you configure the `egress_public_network_access` to `disabled` for each deployment in a managed online endpoint, each deployment has its own independent Azure Machine Learning managed virtual network. 
For each virtual network, Azure Machine Learning creates three private endpoints for communication to the following: ++- The Azure Machine Learning workspace +- The Azure Storage blob that is associated with the workspace +- The Azure Container Registry for the workspace ++For example, if you set the `egress_public_network_access` flag to `disabled` for two deployments of a managed online endpoint, a total of six private endpoints are created. Each deployment would use three private endpoints to communicate with the workspace, blob, and container registry. ++> [!IMPORTANT] +> Azure Machine Learning does not support peering between a deployment's managed virtual network and your client's virtual network. For secure access to resources needed by the deployment, we use private endpoints to communicate with the resources. ++The following diagram shows incoming scoring requests from a client's virtual network flowing through the workspace's private endpoint to the managed online endpoint. The diagram also shows two online deployments, each in its own Azure Machine Learning managed virtual network. Each deployment's virtual network has three private endpoints for outbound communication with the Azure Machine Learning workspace, the Azure Storage blob associated with the workspace, and the Azure Container Registry for the workspace. 
+++To disable the `egress_public_network_access` and create the private endpoints: ++# [Azure CLI](#tab/cli) ++```azurecli +az ml online-deployment create -f deployment.yml --set egress_public_network_access=disabled +``` ++# [Python](#tab/python) ++```python +from azure.ai.ml.entities import ManagedOnlineDeployment, CodeConfiguration ++blue_deployment = ManagedOnlineDeployment(name='blue', + endpoint_name='my-online-endpoint', + model=model, + code_configuration=CodeConfiguration(code='./model-1/onlinescoring/', + scoring_script='score.py'), + environment=env, + instance_type='Standard_DS2_v2', + instance_count=1, + egress_public_network_access="disabled" + # egress_public_network_access="enabled" +) + +ml_client.begin_create_or_update(blue_deployment) +``` ++# [Studio](#tab/azure-studio) ++1. Follow the steps in the **Create deployment** setup wizard to the **Deployment** step. +1. Disable the **Egress public network access** flag. ++ :::image type="content" source="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png" alt-text="A screenshot of how to disable the egress public network access for a deployment." lightbox="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png"::: ++++To confirm the creation of the private endpoints, first check the storage account and container registry associated with the workspace (see [Download a configuration file](how-to-manage-workspace.md#download-a-configuration-file)), find each resource from the Azure portal, and check the `Private endpoint connections` tab under the `Networking` menu. ++> [!IMPORTANT] +> - As mentioned earlier, outbound communication from managed online endpoint deployment is to the _workspace API_. When the deployment is configured to use __public outbound__ (in other words, the `egress_public_network_access` flag for the deployment is set to `enabled`), then the workspace must be able to accept that public communication (`public_network_access` flag for the workspace set to `enabled`). 
+> - When online deployments are created with `egress_public_network_access` flag set to `disabled`, they will have access to the secured resources (workspace, blob, and container registry) only. For instance, if the deployment uses model assets uploaded to other storage accounts, the model download will fail. Ensure model assets are on the storage account associated with the workspace. +> - When `egress_public_network_access` is set to `disabled`, the deployment can only access the workspace-associated resources secured in the virtual network. Conversely, when `egress_public_network_access` is set to `enabled`, the deployment can only access the resources with public access, which means it cannot access the resources secured in the virtual network. ++The following table lists the supported configurations when configuring inbound and outbound communications for an online endpoint: ++| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? | +| -- | -- | | | +| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes | +| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes | +| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes | +| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes | ++## Next steps ++- [Workspace managed network isolation](how-to-managed-network.md) +- [How to secure managed online endpoints with network isolation](how-to-secure-online-endpoint.md) |
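The supported-configurations table quoted in this entry maps cleanly onto a tiny helper (a hypothetical sketch for illustration only, not part of the Azure ML SDK): every flag combination is supported, but public outbound additionally requires the workspace to allow public access, because deployment outbound goes to the workspace API.

```python
def legacy_deployment_config(public_network_access: str,
                             egress_public_network_access: str) -> dict:
    """Summarize one row of the table for an endpoint/deployment flag pair."""
    return {
        "supported": True,  # all four combinations are supported
        # Deployment outbound goes to the workspace API, so public outbound
        # also requires the workspace to allow public access.
        "workspace_public_access_required": egress_public_network_access == "enabled",
    }
```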
machine-learning | Dsvm Common Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/data-science-virtual-machine/dsvm-common-identity.md | Azure AD DS makes it simple to manage your identities by providing a fully manag 1. In the Azure portal, add the user to Active Directory: - 1. Sign in to the [Azure Active Directory admin center](https://aad.portal.azure.com) by using an account that's a global admin for the directory. + 1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. - 1. Select **Azure Active Directory** and then **Users and groups**. + 1. Browse to **Azure Active Directory** > **Users** > **All users**. - 1. In **Users and groups**, select **All users**, and then select **New user**. + 1. Select **New user**. The **User** pane opens: |
machine-learning | How To Access Azureml Behind Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md | monikerRange: 'azureml-api-2 || azureml-api-1' Azure Machine Learning requires access to servers and services on the public internet. When implementing network isolation, you need to understand what access is required and how to enable it. > [!NOTE]-> The information in this article applies to Azure Machine Learning workspace configured with a private endpoint. +> The information in this article applies to Azure Machine Learning workspace configured to use an _Azure Virtual Network_. When using a _managed virtual network_, the required inbound and outbound configuration for the workspace is automatically applied. For more information, see [Azure Machine Learning managed virtual network](how-to-managed-network.md). ## Common terms and information __Outbound traffic__ __To allow installation of Python packages for training and deployment__, allow __outbound__ traffic to the following host names: -> [!NOTE] -> This is not a complete list of the hosts required for all Python resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario. --| __Host name__ | __Purpose__ | -| - | - | -| `anaconda.com`<br>`*.anaconda.com` | Used to install default packages. | -| `*.anaconda.org` | Used to get repo data. | -| `pypi.org` | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow `*.pythonhosted.org`. | -| `*pytorch.org` | Used by some examples based on PyTorch. | -| `*.tensorflow.org` | Used by some examples based on Tensorflow. | ## Scenario: Install RStudio on compute instance |
machine-learning | How To Access Data Interactive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md | subscription = '<subscription_id>' resource_group = '<resource_group>' workspace = '<workspace>' datastore_name = '<datastore>'-path_on_datastore '<path>' +path_on_datastore = '<path>' # long-form Datastore uri format:-uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'. +uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}' ``` These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage. |
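To make the corrected long-form Datastore URI concrete, here is a minimal sketch with made-up placeholder values (the subscription, workspace, and path below are illustrative only):

```python
# Hypothetical values; substitute your own Azure resource identifiers.
subscription = "00000000-1111-2222-3333-444444444444"
resource_group = "my-rg"
workspace = "my-ws"
datastore_name = "workspaceblobstore"
path_on_datastore = "data/titanic.csv"

# Long-form Datastore URI format.
uri = (
    f"azureml://subscriptions/{subscription}/resourcegroups/{resource_group}"
    f"/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}"
)
```

Because the `azureml://` scheme is fsspec-compatible, the resulting string can be passed to any fsspec-aware reader (for example `pandas.read_csv(uri)`) once the appropriate filesystem package is installed.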
machine-learning | How To Auto Train Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md | validation_data: # [Python SDK](#tab/python) - [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] You can create data inputs from training and validation MLTable from your local directory or cloud storage with the following code: In individual trials, you directly control the model architecture and hyperparam #### Supported model architectures -The following table summarizes the supported models for each computer vision task. +The following table summarizes the supported legacy models for each computer vision task. Using only these legacy models will trigger runs using the legacy runtime (where each individual run or trial is submitted as a command job). See the following sections for HuggingFace and MMDetection support. Task | model architectures | String literal syntax<br> ***`default_model`\**** denoted with \* |-|- Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-wei Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn` Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn` +#### Supported model architectures - HuggingFace and MMDetection (preview) ++With the new backend that runs on [Azure Machine Learning 
pipelines](concept-ml-pipelines.md), you can additionally use any image classification model from the [HuggingFace Hub](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers) which is part of the transformers library (such as microsoft/beit-base-patch16-224), as well as any object detection or instance segmentation model from the [MMDetection Version 2.28.2 Model Zoo](https://mmdetection.readthedocs.io/en/v2.28.2/model_zoo.html) (such as atss_r50_fpn_1x_coco). ++In addition to supporting any model from HuggingFace Transformers and MMDetection 2.28.2, we also offer a list of curated models from these libraries in the azureml-staging registry. These curated models have been tested thoroughly and use default hyperparameters selected from extensive benchmarking to ensure effective training. The table below summarizes these curated models. ++Task | model architectures | String literal syntax +|-|- +Image classification<br> (multi-class and multi-label)| **BEiT** <br> **ViT** <br> **DeiT** <br> **SwinV2** | [`microsoft/beit-base-patch16-224-pt22k-ft22k`](https://ml.azure.com/registries/azureml/models/microsoft-beit-base-patch16-224-pt22k-ft22k/version/5)<br> [`google/vit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/google-vit-base-patch16-224/version/5)<br> [`facebook/deit-base-patch16-224`](https://ml.azure.com/registries/azureml/models/facebook-deit-base-patch16-224/version/5)<br> [`microsoft/swinv2-base-patch4-window12-192-22k`](https://ml.azure.com/registries/azureml/models/microsoft-swinv2-base-patch4-window12-192-22k/version/5) +Object Detection | **Sparse R-CNN** <br> **Deformable DETR** <br> **VFNet** <br> **YOLOF** <br> **Swin** | [`sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco`](https://ml.azure.com/registries/azureml/models/sparse_rcnn_r50_fpn_300_proposals_crop_mstrain_480-800_3x_coco/version/3)<br> 
[`sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco`](https://ml.azure.com/registries/azureml/models/sparse_rcnn_r101_fpn_300_proposals_crop_mstrain_480-800_3x_coco/version/3) <br> [`deformable_detr_twostage_refine_r50_16x2_50e_coco`](https://ml.azure.com/registries/azureml/models/deformable_detr_twostage_refine_r50_16x2_50e_coco/version/3) <br> [`vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco`](https://ml.azure.com/registries/azureml/models/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco/version/3) <br> [`vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco`](https://ml.azure.com/registries/azureml/models/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco/version/3) <br> [`yolof_r50_c5_8x8_1x_coco`](https://ml.azure.com/registries/azureml/models/yolof_r50_c5_8x8_1x_coco/version/3) +Instance Segmentation | **Swin** | [`mask_rcnn_swin-t-p4-w7_fpn_1x_coco`](https://ml.azure.com/registries/azureml/models/mask_rcnn_swin-t-p4-w7_fpn_1x_coco/version/3) ++We constantly update the list of curated models. You can get the most up-to-date list of the curated models for a given task using the Python SDK: +``` +credential = DefaultAzureCredential() +ml_client = MLClient(credential, registry_name="azureml-staging") ++models = ml_client.models.list() +classification_models = [] +for model in models: + model = ml_client.models.get(model.name, label="latest") + if model.tags['task'] == 'image-classification': # choose an image task + classification_models.append(model.name) ++classification_models +``` +Output: +``` +['google-vit-base-patch16-224', + 'microsoft-swinv2-base-patch4-window12-192-22k', + 'facebook-deit-base-patch16-224', + 'microsoft-beit-base-patch16-224-pt22k-ft22k'] +``` +Using any HuggingFace or MMDetection model will trigger runs using pipeline components. If both legacy and HuggingFace/MMDetection models are used, all runs/trials will be triggered using components. 
+ In addition to controlling the model architecture, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md). |
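The registry query in the row above requires a live `MLClient` connection, but the task-tag filtering it performs can be sketched offline with stand-in model records. The records below are illustrative only, not the real registry contents:

```python
# Offline sketch of the task-tag filtering shown above, using stand-in
# dictionaries in place of live MLClient model objects.
models = [
    {"name": "google-vit-base-patch16-224", "tags": {"task": "image-classification"}},
    {"name": "yolof_r50_c5_8x8_1x_coco", "tags": {"task": "image-object-detection"}},
    {"name": "facebook-deit-base-patch16-224", "tags": {"task": "image-classification"}},
]

# Keep only the models tagged for the chosen task.
classification_models = [
    m["name"] for m in models if m["tags"].get("task") == "image-classification"
]
print(classification_models)
```

Against a real workspace, the same filter runs over `ml_client.models.list()` results, as in the SDK snippet above.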
machine-learning | How To Auto Train Nlp Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md | You can seamlessly integrate with the [Azure Machine Learning data labeling](how To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md) for more information. - * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. + * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. 
[!INCLUDE [automl-sdk-version](includes/machine-learning-automl-sdk-version.md)] AutoML NLP allows you to provide a list of models and combinations of hyperparam All the pre-trained text DNN models currently available in AutoML NLP for fine-tuning are listed below: -* bert_base_cased -* bert_large_uncased -* bert_base_multilingual_cased -* bert_base_german_cased -* bert_large_cased -* distilbert_base_cased -* distilbert_base_uncased -* roberta_base -* roberta_large -* distilroberta_base -* xlm_roberta_base -* xlm_roberta_large -* xlnet_base_cased -* xlnet_large_cased +* bert-base-cased +* bert-large-uncased +* bert-base-multilingual-cased +* bert-base-german-cased +* bert-large-cased +* distilbert-base-cased +* distilbert-base-uncased +* roberta-base +* roberta-large +* distilroberta-base +* xlm-roberta-base +* xlm-roberta-large +* xlnet-base-cased +* xlnet-large-cased Note that the large models are larger than their base counterparts. They are typically more performant, but they take up more GPU memory and time for training. As such, their SKU requirements are more stringent: we recommend running on ND-series VMs for the best results. +## Supported model algorithms - HuggingFace (preview) ++With the new backend that runs on [Azure Machine Learning pipelines](concept-ml-pipelines.md), you can additionally use any text/token classification model from the HuggingFace Hub for [Text Classification](https://huggingface.co/models?pipeline_tag=text-classification&library=transformers), [Token Classification](https://huggingface.co/models?pipeline_tag=token-classification&sort=trending) which is part of the transformers library (such as microsoft/deberta-large-mnli). You may also find a curated list of models in [Azure Machine Learning model registry](concept-foundation-models.md?view=azureml-api-2&preserve-view=true) that have been validated with the pipeline components. ++Using any HuggingFace model will trigger runs using pipeline components. 
If both legacy and HuggingFace models are used, all runs/trials will be triggered using components. + ## Supported hyperparameters The following table describes the hyperparameters that AutoML NLP supports. |
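The renaming in the model list above is mechanical: each legacy underscore-separated name maps to the hyphenated, Hugging Face Hub-style name. A small helper illustrating the mapping (a convenience sketch, not part of the AutoML SDK):

```python
def to_hub_style(legacy_name: str) -> str:
    """Convert a legacy AutoML NLP model name (underscores) to the
    hyphenated Hub-style name used in the list above."""
    return legacy_name.replace("_", "-")

print(to_hub_style("bert_base_cased"))    # bert-base-cased
print(to_hub_style("xlm_roberta_large"))  # xlm-roberta-large
```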
machine-learning | How To Automl Forecasting Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md | You can start by reading the [Set up AutoML to train a time-series forecasting m - [Bike share example](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-task-bike-share/auto-ml-forecasting-bike-share.ipynb) - [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb)-- [Many Models solution](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)-- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)-- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)+- [Many Models solution](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) +- [Forecasting recipes](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) +- [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) ## Why is AutoML slow on my data? 
To choose between them, note that NRMSE penalizes outliers in the training data ## How can I improve the accuracy of my model? - Ensure that you're configuring AutoML the best way for your data. For more information, see the [What modeling configuration should I use?](#what-modeling-configuration-should-i-use) answer.-- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models. -- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb).+- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models. +- Evaluate the model by using back tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. For an example, see the [back-testing notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb). - If the data is noisy, consider aggregating it to a coarser frequency to increase the signal-to-noise ratio. 
For more information, see [Frequency and target data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation). - Add new features that can help predict the target. Subject matter expertise can help greatly when you're selecting training data. - Compare validation and test metric values, and determine if the selected model is underfitting or overfitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to overfitting. AutoML supports the following advanced prediction scenarios: - Forecasting beyond the forecast horizon - Forecasting when there's a gap in time between training and forecasting periods -For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). +For examples and details, see the [notebook for advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ## How do I view metrics from forecasting training jobs? |
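The NRMSE metric mentioned above normalizes root-mean-squared error so that values are comparable across series with different scales. One common definition divides RMSE by the range of the actuals; the sketch below uses that definition, and AutoML's exact normalization may differ:

```python
import math

def nrmse(actual, predicted):
    """RMSE normalized by the range of the actual values.
    One common definition; shown here for illustration only."""
    rmse = math.sqrt(
        sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    )
    return rmse / (max(actual) - min(actual))

print(round(nrmse([10, 20, 30, 40], [12, 18, 33, 41]), 4))  # 0.0707
```

Because the squared-error term dominates for large residuals, a single outlier in the training data inflates NRMSE noticeably, which is the penalizing behavior described above.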
machine-learning | How To Create Vector Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-vector-index.md | After you create a vector index, you can add it to a prompt flow from the prompt :::image type="content" source="media/how-to-create-vector-index/vector-index-lookup-tool.png" alt-text="Screenshot that shows the Vector Index Lookup tool."::: -1. Enter the path to your vector index, along with the query that you want to perform against the index. +1. Enter the path to your vector index, along with the query that you want to perform against the index. The path is the location of the MLIndex created in the create a vector index section of this tutorial. To find this location, select the desired vector index, select 'Details', and then select 'Index Data'. Then on the 'Index data' page, copy the 'Datasource URI' in the Data sources section. ++1. Enter a query that you want to perform against the index. A query is a question, passed either as a plain string or as an embedding from the input cell of the previous step. If you choose to enter an embedding, be sure your query is defined in the input section of your prompt flow like the example here: ++ :::image type="content" source="media/how-to-create-vector-index/query-example.png" alt-text="Screenshot that shows the Vector Index Lookup tool query."::: + + An example of a plain string you can input in this case would be: `How to use SDK V2?`. Here is an example of an embedding as an input: `${embed_the_question.output}`. Passing a plain string works only when the vector index is used in the workspace that created it. ## Next steps |
machine-learning | How To Deploy Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md | endpoint_name = "endpt-" + datetime.datetime.now().strftime("%m%d%H%M%f") # create an online endpoint endpoint = ManagedOnlineEndpoint( name = endpoint_name, - description="this is a sample endpoint" + description="this is a sample endpoint", auth_mode="key" ) ``` |
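The `datetime`-based suffix in the snippet above is what keeps endpoint names from colliding across repeated runs. A minimal sketch of just the naming step; creating the endpoint itself requires the `azure-ai-ml` package and a workspace connection:

```python
import datetime
import re

# Unique endpoint name, as in the snippet above; the suffix encodes
# month, day, hour, minute, and microseconds, so back-to-back runs differ.
endpoint_name = "endpt-" + datetime.datetime.now().strftime("%m%d%H%M%f")

# Local sanity check only (not the service's authoritative validation):
# the generated name contains just lowercase letters, digits, and hyphens.
assert re.fullmatch(r"[a-z0-9-]+", endpoint_name)
print(endpoint_name)
```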
machine-learning | How To Enable Studio Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-enable-studio-virtual-network.md | monikerRange: 'azureml-api-2 || azureml-api-1' # Use Azure Machine Learning studio in an Azure virtual network + In this article, you learn how to use Azure Machine Learning studio in a virtual network. The studio includes features like AutoML, the designer, and data labeling. Some of the studio's features are disabled by default in a virtual network. To re-enable these features, you must enable managed identity for storage accounts you intend to use in the studio. In this article, you learn how to: ### Designer sample pipeline -There's a known issue where user cannot run sample pipeline in Designer homepage. This is the sample dataset used in the sample pipeline is Azure Global dataset, and it cannot satisfy all virtual network environment. +There's a known issue where user can't run sample pipeline in Designer homepage. This problem occurs because the sample dataset used in the sample pipeline is an Azure Global dataset. It can't be accessed from a virtual network environment. -To resolve this issue, you can use a public workspace to run sample pipeline to get to know how to use the designer and then replace the sample dataset with your own dataset in the workspace within virtual network. +To resolve this issue, use a public workspace to run the sample pipeline. Or replace the sample dataset with your own dataset in the workspace within a virtual network. ## Datastore: Azure Storage Account Use the following steps to enable access to data stored in Azure Blob and File s > [!TIP] > The first step is not required for the default storage account for the workspace. All other steps are required for *any* storage account behind the VNet and used by the workspace, including the default storage account. -1. 
**If the storage account is the *default* storage for your workspace, skip this step**. If it is not the default, __Grant the workspace managed identity the 'Storage Blob Data Reader' role__ for the Azure storage account so that it can read data from blob storage. +1. **If the storage account is the *default* storage for your workspace, skip this step**. If it isn't the default, __Grant the workspace managed identity the 'Storage Blob Data Reader' role__ for the Azure storage account so that it can read data from blob storage. For more information, see the [Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) built-in role. Use the following steps to enable access to data stored in Azure Blob and File s For more information, see the [Reader](../role-based-access-control/built-in-roles.md#reader) built-in role. <a id='enable-managed-identity'></a>-1. __Enable managed identity authentication for default storage accounts__. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account, which are defined when you create your workspace. You can also set new defaults in the __Datastore__ management page. +1. __Enable managed identity authentication for default storage accounts__. Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file store account. Both are defined when you create your workspace. You can also set new defaults in the __Datastore__ management page. ![Screenshot showing where default datastores can be found](./media/how-to-enable-studio-virtual-network/default-datastores.png) Use the following steps to enable access to data stored in Azure Blob and File s |Storage account | Notes | |||- |Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. 
If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment will fail regardless of any other datastores in use.| + |Workspace default blob storage| Stores model assets from the designer. Enable managed identity authentication on this storage account to deploy models in the designer. If managed identity authentication is disabled, the user's identity is used to access data stored in the blob. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, deployment fails regardless of any other datastores in use.| |Workspace default file store| Stores AutoML experiment assets. Enable managed identity authentication on this storage account to submit AutoML experiments. | 1. __Configure datastores to use managed identity authentication__. After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md?tabs=se#secure-azure-storage-accounts) or [private endpoint](how-to-secure-workspace-vnet.md?tabs=pe#secure-azure-storage-accounts), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account. After you create a SQL contained user, grant permissions to it by using the [GRA ## Intermediate component output -When using the Azure Machine Learning designer intermediate component output, you can specify the output location for any component in the designer. 
Use this to store intermediate datasets in separate location for security, logging, or auditing purposes. To specify output, use the following steps: +When using the Azure Machine Learning designer intermediate component output, you can specify the output location for any component in the designer. Use this output to store intermediate datasets in a separate location for security, logging, or auditing purposes. To specify output, use the following steps: 1. Select the component whose output you'd like to specify. 1. In the component settings pane that appears to the right, select __Output settings__. 1. Specify the datastore you want to use for each component output. -Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline will fail. +Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline fails. [Enable managed identity authentication](#enable-managed-identity) for intermediate storage accounts to visualize output data. ## Access the studio from a resource inside the VNet -If you are accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio. +If you're accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio. -For example, if you are using network security groups (NSG) to restrict outbound traffic, add a rule to a __service tag__ destination of __AzureFrontDoor.Frontend__. +For example, if you're using network security groups (NSG) to restrict outbound traffic, add a rule to a __service tag__ destination of __AzureFrontDoor.Frontend__. 
## Firewall settings -Some storage services, such as Azure Storage Account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting allows you to allow/disallow access from specific IP addresses from the public internet. __This is not supported__ when using Azure Machine Learning studio. It is supported when using the Azure Machine Learning SDK or CLI. +Some storage services, such as Azure Storage Account, have firewall settings that apply to the public endpoint for that specific service instance. Usually this setting allows you to allow/disallow access from specific IP addresses from the public internet. __This is not supported__ when using Azure Machine Learning studio. It's supported when using the Azure Machine Learning SDK or CLI. > [!TIP] > Azure Machine Learning studio is supported when using the Azure Firewall service. For more information, see [Use your workspace behind a firewall](how-to-access-azureml-behind-firewall.md). |
machine-learning | How To Github Actions Machine Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md | GitHub Actions uses a workflow YAML (.yml) file in the `/.github/workflows/` pat * A GitHub account. If you don't have one, sign up for [free](https://github.com/join). -## Step 1. Get the code +## Step 1: Get the code Fork the following repo at GitHub: Fork the following repo at GitHub: https://github.com/azure/azureml-examples ``` -## Step 2. Authenticate with Azure +## Step 2: Authenticate with Azure You'll need to first define how to authenticate with Azure. You can use a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) or [OpenID Connect](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect). You'll need to first define how to authenticate with Azure. You can use a [servi [!INCLUDE [include](~/articles/reusable-content/github-actions/create-secrets-with-openid.md)] -## Step 3. Update `setup.sh` to connect to your Azure Machine Learning workspace +## Step 3: Update `setup.sh` to connect to your Azure Machine Learning workspace You'll need to update the CLI setup file variables to match your workspace. You'll need to update the CLI setup file variables to match your workspace. |LOCATION | Location of your workspace (example: `eastus2`) | |WORKSPACE | Name of Azure Machine Learning workspace | -## Step 4. Update `pipeline.yml` with your compute cluster name +## Step 4: Update `pipeline.yml` with your compute cluster name You'll use a `pipeline.yml` file to deploy your Azure Machine Learning pipeline. This is a machine learning pipeline and not a DevOps pipeline. You only need to make this update if you're using a name other than `cpu-cluster` for your compute cluster name. |
machine-learning | How To Manage Kubernetes Instance Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-kubernetes-instance-types.md | Title: Create and manage instance types for efficient compute resource utilization -description: Learn about what is instance types, and how to create and manage them, and what are benefits of using instance types + Title: Create and manage instance types for efficient utilization of compute resources +description: Learn about what instance types are, how to create and manage them, and what the benefits of using them are. -# Create and manage instance types for efficient compute resource utilization +# Create and manage instance types for efficient utilization of compute resources -## What are instance types? +Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure virtual machine, an example of an instance type is `STANDARD_D2_V3`. -Instance types are an Azure Machine Learning concept that allows targeting certain types of compute nodes for training and inference workloads. For an Azure VM, an example for an instance type is `STANDARD_D2_V3`. +In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that's installed with the Azure Machine Learning extension. Two elements in the Azure Machine Learning extension represent the instance types: -In Kubernetes clusters, instance types are represented in a custom resource definition (CRD) that is installed with the Azure Machine Learning extension. Two elements in Azure Machine Learning extension represent the instance types: -[nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) -and [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/). 
+- Use [nodeSelector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector) to specify which node a pod should run on. The node must have a corresponding label. +- In the [resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) section, you can set the compute resources (CPU, memory, and NVIDIA GPU) for the pod. -In short, a `nodeSelector` lets you specify which node a pod should run on. The node must have a corresponding label. In the `resources` section, you can set the compute resources (CPU, memory and NVIDIA GPU) for the pod. +If you [specify a nodeSelector field when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the `nodeSelector` field will be applied to all instance types. This means that: ->[!IMPORTANT] -> -> If you have [specified a nodeSelector when deploying the Azure Machine Learning extension](./how-to-deploy-kubernetes-extension.md#review-azure-machine-learning-extension-configuration-settings), the nodeSelector will be applied to all instance types. This means that: -> - For each instance type creating, the specified nodeSelector should be a subset of the extension-specified nodeSelector. -> - If you use an instance type **with nodeSelector**, the workload will run on any node matching both the extension-specified nodeSelector and the instance type-specified nodeSelector. -> - If you use an instance type **without a nodeSelector**, the workload will run on any node mathcing the extension-specified nodeSelector. +- For each instance type that you create, the specified `nodeSelector` field should be a subset of the extension-specified `nodeSelector` field. +- If you use an instance type with `nodeSelector`, the workload will run on any node that matches both the extension-specified `nodeSelector` field and the instance-type-specified `nodeSelector` field. 
+- If you use an instance type without a `nodeSelector` field, the workload will run on any node that matches the extension-specified `nodeSelector` field. +## Create a default instance type -## Default instance type --By default, a `defaultinstancetype` with the following definition is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace: -- If you don't apply a `nodeSelector`, it means the pod can get scheduled on any node.-- The workload's pods are assigned default resources with 0.1 cpu cores, 2-GB memory and 0 GPU for request.-- The resources used by the workload's pods are limited to 2 cpu cores and 8-GB memory:+By default, an instance type called `defaultinstancetype` is created when you attach a Kubernetes cluster to an Azure Machine Learning workspace. Here's the definition: ```yaml resources: resources: nvidia.com/gpu: null ``` -> [!NOTE] -> - The default instance type purposefully uses little resources. To ensure all ML workloads run with appropriate resources, for example GPU resource, it is highly recommended to create custom instance types. -> - `defaultinstancetype` will not appear as an InstanceType custom resource in the cluster when running the command ```kubectl get instancetype```, but it will appear in all clients (UI, CLI, SDK). -> - `defaultinstancetype` can be overridden with a custom instance type definition having the same name as `defaultinstancetype` (see [Create custom instance types](#create-custom-instance-types) section) +If you don't apply a `nodeSelector` field, the pod can be scheduled on any node. The workload's pods are assigned default resources with 0.1 CPU cores, 2 GB of memory, and 0 GPUs for the request. The resources that the workload's pods use are limited to 2 CPU cores and 8 GB of memory. ++The default instance type purposefully uses few resources. 
To ensure that all machine learning workloads run with appropriate resources (for example, GPU resource), we highly recommend that you [create custom instance types](#create-a-custom-instance-type). ++Keep in mind the following points about the default instance type: ++- `defaultinstancetype` doesn't appear as an `InstanceType` custom resource in the cluster when you're running the command ```kubectl get instancetype```, but it does appear in all clients (UI, Azure CLI, SDK). +- `defaultinstancetype` can be overridden with the definition of a custom instance type that has the same name. -### Create custom instance types +## Create a custom instance type -To create a new instance type, create a new custom resource for the instance type CRD. For example: +To create a new instance type, create a new custom resource for the instance type CRD. For example: ```bash kubectl apply -f my_instance_type.yaml ``` -With `my_instance_type.yaml`: +Here are the contents of *my_instance_type.yaml*: + ```yaml apiVersion: amlarc.azureml.com/v1alpha1 kind: InstanceType spec: memory: "1500Mi" ``` -The following steps create an instance type with the labeled behavior: -- Pods are scheduled only on nodes with label `mylabel: mylabelvalue`.-- Pods are assigned resource requests of `700m` CPU and `1500Mi` memory.-- Pods are assigned resource limits of `1` CPU, `2Gi` memory and `1` NVIDIA GPU.+The preceding code creates an instance type with the labeled behavior: -Creation of custom instance types must meet the following parameters and definition rules, otherwise the instance type creation fails: +- Pods are scheduled only on nodes that have the label `mylabel: mylabelvalue`. +- Pods are assigned resource requests of `700m` for CPU and `1500Mi` for memory. +- Pods are assigned resource limits of `1` for CPU, `2Gi` for memory, and `1` for NVIDIA GPU. 
-| Parameter | Required | Description | -| | | | -| name | required | String values, which must be unique in cluster.| -| CPU request | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.| -| Memory request | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.| -| CPU limit | required | String values, which cannot be 0 or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers; for example, `"1"` is equivalent to `1000m`.| -| Memory limit | required | String values, which cannot be 0 or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.| -| GPU | optional | Integer values, which can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). | -| nodeSelector | optional | Map of string keys and values. | +Creation of custom instance types must meet the following parameters and definition rules, or it will fail: +| Parameter | Required or optional | Description | +| | | | +| `name` | Required | String values, which must be unique in a cluster.| +| `CPU request` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.| +| `Memory request` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 mebibytes (MiB).| +| `CPU limit` | Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. 
You can also specify it as full numbers. For example, `"1"` is equivalent to `1000m`.| +| `Memory limit` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB.| +| `GPU` | Optional | Integer values, which can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). | +| `nodeSelector` | Optional | Map of string keys and values. | It's also possible to create multiple instance types at once: It's also possible to create multiple instance types at once: kubectl apply -f my_instance_type_list.yaml ``` -With `my_instance_type_list.yaml`: +Here are the contents of *my_instance_type_list.yaml*: + ```yaml apiVersion: amlarc.azureml.com/v1alpha1 kind: InstanceTypeList items: memory: "1Gi" ``` -The above example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition created when Kubernetes cluster was attached to Azure Machine Learning workspace. +The preceding example creates two instance types: `cpusmall` and `defaultinstancetype`. This `defaultinstancetype` definition overrides the `defaultinstancetype` definition that was created when you attached the Kubernetes cluster to the Azure Machine Learning workspace. -If you submit a training or inference workload without an instance type, it uses the `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with name `defaultinstancetype`. It's automatically recognized as the default. +If you submit a training or inference workload without an instance type, it uses `defaultinstancetype`. To specify a default instance type for a Kubernetes cluster, create an instance type with the name `defaultinstancetype`. It's automatically recognized as the default. 
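The quantity formats in the preceding parameter table follow Kubernetes conventions. As a minimal illustration of those rules, here's a Python sketch (a hypothetical helper, not part of Azure Machine Learning or Kubernetes) that parses CPU millicore and memory-suffix strings and flags instance-type definitions that would fail:

```python
import re

def parse_cpu(value):
    """Parse a CPU quantity into millicores: '100m' -> 100, '1' -> 1000."""
    if not value:
        raise ValueError("CPU value can't be empty")
    if value.endswith("m"):
        return int(value[:-1])
    return int(float(value) * 1000)

def parse_memory(value):
    """Parse a memory quantity such as '1024Mi' into bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}
    match = re.fullmatch(r"(\d+)([KMGT]i)?", value or "")
    if not match:
        raise ValueError(f"invalid memory quantity: {value!r}")
    number, suffix = match.groups()
    return int(number) * units.get(suffix, 1)

def validate_instance_type(spec):
    """Return a list of problems per the rules in the preceding table."""
    errors = []
    resources = spec.get("resources", {})
    for section in ("requests", "limits"):
        for key in ("cpu", "memory"):
            raw = resources.get(section, {}).get(key)
            parser = parse_cpu if key == "cpu" else parse_memory
            try:
                if parser(raw) == 0:
                    errors.append(f"{section}.{key} can't be zero")
            except (TypeError, ValueError):
                errors.append(f"{section}.{key} must be a non-empty string")
    # GPU counts may appear only in the limits section
    if "nvidia.com/gpu" in resources.get("requests", {}):
        errors.append("GPU can be specified only in the limits section")
    return errors
```

For example, a spec with `cpu: "0"` or an empty memory string would be rejected, while the `myinstancetypename` definition shown earlier passes.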
+## Select an instance type to submit a training job -### Select instance type to submit training job +### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli) -#### [Azure CLI](#tab/select-instancetype-to-trainingjob-with-cli) --To select an instance type for a training job using CLI (V2), specify its name as part of the -`resources` properties section in job YAML. For example: +To select an instance type for a training job by using the Azure CLI (v2), specify its name as part of the +`resources` properties section in the job YAML. For example: ```yaml command: python -c "print('Hello world!')" environment: image: library/python:latest compute: azureml:<Kubernetes-compute_target_name> resources:- instance_type: <instance_type_name> + instance_type: <instance type name> ``` -#### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk) +### [Python SDK](#tab/select-instancetype-to-trainingjob-with-sdk) -To select an instance type for a training job using SDK (V2), specify its name for `instance_type` property in `command` class. For example: +To select an instance type for a training job by using the SDK (v2), specify its name for the `instance_type` property in the `command` class. For example: ```python from azure.ai.ml import command command_job = command( command="python -c "print('Hello world!')"", environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest", compute="<Kubernetes-compute_target_name>",- instance_type="<instance_type_name>" + instance_type="<instance type name>" ) ```+ -In the above example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute -target and replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to submit the job. +In the preceding example, replace `<Kubernetes-compute_target_name>` with the name of your Kubernetes compute target. 
Replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to submit the job. -### Select instance type to deploy model +## Select an instance type to deploy a model -#### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli) +### [Azure CLI](#tab/select-instancetype-to-modeldeployment-with-cli) -To select an instance type for a model deployment using CLI (V2), specify its name for the `instance_type` property in the deployment YAML. For example: +To select an instance type for a model deployment by using the Azure CLI (v2), specify its name for the `instance_type` property in the deployment YAML. For example: ```yaml name: blue environment: image: mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest ``` -#### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk) +### [Python SDK](#tab/select-instancetype-to-modeldeployment-with-sdk) -To select an instance type for a model deployment using SDK (V2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example: +To select an instance type for a model deployment by using the SDK (v2), specify its name for the `instance_type` property in the `KubernetesOnlineDeployment` class. For example: ```python from azure.ai.ml import KubernetesOnlineDeployment,Model,Environment,CodeConfiguration blue_deployment = KubernetesOnlineDeployment( instance_type="<instance type name>", ) ```+ -In the above example, replace `<instance_type_name>` with the name of the instance type you wish to select. If there's no `instance_type` property specified, the system uses `defaultinstancetype` to deploy the model. +In the preceding example, replace `<instance type name>` with the name of the instance type that you want to select. If you don't specify an `instance_type` property, the system uses `defaultinstancetype` to deploy the model. 
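The fallback behavior described above is the same for training jobs and model deployments: when no `instance_type` property is specified, the system uses `defaultinstancetype`. A minimal sketch (a hypothetical helper for illustration; the real resolution happens service-side):

```python
def resolve_instance_type(requested=None):
    """Return the instance type a submission would use.

    An unspecified or empty instance_type falls back to the
    cluster's default instance type.
    """
    return requested if requested else "defaultinstancetype"
```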
> [!IMPORTANT]-> -> For MLFlow model deployment, the resource request require at least 2 CPU and 4 GB memory, otherwise the deployment will fail. +> For MLflow model deployment, the resource request requires at least 2 CPU cores and 4 GB of memory. Otherwise, the deployment will fail. ++### Resource section validation -#### Resource section validation -If you're using the `resource section` to define the resource request and limit of your model deployments, for example: +You can use the `resources` section to define the resource request and limit of your model deployments. For example: #### [Azure CLI](#tab/define-resource-to-modeldeployment-with-cli) resources: memory: "0.5Gi" instance_type: <instance type name> ```+ #### [Python SDK](#tab/define-resource-to-modeldeployment-with-sdk) ```python blue_deployment = KubernetesOnlineDeployment( instance_type="<instance type name>", ) ```+ -If you use the `resource section`, the valid resource definition need to meet the following rules, otherwise the model deployment fails due to the invalid resource definition: +If you use the `resources` section, a valid resource definition needs to meet the following rules. An invalid resource definition will cause the model deployment to fail. -| Parameter | If necessary | Description | +| Parameter | Required or optional | Description | | | | |-| `requests:`<br>`cpu:`| Required | String values, which can't be 0 or empty. <br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`.| -| `requests:`<br>`memory:` | Required | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB. <br>Memory can't be less than **1 MBytes**.| -| `limits:`<br>`cpu:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. 
<br>You can specify the CPU in millicores, for example `100m`, or in full numbers, for example `"1"` is equivalent to `1000m`. | -| `limits:`<br>`memory:` | Optional <br>(only required when need GPU) | String values, which can't be 0 or empty. <br>You can specify the memory as a full number + suffix, for example `1024Mi` for 1024 MiB.| -| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(only required when need GPU) | Integer values, which can't be empty and can only be specified in the `limits` section. <br>For more information, see the Kubernetes [documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If require CPU only, you can omit the entire `limits` section.| --> [!NOTE] -> ->If the resource section definition is invalid, the deployment will fail. -> -> The `instance type` is **required** for model deployment. If you have defined the resource section, and it will be validated against the instance type, the rules are as follows: - > * With a valid resource section definition, the resource limits must be less than instance type limits, otherwise deployment will fail. - > * If the user does not define instance type, the `defaultinstancetype` will be used to be validated with resource section. - > * If the user does not define resource section, the instance type will be used to create deployment. +| `requests:`<br>`cpu:`| Required | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example, `100m`. You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`.| +| `requests:`<br>`memory:` | Required | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1024 MiB. <br>Memory can't be less than 1 MB.| +| `limits:`<br>`cpu:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the CPU in millicores; for example `100m`. 
You can also specify it in full numbers. For example, `"1"` is equivalent to `1000m`. | +| `limits:`<br>`memory:` | Optional <br>(required only when you need GPU) | String values, which can't be zero or empty. <br>You can specify the memory as a full number + suffix; for example, `1024Mi` for 1,024 MiB.| +| `limits:`<br>`nvidia.com/gpu:` | Optional <br>(required only when you need GPU) | Integer values, which can't be empty and can be specified only in the `limits` section. <br>For more information, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins). <br>If you require CPU only, you can omit the entire `limits` section.| ++The instance type is *required* for model deployment. If you define the `resources` section, it's validated against the instance type according to the following rules: +- With a valid `resources` section definition, the resource limits must be less than the instance type limits. Otherwise, deployment will fail. +- If you don't define an instance type, the system uses `defaultinstancetype` for validation with the `resources` section. +- If you don't define the `resources` section, the system uses the instance type to create the deployment. ## Next steps - [Azure Machine Learning inference router and connectivity requirements](./how-to-kubernetes-inference-routing-azureml-fe.md)-- [Secure AKS inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md)+- [Secure Azure Kubernetes Service inferencing environment](./how-to-secure-kubernetes-inferencing-environment.md) |
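The resources-versus-instance-type validation rules above can be summarized in a short sketch (a hypothetical function using plain numeric values for CPU cores and GiB of memory; the actual validation happens service-side):

```python
def validate_deployment(resources, instance_type_limits, default_limits):
    """Sketch of the deployment validation rules described above.

    resources           -- the deployment's resources section, or None if omitted
    instance_type_limits -- limits of the chosen instance type, or None if no
                            instance type was specified (the default is then used)
    default_limits      -- limits of defaultinstancetype
    """
    limits_to_check = instance_type_limits or default_limits
    if resources is None:
        # No resources section: the instance type alone drives the deployment.
        return True
    for key, value in resources.get("limits", {}).items():
        # Resource limits must not exceed the instance type limits.
        if value > limits_to_check.get(key, float("inf")):
            return False
    return True
```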
machine-learning | How To Manage Optimize Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-optimize-cost.md | For information on planning and monitoring costs, see the [plan to manage costs With constantly changing data, you need fast and streamlined model training and retraining to maintain accurate models. However, continuous training comes at a cost, especially for deep learning models on GPUs. -Azure Machine Learning users can use the managed Azure Machine Learning compute cluster, also called AmlCompute. AmlCompute supports a variety of GPU and CPU options. The AmlCompute is internally hosted on behalf of your subscription by Azure Machine Learning. It provides the same enterprise grade security, compliance and governance at Azure IaaS cloud scale. +Azure Machine Learning users can use the managed Azure Machine Learning compute cluster, also called AmlCompute. AmlCompute supports various GPU and CPU options. AmlCompute is hosted internally on behalf of your subscription by Azure Machine Learning. It provides the same enterprise-grade security, compliance, and governance at Azure IaaS cloud scale. Because these compute pools are inside of Azure's IaaS infrastructure, you can deploy, scale, and manage your training with the same security and compliance requirements as the rest of your infrastructure. These deployments occur in your subscription and obey your governance rules. Learn more about [Azure Machine Learning compute](how-to-create-attach-compute-cluster.md). Because these compute pools are inside of Azure's IaaS infrastructure, you can d Autoscaling clusters based on the requirements of your workload helps reduce your costs so you only use what you need.
As each job completes, the cluster will release nodes and scale to your configured minimum node count. +AmlCompute clusters are designed to scale dynamically based on your workload. The cluster can be scaled up to the maximum number of nodes you configure. As each job completes, the cluster releases nodes and scales to your configured minimum node count. [!INCLUDE [min-nodes-note](includes/machine-learning-min-nodes.md)] To set quotas at the workspace level, start in the [Azure portal](https://portal ## Set job autotermination policies -In some cases, you should configure your training runs to limit their duration or terminate them early. For example, when you are using Azure Machine Learning's built-in hyperparameter tuning or automated machine learning. +In some cases, you should configure your training runs to limit their duration or terminate them early. For example, when you're using Azure Machine Learning's built-in hyperparameter tuning or automated machine learning. Here are a few options that you have: * Define a parameter called `max_run_duration_seconds` in your RunConfiguration to control the maximum duration a run can extend to on the compute you choose (either local or remote cloud compute). Low-Priority VMs have a single quota separate from the dedicated quota value, wh ## Schedule compute instances -When you create a [compute instance](concept-compute-instance.md), the VM stays on so it is available for your work. +When you create a [compute instance](concept-compute-instance.md), the VM stays on so it's available for your work. * [Enable idle shutdown (preview)](how-to-create-compute-instance.md#configure-idle-shutdown) to save on cost when the VM has been idle for a specified time period. * Or [set up a schedule](how-to-create-compute-instance.md#schedule-automatic-start-and-stop) to automatically start and stop the compute instance (preview) to save cost when you aren't planning to use it.
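The autoscaling behavior described above — grow toward the configured maximum while jobs are queued, release nodes back toward the minimum as jobs complete — amounts to clamping the desired size between the two bounds. An illustrative sketch (not the actual AmlCompute scheduler):

```python
def target_node_count(queued_jobs, min_nodes, max_nodes):
    """One node per queued job, clamped to the configured min/max node counts."""
    return max(min_nodes, min(queued_jobs, max_nodes))
```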
When you create a [compute instance](concept-compute-instance.md), the VM stays Another way to save money on compute resources is Azure Reserved VM Instance. With this offering, you commit to one-year or three-year terms. These discounts range up to 72% of the pay-as-you-go prices and are applied directly to your monthly Azure bill. -Azure Machine Learning Compute supports reserved instances inherently. If you purchase a one-year or three-year reserved instance, we will automatically apply discount against your Azure Machine Learning managed compute. +Azure Machine Learning Compute supports reserved instances inherently. If you purchase a one-year or three-year reserved instance, we'll automatically apply the discount against your Azure Machine Learning managed compute. ## Parallelize training -One of the key methods of optimizing cost and performance is by parallelizing the workload with the help of a parallel component in Azure Machine Learning. A parallel component allows you to use many smaller nodes to execute the task in parallel, hence allowing you to scale horizontally. There is an overhead for parallelization. Depending on the workload and the degree of parallelism that can be achieved, this may or may not be an option. For further information, see the [ParallelComponent](/python/api/azure-ai-ml/azure.ai.ml.entities.parallelcomponent) documentation. +One of the key methods of optimizing cost and performance is by parallelizing the workload with the help of a parallel component in Azure Machine Learning. A parallel component allows you to use many smaller nodes to execute the task in parallel, hence allowing you to scale horizontally. There's an overhead for parallelization. Depending on the workload and the degree of parallelism that can be achieved, this may or may not be an option. For more information, see the [ParallelComponent](/python/api/azure-ai-ml/azure.ai.ml.entities.parallelcomponent) documentation.
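To make the parallelization trade-off concrete, here's an illustrative cost model (hypothetical numbers, not Azure pricing): total runtime is a fixed parallelization overhead plus the work divided across nodes, and cost is nodes × runtime × hourly rate. Because every node is billed for the full runtime, past some node count adding nodes raises cost without much runtime benefit.

```python
def run_cost(work_hours, nodes, overhead_hours, rate_per_node_hour):
    """Cost of a parallel run: each node is billed for the whole runtime,
    and the parallelization overhead is paid regardless of node count."""
    runtime = overhead_hours + work_hours / nodes
    return nodes * runtime * rate_per_node_hour

# Example: 100 node-hours of work, 0.5 h overhead, $1 per node-hour.
# 1 node finishes in 100 h for $100; 10 nodes finish in 10.5 h for $105.
```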
## Set data retention & deletion policies |
machine-learning | How To Manage Registries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-registries.md | The response should provide an access token good for one hour. Make note of the ``` To create a registry, use the following command. You can edit the JSON to change the inputs as needed. Replace the `<YOUR-ACCESS-TOKEN>` value with the access token retrieved previously:- ++> [!TIP] +> We recommend using the latest API version when working with the REST API. For a list of the current REST API versions for Azure Machine Learning, see the [Machine Learning REST API reference](/rest/api/azureml/). The current API versions are listed in the table of contents on the left side of the page. ++```bash ```bash-curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2022-12-01-preview -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d ' +curl -X PUT https://management.azure.com/subscriptions/<your-subscription-id>/resourceGroups/<your-resource-group>/providers/Microsoft.MachineLearningServices/registries/reg-from-rest?api-version=2023-04-01 -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" -H 'Content-Type: application/json' -d ' { "properties": { Decide if you want to allow users to only use assets (models, environments and c ### Allow users to use assets from the registry -To let a user only read assets, you can grant the user the built-in __Reader__ role. If don't want to use the built-in role, create a custom role with the following permissions +To let a user only read assets, you can grant the user the built-in __Reader__ role. If you don't want to use the built-in role, create a custom role with the following permissions Permission | Description --|-- |
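The registry `PUT` call in the preceding `curl` example targets an Azure Resource Manager URL with a fixed shape. A small helper (hypothetical, for illustration only) makes the pieces of that URL explicit:

```python
def registry_url(subscription_id, resource_group, registry_name, api_version):
    """Build the ARM URL used by the PUT request in the preceding example."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.MachineLearningServices"
        f"/registries/{registry_name}"
        f"?api-version={api_version}"
    )
```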
machine-learning | How To Manage Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md | As your needs change or requirements for automation increase you can also manage [!INCLUDE [register-namespace](includes/machine-learning-register-namespace.md)] -* If you're using Azure Container Registry (ACR), Storage Account, Key Vault, or Application Insights in the different subscription than the workspace, you can't use network isolation with managed online endpoints. If you want to use network isolation with managed online endpoints, you must have ACR, Storage Account, Key Vault, and Application Insights in the same subscription with the workspace. For limitations that apply to network isolation with managed online endpoints, see [How to secure online endpoint](how-to-secure-online-endpoint.md#limitations). +* When you use network isolation that is based on a workspace's managed virtual network (preview) with a deployment, you can use resources (Azure Container Registry (ACR), Storage account, Key Vault, and Application Insights) from a different resource group or subscription than that of your workspace. However, these resources must belong to the same tenant as your workspace. For limitations that apply to securing managed online endpoints using a workspace's managed virtual network, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md#limitations). * By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support unicode characters in resource group names, use a resource group that doesn't contain these characters. |
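Because of the Azure Container Registry limitation mentioned above (no unicode characters in resource group names), a quick pre-check on the name can save a failed workspace creation. An illustrative sketch:

```python
def is_acr_safe_name(resource_group_name):
    """Return True if the name avoids the unicode characters ACR rejects."""
    return resource_group_name.isascii()
```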
machine-learning | How To Managed Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md | Title: Managed virtual network isolation (Preview) + Title: Managed virtual network isolation (preview) description: Use managed virtual network isolation for network security with Azure Machine Learning. -Azure Machine Learning provides preview support for managed virtual network (VNet) isolation. Managed VNet isolation streamlines and automates your network isolation configuration with a built-in, workspace-level Azure Machine Learning managed virtual network. +Azure Machine Learning provides support for managed virtual network (VNet) isolation. Managed VNet isolation streamlines and automates your network isolation configuration with a built-in, workspace-level Azure Machine Learning managed virtual network. [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] ## Managed virtual network architecture -When you enable managed virtual network isolation, a managed VNet is created for the workspace. Managed compute resources (compute clusters and compute instances) for the workspace automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry. --The following diagram shows how a managed virtual network uses private endpoints to communicate with the storage, key vault, and container registry used by the workspace. -+When you enable managed virtual network isolation, a managed VNet is created for the workspace. Managed compute resources you create for the workspace automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry. 
There are two different configuration modes for outbound traffic from the managed virtual network: There are two different configuration modes for outbound traffic from the manage | Outbound mode | Description | Scenarios | | -- | -- | -- | | Allow internet outbound | Allow all internet outbound traffic from the managed VNet. | Recommended if you need access to machine learning artifacts on the Internet, such as python packages or pretrained models. |-| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | Recommended if you want to minimize the risk of data exfiltration but you will need to prepare all required machine learning artifacts in your private locations. | +| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | Recommended if you want to minimize the risk of data exfiltration but you need to prepare all required machine learning artifacts in your private locations. | ++The managed virtual network is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your workspace default storage, container registry and key vault __if they're configured as private__. After choosing the isolation mode, you only need to consider other outbound requirements you may need to add. ++The following diagram shows a managed virtual network configured to __allow internet outbound__: + -The managed virtual network is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your workspace default storage, container registry and key vault if they're configured as private. After choosing the isolation mode, you only need to consider other outbound requirements you may need to add. 
+The following diagram shows a managed virtual network configured to __allow only approved outbound__: -## Supported scenarios in preview and to be supported scenarios +> [!NOTE] +> In this configuration, the storage, key vault, and container registry used by the workspace are flagged as private. Since they are flagged as private, a private endpoint is used to communicate with them. -|Scenarios|Supported in preview|To be supported| -|||| -|Isolation Mode| • Allow internet outbound<br>• Allow only approved outbound|| -|Compute|• [Compute Instance](concept-compute-instance.md)<br>• [Compute Cluster](how-to-create-attach-compute-cluster.md)<br>• [Serverless](how-to-use-serverless-compute.md)<br>• [Serverless spark](apache-spark-azure-ml-concepts.md)|• New managed online endpoint creation<br>• Migration of existing managed online endpoint<br>• No Public IP option of Compute Instance, Compute Cluster and Serverless| -|Outbound|• Private Endpoint<br>• Service Tag|• FQDN| ++## Supported scenarios ++|Scenarios|Supported| +||| +|Isolation Mode| • Allow internet outbound<br>• Allow only approved outbound| +|Compute|• [Compute Instance](concept-compute-instance.md)<br>• [Compute Cluster](how-to-create-attach-compute-cluster.md)<br>• [Serverless](how-to-use-serverless-compute.md)<br>• [Serverless spark](apache-spark-azure-ml-concepts.md)<br>• New managed online endpoint creation<br>• No Public IP option of Compute Instance, Compute Cluster and Serverless | +|Outbound|• Private Endpoint<br>• Service Tag<br>• FQDN | ## Prerequisites Before following the steps in this article, make sure you have the following prerequisites: -> [!IMPORTANT] -> To use the information in this article, you must enable this preview feature for your subscription. To check whether it has been registered, or to register it, use the steps in the [Set up preview features in Azure subscription](/azure/azure-resource-manager/management/preview-features). 
Depending on whether you use the Azure portal, Azure CLI, or Azure PowerShell, you may need to register the feature with a different name. Use the following table to determine the name of the feature to register: -> -> | Registration method | Feature name | -> | -- | -- | -> | Azure portal | `Azure Machine Learning Managed Network` | -> | Azure CLI | `AMLManagedNetworkEnabled` | -> | Azure PowerShell | `AMLManagedNetworkEnabled` | - # [Azure CLI](#tab/azure-cli) * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). Before following the steps in this article, make sure you have the following pre ManagedNetwork, IsolationMode, ServiceTagDestination,- PrivateEndpointDestination + PrivateEndpointDestination, + FqdnDestination ) from azure.identity import DefaultAzureCredential Before following the steps in this article, make sure you have the following pre ## Configure a managed virtual network to allow internet outbound > [!IMPORTANT]-> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. [Manually start provisioning if you plan to submit serverless spark jobs](#configure-for-serverless-spark-jobs). +> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. __If you plan to submit serverless spark jobs__, [Manually start provisioning](#configure-for-serverless-spark-jobs). # [Azure CLI](#tab/azure-cli) You can configure a managed VNet using either the `az ml workspace create` or `a * __Update an existing workspace__: - > [!WARNING] - > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints. 
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)] The following example updates an existing workspace. The `--managed-network allow_internet_outbound` parameter configures a managed VNet for the workspace: To configure a managed VNet that allows internet outbound communications, use th * __Update an existing workspace__: - > [!WARNING] - > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints. + [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)] The following example demonstrates how to create a managed VNet for an existing Azure Machine Learning workspace named `myworkspace`: To configure a managed VNet that allows internet outbound communications, use th :::image type="content" source="./media/how-to-managed-network/use-managed-network-internet-outbound.png" alt-text="Screenshot of creating a workspace with an internet outbound managed network." lightbox="./media/how-to-managed-network/use-managed-network-internet-outbound.png"::: + 1. To add an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information: + + * __Rule name__: A name for the rule. The name must be unique for this workspace. + * __Destination type__: Private Endpoint is the only option when the network isolation is private with internet outbound. Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section. + * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for. + * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for. 
+ * __Resource type__: The type of the Azure resource. + * __Resource name__: The name of the Azure resource. + * __Sub Resource__: The sub resource of the Azure resource type. + * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of adding an outbound rule for a private endpoint." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png"::: ++ Select __Save__ to save the rule. You can continue using __Add user-defined outbound rules__ to add rules. + 1. Continue creating the workspace as normal. * __Update an existing workspace__: - > [!WARNING] - > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints. + [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)] 1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure Machine Learning workspace that you want to enable managed virtual network isolation for.- 1. Select __Networking__, then select __Private with Internet Outbound__. Select __Save__ to save the changes. + 1. Select __Networking__, then select __Private with Internet Outbound__. :::image type="content" source="./media/how-to-managed-network/update-managed-network-internet-outbound.png" alt-text="Screenshot of updating a workspace to managed network with internet outbound." lightbox="./media/how-to-managed-network/update-managed-network-internet-outbound.png"::: + * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information: + + * __Rule name__: A name for the rule. 
The name must be unique for this workspace. + * __Destination type__: Private Endpoint is the only option when the network isolation is private with internet outbound. Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section. + * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for. + * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for. + * __Resource type__: The type of the Azure resource. + * __Resource name__: The name of the Azure resource. + * __Sub Resource__: The sub resource of the Azure resource type. + * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of updating a managed network by adding a private endpoint." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png"::: ++ * To __delete__ an outbound rule, select __delete__ for the rule. ++ :::image type="content" source="./media/how-to-managed-network/delete-outbound-rule.png" alt-text="Screenshot of the delete rule icon for an approved outbound managed network."::: ++ 1. Select __Save__ at the top of the page to save the changes to the managed network. + ## Configure a managed virtual network to allow only approved outbound > [!IMPORTANT]-> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. [Manually start provisioning if you plan to submit serverless spark jobs](#configure-for-serverless-spark-jobs). 
+> The creation of the managed virtual network is deferred until a compute resource is created or provisioning is manually started. __If you plan to submit serverless spark jobs__, [manually start provisioning](#configure-for-serverless-spark-jobs). # [Azure CLI](#tab/azure-cli) managed_network: isolation_mode: allow_only_approved_outbound ``` -You can also define _outbound rules_ to define approved outbound communication. An outbound rule can be created for a type of `service_tag`. You can also define _private endpoints_ that allow an Azure resource to securely communicate with the managed VNet. The following rule demonstrates adding a private endpoint to an Azure Blob resource, a service tag to Azure Data Factory: +You can also define _outbound rules_ to allow approved outbound communication. An outbound rule can be created for a type of `service_tag` or `fqdn`. You can also define _private endpoints_ that allow an Azure resource to securely communicate with the managed VNet. The following rules demonstrate adding a private endpoint to an Azure Blob resource, a service tag to Azure Data Factory, and an FQDN to `pypi.org`: -> [!TIP] -> Adding an outbound for a service tag is only valid when the managed VNet is configured to `allow_only_approved_outbound`. +> [!IMPORTANT] +> * Adding an outbound rule for a service tag or FQDN is only valid when the managed VNet is configured to `allow_only_approved_outbound`. +> * If you add outbound rules, Microsoft can't guarantee protection against data exfiltration to those destinations. 
```yaml managed_network: managed_network: protocol: TCP service_tag: DataFactory type: service_tag+ - name: add-fqdnrule + destination: 'pypi.org' + type: fqdn - name: added-perule destination: service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME> You can configure a managed VNet using either the `az ml workspace create` or `a * __Update an existing workspace__ - > [!WARNING] - > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints. + [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)] The following example uses the `--managed-network allow_only_approved_outbound` parameter to configure the managed VNet for an existing workspace: You can configure a managed VNet using either the `az ml workspace create` or `a protocol: TCP service_tag: DataFactory type: service_tag+ - name: add-fqdnrule + destination: 'pypi.org' + type: fqdn - name: added-perule destination: service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME> You can configure a managed VNet using either the `az ml workspace create` or `a # [Python SDK](#tab/python) -To configure a managed VNet that allows only approved outbound communications, use the `ManagedNetwork` class to define a network with `IsolationMode.ALLOw_ONLY_APPROVED_OUTBOUND`. You can then use the `ManagedNetwork` object to create a new workspace or update an existing one. To define _outbound rules_ to Azure services that the workspace relies on, use the `PrivateEndpointDestination` class to define a new private endpoint to the service. 
+To configure a managed VNet that allows only approved outbound communications, use the `ManagedNetwork` class to define a network with `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`. You can then use the `ManagedNetwork` object to create a new workspace or update an existing one. To define _outbound rules_, use the following classes: ++| Destination | Class | +| -- | -- | +| __Azure service that the workspace relies on__ | `PrivateEndpointDestination` | +| __Azure service tag__ | `ServiceTagDestination` | +| __Fully qualified domain name (FQDN)__ | `FqdnDestination` | * __Create a new workspace__: To configure a managed VNet that allows only approved outbound communications, u * `myrule` - Adds a private endpoint for an Azure Blob store. * `datafactory` - Adds a service tag rule to communicate with Azure Data Factory. - > [!TIP] - > Adding an outbound for a service tag is only valid when the managed VNet is configured to `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`. + > [!IMPORTANT] + > * Adding an outbound for a service tag is only valid when the managed VNet is configured to `IsolationMode.ALLOW_ONLY_APPROVED_OUTBOUND`. + > * If you add outbound rules, Microsoft can't guarantee protection against data exfiltration. ```python # Basic managed virtual network configuration To configure a managed VNet that allows only approved outbound communications, u ) ) + # Example FQDN rule + ws.managed_network.outbound_rules.append( + FqdnDestination( + name="fqdnrule", + destination="pypi.org" + ) + ) + # Create the workspace ws = ml_client.workspaces.begin_create(ws).result() ``` * __Update an existing workspace__: - > [!WARNING] - > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. This includes compute instance, compute cluster, and managed online endpoints. 
+ [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)] The following example demonstrates how to create a managed VNet for an existing Azure Machine Learning workspace named `myworkspace`. The example also adds several outbound rules for the managed VNet: To configure a managed VNet that allows only approved outbound communications, u ) ) + # Example FQDN rule + ws.managed_network.outbound_rules.append( + FqdnDestination( + name="fqdnrule", + destination="pypi.org" + ) + ) + # Update the workspace ml_client.workspaces.begin_update(ws) ``` To configure a managed VNet that allows only approved outbound communications, u :::image type="content" source="./media/how-to-managed-network/use-managed-network-approved-outbound.png" alt-text="Screenshot of creating a workspace with an approved outbound managed network." lightbox="./media/how-to-managed-network/use-managed-network-approved-outbound.png"::: + 1. To add an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information: + + * __Rule name__: A name for the rule. The name must be unique for this workspace. + * __Destination type__: Private Endpoint, Service Tag, or FQDN. Service Tag and FQDN are only available when the network isolation is private with approved outbound. ++ If the destination type is __Private Endpoint__, provide the following information: ++ * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for. + * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for. + * __Resource type__: The type of the Azure resource. + * __Resource name__: The name of the Azure resource. + * __Sub Resource__: The sub resource of the Azure resource type. + * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. 
This option is only available if the resource type is Azure Storage. ++ > [!TIP] + > Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of updating an approved outbound network by adding a private endpoint." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png"::: ++ If the destination type is __Service Tag__, provide the following information: ++ * __Service tag__: The service tag to add to the approved outbound rules. + * __Protocol__: The protocol to allow for the service tag. + * __Port ranges__: The port ranges to allow for the service tag. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-service-tag.png" alt-text="Screenshot of updating an approved outbound network by adding a service tag." lightbox="./media/how-to-managed-network/outbound-rule-service-tag.png" ::: ++ If the destination type is __FQDN__, provide the following information: ++ * __FQDN destination__: The fully qualified domain name to add to the approved outbound rules. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-fqdn.png" alt-text="Screenshot of updating an approved outbound network by adding an FQDN rule for an approved outbound managed network." lightbox="./media/how-to-managed-network/outbound-rule-fqdn.png"::: ++ Select __Save__ to save the rule. You can continue using __Add user-defined outbound rules__ to add rules. + 1. Continue creating the workspace as normal. * __Update an existing workspace__: - > [!WARNING] - > Before updating an existing workspace to use a managed virtual network, you must delete all computing resources for the workspace. 
This includes compute instance, compute cluster, and managed online endpoints. + [!INCLUDE [managed-vnet-update](includes/managed-vnet-update.md)] 1. Sign in to the [Azure portal](https://portal.azure.com), and select the Azure Machine Learning workspace that you want to enable managed virtual network isolation for.- 1. Select __Networking__, then select __Private with Approved Outbound__. Select __Save__ to save the changes. + 1. Select __Networking__, then select __Private with Approved Outbound__. :::image type="content" source="./media/how-to-managed-network/update-managed-network-approved-outbound.png" alt-text="Screenshot of updating a workspace to managed network with approved outbound." lightbox="./media/how-to-managed-network/update-managed-network-approved-outbound.png"::: + * To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information: + + * __Rule name__: A name for the rule. The name must be unique for this workspace. + * __Destination type__: Private Endpoint, Service Tag, or FQDN. Service Tag and FQDN are only available when the network isolation is private with approved outbound. ++ If the destination type is __Private Endpoint__, provide the following information: ++ * __Subscription__: The subscription that contains the Azure resource you want to add a private endpoint for. + * __Resource group__: The resource group that contains the Azure resource you want to add a private endpoint for. + * __Resource type__: The type of the Azure resource. + * __Resource name__: The name of the Azure resource. + * __Sub Resource__: The sub resource of the Azure resource type. + * __Spark enabled__: Select this option if you want to enable serverless spark jobs for the workspace. This option is only available if the resource type is Azure Storage. 
++ > [!TIP] + > Azure Machine Learning managed virtual network doesn't support creating a private endpoint to all Azure resource types. For a list of supported resources, see the [Private endpoints](#private-endpoints) section. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-private-endpoint.png" alt-text="Screenshot of updating an approved outbound network by adding a private endpoint rule." lightbox="./media/how-to-managed-network/outbound-rule-private-endpoint.png"::: ++ If the destination type is __Service Tag__, provide the following information: ++ * __Service tag__: The service tag to add to the approved outbound rules. + * __Protocol__: The protocol to allow for the service tag. + * __Port ranges__: The port ranges to allow for the service tag. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-service-tag.png" alt-text="Screenshot of updating an approved outbound network by adding a service tag rule." lightbox="./media/how-to-managed-network/outbound-rule-service-tag.png" ::: ++ If the destination type is __FQDN__, provide the following information: ++ * __FQDN destination__: The fully qualified domain name to add to the approved outbound rules. ++ :::image type="content" source="./media/how-to-managed-network/outbound-rule-fqdn.png" alt-text="Screenshot of updating an approved outbound network by adding an FQDN rule." lightbox="./media/how-to-managed-network/outbound-rule-fqdn.png"::: ++ Select __Save__ to save the rule. You can continue using __Add user-defined outbound rules__ to add rules. ++ * To __delete__ an outbound rule, select __delete__ for the rule. ++ :::image type="content" source="./media/how-to-managed-network/delete-outbound-rule.png" alt-text="Screenshot of the delete rule icon for an approved outbound managed network."::: ++ 1. Select __Save__ at the top of the page to save the changes to the managed network. 
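For reference, the user-defined rules configured through the portal steps above can also be expressed in the workspace YAML. The following is a sketch only; the rule names and the storage account path are placeholders, and field names such as `subresource_target`, `spark_enabled`, and `port_ranges` are assumptions extrapolated from the CLI examples elsewhere in this article:

```yaml
managed_network:
  isolation_mode: allow_only_approved_outbound
  outbound_rules:
    # Service tag rule (portal destination type: Service Tag)
    - name: allow-datafactory
      type: service_tag
      destination:
        service_tag: DataFactory
        protocol: TCP
        port_ranges: 80, 443
    # FQDN rule (portal destination type: FQDN)
    - name: allow-pypi
      type: fqdn
      destination: 'pypi.org'
    # Private endpoint rule (portal destination type: Private Endpoint)
    - name: allow-storage-pe
      type: private_endpoint
      destination:
        service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
        subresource_target: blob
        spark_enabled: true
```

The three rule shapes mirror the portal fields: the service tag rule carries protocol and port ranges, the FQDN rule is a bare domain, and the private endpoint rule identifies the target resource, sub resource, and Spark setting.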
+ ## Configure for serverless spark jobs > [!TIP]-> The steps in this section are only needed for __Spark serverless__. If you are using [serverless __compute cluster__](how-to-use-serverless-compute.md), you can skip this section. +> The steps in this section are only needed if you plan to submit __serverless spark jobs__. If you aren't going to be submitting serverless spark jobs, you can skip this section. To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the managed VNet, you must perform the following actions: ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, rule_n :::image type="content" source="./media/how-to-managed-network/manage-outbound-rules.png" alt-text="Screenshot of the outbound rules section." lightbox="./media/how-to-managed-network/manage-outbound-rules.png"::: +* To _add_ an _outbound rule_, select __Add user-defined outbound rules__ from the __Networking__ tab. From the __Workspace outbound rules__ sidebar, provide the following information: ++* To __enable__ or __disable__ a rule, use the toggle in the __Active__ column. ++* To __delete__ an outbound rule, select __delete__ for the rule. + ## List of required rules ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, rule_n > These rules are automatically added to the managed VNet. __Private endpoints__:-* When the isolation mode for the managed network is `Allow internet outbound`, private endpoint outbound rules will be automatically created as required rules from the managed network for the workspace and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure ML Workspace). 
-* When the isolation mode for the managed network is `Allow only approved outbound`, private endpoint outbound rules will be automatically created as required rules from the managed network for the workspace and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure ML Workspace). +* When the isolation mode for the managed network is `Allow internet outbound`, private endpoint outbound rules are automatically created as required rules from the managed network for the workspace and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure Machine Learning workspace). +* When the isolation mode for the managed network is `Allow only approved outbound`, private endpoint outbound rules are automatically created as required rules from the managed network for the workspace and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure Machine Learning workspace). __Outbound__ service tag rules: __Inbound__ service tag rules: ## List of recommended outbound rules -Currently we don't have any recommended outbound rules. ++## Private endpoints ++Private endpoints are currently supported for the following Azure services: ++* Azure Machine Learning +* Azure Machine Learning registries +* Azure Storage (all sub resource types) +* Azure Container Registry +* Azure Key Vault +* Azure AI services +* Azure SQL Server +* Azure Data Factory +* Azure Cosmos DB (all sub resource types) +* Azure Event Hubs +* Azure Redis Cache +* Azure Databricks +* Azure Database for MariaDB +* Azure Database for PostgreSQL +* Azure Database for MySQL +* Azure SQL Managed Instance ++When you create a private endpoint, you provide the _resource type_ and _subresource_ that the endpoint connects to. Some resources have multiple types and subresources. 
For more information, see [what is a private endpoint](/azure/private-link/private-endpoint-overview). ++When you create a private endpoint for Azure Machine Learning dependency resources, such as Azure Storage, Azure Container Registry, and Azure Key Vault, the resource can be in a different Azure subscription. However, the resource must be in the same tenant as the Azure Machine Learning workspace. ++> [!IMPORTANT] +> When configuring private endpoints for an Azure Machine Learning managed virtual network, the private endpoints are only created when the first _compute is created_ or when managed network provisioning is forced. For more information on forcing the managed network provisioning, see [Configure for serverless spark jobs](#configure-for-serverless-spark-jobs). ++## Pricing ++The Azure Machine Learning managed virtual network feature is free. However, you're charged for the following resources that are used by the managed virtual network: ++* Azure Private Link - Private endpoints used to secure communications between the managed virtual network and Azure resources rely on Azure Private Link. For more information on pricing, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/). +* FQDN outbound rules - FQDN outbound rules are implemented using Azure Firewall. If you use outbound FQDN rules, charges for Azure Firewall are included in your billing. For more information on pricing, see [Azure Firewall pricing](https://azure.microsoft.com/pricing/details/azure-firewall/). ## Limitations * Once you enable managed virtual network isolation of your workspace, you can't disable it. * Managed virtual network uses private endpoint connections to access your private resources. You can't have a private endpoint and a service endpoint at the same time for your Azure resources, such as a storage account. 
We recommend using private endpoints in all scenarios.-* The managed network will be deleted and cleaned up when the workspace is deleted. +* The managed network is deleted when the workspace is deleted. +* Data exfiltration protection is automatically enabled for the allow only approved outbound mode. If you add other outbound rules, such as to FQDNs, Microsoft can't guarantee that you're protected from data exfiltration to those outbound destinations. +* Creating a compute cluster in a different region than the workspace isn't supported when using a managed virtual network. ++### Migration of compute resources ++If you have an existing workspace and want to enable managed virtual network for it, there's currently no supported migration path for existing managed compute resources. You'll need to delete all existing managed compute resources and recreate them after enabling the managed virtual network. The following list contains the compute resources that must be deleted and recreated: ++* Compute cluster +* Compute instance +* Managed online endpoints ## Next steps |
machine-learning | How To Mltable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mltable.md | You can optionally choose to load the MLTable object into Pandas, using: ``` #### Save the data loading steps-Next, save all your data loading steps into an MLTable file. If you save your data loading steps, you can reproduce your Pandas data frame at a later point in time, and you don't need to redefine the data loading steps in your code. +Next, save all your data loading steps into an MLTable file. Saving your data loading steps in an MLTable file allows you to reproduce your Pandas data frame at a later point in time, without needing to redefine the code each time. +You can save the MLTable yaml file to cloud storage, or to a local path. ```python-# serialize the data loading steps into an MLTable file -tbl.save("./nyc_taxi") +# save the data loading steps in an MLTable file to cloud storage +# NOTE: the tbl object was defined in the previous snippet. +tbl.save(save_path_dirc="azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<wsname>/datastores/<name>/paths/titanic", collocated=True, show_progress=True, allow_copy_errors=False, overwrite=True) ``` -You can optionally view the contents of the MLTable file, to understand how the data loading steps are serialized into a file: - ```python-with open("./nyc_taxi/MLTable", "r") as f: - print(f.read()) +# save the data loading steps in an MLTable file to a local path +# NOTE: the tbl object was defined in the previous snippet. +tbl.save("./titanic") ``` +> [!IMPORTANT] +> - If `collocated == True`, we copy the data to the same folder as the MLTable yaml file if they aren't currently collocated, and we use relative paths in the MLTable yaml. +> - If `collocated == False`, we don't move the data; we use absolute paths for cloud data and relative paths for local data. 
+> - We don't support this parameter combination: the data is local, `collocated == False`, and `save_path_dirc` is a cloud directory. Please upload your local data to the cloud and use the cloud data paths for MLTable instead. +> - Parameters `show_progress` (default `True`), `allow_copy_errors` (default `False`), and `overwrite` (default `True`) are optional. +> ++ ### Reproduce data loading steps Now that the data loading steps have been serialized into a file, you can reproduce them at any point in time, with the `load()` method. This way, you don't need to redefine your data loading steps in code, and you can more easily share the file. |
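For context, the MLTable file written by `tbl.save()` is itself a small YAML document. The following is a hypothetical sketch of what a saved `./titanic/MLTable` file might contain; the exact paths and transformations depend entirely on how `tbl` was defined in the earlier snippets:

```yaml
# Hypothetical contents of a saved MLTable file (actual contents vary with tbl)
paths:
  - file: ./titanic.csv
transformations:
  - read_delimited:
      delimiter: ','
      header: all_files_same_headers
```

Because the file captures the paths and transformations declaratively, loading it later replays the same data loading steps, which is why the saved file can be shared instead of the code that produced it.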
machine-learning | How To Network Isolation Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-isolation-planning.md | -In this article, you learn how to plan your network isolation for Azure Machine Learning and our recommendations. This is a document for IT administrators who want to design network architecture. +In this article, you learn how to plan your network isolation for Azure Machine Learning and our recommendations. This article is for IT administrators who want to design network architecture. -## Key considerations +## Recommended architecture (Managed Network Isolation pattern) -### Azure Machine Learning managed virtual network and Azure Virtual Network +[Using a Managed virtual network](how-to-managed-network.md) (preview) provides an easier configuration for network isolation. It automatically secures your workspace and managed compute resources in a managed virtual network. You can add private endpoint connections for other Azure services that the workspace relies on, such as Azure Storage Accounts. Depending on your needs, you can allow all outbound traffic to the public network or allow only the outbound traffic you approve. Outbound traffic required by the Azure Machine Learning service is automatically enabled for the managed virtual network. We recommend using workspace managed network isolation for a built-in, frictionless network isolation method. We have two patterns: allow internet outbound mode or allow only approved outbound mode. -Azure Machine Learning can use a managed virtual network (preview) or Azure Virtual Network to enable network isolation. +### Allow internet outbound mode -> [!IMPORTANT] -> Azure Machine Learning managed virtual network is currently in preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. 
-> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +Use this option if you want to allow your machine learning engineers to access the internet freely. You can create other private endpoint outbound rules to let them access your private resources on Azure. -Using a Managed virtual network provides an easier configuration for network isolation. It automatically secures your workspace and managed compute resources in a managed virtual network. You can add private endpoint connections for other Azure services that the workspace relies on, such as Azure Storage Accounts. Depending on your needs, you can allow all outbound traffic to the public network or allow only the outbound traffic you approve. Outbound traffic required by the Azure Machine Learning service is automatically enabled for the managed virtual network. -Using Azure Virtual Networks provides a more customizable network isolation solution, with the caveat that you are responsible for configuration and management. An Azure Virtual Network can be used to connect unmanaged resources to your workspace. For example, an Azure Virtual Network might be used to enable clients to connect to the workspace through a Virtual Private Network (VPN) gateway, or to allow you to [attach on-premises kubernetes](how-to-attach-kubernetes-anywhere.md) as a compute resource. +### Allow only approved outbound mode -> [!TIP] -> The information in this article is primarily about using Azure Virtual Networks. For more information on Azure Machine Learning managed virtual networks, see the [Managed virtual network](how-to-managed-network.md) article. +Use this option if you want to minimize data exfiltration risk and control what your machine learning engineers can access. You can control outbound rules using private endpoints, service tags, and FQDNs. 
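Both patterns correspond to the `isolation_mode` setting under the workspace's `managed_network` configuration, as shown in the managed network how-to elsewhere in this changelog; a minimal YAML sketch:

```yaml
managed_network:
  isolation_mode: allow_internet_outbound   # or: allow_only_approved_outbound
```

Switching between the two modes changes which user-defined outbound rule types are meaningful: service tag and FQDN rules only apply in allow only approved outbound mode.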
+++## Recommended architecture (use your Azure VNet) ++If you have a specific requirement or company policy that prevents you from using a managed virtual network, you can use an __Azure virtual network__ for network isolation. ++The following diagram is our recommended architecture to make all resources private but allow outbound internet access from your VNet. This diagram describes the following architecture: +* Put all resources in the same region. +* A hub VNet, which contains your firewall. +* A spoke VNet, which contains the following resources: + * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. + * A scoring subnet contains an AKS cluster. + * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.) +* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage. ++This architecture balances your network security and your ML engineers' productivity. +++You can automate this environment's creation using [a template](tutorial-create-secure-workspace-template.md) without managed online endpoint or AKS. Managed online endpoint is the solution if you don't have an existing AKS cluster for your AI model scoring. See the [how to secure online endpoint](how-to-secure-online-endpoint.md) documentation for more info. AKS with Azure Machine Learning extension is the solution if you have an existing AKS cluster for your AI model scoring. See the [how to attach kubernetes](how-to-attach-kubernetes-anywhere.md) documentation for more info. 
++### Removing firewall requirement ++If you want to remove the firewall requirement, you can use network security groups and [Azure virtual network NAT](/azure/virtual-network/nat-gateway/nat-overview) to allow internet outbound from your private computing resources. +++### Using public workspace ++You can use a public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features to show data in your private storage account, but we recommend using a private workspace. ++## Recommended architecture with data exfiltration prevention ++This diagram shows the recommended architecture to make all resources private and control outbound destinations to prevent data exfiltration. We recommend this architecture when using Azure Machine Learning with your sensitive data in production. This diagram describes the following architecture: +* Put all resources in the same region. +* A hub VNet, which contains your firewall. + * In addition to service tags, the firewall uses FQDNs to prevent data exfiltration. +* A spoke VNet, which contains the following resources: + * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. Additionally, a service endpoint and service endpoint policy are in place to prevent data exfiltration. + * A scoring subnet contains an AKS cluster. + * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.) +* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage. 
+++The following tables list the required outbound [Azure Service Tags](/azure/virtual-network/service-tags-overview) and fully qualified domain names (FQDNs) with the data exfiltration protection setting: ++| Outbound service tag | Protocol | Port | +| - | -- | - | +| `AzureActiveDirectory` | TCP | 80, 443 | +| `AzureResourceManager` | TCP | 443 | +| `AzureMachineLearning` | UDP | 5831 | +| `BatchNodeManagement` | TCP | 443 | ++| Outbound FQDN | Protocol | Port | +| - | - | - | +| `mcr.microsoft.com` | TCP | 443 | +| `*.data.mcr.microsoft.com` | TCP | 443 | +| `ml.azure.com` | TCP | 443 | +| `automlresources-prod.azureedge.net` | TCP | 443 | ++### Using a public workspace ++You can use a public workspace if you're OK with Azure AD authentication and authorization with conditional access. Note that a public workspace has some features that show data in your private storage account, so we recommend using a private workspace. ++## Key considerations to understand details ### Azure Machine Learning has both IaaS and PaaS resources The following tables list the required outbound [Azure Service Tags](/azure/virt ### Managed online endpoint -Azure Machine Learning managed online endpoint uses Azure Machine Learning managed VNet, instead of using your VNet. If you want to disallow public access to your endpoint, set the `public_network_access` flag to disabled. When this flag is disabled, your endpoint can be accessed via the private endpoint of your workspace, and it can't be reached from public networks. If you want to use a private storage account for your deployment, set the `egress_public_network_access` flag disabled. It automatically creates private endpoints to access your private resources. +Security for inbound and outbound communication is configured separately for managed online endpoints. -> [!TIP] -> The workspace default storage account is the only private storage account supported by managed online endpoint. 
+#### Inbound communication ++Azure Machine Learning uses a private endpoint to secure inbound communication to a managed online endpoint. Set the endpoint's `public_network_access` flag to `disabled` to prevent public access to it. When this flag is disabled, your endpoint can be accessed only via the private endpoint of your Azure Machine Learning workspace, and it can't be reached from public networks. +#### Outbound communication -For more information, see the [Network isolation of managed online endpoints](how-to-secure-online-endpoint.md) article. ++To secure outbound communication from a deployment to resources, Azure Machine Learning uses a workspace managed virtual network (preview). The deployment needs to be created in the workspace managed VNet so that it can use the private endpoints of the workspace managed virtual network for outbound communication. ++The following architecture diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from a client's virtual network flow through the workspace's private endpoint to the managed online endpoint. Outbound communication from deployments to services is handled through private endpoints from the workspace's managed virtual network to those service instances. +++For more information, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md). ### Private IP address shortage in your main network In this diagram, your main VNet requires the IPs for private endpoints. You can ### Network policy enforcement You can use [built-in policies](how-to-integrate-azure-policy.md) if you want to control network isolation parameters with self-service workspace and computing resources creation. 
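The inbound `public_network_access` setting discussed for managed online endpoints can also be set declaratively in the endpoint's YAML definition. The following is a minimal sketch, not a complete definition; the endpoint name is a placeholder, and you should confirm the fields against the current managed online endpoint YAML schema:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-secure-endpoint   # placeholder name
auth_mode: key
public_network_access: disabled   # inbound scoring only via the workspace private endpoint
```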
-### Other considerations +### Other minor considerations #### Image build compute setting for ACR behind VNet If you plan on using the Azure Machine Learning studio, there are extra configur <!-- ### Registry --> -## Recommended architecture --The following diagram is our recommended architecture to make all resources private but allow outbound internet access from your VNet. This diagram describes the following architecture: -* Put all resources in the same region. -* A hub VNet, which contains your firewall. -* A spoke VNet, which contains the following resources: - * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. - * A scoring subnet contains an AKS cluster. - * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.) -* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage. --This architecture balances your network security and your ML engineers' productivity. ---You can automate this environments creation using [a template](tutorial-create-secure-workspace-template.md) without managed online endpoint or AKS. Managed online endpoint is the solution if you don't have an existing AKS cluster for your AI model scoring. See [how to secure online endpoint](how-to-secure-online-endpoint.md) documentation for more info. AKS with Azure Machine Learning extension is the solution if you have an existing AKS cluster for your AI model scoring. See [how to attach kubernetes](how-to-attach-kubernetes-anywhere.md) documentation for more info. 
--### Removing firewall requirement --If you want to remove the firewall requirement, you can use network security groups and [Azure virtual network NAT](/azure/virtual-network/nat-gateway/nat-overview) to allow internet outbound from your private computing resources. ---### Using public workspace --You can use a public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features to show data in your private storage account and we recommend using private workspace. --## Recommended architecture with data exfiltration prevention --This diagram shows the recommended architecture to make all resources private and control outbound destinations to prevent data exfiltration. We recommend this architecture when using Azure Machine Learning with your sensitive data in production. This diagram describes the following architecture: -* Put all resources in the same region. -* A hub VNet, which contains your firewall. - * In addition to service tags, the firewall uses FQDNs to prevent data exfiltration. -* A spoke VNet, which contains the following resources: - * A training subnet contains compute instances and clusters used for training ML models. These resources are configured for no public IP. Additionally, a service endpoint and service endpoint policy are in place to prevent data exfiltration. - * A scoring subnet contains an AKS cluster. - * A 'pe' subnet contains private endpoints that connect to the workspace and private resources used by the workspace (storage, key vault, container registry, etc.) -* Managed online endpoints use the private endpoint of the workspace to process incoming requests. A private endpoint is also used to allow managed online endpoint deployments to access private storage. 
---The following tables list the required outbound [Azure Service Tags](/azure/virtual-network/service-tags-overview) and fully qualified domain names (FQDN) with data exfiltration protection setting: --| Outbound service tag | Protocol | Port | -| - | -- | - | -| `AzureActiveDirectory` | TCP | 80, 443 | -| `AzureResourceManager` | TCP | 443 | -| `AzureMachineLearning` | UDP | 5831 | -| `BatchNodeManagement` | TCP | 443 | --| Outbound FQDN | Protocol | Port | -| - | - | - | -| `mcr.microsoft.com` | TCP | 443 | -| `*.data.mcr.microsoft.com` | TCP | 443 | -| `ml.azure.com` | TCP | 443 | -| `automlresources-prod.azureedge.net` | TCP | 443 | +## Next steps -### Using public workspace +For more information on using a __managed virtual network__, see the following articles: -You can use the public workspace if you're OK with Azure AD authentication and authorization with conditional access. A public workspace has some features to show data in your private storage account and we recommend using private workspace. +* [Managed Network Isolation](how-to-managed-network.md) +* [Use private endpoint to access your workspace](how-to-configure-private-link.md) +* [Use custom DNS](how-to-custom-dns.md) -## Next steps +For more information on using an __Azure Virtual Network__, see the following articles: * [Virtual network overview](how-to-network-security-overview.md) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure the training environment](how-to-secure-training-vnet.md) * [Secure the inference environment](how-to-secure-inferencing-vnet.md) * [Enable studio functionality](how-to-enable-studio-virtual-network.md)-* [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md) -* [Use custom DNS](how-to-custom-dns.md) +* [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md) |
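Conceptually, the firewall's data exfiltration protection is an allow-list match over service tags and FQDN patterns like those in the tables in this entry. The following toy Python sketch is purely illustrative (real enforcement happens in Azure Firewall, not in your code); it shows how a wildcard FQDN pattern such as `*.data.mcr.microsoft.com` matches candidate hosts:

```python
from fnmatch import fnmatch

# Illustration only: the outbound FQDN allow-list from this article's tables,
# expressed as shell-style wildcard patterns.
ALLOWED_FQDNS = [
    "mcr.microsoft.com",
    "*.data.mcr.microsoft.com",
    "ml.azure.com",
    "automlresources-prod.azureedge.net",
]

def is_allowed(host: str) -> bool:
    """Return True if the host matches any allowed FQDN pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWED_FQDNS)

print(is_allowed("eastus.data.mcr.microsoft.com"))  # True
print(is_allowed("example.com"))                    # False
```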
machine-learning | How To Network Security Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md | monikerRange: 'azureml-api-2 || azureml-api-1' [!INCLUDE [dev v1](includes/machine-learning-dev-v1.md)] :::moniker-end -Secure Azure Machine Learning workspace resources and compute environments using Azure Virtual Networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network. - [!INCLUDE [managed-vnet-note](includes/managed-vnet-note.md)] +Secure Azure Machine Learning workspace resources and compute environments using Azure Virtual Networks (VNets). This article uses an example scenario to show you how to configure a complete virtual network. + This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series: :::moniker range="azureml-api-2"+* [Use managed networks](how-to-managed-network.md) (preview) * [Secure the workspace resources](how-to-secure-workspace-vnet.md) * [Secure machine learning registries](how-to-registry-network-isolation.md) * [Secure the training environment](how-to-secure-training-vnet.md) For a tutorial on creating a secure workspace, see [Tutorial: Create a secure wo ## Prerequisites -This article assumes that you have familiarity with the following topics: +This article assumes that you have familiarity with the following articles: + [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) + [IP networking](../virtual-network/ip-services/public-ip-addresses.md) + [Azure Machine Learning workspace with private endpoint](how-to-configure-private-link.md) The following table compares how services access different parts of an Azure Mac * **Inference compute access** - Access Azure Kubernetes Services (AKS) compute clusters with 
private IP addresses. -The next sections show you how to secure the network scenario described above. +The next sections show you how to secure the network scenario described previously. To secure your network, you must: 1. Secure the [**workspace and associated resources**](#secure-the-workspace-and-associated-resources). 1. Secure the [**training environment**](#secure-the-training-environment). The next sections show you how to secure the network scenario described above. T If you want to access the workspace over the public internet while keeping all the associated resources secured in a virtual network, use the following steps: -1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) that will contain the resources used by the workspace. +1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). This network secures the resources used by the workspace. 1. Use __one__ of the following options to create a publicly accessible workspace: :::moniker range="azureml-api-2" If you want to access the workspace over the public internet while keeping all t Use the following steps to secure your workspace and associated resources. These steps allow your services to communicate in the virtual network. :::moniker range="azureml-api-2"-1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. +1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). This network secures the workspace and other resources. Then create a [Private Link-enabled workspace](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. 1. 
Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these | Service | Endpoint information | Allow trusted information | Use the following steps to secure your workspace and associated resources. These | __Azure Container Registry__ | [Private endpoint](../container-registry/container-registry-private-link.md) | [Allow trusted services](../container-registry/allow-access-trusted-services.md) | :::moniker-end :::moniker range="azureml-api-1"-1. Create an [Azure Virtual Networks](../virtual-network/virtual-networks-overview.md) that will contain the workspace and other resources. Then create a [Private Link-enabled workspace](./v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. +1. Create an [Azure Virtual Network](../virtual-network/virtual-networks-overview.md). This virtual network secures the workspace and other resources. Then create a [Private Link-enabled workspace](./v1/how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint) to enable communication between your VNet and workspace. 1. Add the following services to the virtual network by using _either_ a __service endpoint__ or a __private endpoint__. Also allow trusted Microsoft services to access these | Service | Endpoint information | Allow trusted information | For detailed instructions on how to complete these steps, see [Secure a training ### Example training job submission -In this section, you learn how Azure Machine Learning securely communicates between services to submit a training job. This shows you how all your configurations work together to secure communication. +In this section, you learn how Azure Machine Learning securely communicates between services to submit a training job. This example shows you how all your configurations work together to secure communication. 1. 
The client uploads training scripts and training data to storage accounts that are secured with a service or private endpoint. |
machine-learning | How To Prepare Datasets For Automl Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md | In this article, you learn how to prepare image data for training computer visio To generate models for computer vision tasks with automated machine learning, you need to bring labeled image data as input for model training in the form of an `MLTable`. You can create an `MLTable` from labeled training data in JSONL format. -If your labeled training data is in a different format (like, pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model. +If your labeled training data is in a different format (like Pascal VOC or COCO), you can use a [conversion script](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to first convert it to JSONL, and then create an `MLTable`. Alternatively, you can use Azure Machine Learning's [data labeling tool](how-to-create-image-labeling-projects.md) to manually label images, and export the labeled data to use for training your AutoML model. ## Prerequisites my_data = Data( Next, you need to get the label annotations in JSONL format. The schema of labeled data depends on the computer vision task at hand. Refer to [schemas for JSONL files for AutoML computer vision experiments](reference-automl-images-schema.md) to learn more about the required JSONL schema for each task type. 
-If your training data is in a different format (like, pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs). +If your training data is in a different format (like Pascal VOC or COCO), [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to convert the data to JSONL are available in [notebook examples](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs). Once you have created the JSONL file by following the preceding steps, you can register it as a data asset by using the UI. Make sure you select the `stream` type in the schema section as shown below. |
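To make the JSONL format concrete, here's a minimal Python sketch that writes an annotations file with one JSON object per line. It assumes the multiclass image classification shape (`image_url` plus a single `label`); the datastore path and labels are placeholders, and the exact required fields depend on your task type, so check the JSONL schema reference:

```python
import json
import os
import tempfile

# Placeholder annotations; real image_url values point at images in a datastore.
annotations = [
    {"image_url": "azureml://datastores/workspaceblobstore/paths/images/img1.jpg", "label": "cat"},
    {"image_url": "azureml://datastores/workspaceblobstore/paths/images/img2.jpg", "label": "dog"},
]

path = os.path.join(tempfile.gettempdir(), "training_annotations.jsonl")
with open(path, "w") as f:
    for record in annotations:
        # JSONL: one independently parseable JSON object per line.
        f.write(json.dumps(record) + "\n")

with open(path) as f:
    parsed = [json.loads(line) for line in f]
print(len(parsed))  # 2
```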
machine-learning | How To Registry Network Isolation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-registry-network-isolation.md | To connect to a registry that's secured behind a VNet, use one of the following * [Azure Bastion](/azure/bastion/bastion-overview) - In this scenario, you create an Azure Virtual Machine (sometimes called a jump box) inside the VNet. You then connect to the VM using Azure Bastion. Bastion allows you to connect to the VM using either an RDP or SSH session from your local web browser. You then use the jump box as your development environment. Since it is inside the VNet, it can directly access the registry. ### Share assets from workspace to registry +> [!NOTE] +> Currently, sharing an asset from a secure workspace to an Azure Machine Learning registry is not supported if the storage account containing the asset has public access disabled. Create a private endpoint to the registry, storage, and ACR from the VNet of the workspace. If you're trying to connect to multiple registries, create a private endpoint for each registry and its associated storage and ACRs. For more information, see the [How to create a private endpoint](#how-to-create-a-private-endpoint) section. |
machine-learning | How To Secure Inferencing Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-inferencing-vnet.md | Last updated 09/06/2022 # Secure an Azure Machine Learning inferencing environment with virtual networks - In this article, you learn how to secure inferencing environments (online endpoints) with a virtual network in Azure Machine Learning. There are two inference options that can be secured using a VNet: * Azure Machine Learning managed online endpoints++ > [!TIP] + > Microsoft recommends using an Azure Machine Learning **managed virtual network** (preview) instead of the steps in this article when securing managed online endpoints. With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes. You can also add private endpoints for resources needed by the workspace, such as an Azure Storage account. For more information, see [Workspace managed network isolation](how-to-managed-network.md). + * Azure Kubernetes Service > [!TIP] In this article, you learn how to secure inferencing environments (online endpoi + Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture. -+ An existing virtual network and subnet, that is used to secure the Azure Machine Learning workspace. ++ An existing virtual network and subnet that is used to secure the Azure Machine Learning workspace. 
[!INCLUDE [network-rbac](includes/network-rbac.md)] To use Azure Kubernetes Service cluster for secure inference, use the following * CLI v2 - https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/kubernetes * Python SDK V2 - https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/kubernetes- * Studio UI - Follow the steps in [managed online endpoint deployment](how-to-use-managed-online-endpoint-studio.md) through the Studio. After entering the __Endpoint name__ select __Kubernetes__ as the compute type instead of __Managed__ + * Studio UI - Follow the steps in [managed online endpoint deployment](how-to-use-managed-online-endpoint-studio.md) through the Studio. After you enter the __Endpoint name__, select __Kubernetes__ as the compute type instead of __Managed__. ## Limit outbound connectivity from the virtual network |
machine-learning | How To Secure Online Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md | Title: Network isolation of managed online endpoints + Title: How to secure managed online endpoints with network isolation description: Use private endpoints to provide network isolation for Azure Machine Learning managed online endpoints. -# Use network isolation with managed online endpoints +# Secure your managed online endpoints with network isolation -When deploying a machine learning model to a managed online endpoint, you can secure communication with the online endpoint by using [private endpoints](../private-link/private-endpoint-overview.md). +In this article, you'll use network isolation to secure a managed online endpoint. You'll create a managed online endpoint that uses an Azure Machine Learning workspace's private endpoint for secure inbound communication. You'll also configure the workspace with a **managed virtual network** that **allows only approved outbound** communication for deployments (preview). Finally, you'll create a deployment that uses the private endpoints of the workspace's managed virtual network for outbound communication. -You can secure the inbound scoring requests from clients to an _online endpoint_. You can also secure the outbound communications between a _deployment_ and the Azure resources it uses. Security for inbound and outbound communication are configured separately. For more information on endpoints and deployments, see [What are endpoints and deployments](concept-endpoints-online.md). -The following diagram shows how communications flow through private endpoints to the managed online endpoint. Incoming scoring requests from clients are received through the workspace private endpoint from your virtual network. 
Outbound communication with services is handled through private endpoints to those service instances from the deployment: -+For examples that use the legacy method for network isolation, see the deployment files [deploy-moe-vnet-legacy.sh](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet-legacy.sh) (for deployment using a generic model) and [deploy-moe-vnet-mlflow-legacy.sh](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet-mlflow-legacy.sh) (for deployment using an MLflow model) in the azureml-examples GitHub repo. ## Prerequisites -To start with, you would need an Azure subscription, CLI or SDK to interact with Azure Machine Learning workspace and related entities, and the right permission. +To begin, you need an Azure subscription, the CLI or SDK to interact with your Azure Machine Learning workspace and related entities, and the right permissions. * To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today. -* You must install and configure the Azure CLI and `ml` extension or the Azure Machine Learning Python SDK v2. For more information, see the following articles: +* Install and configure the [Azure CLI](/cli/azure/) and the `ml` extension to the Azure CLI. For more information, see [Install, set up, and use the CLI (v2)](how-to-configure-cli.md). + >[!TIP] + > Azure Machine Learning managed virtual network was introduced on May 23rd, 2023. If you have an older version of the `ml` extension, you may need to update it for the examples in this article to work. To update the extension, use the following Azure CLI command: + > + > ```azurecli + > az extension update -n ml + > ``` - * [Install, set up, and use the CLI (v2)](how-to-configure-cli.md). - * [Install the Python SDK v2](https://aka.ms/sdk-v2-install). 
+* The CLI examples in this article assume that you're using the Bash (or compatible) shell. For example, from a Linux system or [Windows Subsystem for Linux](/windows/wsl/about). -* You must have an Azure Resource Group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you configured your `ml` extension per the above article. +* You must have an Azure Resource Group, in which you (or the service principal you use) need to have `Contributor` access. You'll have such a resource group if you've configured your `ml` extension. * If you want to use a [user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp) to create and manage online endpoints and online deployments, the identity should have the proper permissions. For details about the required permissions, see [Set up service authentication](./how-to-identity-based-service-authentication.md#workspace). For example, you need to assign the proper RBAC permission for Azure Key Vault on the identity. -There are additional prerequisites for workspace and its related entities. --* You must have an Azure Machine Learning workspace, and the workspace must use a private endpoint. If you don't have one, the steps in this article create an example workspace, virtual network (VNet), and VM. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](./how-to-configure-private-link.md). --* The workspace has its `public_network_access` flag that can be either enabled or disabled. If you plan on using managed online endpoint deployments that use __public outbound__, then you must also [configure the workspace to allow public access](how-to-configure-private-link.md#enable-public-access). This is because outbound communication from managed online endpoint deployment is to the _workspace API_. 
When the deployment is configured to use __public outbound__, then the workspace must be able to accept that public communication (allow public access). --* When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). - ## Limitations -* The `v1_legacy_mode` flag must be disabled (false) on your Azure Machine Learning workspace. If this flag is enabled, you won't be able to create a managed online endpoint. For more information, see [Network isolation with v2 API](how-to-configure-network-isolation-with-v2.md). -* If your Azure Machine Learning workspace has a private endpoint that was created before May 24, 2022, you must recreate the workspace's private endpoint before configuring your online endpoints to use a private endpoint. For more information on creating a private endpoint for your workspace, see [How to configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md). +## Prepare your system +1. Create the environment variables used by this example by running the following commands. Replace `<YOUR_WORKSPACE_NAME>` with the name to use for your workspace. Replace `<YOUR_RESOURCEGROUP_NAME>` with the resource group that will contain your workspace. > [!TIP]- > To confirm when a workspace is created, you can check the workspace properties. In Studio, click `View all properties in Azure Portal` from `Directory + Subscription + Workspace` section (top right of the Studio), Click JSON View from top right of the Overview page, and choose the latest API Version. You can check the value of `properties.creationTime`. 
You can do the same by using `az ml workspace show` with [CLI](how-to-manage-workspace-cli.md#get-workspace-information), or `my_ml_client.workspace.get("my-workspace-name")` with [SDK](how-to-manage-workspace.md?tabs=python#find-a-workspace), or `curl` on workspace with [REST API](how-to-manage-rest.md#drill-down-into-workspaces-and-their-resources). --* When you use network isolation with a deployment, Azure Log Analytics is partially supported. All metrics and the `AMLOnlineEndpointTrafficLog` table are supported via Azure Log Analytics. `AMLOnlineEndpointConsoleLog` and `AMLOnlineEndpointEventLog` tables are currently not supported. As a workaround, you can use the [az ml online-deployment get_logs](/cli/azure/ml/online-deployment#az-ml-online-deployment-get-logs) CLI command, the [OnlineDeploymentOperations.get_logs()](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-get-logs) Python SDK, or the Deployment log tab in the Azure Machine Learning studio instead. For more information, see [Monitoring online endpoints](how-to-monitor-online-endpoints.md). - -* When you use network isolation with a deployment, you can use Azure Container Registry (ACR), Storage account, Key Vault and Application Insights from a different resource group in the same subscription, but you cannot use them if they are in a different subscription. --* For online deployments with `egress_public_network_access` flag set to `disabled`, access from the deployments to Microsoft Container Registry (MCR) is restricted. 
If you want to leverage container images from MCR (such as when using curated environment or mlflow no-code deployment), recommendation is to build the image locally inside the virtual network ([docker build](https://docs.docker.com/engine/reference/commandline/build/)) and push the image into the private Azure Container Registry (ACR) which is attached with the workspace (for instance, using [docker push](../container-registry/container-registry-get-started-docker-cli.md#push-the-image-to-your-registry)). The images in this ACR is accessible to secured deployments via the private endpoints which are automatically created on behalf of you when you set `egress_public_network_access` flag to `disabled`. For a quick example, please refer to [build image under virtual network](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh) and [end to end example for model deployment under virtual network](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-moe-vnet.sh). --> [!NOTE] -> Requests to create, update, or retrieve the authentication keys are sent to the Azure Resource Manager over the public network. - -## Inbound (scoring) --To secure scoring requests to the online endpoint to your virtual network, set the `public_network_access` flag for the endpoint to `disabled`: --# [Azure CLI](#tab/cli) --```azurecli -az ml online-endpoint create -f endpoint.yml --set public_network_access=disabled -``` --# [Python](#tab/python) --```python -from azure.ai.ml.entities import ManagedOnlineEndpoint --endpoint = ManagedOnlineEndpoint(name='my-online-endpoint', - description='this is a sample online endpoint', - tags={'foo': 'bar'}, - auth_mode="key", - public_network_access="disabled" - # public_network_access="enabled" -) -``` --# [Studio](#tab/azure-studio) --1. Go to the [Azure Machine Learning studio](https://ml.azure.com). -1. Select the **Workspaces** page from the left navigation bar. -1. 
Enter a workspace by clicking its name. -1. Select the **Endpoints** page from the left navigation bar. -1. Select **+ Create** to open the **Create deployment** setup wizard. -1. Disable the **Public network access** flag at the **Create endpoint** step. -- :::image type="content" source="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png" alt-text="A screenshot of how to disable public network access for an endpoint." lightbox="media/how-to-secure-online-endpoint/endpoint-disable-public-network-access.png"::: ----When `public_network_access` is `disabled`, inbound scoring requests are received using the [private endpoint of the Azure Machine Learning workspace](./how-to-configure-private-link.md), and the endpoint can't be reached from public networks. --> [!NOTE] -> You can update (enable or disable) the `public_network_access` flag of an online endpoint after creating it. --## Outbound (resource access) --To restrict communication between a deployment and external resources, including the Azure resources it uses, set the deployment's `egress_public_network_access` flag to `disabled`. Use this flag to ensure that the downloads of the model, code, and images needed by your deployment are secured with a private endpoint. Note that disabling the flag alone is not enough; your workspace must also have a private link that allows access to Azure resources via a private endpoint. See the [Prerequisites](#prerequisites) for more details. --Secure outbound communication creates three private endpoints per deployment: one to Azure Blob storage, one to Azure Container Registry, and one to your workspace. --> [!IMPORTANT] -> * Each managed online endpoint deployment has its own independent Azure Machine Learning managed VNet. If the endpoint has multiple deployments, each deployment has its own managed VNet. -> * We do not support peering between a deployment's managed VNet and your client VNet.
For secure access to resources needed by the deployment, we use private endpoints to communicate with the resources. --> [!WARNING] -> * You cannot update (enable or disable) the `egress_public_network_access` flag after creating the deployment. Attempting to change the flag while updating the deployment will fail with an error. --# [Azure CLI](#tab/cli) --```azurecli -az ml online-deployment create -f deployment.yml --set egress_public_network_access=disabled -``` --# [Python](#tab/python) --```python -blue_deployment = ManagedOnlineDeployment(name='blue', - endpoint_name='my-online-endpoint', - model=model, - code_configuration=CodeConfiguration(code_local_path='./model-1/onlinescoring/', - scoring_script='score.py'), - environment=env, - instance_type='Standard_DS2_v2', - instance_count=1, - egress_public_network_access="disabled" - # egress_public_network_access="enabled" -) - -ml_client.begin_create_or_update(blue_deployment) -``` --# [Studio](#tab/azure-studio) --1. Follow the steps in the **Create deployment** setup wizard to the **Deployment** step. -1. Disable the **Egress public network access** flag. -- :::image type="content" source="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png" alt-text="A screenshot of how to disable the egress public network access for a deployment." lightbox="media/how-to-secure-online-endpoint/deployment-disable-egress-public-network-access.png"::: ----The deployment communicates with these resources over the private endpoint: --* The Azure Machine Learning workspace -* The Azure Storage blob that is associated with the workspace -* The Azure Container Registry for the workspace --When you configure the `egress_public_network_access` to `disabled`, a new private endpoint is created per deployment, per service. For example, if you set the flag to `disabled` for three deployments to an online endpoint, a total of nine private endpoints are created. 
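The arithmetic behind that count is simply one private endpoint per deployment per service. As an illustrative sketch (plain Python, not part of the Azure Machine Learning SDK):

```python
# Illustrative only: each deployment with egress_public_network_access
# set to disabled gets one private endpoint per service it talks to.
SERVICES = ("workspace", "blob_storage", "container_registry")

def private_endpoint_count(num_deployments: int) -> int:
    """Total private endpoints created for an endpoint's deployments."""
    return num_deployments * len(SERVICES)

print(private_endpoint_count(3))  # 9
```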
Each deployment has three private endpoints to communicate with the workspace, blob storage, and container registry. To confirm the creation of the private endpoints, first check the storage account and container registry associated with the workspace (see [Download a configuration file](how-to-manage-workspace.md#download-a-configuration-file)), find each resource in the Azure portal, and check the `Private endpoint connections` tab under the `Networking` menu. --> [!IMPORTANT] -> - As mentioned earlier, outbound communication from a managed online endpoint deployment is to the _workspace API_. When the endpoint is configured to use __public outbound__ (in other words, the `public_network_access` flag for the endpoint is set to `enabled`), the workspace must be able to accept that public communication (the `public_network_access` flag for the workspace set to `enabled`). -> - When online deployments are created with the `egress_public_network_access` flag set to `disabled`, they have access only to the secured resources listed above. For instance, if the deployment uses model assets uploaded to other storage accounts, the model download will fail. Ensure model assets are on the storage account associated with the workspace. -> - When `egress_public_network_access` is set to `disabled`, the deployment can only access the workspace-associated resources secured in the virtual network. Conversely, when `egress_public_network_access` is set to `enabled`, the deployment can only access the resources with public access, which means it cannot access the resources secured in the virtual network. + > Before creating a new workspace, you must create an Azure Resource Group to contain it. For more information, see [Manage Azure Resource Groups](/azure/azure-resource-manager/management/manage-resource-groups-cli). 
+ ```azurecli + export RESOURCEGROUP_NAME="<YOUR_RESOURCEGROUP_NAME>" + export WORKSPACE_NAME="<YOUR_WORKSPACE_NAME>" + ``` -## Scenarios --The following table lists the supported configurations when configuring inbound and outbound communications for an online endpoint: --| Configuration | Inbound </br> (Endpoint property) | Outbound </br> (Deployment property) | Supported? | -| -- | -- | | | -| secure inbound with secure outbound | `public_network_access` is disabled | `egress_public_network_access` is disabled | Yes | -| secure inbound with public outbound | `public_network_access` is disabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes | -| public inbound with secure outbound | `public_network_access` is enabled | `egress_public_network_access` is disabled | Yes | -| public inbound with public outbound | `public_network_access` is enabled | `egress_public_network_access` is enabled</br>The workspace must also allow public access as the deployment outbound is to the workspace API. | Yes | ---## End-to-end example --Use the information in this section to create an example configuration that uses private endpoints to secure online endpoints. --> [!TIP] -> The end-to-end example in this article comes from the files in the __azureml-examples__ GitHub repository. To clone the samples repository and switch to the repository's `cli/` directory, use the following commands: -> -> ```azurecli -> git clone https://github.com/Azure/azureml-examples -> cd azureml-examples/cli -> ``` -> -> In this example, an Azure Virtual Machine is created inside the virtual network. You connect to the VM using SSH, and run the deployment from the VM. This configuration is used to simplify the steps in this example, and does not represent a typical secure configuration. 
For example, in a production environment you would most likely use a VPN client or Azure ExpressRoute to directly connect clients to the virtual network. --### Create workspace and secured resources --The steps in this section use an Azure Resource Manager template to create the following Azure resources: --* Azure Virtual Network -* Azure Machine Learning workspace -* Azure Container Registry -* Azure Key Vault -* Azure Storage account (blob & file storage) --Public access is disabled for all the services. While the Azure Machine Learning workspace is secured behind a virtual network, it's configured to allow public network access. For more information, see [CLI 2.0 secure communications](how-to-configure-cli.md#secure-communications). A scoring subnet is created, along with outbound rules that allow communication with the following Azure services: --* Azure Active Directory -* Azure Resource Manager -* Azure Front Door -* Microsoft Container Registries --The following diagram shows the overall architecture of this example: +1. Create your workspace. The `-m allow_only_approved_outbound` parameter configures a managed virtual network for the workspace and blocks outbound traffic except to approved destinations. + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_workspace_allow_only_approved_outbound" ::: -To create a resource group, use the following Azure CLI command. Replace `<my-resource-group>` and `<my-location>` with the desired values. + Alternatively, if you'd like to allow the deployment to send outbound traffic to the internet, uncomment the following code and run it instead. 
-```azurecli -# create resource group -az group create --name <my-resource-group> --location <my-location> -``` + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_workspace_internet_outbound" ::: -Clone the example files for the deployment, use the following command: + For more information on how to create a new workspace or to upgrade your existing workspace to use a managed virtual network, see [Configure a managed virtual network to allow internet outbound](how-to-managed-network.md#configure-a-managed-virtual-network-to-allow-internet-outbound). -```azurecli -#Clone the example files -git clone https://github.com/Azure/azureml-examples -``` + When the workspace is configured with a private endpoint, the Azure Container Registry for the workspace must be configured for __Premium__ tier. For more information, see [Azure Container Registry service tiers](../container-registry/container-registry-skus.md). -To create the resources, use the following Azure CLI commands. Replace `<UNIQUE_SUFFIX>` with a unique suffix for the resources that are created. +1. Configure the defaults for the CLI so that you can avoid passing in the values for your workspace and resource group multiple times. -```azurecli -az deployment group create --template-file endpoints/online/managed/vnet/setup_ws/main.bicep --parameters suffix=$SUFFIX --resource-group <my-resource-group> -``` -### Create the virtual machine jump box + ```azurecli + az configure --defaults workspace=$WORKSPACE_NAME group=$RESOURCEGROUP_NAME + ``` -To create an Azure Virtual Machine that can be used to connect to the virtual network, use the following command. Replace `<your-new-password>` with the password you want to use when connecting to this VM: +1. Clone the examples repository to get the example files for the endpoint and deployment, then go to the repository's `cli` directory. 
-```azurecli -# create vm -az vm create --name test-vm --vnet-name vnet-$SUFFIX --subnet snet-scoring --image UbuntuLTS --admin-username azureuser --admin-password <your-new-password> --resource-group <my-resource-group> -``` + ```azurecli + git clone --depth 1 https://github.com/Azure/azureml-examples + cd azureml-examples/cli + ``` -> [!IMPORTANT] -> The VM created by these commands has a public endpoint that you can connect to over the public network. +The commands in this tutorial are in the file `deploy-managed-online-endpoint-workspacevnet.sh` in the `cli` directory, and the YAML configuration files are in the `endpoints/online/managed/sample/` subdirectory. -The response from this command is similar to the following JSON document: +## Create a secured managed online endpoint -```json -{ - "fqdns": "", - "id": "/subscriptions/<GUID>/resourceGroups/<my-resource-group>/providers/Microsoft.Compute/virtualMachines/test-vm", - "location": "westus", - "macAddress": "00-0D-3A-ED-D8-E8", - "powerState": "VM running", - "privateIpAddress": "192.168.0.12", - "publicIpAddress": "20.114.122.77", - "resourceGroup": "<my-resource-group>", - "zones": "" -} -``` +To create a secured managed online endpoint, create the endpoint in your workspace and set the endpoint's `public_network_access` to `disabled` to control inbound communication. The endpoint then has to use the workspace's private endpoint for inbound communication. -Use the following command to connect to the VM using SSH. Replace `publicIpAddress` with the value of the public IP address in the response from the previous command: +Because the workspace is configured to have a managed virtual network, any deployments of the endpoint will use the private endpoints of the managed virtual network for outbound communication (preview). -```azurecli -ssh azureuser@publicIpAddress -``` +1. Set the endpoint's name. -When prompted, enter the password you used when creating the VM. 
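Rather than copying the public IP out of the JSON response by hand, you can parse it programmatically. A minimal sketch in plain Python, using the field names from the sample response above (the admin username `azureuser` matches the one passed to `az vm create`):

```python
import json

# Illustrative only: extract publicIpAddress from the JSON that
# `az vm create` prints, then build the ssh command string.
sample_response = """
{
  "powerState": "VM running",
  "privateIpAddress": "192.168.0.12",
  "publicIpAddress": "20.114.122.77",
  "resourceGroup": "<my-resource-group>"
}
"""

vm = json.loads(sample_response)
print(f"ssh azureuser@{vm['publicIpAddress']}")  # ssh azureuser@20.114.122.77
```

With the Azure CLI itself, the same value can be selected directly via a JMESPath query, for example `--query publicIpAddress -o tsv`.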
+ :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="set_endpoint_name" ::: -### Configure the VM +1. Create an endpoint with `public_network_access` disabled to block inbound traffic. -1. Use the following commands from the SSH session to install the CLI and Docker: + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_endpoint_inbound_blocked" ::: - :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="setup_docker_az_cli"::: + Alternatively, if you'd like to allow the endpoint to receive scoring requests from the internet, uncomment the following code and run it instead. -1. To create the environment variables used by this example, run the following commands. Replace `<YOUR_SUBSCRIPTION_ID>` with your Azure subscription ID. Replace `<YOUR_RESOURCE_GROUP>` with the resource group that contains your workspace. Replace `<SUFFIX_USED_IN_SETUP>` with the suffix you provided earlier. Replace `<LOCATION>` with the location of your Azure workspace. Replace `<YOUR_ENDPOINT_NAME>` with the name to use for the endpoint. + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_endpoint_inbound_allowed" ::: - > [!TIP] - > Use the tabs to select whether you want to perform a deployment using an MLflow model or generic ML model. +1. Create a deployment in the workspace managed virtual network. - # [Generic model](#tab/model) + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_deployment" ::: - :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet.sh" id="set_env_vars"::: +1. Get the status of the deployment. 
- # [MLflow model](#tab/mlflow) + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="get_status" ::: - :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-vnet-mlflow.sh" id="set_env_vars"::: +1. Test the endpoint with a scoring request, using the CLI. - + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="test_endpoint" ::: -1. To sign in to the Azure CLI in the VM environment, use the following command: +1. Get deployment logs. - :::code language="azurecli" source="~/azureml-examples-main/cli/misc.sh" id="az_login"::: + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="get_logs" ::: -1. To configure the defaults for the CLI, use the following commands: +1. Delete the endpoint if you no longer need it. - :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/vmsetup.sh" id="configure_defaults"::: + :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="delete_endpoint" ::: -1. To clone the example files for the deployment, use the following command: +1. Delete all the resources created in this article. Replace `<resource-group-name>` with the name of the resource group used in this example: ```azurecli- sudo mkdir -p /home/samples; sudo git clone -b main --depth 1 https://github.com/Azure/azureml-examples.git /home/samples/azureml-examples + az group delete --resource-group <resource-group-name> ``` -1. 
To build a custom Docker image to use with the deployment, use the following commands: -- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/build_image.sh" id="build_image"::: -- > [!TIP] - > In this example, we build the Docker image before pushing it to Azure Container Registry. Alternatively, you can build the image in your virtual network by using an Azure Machine Learning compute cluster and environments. For more information, see [Secure Azure Machine Learning workspace](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). --### Create a secured managed online endpoint --1. To create a managed online endpoint that is secured using a private endpoint for inbound and outbound communication, use the following commands: -- > [!TIP] - > You can test or debug the Docker image locally by using the `--local` flag when creating the deployment. For more information, see the [Deploy and debug locally](how-to-deploy-online-endpoints.md#deploy-and-debug-locally-by-using-local-endpoints) article. -- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/create_moe.sh" id="create_vnet_deployment"::: ---1. To make a scoring request with the endpoint, use the following commands: -- :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/online/managed/vnet/setup_vm/scripts/score_endpoint.sh" id="check_deployment"::: --### Cleanup --To delete the endpoint, use the following command: ---To delete the VM, use the following command: ---To delete all the resources created in this article, use the following command. 
Replace `<resource-group-name>` with the name of the resource group used in this example: --```azurecli -az group delete --resource-group <resource-group-name> -``` - ## Troubleshooting [!INCLUDE [network isolation issues](includes/machine-learning-online-endpoint-troubleshooting.md)] ## Next steps +- [Network isolation with managed online endpoints](concept-secure-online-endpoint.md) +- [Workspace managed network isolation](how-to-managed-network.md) +- [Tutorial: How to create a secure workspace](tutorial-create-secure-workspace.md) - [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md)-- [How to autoscale managed online endpoints](how-to-autoscale-endpoints.md)-- [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md) |
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md | ms.devlang: azurecli [!INCLUDE [SDK v2](includes/machine-learning-sdk-v2.md)] Azure Machine Learning compute instance and compute cluster can be used to securely train models in an Azure Virtual Network. When planning your environment, you can configure the compute instance/cluster with or without a public IP address. The general differences between the two are: The following table contains the differences between these configurations: You can also use Azure Databricks or HDInsight to train models in a virtual network. -> [!TIP] -> Azure Machine Learning also provides **managed virtual networks** (preview). With a managed virtual network, Azure Machine Learning handles the job of network isolation for your workspace and managed computes. You can also add private endpoints for resources needed by the workspace, such as Azure Storage Account. -> -> At this time, the managed virtual networks preview **doesn't** support no public IP configuration for compute resources. For more information, see [Workspace managed network isolation](how-to-managed-network.md). - > [!IMPORTANT] > Items marked (preview) in this article are currently in public preview. > The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. |
machine-learning | How To Secure Workspace Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md | In this article you learn how to enable the following workspaces resources in a ### Azure storage account -* If you plan to use Azure Machine Learning studio and the storage account is also in the VNet, there are extra validation requirements: +* If you plan to use Azure Machine Learning studio and the storage account is also in the virtual network, there are extra validation requirements: - * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the VNet. - * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same VNet. In this case, they can be in different subnets. + * If the storage account uses a __service endpoint__, the workspace private endpoint and storage service endpoint must be in the same subnet of the virtual network. + * If the storage account uses a __private endpoint__, the workspace private endpoint and storage private endpoint must be in the same virtual network. In this case, they can be in different subnets. ### Azure Container Instances -When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a VNet isn't supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md). +When your Azure Machine Learning workspace is configured with a private endpoint, deploying to Azure Container Instances in a virtual network isn't supported. Instead, consider using a [Managed online endpoint with network isolation](how-to-secure-online-endpoint.md). ### Azure Container Registry Azure Container Registry can be configured to use a private endpoint. Use the fo 1. 
Configure the ACR for the workspace to [Allow access by trusted services](../container-registry/allow-access-trusted-services.md). -1. Create an Azure Machine Learning compute cluster. This cluster is used to build Docker images when ACR is behind a VNet. For more information, see [Create a compute cluster](how-to-create-attach-compute-cluster.md). +1. Create an Azure Machine Learning compute cluster. This cluster is used to build Docker images when ACR is behind a virtual network. For more information, see [Create a compute cluster](how-to-create-attach-compute-cluster.md). 1. Use one of the following methods to configure the workspace to build Docker images using the compute cluster. Azure Container Registry can be configured to use a private endpoint. Use the fo To enable network isolation for Azure Monitor and the Application Insights instance for the workspace, use the following steps: -1. Open your Application Insights resource in the Azure Portal. The __Overview__ tab may or may not have a Workspace property. If it _doesn't_ have the property, perform step 2. If it _does_, then you can proceed directly to step 3. +1. Open your Application Insights resource in the Azure portal. The __Overview__ tab may or may not have a Workspace property. If it _doesn't_ have the property, perform step 2. If it _does_, then you can proceed directly to step 3. > [!TIP] > New workspaces create a workspace-based Application Insights resource by default. If your workspace was recently created, then you would not need to perform step 2. 1. Upgrade the Application Insights instance for your workspace. For steps on how to upgrade, see [Migrate to workspace-based Application Insights resources](/azure/azure-monitor/app/convert-classic-resource). -1. Create an Azure Monitor Private Link Scope and add the Application Insights instance from step 1 to the scope. 
For steps on how to do this, see [Configure your Azure Monitor private link](/azure/azure-monitor/logs/private-link-configure). +1. Create an Azure Monitor Private Link Scope and add the Application Insights instance from step 1 to the scope. For more information, see [Configure your Azure Monitor private link](/azure/azure-monitor/logs/private-link-configure). ## Securely connect to your workspace To enable network isolation for Azure Monitor and the Application Insights insta > [!IMPORTANT] > While this is a supported configuration for Azure Machine Learning, Microsoft doesn't recommend it. You should verify this configuration with your security team before using it in production. -In some cases, you may need to allow access to the workspace from the public network (without connecting through the VNet using the methods detailed the [Securely connect to your workspace](#securely-connect-to-your-workspace) section). Access over the public internet is secured using TLS. +In some cases, you may need to allow access to the workspace from the public network (without connecting through the virtual network using the methods detailed in the [Securely connect to your workspace](#securely-connect-to-your-workspace) section). Access over the public internet is secured using TLS. To enable public network access to the workspace, use the following steps: 1. [Enable public access](how-to-configure-private-link.md#enable-public-access) to the workspace after configuring the workspace's private endpoint.-1. 
[Configure the Azure Storage firewall](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json#grant-access-from-an-internet-ip-range) to allow communication with the IP address of clients that connect over the public internet. You may need to change the allowed IP address if the clients don't have a static IP. For example, if one of your Data Scientists is working from home and can't establish a VPN connection to the virtual network. ## Next steps |
machine-learning | How To Setup Mlops Azureml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md | Before you can set up an MLOps project with Azure Machine Learning, you need to } ``` -1. Repeat **Step 3.** if you're creating service principals for Dev and Prod environments. For this demo, we'll be creating only one environment, which is Prod. +1. Repeat **Step 3** if you're creating service principals for Dev and Prod environments. For this demo, we'll be creating only one environment, which is Prod. 1. Close the Cloud Shell once the service principals are created. Before you can set up an MLOps project with Azure Machine Learning, you need to 5. Select **Azure Resource Manager**, select **Next**, select **Service principal (manual)**, select **Next** and select the Scope Level **Subscription**. - **Subscription Name** - Use the name of the subscription where your service principal is stored.- - **Subscription Id** - Use the `subscriptionId` you used in **Step 1.** input as the Subscription ID - - **Service Principal Id** - Use the `appId` from **Step 1.** output as the Service Principal ID - - **Service principal key** - Use the `password` from **Step 1.** output as the Service Principal Key - - **Tenant ID** - Use the `tenant` from **Step 1.** output as the Tenant ID + - **Subscription Id** - Use the `subscriptionId` you used in **Step 1** input as the Subscription ID + - **Service Principal Id** - Use the `appId` from **Step 1** output as the Service Principal ID + - **Service principal key** - Use the `password` from **Step 1** output as the Service Principal Key + - **Tenant ID** - Use the `tenant` from **Step 1** output as the Tenant ID 6. Name the service connection **Azure-ARM-Prod**. |
machine-learning | How To Submit Spark Jobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md | These prerequisites cover the submission of a Spark job from Azure Machine Learn ### Attach user assigned managed identity using `ARMClient` -1. Install [ARMClient](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API. +1. Install [`ARMClient`](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API. 1. Create a JSON file that defines the user-assigned managed identity that should be attached to the workspace: ```json { These prerequisites cover the submission of a Spark job from Azure Machine Learn > - To ensure successful execution of the Spark job, assign the **Contributor** and **Storage Blob Data Contributor** roles, on the Azure storage account used for data input and output, to the identity that the Spark job uses > - Public Network Access should be enabled in Azure Synapse workspace to ensure successful execution of the Spark job using an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md). > - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool, in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.-> - Serverless Spark compute supports a managed virtual network (preview). If a [managed network is provisioned for the serverless Spark compute, the corresponding private endpoints for the storage account should also be provisioned](./how-to-managed-network.md#configure-for-serverless-spark-jobs) to ensure data access. +> - Serverless Spark compute supports Azure Machine Learning managed virtual network (preview). 
If a [managed network is provisioned for the serverless Spark compute, the corresponding private endpoints for the storage account should also be provisioned](./how-to-managed-network.md#configure-for-serverless-spark-jobs) to ensure data access. ## Submit a standalone Spark job A Python script developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md) can be used to submit a batch job to process a larger volume of data, after making necessary changes for Python script parameterization. A simple data wrangling batch job can be submitted as a standalone Spark job. |
machine-learning | How To Troubleshoot Auto Ml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md | + + Title: Troubleshoot automated ML experiments ++description: Learn how to troubleshoot and resolve issues in your automated machine learning experiments. +++++ Last updated : 10/21/2021+++++# Troubleshoot automated ML experiments +++In this guide, learn how to identify and resolve issues in your automated machine learning experiments. ++## Troubleshoot automated ML for Images and NLP in Studio ++If a run of Automated ML for Images or NLP fails, use the following steps to understand the error. +1. In the studio UI, the AutoML run should have a failure message indicating the reason for failure. +2. For more details, go to the child run of this AutoML run. This child run is a HyperDrive run. +3. In the "Trials" tab, you can check all the trials done for this HyperDrive run. +4. Go to the failed trial runs. +5. These runs should have an error message in the "Status" section of the "Overview" tab indicating the reason for failure. + Select "See more details" for more information about the failure. +6. Check "std_log.txt" in the "Outputs + Logs" tab for detailed logs and exception traces. ++If your Automated ML run uses pipeline runs for trials, use the following steps to understand the error. +1. Follow steps 1-4 above to identify the failed trial run. +2. This run shows the pipeline run, with failed nodes in the pipeline marked in red. +3. Double-click the failed node in the pipeline. +4. This run should have an error message in the "Status" section of the "Overview" tab indicating the reason for failure. + Select "See more details" for more information about the failure. +5. Check "std_log.txt" in the "Outputs + Logs" tab for detailed logs and exception traces. 
++## Next steps +++ [Train computer vision models with automated machine learning](how-to-auto-train-image-models.md).++ [Train natural language processing models with automated machine learning](how-to-auto-train-nlp-models.md). |
machine-learning | How To Use Automl Onnx Model Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-onnx-model-dotnet.md | ONNX is an open-source format for AI models. ONNX supports interoperability betw - [.NET Core SDK 3.1 or greater](https://dotnet.microsoft.com/download) - Text Editor or IDE (such as [Visual Studio](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/Download))-- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb).+- ONNX model. To learn how to train an AutoML ONNX model, see the following [bank marketing classification notebook (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb). - [Netron](https://github.com/lutzroeder/netron) (optional) ## Create a C# console application |
machine-learning | How To Use Batch Model Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-model-deployments.md | A model deployment is a set of resources required for hosting the model that doe | `settings.retry_settings.timeout` | [Optional] The timeout in seconds for a `scoring_script` `run()` for scoring a mini batch. | | `settings.error_threshold` | [Optional] The number of input file scoring failures that should be ignored. If the error count for the entire input goes above this value, the batch scoring job will be terminated. The example uses `-1`, which indicates that any number of failures is allowed without terminating the batch scoring job. | | `settings.logging_level` | [Optional] Log verbosity. Values in increasing verbosity are: WARNING, INFO, and DEBUG. |+ | `settings.environment_variables` | [Optional] Dictionary of environment variable name-value pairs to set for each batch scoring job. | # [Python](#tab/python) A model deployment is a set of resources required for hosting the model that doe | `settings.retry_settingstimeout` | The timeout in seconds for scoring a mini batch (default is 30) | | `settings.output_action` | Indicates how the output should be organized in the output file. Allowed values are `append_row` or `summary_only`. Default is `append_row` | | `settings.logging_level` | The log verbosity level. Allowed values are `warning`, `info`, `debug`. Default is `info`. |- | `environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. | + | `settings.environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. | # [Studio](#tab/studio) |
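The `settings.error_threshold` rule described in the table can be sketched in plain Python. This is an illustrative model of the documented behavior (`-1` tolerates any number of failures), not the service's actual implementation:

```python
def should_terminate(failed_file_count: int, error_threshold: int) -> bool:
    """Return True when a batch scoring job should stop, per the documented rule."""
    if error_threshold == -1:  # -1: any number of failures is allowed
        return False
    return failed_file_count > error_threshold

# With error_threshold=-1, failures never terminate the job.
assert should_terminate(1000, -1) is False
# With error_threshold=5, a sixth failed input file terminates the job.
assert should_terminate(6, 5) is True
```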
machine-learning | How To Use Batch Pipeline Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md | Once the deployment is created, it's ready to receive jobs. You can invoke the d > [!TIP]-> In this example, the pipeline doesn't have inputs or outputs. However, they can be indicated at invocation time if any. To learn more about how to indicate inputs and outputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md). +> In this example, the pipeline doesn't have inputs or outputs. However, if the pipeline component requires some, they can be indicated at invocation time. To learn about how to indicate inputs and outputs, see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md) or see the tutorial [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md). You can monitor the progress of the job and stream the logs using: ml_client.compute.begin_delete(name="batch-cluster") - [How to deploy a training pipeline with batch endpoints (preview)](how-to-use-batch-training-pipeline.md) - [How to deploy a pipeline to perform batch scoring with preprocessing (preview)](how-to-use-batch-scoring-pipeline.md) - [Create batch endpoints from pipeline jobs (preview)](how-to-use-batch-pipeline-from-job.md)-- [Access data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md)+- [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md) - [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md) |
machine-learning | How To Use Foundation Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md | You can quickly test out any pre-trained model using the Sample Inference widget > * When `egress_public_network_access` is set to `disabled`, the deployment can only access the workspace-associated resources secured in the virtual network. > * When `egress_public_network_access` is set to `enabled` for a managed online endpoint deployment, the deployment can only access the resources with public access. Which means that it cannot access resources secured in the virtual network. >-> For more information, see [Outbound resource access for managed online endpoints](how-to-secure-online-endpoint.md#outbound-resource-access). +> For more information, see [Secure outbound access with legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method). ## How to evaluate foundation models using your own test data |
machine-learning | How To Use Serverless Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md | Last updated 05/09/2023 [!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)] -You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a fully-managed, on-demand compute. It is created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up. +You no longer need to [create and manage compute](./how-to-create-attach-compute-cluster.md) to train your model in a scalable way. Your job can instead be submitted to a new compute target type, called _serverless compute_. Serverless compute is the easiest way to run training jobs on Azure Machine Learning. Serverless compute is a fully managed, on-demand compute. It is created, scaled, and managed by Azure Machine Learning for you. Through model training with serverless compute, machine learning professionals can focus on their expertise of building machine learning models and not have to learn about compute infrastructure or setting it up. [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)] -Machine learning professionals can specify the resources the job needs. Azure Machine Learning manages the compute infrastructure, and provides managed network isolation reducing the burden on you. +Machine learning professionals can specify the resources the job needs. 
Azure Machine Learning manages the compute infrastructure, and provides managed network isolation (preview), reducing the burden on you. Enterprises can also reduce costs by specifying optimal resources for each job. IT admins can still apply control by specifying cores quota at the subscription and workspace levels and by applying Azure policies. Serverless compute can be used to run command, sweep, AutoML, pipeline, distribu * When using [Azure Machine Learning designer](concept-designer.md), select **Serverless** as default compute. > [!IMPORTANT]-> If you want to use serverless compute with a workspace that is configured for network isolation, the workspace must be using a managed network isolation (preview). For more information, see [workspace managed network isolation](how-to-managed-network.md). +> If you want to use serverless compute with a workspace that is configured for network isolation, the workspace must be using managed network isolation. For more information, see [workspace managed network isolation](how-to-managed-network.md). ## Performance considerations |
machine-learning | Migrate To V2 Assets Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-assets-model.md | This article gives a comparison of scenario(s) in SDK v1 and SDK v2. ml_client.models.create_or_update(run_model) ``` +For more information about models, see [Work with models in Azure Machine Learning](how-to-manage-models.md). + ## Mapping of key functionality in SDK v1 and SDK v2 |Functionality in SDK v1|Rough mapping in SDK v2| |
machine-learning | Migrate To V2 Execution Automl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-automl.md | This article gives a comparison of scenario(s) in SDK v1 and SDK v2. ## Submit AutoML run -* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb). +* SDK v1: Below is a sample AutoML classification task. For the entire code, check out our [examples repo](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb). ```python # Imports |
machine-learning | Migrate To V2 Execution Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-execution-pipeline.md | For more information, see the documentation here: * [steps in SDK v1](/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py&preserve-view=true) * [Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md)-* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb) +* [Build a simple ML pipeline for image classification (SDK v1)](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/using-pipelines/image-classification.ipynb) * [OutputDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py&preserve-view=true) * [`mldesigner`](https://pypi.org/project/mldesigner/) |
machine-learning | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md | Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
machine-learning | How To Create Manage Runtime | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-create-manage-runtime.md | If your compute instance is behind a VNet, you need to make the following change - Make sure the managed identity of workspace have `Storage Blob Data Contributor`, `Storage Table Data Contributor` roles on the workspace default storage account. > [!NOTE] -> This only works if your AOAI and other cognitive services allow access from all networks. +> This only works if your AOAI and other Azure AI services allow access from all networks. ### Managed endpoint runtime related |
machine-learning | How To Develop A Standard Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-develop-a-standard-flow.md | We also support the input type of int, bool, double, list and object. :::image type="content" source="./media/how-to-develop-a-standard-flow/flow-input-datatype.png" alt-text="Screenshot of inputs showing the type drop-down menu with string selected. " lightbox = "./media/how-to-develop-a-standard-flow/flow-input-datatype.png"::: -You should first set the input schema (name: url; type: string), then set a value manually or by: +## Develop the flow using different tools -1. Inputting data manually in the value field. -2. Selecting a row of existing dataset in **fill value from data**. ---The dataset selection supports search and autosuggestion. ---After selecting a row, the url is backfilled to the value field. --If the existing datasets don't meet your needs, upload new data from files. We support **.csv** and **.txt** for now. ---## Develop tool in your flow --In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety and Vector Search. +In one flow, you can consume different kinds of tools. We now support LLM, Python, Serp API, Content Safety, Vector Search, and more. ### Add tool as your need First define flow output schema, then select in drop-down the node whose output ## Next steps -- [Develop a customized evaluation flow](how-to-develop-an-evaluation-flow.md)+- [Bulk test using more data and evaluate the flow performance](how-to-bulk-test-evaluate-flow.md) - [Tune prompts using variants](how-to-tune-prompts-using-variants.md) - [Deploy a flow](how-to-deploy-for-real-time-inference.md) |
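The flow input types this entry lists (string, int, bool, double, list, object) map naturally onto Python types. A hypothetical validation helper, for illustration only — prompt flow performs its own type handling:

```python
# Mapping of flow input types to Python types (illustrative assumption).
FLOW_INPUT_TYPES = {
    "string": str,
    "int": int,
    "bool": bool,
    "double": float,
    "list": list,
    "object": dict,
}

def check_flow_input(declared_type: str, value) -> bool:
    """Validate a flow input value against its declared type name."""
    py_type = FLOW_INPUT_TYPES.get(declared_type)
    if py_type is None:
        raise ValueError(f"Unsupported flow input type: {declared_type}")
    return isinstance(value, py_type)
```

For example, a flow input declared as `url` with type `string` passes `check_flow_input("string", "https://example.com")`, while an integer value would not.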
machine-learning | How To Integrate With Langchain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-integrate-with-langchain.md | Prompt Flow can also be used together with the [LangChain](https://python.langch > Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +We introduce the following sections: +* [Benefits of LangChain integration](#benefits-of-langchain-integration) +* [How to convert LangChain code into flow](#how-to-convert-langchain-code-into-flow) + * [Prerequisites for environment and runtime](#prerequisites-for-environment-and-runtime) + * [Convert credentials to prompt flow connection](#convert-credentials-to-prompt-flow-connection) + * [LangChain code conversion to a runnable flow](#langchain-code-conversion-to-a-runnable-flow) + ## Benefits of LangChain integration We consider the integration of LangChain and Prompt flow as a powerful combination that can help you to build and test your custom language models with ease, especially in the case where you may want to use LangChain modules to initially build your flow and then use our Prompt Flow to easily scale the experiments for bulk testing, evaluating then eventually deploying. Then you can create a [Prompt flow runtime](./how-to-create-manage-runtime.md) b :::image type="content" source="./media/how-to-integrate-with-langchain/runtime-custom-env.png" alt-text="Screenshot of flows on the runtime tab with the add compute instance runtime popup. 
" lightbox = "./media/how-to-integrate-with-langchain/runtime-custom-env.png"::: -### Convert credentials to custom connection +### Convert credentials to prompt flow connection ++When developing your LangChain code, you might have [defined environment variables to store your credentials, such as the AzureOpenAI API KEY](https://python.langchain.com/docs/integrations/llms/azure_openai_example), which is necessary for invoking the AzureOpenAI model. -Custom connection helps you to securely store and manage secret keys or other sensitive credentials required for interacting with LLM, rather than exposing them in environment variables hard code in your code and running on the cloud, protecting them from potential security breaches. -#### Create a custom connection +Instead of directly coding the credentials in your code and exposing them as environment variables when running LangChain code in the cloud, it is recommended to convert the credentials from environment variables into a connection in prompt flow. This allows you to securely store and manage the credentials separately from your code. -Create a custom connection that stores all your LLM API KEY or other required credentials. +#### Create a connection ++Create a connection that securely stores your credentials, such as your LLM API KEY or other required credentials. 1. Go to Prompt flow in your workspace, then go to **connections** tab.-2. Select **Create** and select **Custom**. +2. Select **Create** and select a connection type to store your credentials. (Take custom connection as an example) :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-1.png" alt-text="Screenshot of flows on the connections tab highlighting the custom button in the create drop-down menu. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-1.png":::-1. 
In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**. +3. In the right panel, you can define your connection name, and you can add multiple *Key-value pairs* to store your credentials and keys by selecting **Add key-value pairs**. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-2.png" alt-text="Screenshot of add custom connection point to the add key-value pairs button. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-2.png"::: > [!NOTE] Create a custom connection that stores all your LLM API KEY or other required cr Then this custom connection is used to replace the key and credential you explicitly defined in LangChain code. If you already have a LangChain integration Prompt flow, you can jump to [Configure connection, input and output](#configure-connection-input-and-output). + ### LangChain code conversion to a runnable flow All LangChain code can directly run in the Python tools in your flow as long as your runtime environment contains the dependency packages, you can easily convert your LangChain code into a flow by following the steps below. -#### Create a flow with Prompt tools and Python tools +#### Convert LangChain code to flow structure > [!NOTE] > There are two ways to convert your LangChain code into a flow. All LangChain code can directly run in the Python tools in your flow as long as - To simplify the conversion process, you can directly initialize the LLM model for invocation in a Python node by utilizing the LangChain integrated LLM library. - Another approach is converting your LLM consuming from LangChain code to our LLM tools in the flow, for better further experimental management. 
- For quick conversion of LangChain code into a flow, we recommend two types of flow structures, based on the use case: || Types | Desc | Case | |-| -- | -- | -- |-|**Type A**| A flow that includes both **prompt tools** and **python tools**| You can extract your prompt template from your code into a prompt node, then combine the remaining code in a single Python node or multiple Python tools. | This structure is ideal for who want to easily **tune the prompt** by running flow variants and then choose the optimal one based on evaluation results.| -|**Type B**| A flow that includes **python tools** only| You can create a new flow with python tools only, all code including prompt definition will run in python tools.| This structure is suitable for who don't need to explicit tune the prompt in workspace, but require faster batch testing based on larger scale datasets. | +|**Type A**| A flow that includes both **prompt nodes** and **python nodes**| You can extract your prompt template from your code into a prompt node, then combine the remaining code in a single Python node or multiple Python tools. | This structure is ideal for those who want to easily **tune the prompt** by running flow variants and then choose the optimal one based on evaluation results.| +|**Type B**| A flow that includes **python nodes** only| You can create a new flow with python nodes only, all code including prompt definition will run in python nodes.| This structure is suitable for those who don't need to explicitly tune the prompt in the workspace, but require faster batch testing based on larger scale datasets. | For example the type A flow from the chart is like: To create a flow in Azure Machine Learning, you can go to your workspace, then s #### Configure connection, input and output -After you have a properly structured flow and are done moving the code to specific tools, you need to configure the input, output, and connection settings in your flow and code to replace your original definitions. 
+After you have a properly structured flow and are done moving the code to specific tool nodes, you need to replace the original environment variables with the corresponding key in the connection, and configure the input and output of the flow. ++**Configure connection** ++To utilize a connection that replaces the environment variables you originally defined in LangChain code, you need to import promptflow connection library `promptflow.connections` in the python node. -To utilize a [custom connection](#create-a-custom-connection) that stores all the required keys and credentials, follow these steps: +For example: -1. In the python tools, import custom connection library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function. +If you have a LangChain code that consumes the AzureOpenAI model, you can replace the environment variables with the corresponding key in the Azure OpenAI connection: ++Import library `from promptflow.connections import AzureOpenAIConnection` ++++For custom connection, you need to follow the steps: ++1. Import library `from promptflow.connections import CustomConnection`, and define an input parameter of type `CustomConnection` in the tool function. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png" alt-text="Screenshot of doc search chain node highlighting the custom connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-1.png"::: 1. Parse the input to the input section, then select your target custom connection in the value dropdown. :::image type="content" source="./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png" alt-text="Screenshot of the chain node highlighting the connection. " lightbox = "./media/how-to-integrate-with-langchain/custom-connection-python-node-2.png"::: 1. 
Replace the environment variables that originally defined the key and credential with the corresponding key added in the connection. 1. Save and return to authoring page, and configure the connection parameter in the node input. +**Configure input and output** + Before running the flow, configure the **node input and output**, as well as the overall **flow input and output**. This step is crucial to ensure that all the required data is properly passed through the flow and that the desired results are obtained. ## Next steps |
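The swap this entry describes — environment variable out, connection key in — follows the pattern below. `CustomConnection` here is a minimal stand-in class defined purely for illustration; in prompt flow you would instead import the real class with `from promptflow.connections import CustomConnection`:

```python
class CustomConnection:
    """Stand-in for promptflow.connections.CustomConnection (illustration only)."""

    def __init__(self, secrets: dict):
        self._secrets = secrets

    def __getattr__(self, name: str):
        # Expose each stored key-value pair as an attribute, e.g. conn.openai_api_key.
        try:
            return self._secrets[name]
        except KeyError:
            raise AttributeError(name) from None


def build_llm_client(connection: CustomConnection) -> dict:
    # Before: api_key = os.environ["OPENAI_API_KEY"]   (hard-coded env var)
    # After:  read the key from the connection configured in the workspace.
    return {"api_key": connection.openai_api_key}


conn = CustomConnection({"openai_api_key": "sk-example"})
client = build_llm_client(conn)
```

The node function then takes the connection as an input parameter, and the workspace injects the selected connection at run time instead of relying on environment variables.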
machine-learning | Quickstart Create Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md | Title: "Create workspace resources" + Title: "Tutorial: Create workspace resources" description: Create an Azure Machine Learning workspace and cloud resources that can be used to train machine learning models. +content_well_notification: + - AI-contribution #Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning. -# Create resources you need to get started +# Tutorial: Create resources you need to get started -In this article, you'll create the resources you need to start working with Azure Machine Learning. +In this tutorial, you will create the resources you need to start working with Azure Machine Learning. -* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create. -* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials. +> [!div class="checklist"] +>* A *workspace*. To use Azure Machine Learning, you'll first need a workspace. The workspace is the central place to view and manage all the artifacts and resources you create. +>* A *compute instance*. A compute instance is a pre-configured cloud-computing resource that you can use to train, automate, manage, and track machine learning models. A compute instance is the quickest way to start using the Azure Machine Learning SDKs and CLIs. You'll use it to run Jupyter notebooks and Python scripts in the rest of the tutorials. 
++This video shows you how to create a workspace and compute instance. The steps are also described in the sections below. +> [!VIDEO https://learn-video.azurefd.net/vod/player?id=a0e901d2-e82a-4e96-9c7f-3b5467859969] ## Prerequisites |
machine-learning | Reference Automl Images Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md | In instance segmentation, output consists of multiple boxes with their scaled to > These settings are currently in public preview. They are provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > [!WARNING]-> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations. +> **Explainability** is supported only for **multi-class classification** and **multi-label classification**. While generating explanations on online endpoint, if you encounter timeout issues, use [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring) to generate explanations. In this section, we document the input data format required to make predictions and generate explanations for the predicted class/classes using a deployed model. There's no separate deployment needed for explainability. The same endpoint for online scoring can be utilized to generate explanations. We just need to pass some extra explainability related parameters in input schema and get either visualizations of explanations and/or attribution score matrices (pixel level explanations). 
If `model_explainability`, `visualizations`, `attributions` are set to `True` in > [!WARNING]-> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring). +> While generating explanations on online endpoint, make sure to select only few classes based on confidence score in order to avoid timeout issues on the endpoint or use the endpoint with GPU instance type. To generate explanations for large number of classes in multi-label classification, refer to [batch scoring notebook (SDK v1)](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass-batch-scoring). ```json [ |
machine-learning | Reference Machine Learning Cloud Parity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md | The information in the rest of this document provides information on what featur | **SDK support** | | | | | [Python SDK support](/python/api/overview/azure/ml/) | GA | YES | YES | | **[Security](concept-enterprise-security.md)** | | | | +| Managed virtual network support | Preview | Preview | Preview | | Virtual Network (VNet) support for training | GA | YES | YES | | Virtual Network (VNet) support for inference | GA | YES | YES | | Scoring endpoint authentication | Public Preview | YES | YES | The information in the rest of this document provides information on what featur | **Compute instance** | | | | | Managed compute Instances for integrated Notebooks | GA | YES | N/A | | Jupyter, JupyterLab Integration | GA | YES | N/A |+| Managed virtual network support | Preview | Preview | N/A | | Virtual Network (VNet) support | GA | YES | N/A | | **SDK support** | | | | | Python SDK support | GA | YES | N/A | |
machine-learning | Reference Yaml Deployment Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-batch.md | When `type: model`, the following syntax is enforced: | `settings.retry_settings.timeout` | integer | The timeout in seconds for scoring a single mini batch. Use larger values when the mini-batch size is bigger or the model is more expensive to run. | | `30` | | `settings.output_action` | string | Indicates how the output should be organized in the output file. Use `summary_only` if you are generating the output files as indicated at [Customize outputs in model deployments](how-to-deploy-model-custom-output.md). Use `append_row` if you are returning predictions as part of the `run()` function `return` statement. | `append_row`, `summary_only` | `append_row` | | `settings.output_file_name` | string | Name of the batch scoring output file. | | `predictions.csv` |-| `environment_variables` | object | Dictionary of environment variable key-value pairs to set for each batch scoring job. | | | +| `settings.environment_variables` | object | Dictionary of environment variable key-value pairs to set for each batch scoring job. | | | ### YAML syntax for pipeline component deployments |
machine-learning | Reference Yaml Deployment Managed Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | `request_settings` | object | Scoring request settings for the deployment. See [RequestSettings](#requestsettings) for the set of configurable properties. | | | | `liveness_probe` | object | Liveness probe settings for monitoring the health of the container regularly. See [ProbeSettings](#probesettings) for the set of configurable properties. | | | | `readiness_probe` | object | Readiness probe settings for validating if the container is ready to serve traffic. See [ProbeSettings](#probesettings) for the set of configurable properties. | | |-| `egress_public_network_access` | string | This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. This flag is applicable only for managed online endpoints. | `enabled`, `disabled` | `enabled` | +| `egress_public_network_access` | string |**Note:** This key is applicable when you use the [legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method) to secure outbound communication for a deployment. We strongly recommend that you secure outbound communication for deployments using [a workspace managed VNet](concept-secure-online-endpoint.md) (preview) instead. <br><br>This flag secures the deployment by restricting communication between the deployment and the Azure resources used by it. Set to `disabled` to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. This flag is applicable only for managed online endpoints. 
| `enabled`, `disabled` | `enabled` | ### RequestSettings |
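For context on the flag itself, a minimal sketch of where `egress_public_network_access` sits in a managed online deployment YAML; the endpoint, model, and instance values are placeholders, and per the note above the flag only applies under the legacy network isolation method:

```yaml
# Hypothetical managed online deployment spec; egress_public_network_access
# applies only with the legacy network isolation method.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model@latest
instance_type: Standard_DS3_v2
instance_count: 1
egress_public_network_access: disabled   # 'enabled' (default) or 'disabled'
```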
machine-learning | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
machine-learning | Tutorial Automated Ml Forecast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md | Also try automated machine learning for these other model types: * An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md). -* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file +* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file ## Sign in to the studio Before you configure your experiment, upload your data file to your workspace in 1. Select **Upload files** from the **Upload** drop-down. - 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv). + 1. Choose the **bike-no.csv** file on your local computer. This is the file you downloaded as a [prerequisite](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv). 1. Select **Next** |
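Before uploading, it can be worth sanity-checking the downloaded CSV locally. A minimal sketch using Python's standard `csv` module — the inline sample stands in for `bike-no.csv`, whose real columns may differ:

```python
import csv
import io

# Hypothetical stand-in for the downloaded bike-no.csv; in practice you would
# open the real file with open("bike-no.csv", newline="").
sample = io.StringIO("date,cnt\n2011-01-01,985\n2011-01-02,801\n")

reader = csv.reader(sample)
header = next(reader)        # first row holds the column names
rows = list(reader)
print(f"columns: {header}, data rows: {len(rows)}")
```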
machine-learning | Tutorial Create Secure Workspace Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-vnet.md | + + Title: Create a secure workspace with Azure Virtual Network ++description: Create an Azure Machine Learning workspace and required Azure services inside an Azure Virtual Network. ++++++ Last updated : 08/22/2023+++monikerRange: 'azureml-api-2 || azureml-api-1' ++# Tutorial: How to create a secure workspace with an Azure Virtual Network ++In this article, learn how to create and connect to a secure Azure Machine Learning workspace. The steps in this article use an Azure Virtual Network to create a security boundary around resources used by Azure Machine Learning. ++> [!IMPORTANT] +> We recommend using the Azure Machine Learning managed virtual network instead of an Azure Virtual Network. For a version of this tutorial that uses a managed virtual network, see [Tutorial: Create a secure workspace with a managed virtual network](tutorial-create-secure-workspace.md). ++In this tutorial, you accomplish the following tasks: ++> [!div class="checklist"] +> * Create an Azure Virtual Network (VNet) to __secure communications between services in the virtual network__. +> * Create an Azure Storage Account (blob and file) behind the VNet. This service is used as __default storage for the workspace__. +> * Create an Azure Key Vault behind the VNet. This service is used to __store secrets used by the workspace__. For example, the security information needed to access the storage account. +> * Create an Azure Container Registry (ACR). This service is used as a repository for Docker images. __Docker images provide the compute environments needed when training a machine learning model or deploying a trained model as an endpoint__. +> * Create an Azure Machine Learning workspace. +> * Create a jump box. A jump box is an Azure Virtual Machine that is behind the VNet. 
Since the VNet restricts access from the public internet, __the jump box is used as a way to connect to resources behind the VNet__. +> * Configure Azure Machine Learning studio to work behind a VNet. The studio provides a __web interface for Azure Machine Learning__. +> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__. In configurations where Azure Container Registry is behind the VNet, it is also used to build Docker images. +> * Connect to the jump box and use the Azure Machine Learning studio. ++> [!TIP] +> If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md). ++After completing this tutorial, you'll have the following architecture: ++* An Azure Virtual Network, which contains three subnets: + * __Training__: Contains the Azure Machine Learning workspace, dependency services, and resources used for training models. + * __Scoring__: For the steps in this tutorial, it isn't used. However, if you continue using this workspace for other tutorials, we recommend using this subnet when deploying models to [endpoints](concept-endpoints.md). + * __AzureBastionSubnet__: Used by the Azure Bastion service to securely connect clients to Azure Virtual Machines. +* An Azure Machine Learning workspace that uses a private endpoint to communicate using the VNet. +* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the VNet. +* An Azure Container Registry that uses a private endpoint to communicate using the VNet. +* Azure Bastion, which allows you to use your browser to securely communicate with the jump box VM inside the VNet. +* An Azure Virtual Machine that you can remotely connect to and access resources secured inside the VNet.
+* An Azure Machine Learning compute instance and compute cluster. ++> [!TIP] +> The Azure Batch Service listed on the diagram is a back-end service required by the compute clusters and compute instances. +++## Prerequisites ++* Familiarity with Azure Virtual Networks and IP networking. If you aren't familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module. +* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2. ++## Create a virtual network ++To create a virtual network, use the following steps: ++1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Network__ in the search field. Select the __Virtual Network__ entry, and then select __Create__. +++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-resource-search-vnet.png" alt-text="Screenshot of the create resource search form with virtual network selected."::: ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-resource-vnet.png" alt-text="Screenshot of the virtual network create form."::: ++1. From the __Basics__ tab, select the Azure __subscription__ to use for this resource and then select or create a new __resource group__. Under __Instance details__, enter a friendly __name__ for your virtual network and select the __region__ to create it in. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-basics.png" alt-text="Screenshot of the basic virtual network configuration form."::: ++1. Select __Security__, and then select __Enable Azure Bastion__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step.
Use the following values for the remaining fields: ++ * __Bastion name__: A unique name for this Bastion instance + * __Public IP address__: Create a new public IP address. ++ Leave the other fields at the default values. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-bastion.png" alt-text="Screenshot of Bastion config."::: ++1. Select __IP Addresses__. The default settings should be similar to the following image: ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-ip-address-default.png" alt-text="Screenshot of the default IP Address form."::: ++ Use the following steps to configure the IP address and a subnet for training and scoring resources: ++ > [!TIP] + > While you can use a single subnet for all Azure Machine Learning resources, the steps in this article show how to create two subnets to separate the training & scoring resources. + > + > The workspace and other dependency services will go into the training subnet. They can still be used by resources in other subnets, such as the scoring subnet. ++ 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.16.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__. + + > [!IMPORTANT] + > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, the 172.16.0.0/16 range would conflict if you plan to connect your on-premises network to the VNet and that network also uses 172.16.0.0/16. Ultimately, it is up to __you__ to plan your network infrastructure. ++ 1. Select the __Default__ subnet and then select __Remove subnet__.
+ + :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet."::: ++ 1. To create a subnet to contain the workspace, dependency services, and resources used for _training_, select __+ Add subnet__ and set the subnet name, starting address, and subnet size. The following are the values used in this tutorial: + * __Name__: Training + * __Starting address__: 172.16.0.0 + * __Subnet size__: /24 (256 addresses) ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet."::: ++ 1. To create a subnet for compute resources used to _score_ your models, select __+ Add subnet__ again, and set the name and address range: + * __Subnet name__: Scoring + * __Starting address__: 172.16.1.0 + * __Subnet size__: /24 (256 addresses) ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet."::: ++ 1. To create a subnet for _Azure Bastion_, select __+ Add subnet__ and set the template, starting address, and subnet size: + * __Subnet template__: Azure Bastion + * __Starting address__: 172.16.2.0 + * __Subnet size__: /26 (64 addresses) ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/vnet-add-azure-bastion-subnet.png" alt-text="Screenshot of Azure Bastion subnet."::: ++1. Select __Review + create__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-ip-address-final.png" alt-text="Screenshot of the review + create button."::: ++1. Verify that the information is correct, and then select __Create__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-vnet-review.png" alt-text="Screenshot of the virtual network review + create page."::: ++## Create a storage account ++1. 
In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Storage account__. Select the __Storage Account__ entry, and then select __Create__. +1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Storage account name__, and set __Redundancy__ to __Locally-redundant storage (LRS)__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-storage.png" alt-text="Screenshot of storage account basic config."::: ++1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add private endpoint__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-enable-private-endpoint.png" alt-text="Screenshot of the form to add the blob private network."::: ++1. On the __Create private endpoint__ form, use the following values: + * __Subscription__: The same Azure subscription that contains the previous resources you've created. + * __Resource group__: The same Azure resource group that contains the previous resources you've created. + * __Location__: The same Azure region that contains the previous resources you've created. + * __Name__: A unique name for this private endpoint. + * __Target sub-resource__: blob + * __Virtual network__: The virtual network you created earlier. + * __Subnet__: Training (172.16.0.0/24) + * __Private DNS integration__: Yes + * __Private DNS Zone__: privatelink.blob.core.windows.net ++ Select __OK__ to create the private endpoint. ++1. Select __Review + create__. Verify that the information is correct, and then select __Create__. ++1. 
Once the Storage Account has been created, select __Go to resource__: ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-go-to-resource.png" alt-text="Screenshot of the go to new storage resource button."::: ++1. From the left navigation, select __Networking__, select the __Private endpoint connections__ tab, and then select __+ Private endpoint__: ++ > [!NOTE] + > While you created a private endpoint for Blob storage in the previous steps, you must also create one for File storage. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-networking.png" alt-text="Screenshot of the storage account networking form."::: ++1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you've used for previous resources. Enter a unique __Name__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-private-endpoint.png" alt-text="Screenshot of the basics form when adding the file private endpoint."::: ++1. Select __Next : Resource__, and then set __Target sub-resource__ to __file__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-private-endpoint-resource.png" alt-text="Screenshot of the resource form when selecting a sub-resource of 'file'."::: ++1. Select __Next : Configuration__, and then use the following values: + * __Virtual network__: The network you created previously + * __Subnet__: Training + * __Integrate with private DNS zone__: Yes + * __Private DNS zone__: privatelink.file.core.windows.net ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-file-private-endpoint-config.png" alt-text="Screenshot of the configuration form when adding the file private endpoint."::: ++1. Select __Review + Create__. Verify that the information is correct, and then select __Create__.
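The private endpoints in the steps above all land in the Training subnet from the address plan created earlier (172.16.0.0/16 split into Training, Scoring, and AzureBastionSubnet). That plan can be sanity-checked locally before you commit to it; a short sketch using Python's standard `ipaddress` module:

```python
from ipaddress import ip_network

# VNet address space and the three subnets used in this tutorial.
vnet = ip_network("172.16.0.0/16")
subnets = {
    "Training": ip_network("172.16.0.0/24"),
    "Scoring": ip_network("172.16.1.0/24"),
    "AzureBastionSubnet": ip_network("172.16.2.0/26"),
}

for name, subnet in subnets.items():
    # Every subnet must fall inside the VNet's address space.
    assert subnet.subnet_of(vnet), f"{name} is outside the VNet"
    print(f"{name}: {subnet} ({subnet.num_addresses} addresses)")

# Subnets must not overlap with each other.
for a, b in [(a, b) for a in subnets for b in subnets if a < b]:
    assert not subnets[a].overlaps(subnets[b]), f"{a} overlaps {b}"
```

The sizes match the tutorial's values: a /24 carries 256 addresses and a /26 carries 64.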
++> [!TIP] +> If you plan to use a [batch endpoint](concept-endpoints.md) or an Azure Machine Learning pipeline that uses a [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md), it is also required to configure private endpoints that target the **queue** and **table** sub-resources. ParallelRunStep uses queue and table under the hood for task scheduling and dispatching. ++## Create a key vault ++1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Key Vault__. Select the __Key Vault__ entry, and then select __Create__. +1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Key vault name__. Leave the other fields at the default value. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-key-vault.png" alt-text="Screenshot of the basics form when creating a new key vault."::: ++1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/key-vault-networking.png" alt-text="Screenshot of the networking form when adding a private endpoint for the key vault."::: ++1. On the __Create private endpoint__ form, use the following values: + * __Subscription__: The same Azure subscription that contains the previous resources you've created. + * __Resource group__: The same Azure resource group that contains the previous resources you've created. + * __Location__: The same Azure region that contains the previous resources you've created. + * __Name__: A unique name for this private endpoint. + * __Target sub-resource__: Vault + * __Virtual network__: The virtual network you created earlier.
+ * __Subnet__: Training (172.16.0.0/24) + * __Private DNS integration__: Yes + * __Private DNS Zone__: privatelink.vaultcore.azure.net ++ Select __OK__ to create the private endpoint. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/key-vault-private-endpoint.png" alt-text="Screenshot of the key vault private endpoint configuration form."::: ++1. Select __Review + create__. Verify that the information is correct, and then select __Create__. ++## Create a container registry ++1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Container Registry__. Select the __Container Registry__ entry, and then select __Create__. +1. From the __Basics__ tab, select the __subscription__, __resource group__, and __location__ you previously used for the virtual network. Enter a unique __Registry name__ and set the __SKU__ to __Premium__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-container-registry.png" alt-text="Screenshot of the basics form when creating a container registry."::: ++1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-networking.png" alt-text="Screenshot of the networking form when adding a container registry private endpoint."::: ++1. On the __Create private endpoint__ form, use the following values: + * __Subscription__: The same Azure subscription that contains the previous resources you've created. + * __Resource group__: The same Azure resource group that contains the previous resources you've created. + * __Location__: The same Azure region that contains the previous resources you've created. + * __Name__: A unique name for this private endpoint. 
+ * __Target sub-resource__: registry + * __Virtual network__: The virtual network you created earlier. + * __Subnet__: Training (172.16.0.0/24) + * __Private DNS integration__: Yes + * __Private DNS Zone__: privatelink.azurecr.io ++ Select __OK__ to create the private endpoint. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-private-endpoint.png" alt-text="Screenshot of the configuration form for the container registry private endpoint."::: ++1. Select __Review + create__. Verify that the information is correct, and then select __Create__. +1. After the container registry has been created, select __Go to resource__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-go-to-resource.png" alt-text="Screenshot of the 'go to resource' button."::: ++1. From the left of the page, select __Access keys__, and then enable __Admin user__. This setting is required when using Azure Container Registry inside a virtual network with Azure Machine Learning. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/container-registry-admin-user.png" alt-text="Screenshot of the container registry access keys form, with the 'admin user' option enabled."::: ++## Create a workspace ++1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Machine Learning__. Select the __Machine Learning__ entry, and then select __Create__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/machine-learning-create.png" alt-text="Screenshot of the create page for Azure Machine Learning."::: ++1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the following values for the other fields: + * __Workspace name__: A unique name for your workspace. 
+ * __Storage account__: Select the storage account you created previously. + * __Key vault__: Select the key vault you created previously. + * __Application insights__: Use the default value. + * __Container registry__: Use the container registry you created previously. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-machine-learning-workspace.png" alt-text="Screenshot of the basic workspace configuration form."::: ++1. From the __Networking__ tab, select __Private with Internet Outbound__. In the __Workspace inbound access__ section, select __+ add__. ++1. On the __Create private endpoint__ form, use the following values: + * __Subscription__: The same Azure subscription that contains the previous resources you've created. + * __Resource group__: The same Azure resource group that contains the previous resources you've created. + * __Location__: The same Azure region that contains the previous resources you've created. + * __Name__: A unique name for this private endpoint. + * __Target sub-resource__: amlworkspace + * __Virtual network__: The virtual network you created earlier. + * __Subnet__: Training (172.16.0.0/24) + * __Private DNS integration__: Yes + * __Private DNS Zone__: Leave the two private DNS zones at the default values of __privatelink.api.azureml.ms__ and __privatelink.notebooks.azure.net__. ++ Select __OK__ to create the private endpoint. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/machine-learning-workspace-private-endpoint.png" alt-text="Screenshot of the workspace private network configuration form."::: ++1. From the __Networking__ tab, in the __Workspace outbound access__ section, select __Use my own virtual network__. +1. Select __Review + create__. Verify that the information is correct, and then select __Create__. +1. Once the workspace has been created, select __Go to resource__. +1. 
From the __Settings__ section on the left, select __Private endpoint connections__ and then select the link in the __Private endpoint__ column: ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/workspace-private-endpoint-connections.png" alt-text="Screenshot of the private endpoint connections for the workspace."::: ++1. Once the private endpoint information appears, select __DNS configuration__ from the left of the page. Save the IP address and fully qualified domain name (FQDN) information on this page, as it will be used later. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/workspace-private-endpoint-dns.png" alt-text="screenshot of the IP and FQDN entries for the workspace."::: ++> [!IMPORTANT] +> There are still some configuration steps needed before you can fully use the workspace. However, these require you to connect to the workspace. ++## Enable studio ++Azure Machine Learning studio is a web-based application that lets you easily manage your workspace. However, it needs some extra configuration before it can be used with resources secured inside a VNet. Use the following steps to enable studio: ++1. When using an Azure Storage Account that has a private endpoint, add the service principal for the workspace as a __Reader__ for the storage private endpoint(s). From the Azure portal, select your storage account and then select __Networking__. Next, select __Private endpoint connections__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-private-endpoint-select.png" alt-text="Screenshot of storage private endpoint connections."::: ++1. For __each private endpoint listed__, use the following steps: ++ 1. Select the link in the __Private endpoint__ column. + + :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/storage-private-endpoint-selected.png" alt-text="Screenshot of the endpoint links in the private endpoint column."::: ++ 1. 
Select __Access control (IAM)__ from the left side. + 1. Select __+ Add__, and then __Add role assignment (Preview)__. ++ ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png) ++ 1. On the __Role__ tab, select the __Reader__ role. ++ ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png) ++ 1. On the __Members__ tab, select __User, group, or service principal__ in the __Assign access to__ area and then select __+ Select members__. In the __Select members__ dialog, enter the name of your Azure Machine Learning workspace. Select the service principal for the workspace, and then use the __Select__ button. ++ 1. On the **Review + assign** tab, select **Review + assign** to assign the role. ++## Secure Azure Monitor and Application Insights ++> [!NOTE] +> For more information on securing Azure Monitor and Application Insights, see the following links: +> * [Migrate to workspace-based Application Insights resources](../azure-monitor/app/convert-classic-resource.md). +> * [Configure your Azure Monitor private link](../azure-monitor/logs/private-link-configure.md). ++1. In the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace. From __Overview__, select the __Application Insights__ link. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/workspace-application-insight.png" alt-text="Screenshot of the Application Insights link."::: ++1. In the __Properties__ for Application Insights, check the __WORKSPACE__ entry to see if it contains a value. If it _doesn't_, select __Migrate to Workspace-based__, select the __Subscription__ and __Log Analytics Workspace__ to use, then select __Apply__.
++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/migrate-workspace-based.png" alt-text="Screenshot of the link to migrate to workspace-based."::: ++1. In the Azure portal, select __Home__, and then search for __Private link__. Select the __Azure Monitor Private Link Scope__ result and then select __Create__. +1. From the __Basics__ tab, select the same __Subscription__, __Resource Group__, and __Resource group region__ as your Azure Machine Learning workspace. Enter a __Name__ for the instance, and then select __Review + Create__. To create the instance, select __Create__. +1. Once the Azure Monitor Private Link Scope instance has been created, select the instance in the Azure portal. From the __Configure__ section, select __Azure Monitor Resources__ and then select __+ Add__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/add-monitor-resources.png" alt-text="Screenshot of the add button."::: ++1. From __Select a scope__, use the filters to select the Application Insights instance for your Azure Machine Learning workspace. Select __Apply__ to add the instance. +1. From the __Configure__ section, select __Private Endpoint connections__ and then select __+ Private Endpoint__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/private-endpoint-connections.png" alt-text="Screenshot of the add private endpoint button."::: ++1. Select the same __Subscription__, __Resource Group__, and __Region__ that contains your VNet. Select __Next: Resource__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/monitor-private-endpoint-basics.png" alt-text="Screenshot of the Azure Monitor private endpoint basics."::: ++1. Select `Microsoft.insights/privateLinkScopes` as the __Resource type__. Select the Private Link Scope you created earlier as the __Resource__. Select `azuremonitor` as the __Target sub-resource__. 
Finally, select __Next: Virtual Network__ to continue. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/monitor-private-endpoint-resource.png" alt-text="Screenshot of the Azure Monitor private endpoint resources."::: ++1. Select the __Virtual network__ you created earlier, and the __Training__ subnet. Select __Next__ until you arrive at __Review + Create__. Select __Create__ to create the private endpoint. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/monitor-private-endpoint-network.png" alt-text="Screenshot of the Azure Monitor private endpoint network."::: ++1. After the private endpoint has been created, return to the __Azure Monitor Private Link Scope__ resource in the portal. From the __Configure__ section, select __Access modes__. Select __Private only__ for __Ingestion access mode__ and __Query access mode__, then select __Save__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/access-modes.png" alt-text="Screenshot of the private link scope access modes."::: ++## Connect to the workspace ++There are several ways that you can connect to the secured workspace. The steps in this article use a __jump box__, which is a virtual machine in the VNet. You can connect to it using your web browser and Azure Bastion. The following table lists several other ways that you might connect to the secure workspace: ++| Method | Description | +| -- | -- | +| [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) | Connects on-premises networks to the VNet over a private connection. Connection is made over the public internet. | +| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | Connects on-premises networks into the cloud over a private connection. Connection is made using a connectivity provider. 
| ++> [!IMPORTANT] +> When using a __VPN gateway__ or __ExpressRoute__, you will need to plan how name resolution works between your on-premises resources and those in the VNet. For more information, see [Use a custom DNS server](how-to-custom-dns.md). ++### Create a jump box (VM) ++Use the following steps to create an Azure Virtual Machine to use as a jump box. Azure Bastion enables you to connect to the VM desktop through your browser. From the VM desktop, you can then use the browser on the VM to connect to resources inside the VNet, such as Azure Machine Learning studio. Or you can install development tools on the VM. ++> [!TIP] +> The steps below create a Windows 11 enterprise VM. Depending on your requirements, you may want to select a different VM image. The Windows 11 (or 10) enterprise image is useful if you need to join the VM to your organization's domain. ++1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Machine__. Select the __Virtual Machine__ entry, and then select __Create__. ++1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields: ++ * __Virtual machine name__: A unique name for the VM. + * __Username__: The username you'll use to log in to the VM. + * __Password__: The password for the username. + * __Security type__: Standard. + * __Image__: Windows 11 Enterprise. ++ > [!TIP] + > If Windows 11 Enterprise isn't in the list for image selection, use __See all images__. Find the __Windows 11__ entry from Microsoft, and use the __Select__ drop-down to select the enterprise image. +++ You can leave other fields at the default values.
++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-virtual-machine-basic.png" alt-text="Screenshot of the virtual machine basics configuration."::: ++1. Select __Networking__, and then select the __Virtual network__ you created earlier. Use the following information to set the remaining fields: ++ * Select the __Training__ subnet. + * Set the __Public IP__ to __None__. + * Leave the other fields at the default value. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-virtual-machine-network.png" alt-text="Screenshot of the virtual machine network configuration."::: ++1. Select __Review + create__. Verify that the information is correct, and then select __Create__. +++### Connect to the jump box ++1. Once the virtual machine has been created, select __Go to resource__. +1. From the top of the page, select __Connect__ and then __Bastion__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/virtual-machine-connect.png" alt-text="Screenshot of the 'connect' list, with 'Bastion' selected."::: ++1. Select __Use Bastion__, and then provide your authentication information for the virtual machine, and a connection will be established in your browser. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/use-bastion.png" alt-text="Screenshot of the Use Bastion button."::: ++## Create a compute cluster and compute instance ++A compute cluster is used by your training jobs. A compute instance provides a Jupyter Notebook experience on a shared compute resource attached to your workspace. ++1. From an Azure Bastion connection to the jump box, open the __Microsoft Edge__ browser on the remote desktop. +1. In the remote browser session, go to __https://ml.azure.com__. When prompted, authenticate using your Azure AD account. +1. 
From the __Welcome to studio!__ screen, select the __Machine Learning workspace__ you created earlier and then select __Get started__. ++ > [!TIP] + > If your Azure AD account has access to multiple subscriptions or directories, use the __Directory and Subscription__ dropdown to select the one that contains the workspace. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-select-workspace.png" alt-text="Screenshot of the select Machine Learning workspace form."::: ++1. From studio, select __Compute__, __Compute clusters__, and then __+ New__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-new-compute-cluster.png" alt-text="Screenshot of the compute clusters page, with the 'new' button selected."::: ++1. From the __Virtual Machine__ dialog, select __Next__ to accept the default virtual machine configuration. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-new-compute-vm.png" alt-text="Screenshot of the compute cluster virtual machine configuration."::: + +1. From the __Configure Settings__ dialog, enter __cpu-cluster__ as the __Compute name__. Set the __Subnet__ to __Training__ and then select __Create__ to create the cluster. ++ > [!TIP] + > Compute clusters dynamically scale the nodes in the cluster as needed. We recommend leaving the minimum number of nodes at 0 to reduce costs when the cluster is not in use. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/studio-new-compute-settings.png" alt-text="Screenshot of the configure settings form."::: ++1. From studio, select __Compute__, __Compute instance__, and then __+ New__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-compute-instance.png" alt-text="Screenshot of the compute instances page, with the 'new' button selected."::: ++1. 
From the __Virtual Machine__ dialog, enter a unique __Compute name__ and select __Next: Advanced Settings__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-compute-instance-vm.png" alt-text="Screenshot of compute instance virtual machine configuration."::: ++1. From the __Advanced Settings__ dialog, set the __Subnet__ to __Training__, and then select __Create__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/create-compute-instance-settings.png" alt-text="Screenshot of the advanced settings."::: ++> [!TIP] +> When you create a compute cluster or compute instance, Azure Machine Learning dynamically adds a Network Security Group (NSG). This NSG contains the following rules, which are specific to compute clusters and compute instances: +> +> * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag. +> * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag. +> +> The following screenshot shows an example of these rules: +> +> :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG"::: ++For more information on creating a compute cluster and compute instance, including how to do so with Python and the CLI, see the following articles: ++* [Create a compute cluster](how-to-create-attach-compute-cluster.md) +* [Create a compute instance](how-to-create-compute-instance.md) ++## Configure image builds +++When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to use the compute cluster you created earlier. Use the following steps to configure the workspace to use that cluster to build images: ++1.
Navigate to [https://shell.azure.com/](https://shell.azure.com/) to open the Azure Cloud Shell. +1. From the Cloud Shell, use the following command to install the v2 CLI extension for Azure Machine Learning: + + ```azurecli-interactive + az extension add -n ml + ``` ++1. Use the following command to update the workspace to build Docker images with the compute cluster. Replace `myresourcegroup` with your resource group, `myworkspace` with your workspace name, and `mycomputecluster` with the compute cluster to use: + + ```azurecli-interactive + az ml workspace update \ + -n myworkspace \ + -g myresourcegroup \ + -i mycomputecluster + ``` ++ > [!NOTE] + > You can use the same compute cluster to train models and build Docker images for the workspace. ++## Use the workspace ++> [!IMPORTANT] +> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment (SDK/CLI v1)](./v1/how-to-secure-inferencing-vnet.md). +> +> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md). ++At this point, you can use the studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md). ++## Stop compute instance and jump box ++> [!WARNING] +> While they are running (started), the compute instance and jump box continue to accrue charges on your subscription. To avoid excess cost, __stop__ them when they are not in use. ++The compute cluster dynamically scales between the minimum and maximum node count set when you created it.
If you accepted the defaults, the minimum is 0, which effectively turns off the cluster when not in use. ++### Stop the compute instance ++From studio, select __Compute__, __Compute instances__, and then select the compute instance. Finally, select __Stop__ from the top of the page. +++### Stop the jump box ++Once it has been created, select the virtual machine in the Azure portal and then use the __Stop__ button. When you're ready to use it again, use the __Start__ button to start it. +++You can also configure the jump box to automatically shut down at a specific time. To do so, select __Auto-shutdown__, __Enable__, set a time, and then select __Save__. +++## Clean up resources ++If you plan to continue using the secured workspace and other resources, skip this section. ++To delete all resources created in this tutorial, use the following steps: ++1. In the Azure portal, select __Resource groups__ on the far left. +1. From the list, select the resource group that you created in this tutorial. +1. Select __Delete resource group__. ++ :::image type="content" source="./media/tutorial-create-secure-workspace-vnet/delete-resources.png" alt-text="Screenshot of the delete resource group link."::: ++1. Enter the resource group name, then select __Delete__. ++## Next steps ++Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md). +Now that you've created a secure workspace, learn how to [deploy a model](./v1/how-to-deploy-and-where.md). |
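The stop, auto-shutdown, and clean-up steps above can also be performed from the Azure CLI. A minimal sketch, assuming the hypothetical resource group and VM names shown:

```azurecli
# Sketch only: resource and VM names are placeholders for your own resources.

# Stop (deallocate) the jump box so it stops accruing compute charges.
az vm deallocate --resource-group docs-ml-rg --name jump-box-vm

# Schedule a daily automatic shutdown at 22:00.
az vm auto-shutdown --resource-group docs-ml-rg --name jump-box-vm --time 2200

# When you're finished with the tutorial, delete everything in one step.
az group delete --name docs-ml-rg --yes --no-wait
```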
machine-learning | Tutorial Create Secure Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md | Title: Create a secure workspace + Title: Create a secure workspace with a managed virtual network -description: Create an Azure Machine Learning workspace and required Azure services inside a secure virtual network. +description: Create an Azure Machine Learning workspace and required Azure services inside a managed virtual network. Previously updated : 09/06/2022 Last updated : 08/11/2023 - monikerRange: 'azureml-api-2 || azureml-api-1' -# Tutorial: How to create a secure workspace +# Tutorial: How to create a secure workspace with a managed virtual network -In this article, learn how to create and connect to a secure Azure Machine Learning workspace. The steps in this article use an Azure Virtual Network to create a security boundary around resources used by Azure Machine Learning. -+In this article, learn how to create and connect to a secure Azure Machine Learning workspace. The steps in this article use an Azure Machine Learning managed virtual network to create a security boundary around resources used by Azure Machine Learning. In this tutorial, you accomplish the following tasks: > [!div class="checklist"]-> * Create an Azure Virtual Network (VNet) to __secure communications between services in the virtual network__. -> * Create an Azure Storage Account (blob and file) behind the VNet. This service is used as __default storage for the workspace__. -> * Create an Azure Key Vault behind the VNet. This service is used to __store secrets used by the workspace__. For example, the security information needed to access the storage account. -> * Create an Azure Container Registry (ACR). This service is used as a repository for Docker images. __Docker images provide the compute environments needed when training a machine learning model or deploying a trained model as an endpoint__. 
-> * Create an Azure Machine Learning workspace. -> * Create a jump box. A jump box is an Azure Virtual Machine that is behind the VNet. Since the VNet restricts access from the public internet, __the jump box is used as a way to connect to resources behind the VNet__. -> * Configure Azure Machine Learning studio to work behind a VNet. The studio provides a __web interface for Azure Machine Learning__. -> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__. In configurations where Azure Container Registry is behind the VNet, it is also used to build Docker images. -> * Connect to the jump box and use the Azure Machine Learning studio. --> [!TIP] -> If you're looking for a template (Microsoft Bicep or Hashicorp Terraform) that demonstrates how to create a secure workspace, see [Tutorial - Create a secure workspace using a template](tutorial-create-secure-workspace-template.md). +> * Create an Azure Machine Learning workspace configured to use a managed virtual network. +> * Create an Azure Machine Learning compute cluster. A compute cluster is used when __training machine learning models in the cloud__. After completing this tutorial, you'll have the following architecture: -* An Azure Virtual Network, which contains three subnets: - * __Training__: Contains the Azure Machine Learning workspace, dependency services, and resources used for training models. - * __Scoring__: For the steps in this tutorial, it isn't used. However if you continue using this workspace for other tutorials, we recommend using this subnet when deploying models to [endpoints](concept-endpoints.md). - * __AzureBastionSubnet__: Used by the Azure Bastion service to securely connect clients to Azure Virtual Machines. -* An Azure Machine Learning workspace that uses a private endpoint to communicate using the VNet. 
-* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the VNet. -* An Azure Container Registry that uses a private endpoint communicate using the VNet. -* Azure Bastion, which allows you to use your browser to securely communicate with the jump box VM inside the VNet. -* An Azure Virtual Machine that you can remotely connect to and access resources secured inside the VNet. -* An Azure Machine Learning compute instance and compute cluster. --> [!TIP] -> The Azure Batch Service listed on the diagram is a back-end service required by the compute clusters and compute instances. -+* An Azure Machine Learning workspace that uses a private endpoint to communicate using the managed network. +* An Azure Storage Account that uses private endpoints to allow storage services such as blob and file to communicate using the managed network. +* An Azure Container Registry that uses a private endpoint to communicate using the managed network. +* An Azure Key Vault that uses a private endpoint to communicate using the managed network. +* An Azure Machine Learning compute instance and compute cluster secured by the managed network. ## Prerequisites -* Familiarity with Azure Virtual Networks and IP networking. If you aren't familiar, try the [Fundamentals of computer networking](/training/modules/network-fundamentals/) module. -* While most of the steps in this article use the Azure portal or the Azure Machine Learning studio, some steps use the Azure CLI extension for Machine Learning v2. --## Create a virtual network --To create a virtual network, use the following steps: --1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Network__ in the search field. Select the __Virtual Network__ entry, and then select __Create__.
--- :::image type="content" source="./media/tutorial-create-secure-workspace/create-resource-search-vnet.png" alt-text="The create resource UI search"::: -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-resource-vnet.png" alt-text="Virtual network create"::: --1. From the __Basics__ tab, select the Azure __subscription__ to use for this resource and then select or create a new __resource group__. Under __Instance details__, enter a friendly __name__ for your virtual network and select the __region__ to create it in. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-basics.png" alt-text="Image of the basic virtual network config"::: --1. Select __Security__. Select to __Enable Azure Bastion__. [Azure Bastion](../bastion/bastion-overview.md) provides a secure way to access the VM jump box you'll create inside the VNet in a later step. Use the following values for the remaining fields: -- * __Bastion name__: A unique name for this Bastion instance - * __Public IP address__: Create a new public IP address. -- Leave the other fields at the default values. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-bastion.png" alt-text="Screenshot of Bastion config."::: --1. Select __IP Addresses__. The default settings should be similar to the following image: -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-default.png" alt-text="Default IP Address screen."::: -- Use the following steps to configure the IP address and configure a subnet for training and scoring resources: -- > [!TIP] - > While you can use a single subnet for all Azure Machine Learning resources, the steps in this article show how to create two subnets to separate the training & scoring resources. - > - > The workspace and other dependency services will go into the training subnet. 
They can still be used by resources in other subnets, such as the scoring subnet. -- 1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.16.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__. - - > [!IMPORTANT] - > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure. -- 1. Select the __Default__ subnet and then select __Remove subnet__. - - :::image type="content" source="./media/tutorial-create-secure-workspace/delete-default-subnet.png" alt-text="Screenshot of deleting default subnet."::: -- 1. To create a subnet to contain the workspace, dependency services, and resources used for _training_, select __+ Add subnet__ and set the subnet name, starting address, and subnet size. The following are the values used in this tutorial: - * __Name__: Training - * __Starting address__: 172.16.0.0 - * __Subnet size__: /24 (256 addresses) -- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-training-subnet.png" alt-text="Screenshot of Training subnet."::: -- 1. To create a subnet for compute resources used to _score_ your models, select __+ Add subnet__ again, and set the name and address range: - * __Subnet name__: Scoring - * __Starting address__: 172.16.1.0 - * __Subnet size__: /24 (256 addresses) -- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-scoring-subnet.png" alt-text="Screenshot of Scoring subnet."::: -- 1. 
To create a subnet for _Azure Bastion_, select __+ Add subnet__ and set the template, starting address, and subnet size: - * __Subnet template__: Azure Bastion - * __Starting address__: 172.16.2.0 - * __Subnet size__: /26 (64 addresses) -- :::image type="content" source="./media/tutorial-create-secure-workspace/vnet-add-azure-bastion-subnet.png" alt-text="Screenshot of Azure Bastion subnet."::: --1. Select __Review + create__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-ip-address-final.png" alt-text="Screenshot showing the review + create button"::: --1. Verify that the information is correct, and then select __Create__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-vnet-review.png" alt-text="Screenshot of the review page"::: --## Create a storage account --1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Storage account__. Select the __Storage Account__ entry, and then select __Create__. -1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Storage account name__, and set __Redundancy__ to __Locally-redundant storage (LRS)__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-storage.png" alt-text="Image of storage account basic config"::: --1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add private endpoint__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-enable-private-endpoint.png" alt-text="UI to add the blob private network"::: --1. On the __Create private endpoint__ form, use the following values: - * __Subscription__: The same Azure subscription that contains the previous resources you've created. 
- * __Resource group__: The same Azure resource group that contains the previous resources you've created. - * __Location__: The same Azure region that contains the previous resources you've created. - * __Name__: A unique name for this private endpoint. - * __Target sub-resource__: blob - * __Virtual network__: The virtual network you created earlier. - * __Subnet__: Training (172.16.0.0/24) - * __Private DNS integration__: Yes - * __Private DNS Zone__: privatelink.blob.core.windows.net -- Select __OK__ to create the private endpoint. --1. Select __Review + create__. Verify that the information is correct, and then select __Create__. --1. Once the Storage Account has been created, select __Go to resource__: -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-go-to-resource.png" alt-text="Go to new storage resource"::: --1. From the left navigation, select __Networking__ the __Private endpoint connections__ tab, and then select __+ Private endpoint__: -- > [!NOTE] - > While you created a private endpoint for Blob storage in the previous steps, you must also create one for File storage. -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-networking.png" alt-text="UI for storage account networking"::: --1. On the __Create a private endpoint__ form, use the same __subscription__, __resource group__, and __Region__ that you've used for previous resources. Enter a unique __Name__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint.png" alt-text="UI to add the file private endpoint"::: --1. Select __Next : Resource__, and then set __Target sub-resource__ to __file__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint-resource.png" alt-text="Add the subresource of 'file'"::: --1. 
Select __Next : Configuration__, and then use the following values: - * __Virtual network__: The network you created previously - * __Subnet__: Training - * __Integrate with private DNS zone__: Yes - * __Private DNS zone__: privatelink.file.core.windows.net -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-file-private-endpoint-config.png" alt-text="UI to configure the file private endpoint"::: --1. Select __Review + Create__. Verify that the information is correct, and then select __Create__. --> [!TIP] -> If you plan to use a [batch endpoint](concept-endpoints.md) or an Azure Machine Learning pipeline that uses a [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md), it is also required to configure private endpoints target **queue** and **table** sub-resources. ParallelRunStep uses queue and table under the hood for task scheduling and dispatching. --## Create a key vault --1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Key Vault__. Select the __Key Vault__ entry, and then select __Create__. -1. From the __Basics__ tab, select the __subscription__, __resource group__, and __region__ you previously used for the virtual network. Enter a unique __Key vault name__. Leave the other fields at the default value. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-key-vault.png" alt-text="Create a new key vault"::: --1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/key-vault-networking.png" alt-text="Key vault networking"::: --1. On the __Create private endpoint__ form, use the following values: - * __Subscription__: The same Azure subscription that contains the previous resources you've created. 
- * __Resource group__: The same Azure resource group that contains the previous resources you've created. - * __Location__: The same Azure region that contains the previous resources you've created. - * __Name__: A unique name for this private endpoint. - * __Target sub-resource__: Vault - * __Virtual network__: The virtual network you created earlier. - * __Subnet__: Training (172.16.0.0/24) - * __Private DNS integration__: Yes - * __Private DNS Zone__: privatelink.vaultcore.azure.net -- Select __OK__ to create the private endpoint. -- :::image type="content" source="./media/tutorial-create-secure-workspace/key-vault-private-endpoint.png" alt-text="Configure a key vault private endpoint"::: --1. Select __Review + create__. Verify that the information is correct, and then select __Create__. --## Create a container registry --1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Container Registry__. Select the __Container Registry__ entry, and then select __Create__. -1. From the __Basics__ tab, select the __subscription__, __resource group__, and __location__ you previously used for the virtual network. Enter a unique __Registry name__ and set the __SKU__ to __Premium__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-container-registry.png" alt-text="Create a container registry"::: --1. From the __Networking__ tab, select __Private endpoint__ and then select __+ Add__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-networking.png" alt-text="Container registry networking"::: --1. On the __Create private endpoint__ form, use the following values: - * __Subscription__: The same Azure subscription that contains the previous resources you've created. - * __Resource group__: The same Azure resource group that contains the previous resources you've created. 
- * __Location__: The same Azure region that contains the previous resources you've created. - * __Name__: A unique name for this private endpoint. - * __Target sub-resource__: registry - * __Virtual network__: The virtual network you created earlier. - * __Subnet__: Training (172.16.0.0/24) - * __Private DNS integration__: Yes - * __Private DNS Zone__: privatelink.azurecr.io -- Select __OK__ to create the private endpoint. -- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-private-endpoint.png" alt-text="Configure container registry private endpoint"::: --1. Select __Review + create__. Verify that the information is correct, and then select __Create__. -1. After the container registry has been created, select __Go to resource__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-go-to-resource.png" alt-text="Select 'go to resource'"::: --1. From the left of the page, select __Access keys__, and then enable __Admin user__. This setting is required when using Azure Container Registry inside a virtual network with Azure Machine Learning. -- :::image type="content" source="./media/tutorial-create-secure-workspace/container-registry-admin-user.png" alt-text="Screenshot of admin user toggle"::: +* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). ## Create a workspace -1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Machine Learning__. Select the __Machine Learning__ entry, and then select __Create__. +1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Azure Machine Learning__. 
Select the __Azure Machine Learning__ entry, and then select __Create__. - :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-create.png" alt-text="{alt-text}"::: +1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ to create the service in. If you don't have an existing resource group, select __Create new__ to create one. Enter a unique name for the __Workspace name__. Leave the rest of the fields at the default values; new instances of the required services are created for the workspace. -1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Use the following values for the other fields: - * __Workspace name__: A unique name for your workspace. - * __Storage account__: Select the storage account you created previously. - * __Key vault__: Select the key vault you created previously. - * __Application insights__: Use the default value. - * __Container registry__: Use the container registry you created previously. + :::image type="content" source="./media/tutorial-create-secure-workspace/create-workspace.png" alt-text="Screenshot of the workspace creation form."::: - :::image type="content" source="./media/tutorial-create-secure-workspace/create-machine-learning-workspace.png" alt-text="Basic workspace configuration"::: +1. From the __Networking__ tab, select __Private with Internet Outbound__. -1. From the __Networking__ tab, select __Private with Internet Outbound__. In the __Workspace inbound access__ section, select __+ add__. + :::image type="content" source="./media/tutorial-create-secure-workspace/private-internet-outbound.png" alt-text="Screenshot of the workspace network tab with internet outbound selected."::: -1. On the __Create private endpoint__ form, use the following values: - * __Subscription__: The same Azure subscription that contains the previous resources you've created. 
- * __Resource group__: The same Azure resource group that contains the previous resources you've created. - * __Location__: The same Azure region that contains the previous resources you've created. - * __Name__: A unique name for this private endpoint. - * __Target sub-resource__: amlworkspace - * __Virtual network__: The virtual network you created earlier. - * __Subnet__: Training (172.16.0.0/24) - * __Private DNS integration__: Yes - * __Private DNS Zone__: Leave the two private DNS zones at the default values of __privatelink.api.azureml.ms__ and __privatelink.notebooks.azure.net__. -- Select __OK__ to create the private endpoint. -- :::image type="content" source="./media/tutorial-create-secure-workspace/machine-learning-workspace-private-endpoint.png" alt-text="Screenshot of workspace private network config"::: --1. From the __Networking__ tab, in the __Workspace outbound access__ section, select __Use my own virtual network__. 1. Select __Review + create__. Verify that the information is correct, and then select __Create__.-1. Once the workspace has been created, select __Go to resource__. -1. From the __Settings__ section on the left, select __Private endpoint connections__ and then select the link in the __Private endpoint__ column: -- :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-private-endpoint-connections.png" alt-text="Screenshot of workspace private endpoint connections"::: -1. Once the private endpoint information appears, select __DNS configuration__ from the left of the page. Save the IP address and fully qualified domain name (FQDN) information on this page, as it will be used later. 
+ :::image type="content" source="./media/tutorial-create-secure-workspace/review-create-machine-learning.png" alt-text="Screenshot of the review page for workspace creation."::: - :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-private-endpoint-dns.png" alt-text="screenshot of IP and FQDN entries"::: --> [!IMPORTANT] -> There are still some configuration steps needed before you can fully use the workspace. However, these require you to connect to the workspace. --## Enable studio --Azure Machine Learning studio is a web-based application that lets you easily manage your workspace. However, it needs some extra configuration before it can be used with resources secured inside a VNet. Use the following steps to enable studio: --1. When using an Azure Storage Account that has a private endpoint, add the service principal for the workspace as a __Reader__ for the storage private endpoint(s). From the Azure portal, select your storage account and then select __Networking__. Next, select __Private endpoint connections__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-select.png" alt-text="Screenshot of storage private endpoints"::: --1. For __each private endpoint listed__, use the following steps: -- 1. Select the link in the __Private endpoint__ column. - - :::image type="content" source="./media/tutorial-create-secure-workspace/storage-private-endpoint-selected.png" alt-text="Screenshot of endpoints to select"::: -- 1. Select __Access control (IAM)__ from the left side. - 1. Select __+ Add__, and then __Add role assignment (Preview)__. -- ![Access control (IAM) page with Add role assignment menu open.](../../includes/role-based-access-control/media/add-role-assignment-menu-generic.png) -- 1. On the __Role__ tab, select the __Reader__. -- ![Add role assignment page with Role tab selected.](../../includes/role-based-access-control/media/add-role-assignment-role-generic.png) -- 1. 
On the __Members__ tab, select __User, group, or service principal__ in the __Assign access to__ area and then select __+ Select members__. In the __Select members__ dialog, enter the name of your Azure Machine Learning workspace. Select the service principal for the workspace, and then use the __Select__ button. -- 1. On the **Review + assign** tab, select **Review + assign** to assign the role. --## Secure Azure Monitor and Application Insights --> [!NOTE] -> For more information on securing Azure Monitor and Application Insights, see the following links: -> * [Migrate to workspace-based Application Insights resources](../azure-monitor/app/convert-classic-resource.md). -> * [Configure your Azure Monitor private link](../azure-monitor/logs/private-link-configure.md). --1. In the [Azure portal](https://portal.azure.com), select your Azure Machine Learning workspace. From __Overview__, select the __Application Insights__ link. -- :::image type="content" source="./media/tutorial-create-secure-workspace/workspace-application-insight.png" alt-text="Screenshot of the Application Insights link."::: --1. In the __Properties__ for Application Insights, check the __WORKSPACE__ entry to see if it contains a value. If it _doesn't_, select __Migrate to Workspace-based__, select the __Subscription__ and __Log Analytics Workspace__ to use, then select __Apply__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/migrate-workspace-based.png" alt-text="Screenshot of the link to migrate to workspace-based."::: --1. In the Azure portal, select __Home__, and then search for __Private link__. Select the __Azure Monitor Private Link Scope__ result and then select __Create__. -1. From the __Basics__ tab, select the same __Subscription__, __Resource Group__, and __Resource group region__ as your Azure Machine Learning workspace. Enter a __Name__ for the instance, and then select __Review + Create__. To create the instance, select __Create__. -1. 
Once the Azure Monitor Private Link Scope instance has been created, select the instance in the Azure portal. From the __Configure__ section, select __Azure Monitor Resources__ and then select __+ Add__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/add-monitor-resources.png" alt-text="Screenshot of the add button."::: --1. From __Select a scope__, use the filters to select the Application Insights instance for your Azure Machine Learning workspace. Select __Apply__ to add the instance. -1. From the __Configure__ section, select __Private Endpoint connections__ and then select __+ Private Endpoint__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/private-endpoint-connections.png" alt-text="Screenshot of the add private endpoint button."::: --1. Select the same __Subscription__, __Resource Group__, and __Region__ that contains your VNet. Select __Next: Resource__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/monitor-private-endpoint-basics.png" alt-text="Screenshot of the Azure Monitor private endpoint basics."::: --1. Select `Microsoft.insights/privateLinkScopes` as the __Resource type__. Select the Private Link Scope you created earlier as the __Resource__. Select `azuremonitor` as the __Target sub-resource__. Finally, select __Next: Virtual Network__ to continue. -- :::image type="content" source="./media/tutorial-create-secure-workspace/monitor-private-endpoint-resource.png" alt-text="Screenshot of the Azure Monitor private endpoint resources."::: --1. Select the __Virtual network__ you created earlier, and the __Training__ subnet. Select __Next__ until you arrive at __Review + Create__. Select __Create__ to create the private endpoint. -- :::image type="content" source="./media/tutorial-create-secure-workspace/monitor-private-endpoint-network.png" alt-text="Screenshot of the Azure Monitor private endpoint network."::: --1. 
After the private endpoint has been created, return to the __Azure Monitor Private Link Scope__ resource in the portal. From the __Configure__ section, select __Access modes__. Select __Private only__ for __Ingestion access mode__ and __Query access mode__, then select __Save__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/access-modes.png" alt-text="Screenshot of the private link scope access modes."::: +1. Once the workspace has been created, select __Go to resource__. ## Connect to the workspace -There are several ways that you can connect to the secured workspace. The steps in this article use a __jump box__, which is a virtual machine in the VNet. You can connect to it using your web browser and Azure Bastion. The following table lists several other ways that you might connect to the secure workspace: --| Method | Description | -| -- | -- | -| [Azure VPN gateway](../vpn-gateway/vpn-gateway-about-vpngateways.md) | Connects on-premises networks to the VNet over a private connection. Connection is made over the public internet. | -| [ExpressRoute](https://azure.microsoft.com/services/expressroute/) | Connects on-premises networks into the cloud over a private connection. Connection is made using a connectivity provider. | --> [!IMPORTANT] -> When using a __VPN gateway__ or __ExpressRoute__, you will need to plan how name resolution works between your on-premises resources and those in the VNet. For more information, see [Use a custom DNS server](how-to-custom-dns.md). --### Create a jump box (VM) --Use the following steps to create an Azure Virtual Machine to use as a jump box. Azure Bastion enables you to connect to the VM desktop through your browser. From the VM desktop, you can then use the browser on the VM to connect to resources inside the VNet, such as Azure Machine Learning studio. Or you can install development tools on the VM. +From the __Overview__ page for your workspace, select __Launch studio__. 
> [!TIP]-> The steps below create a Windows 11 enterprise VM. Depending on your requirements, you may want to select a different VM image. The Windows 11 (or 10) enterprise image is useful if you need to join the VM to your organization's domain. --1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Virtual Machine__. Select the __Virtual Machine__ entry, and then select __Create__. --1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields: -- * __Virtual machine name__: A unique name for the VM. - * __Username__: The username you'll use to log in to the VM. - * __Password__: The password for the username. - * __Security type__: Standard. - * __Image__: Windows 11 Enterprise. -- > [!TIP] - > If Windows 11 Enterprise isn't in the list for image selection, use __See all images__. Find the __Windows 11__ entry from Microsoft, and use the __Select__ drop-down to select the enterprise image. --- You can leave other fields at the default values. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-basic.png" alt-text="Image of VM basic configuration"::: +> You can also go to the [Azure Machine Learning studio](https://ml.azure.com) and select your workspace from the list. -1. Select __Networking__, and then select the __Virtual network__ you created earlier. Use the following information to set the remaining fields: - * Select the __Training__ subnet. - * Set the __Public IP__ to __None__. - * Leave the other fields at the default value. +## Create compute instance - :::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-network.png" alt-text="Image of VM network configuration"::: +1. 
From studio, select __Compute__, __Compute instances__, and then __+ New__. -1. Select __Review + create__. Verify that the information is correct, and then select __Create__. ---### Connect to the jump box --1. Once the virtual machine has been created, select __Go to resource__. -1. From the top of the page, select __Connect__ and then __Bastion__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-connect.png" alt-text="Image of the connect/bastion UI"::: --1. Select __Use Bastion__, and then provide your authentication information for the virtual machine, and a connection will be established in your browser. -- :::image type="content" source="./media/tutorial-create-secure-workspace/use-bastion.png" alt-text="Image of use bastion dialog"::: --## Create a compute cluster and compute instance --A compute cluster is used by your training jobs. A compute instance provides a Jupyter Notebook experience on a shared compute resource attached to your workspace. --1. From an Azure Bastion connection to the jump box, open the __Microsoft Edge__ browser on the remote desktop. -1. In the remote browser session, go to __https://ml.azure.com__. When prompted, authenticate using your Azure AD account. -1. From the __Welcome to studio!__ screen, select the __Machine Learning workspace__ you created earlier and then select __Get started__. -- > [!TIP] - > If your Azure AD account has access to multiple subscriptions or directories, use the __Directory and Subscription__ dropdown to select the one that contains the workspace. -- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-select-workspace.png" alt-text="Screenshot of the select workspace dialog"::: --1. From studio, select __Compute__, __Compute clusters__, and then __+ New__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-cluster.png" alt-text="Screenshot of new compute cluster workflow"::: --1. 
From the __Virtual Machine__ dialog, select __Next__ to accept the default virtual machine configuration. -- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-vm.png" alt-text="Screenshot of compute cluster vm settings"::: + :::image type="content" source="./media/tutorial-create-secure-workspace/create-new-compute-instance.png" alt-text="Screenshot of the new compute option in studio."::: -1. From the __Configure Settings__ dialog, enter __cpu-cluster__ as the __Compute name__. Set the __Subnet__ to __Training__ and then select __Create__ to create the cluster. -- > [!TIP] - > Compute clusters dynamically scale the nodes in the cluster as needed. We recommend leaving the minimum number of nodes at 0 to reduce costs when the cluster is not in use. -- :::image type="content" source="./media/tutorial-create-secure-workspace/studio-new-compute-settings.png" alt-text="Screenshot of new compute cluster settings"::: --1. From studio, select __Compute__, __Compute instance__, and then __+ New__. -- :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance.png" alt-text="Screenshot of new compute instance workflow"::: +1. From the __Configure required settings__ dialog, enter a unique value as the __Compute name__. Leave the rest of the selections at the default value. -1. From the __Virtual Machine__ dialog, enter a unique __Computer name__ and select __Next: Advanced Settings__. +1. Select __Create__. The compute instance takes a few minutes to create. The compute instance is created within the managed network. - :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-vm.png" alt-text="Screenshot of compute instance vm settings"::: --1. From the __Advanced Settings__ dialog, set the __Subnet__ to __Training__, and then select __Create__. 
-- :::image type="content" source="./media/tutorial-create-secure-workspace/create-compute-instance-settings.png" alt-text="Screenshot of compute instance settings"::: --> [!TIP] -> When you create a compute cluster or compute instance, Azure Machine Learning dynamically adds a Network Security Group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance: -> -> * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag. -> * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag. -> -> The following screenshot shows an example of these rules: -> -> :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG"::: --For more information on creating a compute cluster and compute instance, including how to do so with Python and the CLI, see the following articles: --* [Create a compute cluster](how-to-create-attach-compute-cluster.md) -* [Create a compute instance](how-to-create-compute-instance.md) --## Configure image builds ---When Azure Container Registry is behind the virtual network, Azure Machine Learning can't use it to directly build Docker images (used for training and deployment). Instead, configure the workspace to use the compute cluster you created earlier to build images. Use the following steps: --1. Navigate to [https://shell.azure.com/](https://shell.azure.com/) to open the Azure Cloud Shell. -1. From the Cloud Shell, use the following command to install the 2.0 CLI for Azure Machine Learning: - - ```azurecli-interactive - az extension add -n ml - ``` --1. Update the workspace to use the compute cluster to build Docker images. Replace `myresourcegroup` with your resource group, `myworkspace` with your workspace, and `mycomputecluster` with the compute cluster to use: - - ```azurecli-interactive - az ml workspace update \ - -n myworkspace \ - -g myresourcegroup \ - -i mycomputecluster - ``` -- > [!NOTE] - > You can use the same compute cluster to train models and build Docker images for the workspace. + > [!TIP] + > It may take several minutes to create the first compute resource. This delay occurs because the managed virtual network is also being created. The managed virtual network isn't created until the first compute resource is created. Subsequent managed compute resources will be created much faster. ## Use the workspace -> [!IMPORTANT] -> The steps in this article put Azure Container Registry behind the VNet. In this configuration, you cannot deploy a model to Azure Container Instances inside the VNet. We do not recommend using Azure Container Instances with Azure Machine Learning in a virtual network. For more information, see [Secure the inference environment (SDK/CLI v1)](./v1/how-to-secure-inferencing-vnet.md). -> -> As an alternative to Azure Container Instances, try Azure Machine Learning managed online endpoints. For more information, see [Enable network isolation for managed online endpoints](how-to-secure-online-endpoint.md). +At this point, you can use the studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [Tutorial: Model development](tutorial-cloud-workstation.md). -At this point, you can use the studio to interactively work with notebooks on the compute instance and run training jobs on the compute cluster. For a tutorial on using the compute instance and compute cluster, see [Tutorial: Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md). +## Stop compute instance -## Stop compute instance and jump box +While it's running (started), the compute instance continues charging your subscription. 
To avoid excess cost, __stop__ it when not in use. -> [!WARNING] -> While it is running (started), the compute instance and jump box will continue charging your subscription. To avoid excess cost, __stop__ them when they are not in use. --The compute cluster dynamically scales between the minimum and maximum node count set when you created it. If you accepted the defaults, the minimum is 0, which effectively turns off the cluster when not in use. -### Stop the compute instance --From studio, select __Compute__, __Compute clusters__, and then select the compute instance. Finally, select __Stop__ from the top of the page. +From studio, select __Compute__, __Compute instances__, and then select the compute instance. Finally, select __Stop__ from the top of the page. :::image type="content" source="./media/tutorial-create-secure-workspace/compute-instance-stop.png" alt-text="Screenshot of stop button for compute instance":::-### Stop the jump box --Once it has been created, select the virtual machine in the Azure portal and then use the __Stop__ button. When you're ready to use it again, use the __Start__ button to start it. ---You can also configure the jump box to automatically shut down at a specific time. To do so, select __Auto-shutdown__, __Enable__, set a time, and then select __Save__. - ## Clean up resources If you plan to continue using the secured workspace and other resources, skip this section. To delete all resources created in this tutorial, use the following steps: -1. In the Azure portal, select __Resource groups__ on the far left. +1. In the Azure portal, select __Resource groups__. 1. From the list, select the resource group that you created in this tutorial. 1. Select __Delete resource group__. :::image type="content" source="./media/tutorial-create-secure-workspace/delete-resources.png" alt-text="Screenshot of delete resource group button"::: 1. 
Enter the resource group name, then select __Delete__.+ ## Next steps Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).-Now that you've created a secure workspace, learn how to [deploy a model](./v1/how-to-deploy-and-where.md). ++For more information on the managed virtual network, see [Secure your workspace with a managed virtual network](how-to-managed-network.md). |
machine-learning | Tutorial Enable Materialization Backfill Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-materialization-backfill-data.md | Title: "Tutorial #2: enable materialization and backfill feature data (preview)"- -description: Managed Feature Store tutorial part 2. + Title: "Tutorial 2: Enable materialization and backfill feature data (preview)" ++description: This is part 2 of a tutorial series on managed feature store. -# Tutorial #2: Enable materialization and backfill feature data (preview) +# Tutorial 2: Enable materialization and backfill feature data (preview) -This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization. +This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization. -Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. This tutorial describes materialization, which computes the feature values for a given feature window, and then stores those values in a materialization store. All feature queries can then use the values from the materialization store. A feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This works well for the prototyping phase. However, for training and inference operations in a production environment, it's recommended that you materialize the features, for greater reliability and availability. +This tutorial is the second part of a four-part series. The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. This tutorial describes materialization. -This tutorial is part two of a four part series. 
In this tutorial, you'll learn how to: +Materialization computes the feature values for a feature window and then stores those values in a materialization store. All feature queries can then use the values from the materialization store. ++Without materialization, a feature set query applies the transformations to the source on the fly, to compute the features before it returns the values. This process works well for the prototyping phase. However, for training and inference operations in a production environment, we recommend that you materialize the features for greater reliability and availability. ++In this tutorial, you learn how to: > [!div class="checklist"]-> * Enable offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user assigned managed identity -> * Enable offline materialization on the feature sets, and backfill the feature data +> * Enable an offline store on the feature store by creating and attaching an Azure Data Lake Storage Gen2 container and a user-assigned managed identity (UAI). +> * Enable offline materialization on the feature sets, and backfill the feature data. -> [!IMPORTANT] -> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites -Before you proceed with this article, make sure you cover these prerequisites: --* Complete the part 1 tutorial, to create the required feature store, account entity and transaction feature set -* An Azure Resource group, where you (or the service principal you use) have `User Access Administrator`and `Contributor` roles. 
+Before you proceed with this tutorial, be sure to cover these prerequisites: -To proceed with this article, your user account needs the owner role or contributor role for the resource group that holds the created feature store. +* Completion of [Tutorial 1: Develop and register a feature set with managed feature store](tutorial-get-started-with-feature-store.md), to create the required feature store, account entity, and `transactions` feature set. +* An Azure resource group, where you (or the service principal that you use) have User Access Administrator and Contributor roles. +* On your user account, the Owner or Contributor role for the resource group that holds the created feature store. ## Set up This list summarizes the required setup steps: -1. In your project workspace, create an Azure Machine Learning compute resource, to run the training pipeline -1. In your feature store workspace, create an offline materialization store: create an Azure gen2 storage account and a container inside it, and attach it to the feature store. Optional: you can use an existing storage container -1. Create and assign a user-assigned managed identity to the feature store. Optionally, you can use an existing managed identity. The system managed materialization jobs - in other words, the recurrent jobs - use the managed identity. Part 3 of the tutorial relies on this -1. Grant required role-based authentication control (RBAC) permissions to the user-assigned managed identity -1. Grant required role-based authentication control (RBAC) to your Azure AD identity. Users, including yourself, need read access to the sources and the materialization store +1. In your project workspace, create an Azure Machine Learning compute resource to run the training pipeline. +1. In your feature store workspace, create an offline materialization store. Create an Azure Data Lake Storage Gen2 account and a container inside it, and attach it to the feature store. 
Optionally, you can use an existing storage container. +1. Create and assign a UAI to the feature store. Optionally, you can use an existing managed identity. The system-managed materialization jobs - in other words, the recurrent jobs - use the managed identity. The third tutorial in the series relies on it. +1. Grant required role-based access control (RBAC) permissions to the UAI. +1. Grant required RBAC permissions to your Azure Active Directory (Azure AD) identity. Users, including you, need read access to the sources and the materialization store. -### Configure the Azure Machine Learning spark notebook +### Configure the Azure Machine Learning Spark notebook -1. Running the tutorial: +You can create a new notebook and execute the instructions in this tutorial step by step. You can also open the existing notebook named *2. Enable materialization and backfill feature data.ipynb* from the *featurestore_sample/notebooks* directory, and then run it. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. - You can create a new notebook, and execute the instructions in this document, step by step. You can also open the existing notebook named `2. Enable materialization and backfill feature data.ipynb`, and then run it. You can find the notebooks in the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation. --1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. +1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**. 1. 
Configure the session: - * Select "configure session" in the bottom nav - * Select **upload conda file** - * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development) - * Increase the session time-out (idle time) to avoid frequent prerequisite reruns + 1. On the toolbar, select **Configure session**. + 1. On the **Python packages** tab, select **Upload Conda file**. + 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment). + 1. Increase the session time-out (idle time) to avoid frequent prerequisite reruns. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=start-spark-session)] This list summarizes the required setup steps: [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=root-dir)] - 1. Set up the CLI +1. Set up the CLI. ++ # [Python SDK](#tab/python) ++ Not applicable. ++ # [Azure CLI](#tab/cli) ++ 1. Install the Azure Machine Learning extension. ++ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)] ++ 1. Authenticate. ++ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)] ++ 1. Set the default subscription. ++ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)] - # [Python SDK](#tab/python) - - Not applicable - - # [Azure CLI](#tab/cli) - - 1. 
Install the Azure Machine Learning extension - - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=install-ml-ext-cli)] - - 1. Authentication - - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=auth-cli)] - - 1. Set the default subscription - - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=set-default-subs-cli)] - -1. Initialize the project workspace properties +1. Initialize the project workspace properties. This is the current workspace. You'll run the tutorial notebook from this workspace. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-ws-crud-client)] -1. Initialize the feature store properties +1. Initialize the feature store properties. - Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial. + Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-crud-client)] -1. Initialize the feature store core SDK client +1. Initialize the feature store core SDK client. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=init-fs-core-sdk)] -1. Set up the offline materialization store +1. Set up the offline materialization store. - You can create a new gen2 storage account and a container. 
You can also reuse an existing gen2 storage account and container as the offline materialization store for the feature store. + You can create a new storage account and a container. You can also reuse an existing storage account and container as the offline materialization store for the feature store. # [Python SDK](#tab/python) This list summarizes the required setup steps: # [Azure CLI](#tab/cli) - Not applicable + Not applicable. -## Set values for the Azure Data Lake Storage Gen2 storage +## Set values for Azure Data Lake Storage Gen2 storage - The materialization store uses these values. You can optionally override the default settings. +The materialization store uses these values. You can optionally override the default settings. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-offline-store-params)] -1. Storage containers +1. Create storage containers. - Option 1: create new storage and container resources + The first option is to create new storage and container resources. # [Python SDK](#tab/python) This list summarizes the required setup steps: - Option 2: reuse an existing storage container + The second option is to reuse an existing storage container. # [Python SDK](#tab/python)- + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]- + # [Azure CLI](#tab/cli)- + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=use-existing-storage)]- + -1. Set up user assigned managed identity (UAI) +1. Set up a UAI. 
- The system-managed materialization jobs will use the UAI. For example, the recurrent job in part 3 of this tutorial uses this UAI. + The system-managed materialization jobs will use the UAI. For example, the recurrent job in the third tutorial uses this UAI. ### Set the UAI values - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)] --### User assigned managed identity (option 1) +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=set-uai-params)] - Create a new one +### Set up a UAI - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)] +The first option is to create a new managed identity. -### User assigned managed identity (option 2) +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=create-new-uai)] - Reuse an existing managed identity +The second option is to reuse an existing managed identity. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=use-existing-uai)] ### Retrieve UAI properties - Run this code sample in the SDK to retrieve the UAI properties: +Run this code sample in the SDK to retrieve the UAI properties. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. 
Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=retrieve-uai-properties)] - + -## Grant RBAC permission to the user assigned managed identity (UAI) +## Grant RBAC permission to the UAI - This UAI is assigned to the feature store shortly. It requires these permissions: +This UAI is assigned to the feature store shortly. It requires these permissions: - | **Scope** | **Action/Role** | - ||--| - | Feature Store | Azure Machine Learning Data Scientist role | - | Storage account of feature store offline store | Blob storage data contributor role | - | Storage accounts of source data | Blob storage data reader role | +| Scope | Role | +||--| +| Feature store | Azure Machine Learning Data Scientist role | +| Storage account of the offline store on the feature store | Storage Blob Data Contributor role | +| Storage accounts of the source data | Storage Blob Data Reader role | - The next CLI commands will assign the first two roles to the UAI. In this example, "Storage accounts of source data" doesn't apply because we read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see the [access control document]() in the documentation resources. +The next CLI commands assign the first two roles to the UAI. In this example, the "storage accounts of the source data" scope doesn't apply because you read the sample data from a public access blob storage. To use your own data sources, you must assign the required roles to the UAI. To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md). 
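The two role assignments that the subsequent CLI commands perform can be sketched locally. The following Python builds the equivalent `az role assignment create` commands from the scope/role pairs in the table above. This is an illustrative sketch only: every subscription, resource-group, and identity value is a placeholder, and you should verify the exact built-in role names (`AzureML Data Scientist`, `Storage Blob Data Contributor`) against the role definitions in your subscription.

```python
# Sketch: compose one `az role assignment create` command per (scope, role)
# pair from the permissions table. All IDs and resource names are placeholders.
UAI_PRINCIPAL_ID = "<uai-principal-id>"  # placeholder
FEATURE_STORE_SCOPE = (
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
    "Microsoft.MachineLearningServices/workspaces/<feature-store>"
)  # placeholder
OFFLINE_STORE_SCOPE = (
    "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<offline-store-account>"
)  # placeholder

# Role names follow the table above; verify the built-in role names in Azure.
ROLE_BY_SCOPE = {
    FEATURE_STORE_SCOPE: "AzureML Data Scientist",
    OFFLINE_STORE_SCOPE: "Storage Blob Data Contributor",
}

def role_assignment_commands(principal_id, role_by_scope):
    """Build one `az role assignment create` command per (scope, role) pair."""
    return [
        f'az role assignment create --assignee-object-id {principal_id} '
        f'--assignee-principal-type ServicePrincipal '
        f'--role "{role}" --scope "{scope}"'
        for scope, role in role_by_scope.items()
    ]

for cmd in role_assignment_commands(UAI_PRINCIPAL_ID, ROLE_BY_SCOPE):
    print(cmd)
```

Running the generated commands requires the Azure CLI and sufficient permissions (for example, Owner or User Access Administrator) on each scope.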
- # [Python SDK](#tab/python) +# [Python SDK](#tab/python) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai)] - # [Azure CLI](#tab/cli) +# [Azure CLI](#tab/cli) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-fs)] - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-uai-offline-store)] - + -### Grant the blob data reader role access to your user account in the offline store +### Grant the Storage Blob Data Reader role access to your user account in the offline store - If the feature data is materialized, you need this role to read feature data from the offline materialization store. +If the feature data is materialized, you need the Storage Blob Data Reader role to read feature data from the offline materialization store. - Obtain your Azure AD object ID value from the Azure portal as described [here](/partner-center/find-ids-and-domain-names#find-the-user-object-id). +Obtain your Azure AD object ID value from the Azure portal, as described in [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id). 
- To learn more about access control, see the [access control document](./how-to-setup-access-control-feature-store.md). +To learn more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md). - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=grant-rbac-to-user-identity)] - The following steps grant the blob data reader role access to your user account. +The following steps grant the Storage Blob Data Reader role access to your user account: - 1. Attach the offline materialization store and UAI, to enable the offline store on the feature store +1. Attach the offline materialization store and UAI, to enable the offline store on the feature store. # [Python SDK](#tab/python) This list summarizes the required setup steps: # [Azure CLI](#tab/cli) - Action: inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store. + Inspect file `xxxx`. This command attaches the offline store and the UAI, to update the feature store. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=dump_featurestore_yaml)] This list summarizes the required setup steps: - 2. Enable offline materialization on the transactions feature set +2. Enable offline materialization on the `transactions` feature set. - Once materialization is enabled on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. See [part 3](./tutorial-experiment-train-models-using-features.md) of this tutorial series for more information. 
+ After you enable materialization on a feature set, you can perform a backfill, as explained in this tutorial. You can also schedule recurrent materialization jobs. For more information, see [the third tutorial in the series](./tutorial-experiment-train-models-using-features.md). - # [Python SDK](#tab/python) + # [Python SDK](#tab/python) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)] - # [Azure CLI](#tab/cli) + # [Azure CLI](#tab/cli) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/2. Enable materialization and backfill feature data.ipynb?name=enable-offline-mat-txns-fset)] - + - Optional: you can save the feature set asset as a YAML resource + Optionally, you can save the feature set asset as a YAML resource. - # [Python SDK](#tab/python) + # [Python SDK](#tab/python) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=dump-txn-fset-yaml)] - # [Azure CLI](#tab/cli) + # [Azure CLI](#tab/cli) - Not applicable + Not applicable. - + - 3. Backfill data for the transactions feature set +3. Backfill data for the `transactions` feature set. 
- As explained earlier in this tutorial, materialization computes the feature values for a given feature window, and stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill, for a feature window of three months. + As explained earlier in this tutorial, materialization computes the feature values for a feature window, and it stores these computed values in a materialization store. Feature materialization increases the reliability and availability of the computed values. All feature queries now use the values from the materialization store. This step performs a one-time backfill for a feature window of three months. > [!NOTE]- > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two year window. + > You might need to determine a backfill data window. The window must match the window of your training data. For example, to use two years of data for training, you need to retrieve features for the same window. This means you should backfill for a two-year window. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=backfill-txns-fset)] - We'll print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. The `get_offline_features()` method retrieved the training and inference data, and it also uses the materialization store by default. + Next, print sample data from the feature set. The output information shows that the data was retrieved from the materialization store. 
The `get_offline_features()` method retrieved the training and inference data. It also uses the materialization store by default. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/2. Enable materialization and backfill feature data.ipynb?name=sample-txns-fset-data)] -## Cleanup +## Clean up -The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources +The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources. ## Next steps -* [Part 3: tutorial features and the machine learning lifecycle](./tutorial-experiment-train-models-using-features.md) -* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) -* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md) -* Reference: [YAML reference](./reference-yaml-overview.md) +* Go to the next tutorial in the series: [Experiment and train models by using features](./tutorial-experiment-train-models-using-features.md). +* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md). +* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md). +* View the [YAML reference](./reference-yaml-overview.md). |
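The note in the backfill step above says the backfill window must match the training-data window. That is plain date arithmetic; here's a minimal sketch with illustrative dates and variable names (none of these appear in the notebook itself):

```python
from datetime import datetime

# Illustrative bounds: substitute the actual bounds of your training data.
training_data_end = datetime(2023, 6, 1)
training_window_years = 2  # train on two years of data ...

# ... so the backfill window must cover the same two years.
# (replace() on the year is safe here; a Feb 29 end date would need handling.)
feature_window_start = training_data_end.replace(
    year=training_data_end.year - training_window_years
)
feature_window_end = training_data_end

print(feature_window_start, feature_window_end)
# → 2021-06-01 00:00:00 2023-06-01 00:00:00
```

Pass the resulting window bounds to the backfill request so that every feature your training query retrieves is served from the materialization store.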
machine-learning | Tutorial Enable Recurrent Materialization Run Batch Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md | Title: "Tutorial #4: enable recurrent materialization and run batch inference (preview)"- -description: Managed Feature Store tutorial part 4 + Title: "Tutorial 4: Enable recurrent materialization and run batch inference (preview)" ++description: This is part 4 of a tutorial series on managed feature store. -# Tutorial #4: Enable recurrent materialization and run batch inference (preview) +# Tutorial 4: Enable recurrent materialization and run batch inference (preview) -This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization. +This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization. -Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Part 3 of this tutorial showed how to experiment with features, as a way to improve model performance. Part 3 also showed how a feature store increases agility in the experimentation and training flows. Tutorial 4 explains how to +The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. The second tutorial showed how to enable materialization and perform a backfill. The third tutorial showed how to experiment with features as a way to improve model performance. It also showed how a feature store increases agility in the experimentation and training flows. 
++This tutorial explains how to: > [!div class="checklist"]-> * Run batch inference for the registered model -> * Enable recurrent materialization for the `transactions` feature set -> * Run a batch inference pipeline on the registered model +> * Run batch inference for the registered model. +> * Enable recurrent materialization for the `transactions` feature set. +> * Run a batch inference pipeline on the registered model. -> [!IMPORTANT] -> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites -Before you proceed with this article, make sure you complete parts 1, 2, and 3 of this tutorial series. +Before you proceed with the following procedures, be sure to complete the first, second, and third tutorials in the series. ## Set up -### Configure the Azure Machine Learning spark notebook -- 1. In the "Compute" dropdown in the top nav, select "Configure session" -- To run this tutorial, you can create a new notebook, and execute the instructions in this document, step by step. You can also open and run the existing notebook named `4. Enable recurrent materialization and run batch inference`. You can find that notebook, and all the notebooks in this series, at the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation. +1. Configure the Azure Machine Learning Spark notebook. - 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. + To run this tutorial, you can create a new notebook and execute the instructions step by step. 
You can also open and run the existing notebook named *4. Enable recurrent materialization and run batch inference*. You can find that notebook, and all the notebooks in this series, in the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. - 1. Configure session: + 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**. - * Select "configure session" in the bottom nav - * Select **upload conda file** - * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development) - * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns + 1. Configure the session: + + 1. When the toolbar displays **Configure session**, select it. + 1. On the **Python packages** tab, select **Upload conda file**. + 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment). + 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns. -### Start the spark session + 1. Start the Spark session. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=start-spark-session)] -### Set up the root directory for the samples + 1. Set up the root directory for the samples. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. 
Enable recurrent materialization and run batch inference.ipynb?name=root-dir)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=root-dir)] - ### [Python SDK](#tab/python) + ### [Python SDK](#tab/python) - Not applicable + Not applicable. - ### [Azure CLI](#tab/cli) + ### [Azure CLI](#tab/cli) - **Set up the CLI** + Set up the CLI: - 1. Install the Azure Machine Learning extension + 1. Install the Azure Machine Learning extension. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)] - 1. Authentication + 1. Authenticate. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)] - 1. Set the default subscription + 1. Set the default subscription. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/4. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)] - + -1. Initialize the project workspace CRUD client +1. Initialize the project workspace CRUD (create, read, update, and delete) client. 
- The tutorial notebook runs from this current workspace + The tutorial notebook runs from this current workspace. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-ws-crud-client)] -1. Initialize the feature store variables +1. Initialize the feature store variables. - Make sure that you update the `featurestore_name` value, to reflect what you created in part 1 of this tutorial. + Be sure to update the `featurestore_name` value, to reflect what you created in the first tutorial. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-crud-client)] -1. Initialize the feature store SDK client +1. Initialize the feature store SDK client. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=init-fs-core-sdk)] -## Enable recurrent materialization on the `transactions` feature set +## Enable recurrent materialization on the transactions feature set -We enabled materialization in tutorial part 2, and we also performed backfill on the transactions feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store. However, to handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up-to-date. These jobs run on user-defined schedules. The recurrent job schedule works this way: +In the second tutorial, you enabled materialization and performed backfill on the `transactions` feature set. Backfill is an on-demand, one-time operation that computes and places feature values in the materialization store. -* Interval and frequency values define a window. 
For example, values of +To handle inference of the model in production, you might want to set up recurrent materialization jobs to keep the materialization store up to date. These jobs run on user-defined schedules. The recurrent job schedule works this way: - * interval = 3 - * frequency = Hour +* Interval and frequency values define a window. For example, the following values define a three-hour window: - define a three-hour window. + * `interval` = `3` + * `frequency` = `Hour` -* The first window starts at the start_time defined in the RecurrenceTrigger, and so on. +* The first window starts at the `start_time` value defined in `RecurrenceTrigger`, and so on. * The first recurrent job is submitted at the start of the next window after the update time.-* Later recurrent jobs will be submitted at every window after the first job. +* Later recurrent jobs are submitted at every window after the first job. -As explained in earlier parts of this tutorial, once data is materialized (backfill / recurrent materialization), feature retrieval uses the materialized data by default. +As explained in earlier tutorials, after data is materialized (backfill or recurrent materialization), feature retrieval uses the materialized data by default. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=enable-recurrent-mat-txns-fset)] -## (Optional) Save the feature set asset yaml file +## (Optional) Save the YAML file for the feature set asset - We use the updated settings to save the yaml file +You use the updated settings to save the YAML file. - ### [Python SDK](#tab/python) +### [Python SDK](#tab/python) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. 
Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=dump-txn-fset-with-mat-yaml)] - ### [Azure CLI](#tab/cli) +### [Azure CLI](#tab/cli) - Not applicable +Not applicable. - +++## Run the batch inference pipeline -## Run the batch-inference pipeline +The batch inference has these steps: - The batch-inference has these steps: +1. You use the same built-in feature retrieval component for feature retrieval that you used in the training pipeline (covered in the third tutorial). For pipeline training, you provided a feature retrieval specification as a component input. For batch inference, you pass the registered model as the input. The component looks for the feature retrieval specification in the model artifact. - 1. Feature retrieval: this uses the same built-in feature retrieval component used in the training pipeline, covered in tutorial part 3. For pipeline training, we provided a feature retrieval spec as a component input. However, for batch inference, we pass the registered model as the input, and the component looks for the feature retrieval spec in the model artifact. - - Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features, and outputs the data for batch inference. + Additionally, for training, the observation data had the target variable. However, the batch inference observation data doesn't have the target variable. The feature retrieval step joins the observation data with the features and outputs the data for batch inference. - 1. 
Batch inference: This step uses the batch inference input data from previous step, runs inference on the model, and appends the predicted value as output. +1. The pipeline uses the batch inference input data from the previous step, runs inference on the model, and appends the predicted value as output. > [!NOTE]- > We use a job for batch inference in this example. You can also use Azure ML's batch endpoints. + > You use a job for batch inference in this example. You can also use batch endpoints in Azure Machine Learning. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=run-batch-inf-pipeline)] - ### Inspect the batch inference output data +### Inspect the output data for batch inference ++In the pipeline view: - In the pipeline view - 1. Select `inference_step` in the `outputs` card - 1. Copy the Data field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1` - 1. Paste the Data field value in the following cell, with separate name and version values (note that the last character is the version, preceded by a `:`). - 1. Note the `predict_is_fraud` column that the batch inference pipeline generated +1. Select `inference_step` in the `outputs` card. +1. Copy the `Data` field value. It looks something like `azureml_995abbc2-3171-461e-8214-c3c5d17ede83_output_data_data_with_prediction:1`. +1. Paste the `Data` field value in the following cell, with separate name and version values. The last character is the version, preceded by a colon (`:`). +1. Note the `predict_is_fraud` column that the batch inference pipeline generated. 
- Explanation: In the batch inference pipeline (`/project/fraud_mode/pipelines/batch_inference_pipeline.yaml`) outputs, since we didn't provide `name` or `version` values in the `outputs` of the `inference_step`, the system created an untracked data asset with a guid as the name value, and 1 as the version value. In this cell, we derive and then display the data path from the asset: + In the batch inference pipeline (*/project/fraud_mode/pipelines/batch_inference_pipeline.yaml*) outputs, because you didn't provide `name` or `version` values for `outputs` of `inference_step`, the system created an untracked data asset with a GUID as the name value and `1` as the version value. In this cell, you derive and then display the data path from the asset. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/4. Enable recurrent materialization and run batch inference.ipynb?name=inspect-batch-inf-output-data)] -## Cleanup +## Clean up -If you created a resource group for the tutorial, you can delete the resource group, to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually: +If you created a resource group for the tutorial, you can delete the resource group to delete all the resources associated with this tutorial. Otherwise, you can delete the resources individually: -1. To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it -1. Follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) to delete the user-assigned managed identity -1. 
To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage you created, and delete it +- To delete the feature store, go to the resource group in the Azure portal, select the feature store, and delete it. +- To delete the user-assigned managed identity, follow [these instructions](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). +- To delete the offline store (storage account), go to the resource group in the Azure portal, select the storage that you created, and delete it. ## Next steps -* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) -* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) -* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md) -* Reference: [YAML reference](./reference-yaml-overview.md) +* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). +* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md). +* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md). +* View the [YAML reference](./reference-yaml-overview.md). |
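The recurrence rules stated in this tutorial — interval and frequency define a window, windows run back to back from `start_time`, and the first recurrent job is submitted at the start of the next window after the update time — can be illustrated with plain `datetime` arithmetic. This is a sketch of the stated rule only, not the service's scheduler:

```python
from datetime import datetime, timedelta

def next_window_start(start_time, update_time, interval=3, frequency="Hour"):
    """Start of the next recurrence window after update_time.

    interval and frequency define the window length (interval=3,
    frequency="Hour" gives a three-hour window), and windows are laid out
    back to back from start_time. Illustrative sketch of the scheduling
    rule described in the tutorial, not the service implementation.
    """
    unit = {"Hour": timedelta(hours=1), "Day": timedelta(days=1)}[frequency]
    window = interval * unit
    if update_time < start_time:
        return start_time  # the first window hasn't opened yet
    windows_elapsed = (update_time - start_time) // window
    return start_time + (windows_elapsed + 1) * window

# interval=3, frequency="Hour": three-hour windows starting at midnight.
start = datetime(2023, 6, 1, 0, 0)
print(next_window_start(start, datetime(2023, 6, 1, 4, 30)))
# → 2023-06-01 06:00:00 (the 04:30 update falls in the 03:00–06:00 window)
```

In the SDK itself, these values correspond to the `interval`, `frequency`, and `start_time` arguments of the recurrence trigger in the feature set's materialization settings.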
machine-learning | Tutorial Experiment Train Models Using Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-experiment-train-models-using-features.md | Title: "Tutorial #3: experiment and train models using features (preview)"- -description: Managed Feature Store tutorial part 3. + Title: "Tutorial 3: Experiment and train models by using features (preview)" ++description: This is part 3 of a tutorial series on managed feature store. -# Tutorial #3: Experiment and train models using features (preview) +# Tutorial 3: Experiment and train models by using features (preview) -This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization. +This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization. -Part 1 of this tutorial showed how to create a feature set spec with custom transformations, and use that feature set to generate training data. Part 2 of the tutorial showed how to enable materialization and perform a backfill. Tutorial 3 shows how to experiment with features, as a way to improve model performance. This tutorial also shows how a feature store increases agility in the experimentation and training flows. It shows how to: +The first tutorial showed how to create a feature set specification with custom transformations, and then use that feature set to generate training data. The second tutorial showed how to enable materialization and perform a backfill. ++This tutorial shows how to experiment with features as a way to improve model performance. It also shows how a feature store increases agility in the experimentation and training flows. ++In this tutorial, you learn how to: > [!div class="checklist"]-> * Prototype a new `accounts` feature set spec, using existing precomputed values as features. 
Then, register the local feature set spec as a feature set in the feature store. This differs from tutorial part 1, where we created a feature set that had custom transformations -> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature-retrieval spec -> * Run a training pipeline that uses the feature retrieval spec to train a new model. This pipeline uses the built-in feature-retrieval component, to generate the training data +> * Prototype a new `accounts` feature set specification, by using existing precomputed values as features. Then, register the local feature set specification as a feature set in the feature store. This process differs from the first tutorial, where you created a feature set that had custom transformations. +> * Select features for the model from the `transactions` and `accounts` feature sets, and save them as a feature retrieval specification. +> * Run a training pipeline that uses the feature retrieval specification to train a new model. This pipeline uses the built-in feature retrieval component to generate the training data. -> [!IMPORTANT] -> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Prerequisites -Before you proceed with this article, make sure you complete parts 1 and 2 of this tutorial series. +Before you proceed with the following procedures, be sure to complete the first and second tutorials in the series. ## Set up -1. Configure the Azure Machine Learning spark notebook +1. Configure the Azure Machine Learning Spark notebook. - 1. 
Running the tutorial: You can create a new notebook, and execute the instructions in this document step by step. You can also open and run existing notebook `3. Experiment and train models using features.ipynb`. You can find the notebooks in the `featurestore_sample/notebooks directory`. You can select from `sdk_only`, or `sdk_and_cli`. You can keep this document open, and refer to it for documentation links and more explanation. + You can create a new notebook and execute the instructions in this tutorial step by step. You can also open and run the existing notebook named *3. Experiment and train models using features.ipynb* from the *featurestore_sample/notebooks* directory. You can choose *sdk_only* or *sdk_and_cli*. Keep this tutorial open and refer to it for documentation links and more explanation. - 1. Select Azure Machine Learning Spark compute in the "Compute" dropdown, located in the top nav. Wait for a status bar in the top to display "configure session". + 1. On the top menu, in the **Compute** dropdown list, select **Serverless Spark Compute** under **Azure Machine Learning Serverless Spark**. 1. Configure the session: - * Select "configure session" in the bottom nav - * Select **upload conda file** - * Upload the **conda.yml** file you [uploaded in Tutorial #1](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment-for-development) - * (Optional) Increase the session time-out (idle time) to avoid frequent prerequisite reruns + 1. When the toolbar displays **Configure session**, select it. + 1. On the **Python packages** tab, select **Upload Conda file**. + 1. Upload the *conda.yml* file that you [uploaded in the first tutorial](./tutorial-get-started-with-feature-store.md#prepare-the-notebook-environment). + 1. Optionally, increase the session time-out (idle time) to avoid frequent prerequisite reruns. - 1. Start the spark session + 1. Start the Spark session. 
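The session configuration above depends on the conda file from the first tutorial. As a rough illustration of what such a file contains (this sketch is an assumption; the *conda.yml* file shipped in the *featurestore_sample* directory is the authoritative version, including the exact package names and versions):

```yaml
# Illustrative sketch only. Use the conda.yml file from the
# featurestore_sample directory; the package name below is an assumption.
dependencies:
  - pip:
      - azureml-featurestore
```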
[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=start-spark-session)] - 1. Set up the root directory for the samples + 1. Set up the root directory for the samples. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=root-dir)] ### [Python SDK](#tab/python)- - Not applicable - ++ Not applicable. + ### [Azure CLI](#tab/cli)- - Set up the CLI - - 1. Install the Azure Machine Learning extension - ++ Set up the CLI: ++ 1. Install the Azure Machine Learning extension. + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=install-ml-ext-cli)]- - 1. Authentication - ++ 1. Authenticate. + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=auth-cli)]- - 1. Set the default subscription - ++ 1. Set the default subscription. + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Experiment and train models using features.ipynb?name=set-default-subs-cli)]- + -1. Initialize the project workspace variables +1. Initialize the project workspace variables. This is the current workspace, and the tutorial notebook runs in this resource. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-ws-crud-client)] -1. Initialize the feature store variables +1. Initialize the feature store variables. - Make sure that you update the `featurestore_name` and `featurestore_location` values shown, to reflect what you created in part 1 of this tutorial. 
+ Be sure to update the `featurestore_name` and `featurestore_location` values to reflect what you created in the first tutorial. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-crud-client)] -1. Initialize the feature store consumption client +1. Initialize the feature store consumption client. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=init-fs-core-sdk)] -1. Create a compute cluster +1. Create a compute cluster named `cpu-cluster` in the project workspace. - We'll create a compute cluster named `cpu-cluster` in the project workspace. We need this compute cluster when we run the training / batch inference jobs. + You'll need this compute cluster when you run the training/batch inference jobs. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=create-compute-cluster)] -## Create the accounts feature set locally +## Create the account feature set locally ++In the first tutorial, you created a `transactions` feature set that had custom transformations. Here, you create an `accounts` feature set that uses precomputed values. -In tutorial part 1, we created a transactions feature set that had custom transformations. Here, we create an accounts feature set that uses precomputed values. +To onboard precomputed features, you can create a feature set specification without writing any transformation code. You use a feature set specification to develop and test a feature set in a fully local development environment. -To onboard precomputed features, you can create a feature set spec without writing any transformation code. A feature set spec is a specification that we use to develop and test a feature set, in a fully local development environment. 
We don't need to connect to a feature store. In this step, you create the feature set spec locally, and then sample the values from it. For managed feature store capabilities, you must use a feature asset definition to register the feature set spec with a feature store. Later steps in this tutorial provide more details. +You don't need to connect to a feature store. In this procedure, you create the feature set specification locally, and then sample the values from it. To use managed feature store capabilities, you must use a feature asset definition to register the feature set specification with a feature store. Later steps in this tutorial provide more details. -1. Explore the source data for the accounts +1. Explore the source data for the accounts. > [!NOTE]- > This notebook uses sample data hosted in a publicly-accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets using your own source data, please host those feature sets in an adls gen2 account, and use an `abfss` driver in the data path. + > This notebook uses sample data hosted in a publicly accessible blob container. Only a `wasbs` driver can read it in Spark. When you create feature sets by using your own source data, host those feature sets in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=explore-accts-fset-src-data)] -1. Create the `accounts` feature set spec in local, from these precomputed features +1. Create the `accounts` feature set specification locally, from these precomputed features. - We don't need any transformation code here, because we reference precomputed features. + You don't need any transformation code here, because you reference precomputed features. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3.
Experiment and train models using features.ipynb?name=create-accts-fset-spec)] -1. Export as a feature set spec +1. Export as a feature set specification. ++ To register the feature set specification with the feature store, you must save the feature set specification in a specific format. ++ After you run the next cell, inspect the generated `accounts` feature set specification. To see the specification, open the *featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml* file from the file tree. - To register the feature set spec with the feature store, you must save the feature set spec in a specific format. + The specification has these important elements: - Action: After you run the next cell, inspect the generated `accounts` feature set spec. To see the spec, open the `featurestore/featuresets/accounts/spec/FeatureSetSpec.yaml` file from the file tree to see the spec. + - `source`: A reference to a storage resource. In this case, it's a Parquet file in a blob storage resource. - The spec has these important elements: + - `features`: A list of features and their datatypes. With provided transformation code (see the "Day 2" section), the code must return a DataFrame that maps to the features and datatypes. Without the provided transformation code, the system builds the query to map the features and datatypes to the source. In this case, the generated `accounts` feature set specification doesn't include transformation code, because the features are precomputed. - 1. `source`: a reference to a storage resource, in this case, a parquet file in a blob storage resource - - 1. `features`: a list of features and their datatypes. With provided transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes. Without the provided transformation code (in this case, the generated `accounts` feature set spec, because it's precomputed), the system builds the query to map the features and datatypes to the source - - 1.
`index_columns`: the join keys required to access values from the feature set + - `index_columns`: The join keys required to access values from the feature set. - See the [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-featureset-spec.md) to learn more. + To learn more, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and the [CLI (v2) feature set specification YAML schema](./reference-yaml-featureset-spec.md). As an extra benefit, persisting supports source control. - We don't need any transformation code here, because we reference precomputed features. + You don't need any transformation code here, because you reference precomputed features. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=dump-accts-fset-spec)] ## Locally experiment with unregistered features -As you develop features, you might want to locally test and validate them, before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`), and a feature set registered in the feature store (`transactions`), generates training data for the ML model. +As you develop features, you might want to locally test and validate them before you register them with the feature store or run training pipelines in the cloud. A combination of a local unregistered feature set (`accounts`) and a feature set registered in the feature store (`transactions`) generates training data for the machine learning model. -1. Select features for the model +1. Select features for the model. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. 
Experiment and train models using features.ipynb?name=select-unreg-features-for-model)] -1. Locally generate training data +1. Locally generate training data. This step generates training data for illustrative purposes. As an option, you can locally train models here. Later steps in this tutorial explain how to train a model in the cloud. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=gen-training-data-locally)] -1. Register the `accounts` feature set with the feature store +1. Register the `accounts` feature set with the feature store. - After you locally experiment with different feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store. + After you locally experiment with feature definitions, and they seem reasonable, you can register a feature set asset definition with the feature store. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=reg-accts-fset)] -1. Get the registered feature set, and sanity test it +1. Get the registered feature set and test it. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=sample-accts-fset-data)] ## Run a training experiment -In this step, you select a list of features, run a training pipeline, and register the model. You can repeat this step until the model performs as you'd like. +In the following steps, you select a list of features, run a training pipeline, and register the model. You can repeat these steps until the model performs as you want. -1. (Optional) Discover features from the feature store UI +1. Optionally, discover features from the feature store UI. - Part 1 of this tutorial covered this, when you registered the transactions feature set. 
Since you also have an accounts feature set, you can browse the available features: + The first tutorial covered this step, when you registered the `transactions` feature set. Because you also have an `accounts` feature set, you can browse through the available features: - * Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home?flight=FeatureStores). - * In the left nav, select `feature stores` - * The list of feature stores that you can access appears. Select the feature store that you created earlier. + 1. Go to the [Azure Machine Learning global landing page](https://ml.azure.com/home). + 1. On the left pane, select **Feature stores**. + 1. In the list of feature stores, select the feature store that you created earlier. - You can see the feature sets and entity that you created. Select the feature sets to browse the feature definitions. You can use the global search box to search for feature sets across feature stores. + The UI shows the feature sets and entity that you created. Select the feature sets to browse through the feature definitions. You can use the global search box to search for feature sets across feature stores. -1. (Optional) Discover features from the SDK +1. Optionally, discover features from the SDK. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=discover-features-from-sdk)] -1. Select features for the model, and export the model as a feature-retrieval spec +1. Select features for the model, and export the model as a feature retrieval specification. - In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Your model shipping agility increases if you save the selected features as a feature-retrieval spec, and use the spec in the mlops/cicd flow for training and inference. 
+ In the previous steps, you selected features from a combination of registered and unregistered feature sets, for local experimentation and testing. You can now experiment in the cloud. Your model-shipping agility increases if you save the selected features as a feature retrieval specification, and then use the specification in the machine learning operations (MLOps) or continuous integration and continuous delivery (CI/CD) flow for training and inference. -1. Select features for the model +1. Select features for the model. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=select-reg-features)] -1. Export selected features as a feature-retrieval spec +1. Export selected features as a feature retrieval specification. - > [!NOTE] - > A **feature retrieval spec** is a portable definition of the feature list associated with a model. It can help streamline ML model development and operationalization. It will become an input to the training pipeline which generates the training data. Then, it will be packaged with the model. The inference phase uses it to look up the features. It becomes a glue that integrates all phases of the machine learning lifecycle. Changes to the training/inference pipeline can stay at a minimum as you experiment and deploy. + A feature retrieval specification is a portable definition of the feature list that's associated with a model. It can help streamline the development and operationalization of a machine learning model. It will become an input to the training pipeline that generates the training data. Then, it will be packaged with the model. ++ The inference phase uses the feature retrieval specification to look up the features. It becomes the glue that integrates all phases of the machine learning lifecycle. Changes to the training/inference pipeline can stay at a minimum as you experiment and deploy.
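Conceptually, a feature retrieval specification is a small, portable document: which feature store to reach, and which feature references to resolve. The sketch below models that idea with a plain Python dict; the field names and feature references are hypothetical, not the exact schema that Azure Machine Learning writes to *feature_retrieval_spec.yaml*.

```python
# Hypothetical sketch of what a feature retrieval specification captures.
# Field names and feature references are illustrative only; they are not
# the exact schema used in feature_retrieval_spec.yaml.
retrieval_spec = {
    "feature_stores": [
        {
            # Where to resolve the features from (placeholder URI).
            "workspace_uri": "azureml://subscriptions/<sub-id>/resourcegroups/<rg>/workspaces/<featurestore>",
            # Feature references: <feature set>:<version>:<feature name>.
            "features": [
                "transactions:1:transaction_amount_7d_sum",
                "accounts:1:numPaymentRejects1dPerUser",
            ],
        }
    ]
}

def list_feature_references(spec):
    """Flatten all feature references declared in a retrieval spec."""
    return [
        feature
        for store in spec["feature_stores"]
        for feature in store["features"]
    ]

print(list_feature_references(retrieval_spec))
```

Because both the training pipeline and the inference phase consume the same document, the feature list only has to be maintained in one place, which is what keeps the two flows in sync.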
- Use of the feature retrieval spec and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the spec should be **feature_retrieval_spec.yaml** when it's packaged with the model. This way, the system can recognize it. + Use of the feature retrieval specification and the built-in feature retrieval component is optional. You can directly use the `get_offline_features()` API, as shown earlier. The name of the specification should be *feature_retrieval_spec.yaml* when it's packaged with the model. This way, the system can recognize it. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=export-as-frspec)] ## Train in the cloud with pipelines, and register the model -In this step, you manually trigger the training pipeline. In a production scenario, a ci/cd pipeline could trigger it, based on changes to the feature-retrieval spec in the source repository. You can register the model if it's satisfactory. +In this procedure, you manually trigger the training pipeline. In a production scenario, a CI/CD pipeline could trigger it, based on changes to the feature retrieval specification in the source repository. You can register the model if it's satisfactory. -1. Run the training pipeline +1. Run the training pipeline. The training pipeline has these steps: - 1. Feature retrieval: For its input, this built-in component takes the feature retrieval spec, the observation data, and the timestamp column name. It then generates the training data as output. It runs these steps as a managed spark job. - - 1. Training: Based on the training data, this step trains the model, and then generates a model (not yet registered) - - 1. 
Evaluation: This step validates whether or not the model performance and quality fall within a threshold (in our case, it's a placeholder/dummy step for illustration purposes) - - 1. Register the model: This step registers the model + 1. Feature retrieval: For its input, this built-in component takes the feature retrieval specification, the observation data, and the time-stamp column name. It then generates the training data as output. It runs these steps as a managed Spark job. ++ 1. Training: Based on the training data, this step trains the model and then generates a model (not yet registered). ++ 1. Evaluation: This step validates whether the model performance and quality fall within a threshold. (In this tutorial, it's a placeholder step for illustration purposes.) ++ 1. Register the model: This step registers the model. > [!NOTE]- > In part 2 of this tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior will be the same, even if you use the `get_offline_features()` API. + > In the second tutorial, you ran a backfill job to materialize data for the `transactions` feature set. The feature retrieval step reads feature values from the offline store for this feature set. The behavior is the same, even if you use the `get_offline_features()` API. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/3. Experiment and train models using features.ipynb?name=run-training-pipeline)] - 1. Inspect the training pipeline and the model + 1. Inspect the training pipeline and the model. - 1. Open the above pipeline, and run "web view" in a new window to see the pipeline steps. + 1. Open the pipeline. Run the web view in a new window to display the pipeline steps. -1. Use the feature retrieval spec in the model artifacts +1. Use the feature retrieval specification in the model artifacts: - 1. 
In the left nav of the current workspace, select `Models` - 1. Select open in a new tab or window - 1. Select **fraud_model** - 1. In the top nav, select Artifacts + 1. On the left pane of the current workspace, select **Models**. + 1. Select **Open in a new tab or window**. + 1. Select **fraud_model**. + 1. Select **Artifacts**. - The feature retrieval spec is packaged along with the model. The model registration step in the training pipeline handled this step. You created the feature retrieval spec during experimentation. Now it became part of the model definition. In the next tutorial, you'll see how inferencing uses it. + The feature retrieval specification is packaged along with the model. The model registration step in the training pipeline handled this step. You created the feature retrieval specification during experimentation. Now it's part of the model definition. In the next tutorial, you'll see how inferencing uses it. ## View the feature set and model dependencies -1. View the list of feature sets associated with the model +1. View the list of feature sets associated with the model. - In the same models page, select the `feature sets` tab. This tab shows both the `transactions` and the `accounts` feature sets on which this model depends. + On the same **Models** page, select the **Feature sets** tab. This tab shows both the `transactions` and `accounts` feature sets on which this model depends. -1. View the list of models that use the feature sets +1. View the list of models that use the feature sets: - 1. Open the feature store UI (explained earlier in this tutorial) - 1. Select `Feature sets` on the left nav - 1. Select a feature set - 1. Select the `Models` tab + 1. Open the feature store UI (explained earlier in this tutorial). + 1. On the left pane, select **Feature sets**. + 1. Select a feature set. + 1. Select the **Models** tab. - You can see the list of models that use the feature sets. 
The feature retrieval spec determined this list when the model was registered. + The feature retrieval specification determined this list when the model was registered. -## Cleanup +## Clean up -The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources +The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources. ## Next steps -* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) -* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) -* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md) -* Reference: [YAML reference](./reference-yaml-overview.md) +* Go to the next tutorial in the series: [Enable recurrent materialization and run batch inference](./tutorial-enable-recurrent-materialization-run-batch-inference.md). +* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). +* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md). +* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md). +* View the [YAML reference](./reference-yaml-overview.md). |
machine-learning | Tutorial Get Started With Feature Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md | Title: "Tutorial #1: develop and register a feature set with managed feature store (preview)"- -description: Managed Feature Store tutorial part 1. + Title: "Tutorial 1: Develop and register a feature set with managed feature store (preview)" ++description: This is part 1 of a tutorial series on managed feature store. -# Tutorial #1: develop and register a feature set with managed feature store (preview) +# Tutorial 1: Develop and register a feature set with managed feature store (preview) -This tutorial series shows how features seamlessly integrate all phases of the ML lifecycle: prototyping, training and operationalization. +This tutorial series shows how features seamlessly integrate all phases of the machine learning lifecycle: prototyping, training, and operationalization. -Azure Machine Learning managed feature store lets you discover, create and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic feature store concepts, see [what is managed feature store](./concept-what-is-managed-feature-store.md) and [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). +You can use Azure Machine Learning managed feature store to discover, create, and operationalize features. The machine learning lifecycle includes a prototyping phase, where you experiment with various features. It also involves an operationalization phase, where models are deployed and inference steps look up feature data. 
Features serve as the connective tissue in the machine learning lifecycle. To learn more about basic concepts for managed feature store, see [What is managed feature store?](./concept-what-is-managed-feature-store.md) and [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). -This tutorial is the first part of a four part series. Here, you'll learn how to: +This tutorial is the first part of a four-part series. Here, you learn how to: > [!div class="checklist"]-> * Create a new minimal feature store resource -> * Develop and locally test a feature set with feature transformation capability -> * Register a feature store entity with the feature store -> * Register the feature set that you developed with the feature store -> * Generate a sample training dataframe using the features you created +> * Create a new, minimal feature store resource. +> * Develop and locally test a feature set with feature transformation capability. +> * Register a feature store entity with the feature store. +> * Register the feature set that you developed with the feature store. +> * Generate a sample training DataFrame by using the features that you created. -> [!IMPORTANT] -> This feature is currently in public preview. This preview version is provided without a service-level agreement, and it's not recommended for production workloads. Certain features might not be supported, or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +This tutorial series has two tracks: -## Prerequisites --> [!NOTE] -> This tutorial series has two tracks: -> * SDK only track: Uses only Python SDKs. Choose this track for pure, Python-based development and deployment. 
-> * SDK & CLI track: This track uses the CLI for CRUD operations (create, update, and delete), and the Python SDK for feature set development and testing only. This is useful in CI / CD, or GitOps, scenarios, where CLI/yaml is preferred. +* The SDK-only track uses only Python SDKs. Choose this track for pure, Python-based development and deployment. +* The SDK and CLI track uses the Python SDK for feature set development and testing only, and it uses the CLI for CRUD (create, read, update, and delete) operations. This track is useful in continuous integration and continuous delivery (CI/CD) or GitOps scenarios, where CLI/YAML is preferred. -Before you proceed with this article, make sure you cover these prerequisites: -* An Azure Machine Learning workspace. See [Quickstart: Create workspace resources](./quickstart-create-resources.md) article for more information about workspace creation. +## Prerequisites -* To proceed with this article, your user account must be assigned the owner or contributor role to the resource group where the feature store is created +Before you proceed with this tutorial, be sure to cover these prerequisites: - (Optional): If you use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group +* An Azure Machine Learning workspace. For more information about workspace creation, see [Quickstart: Create workspace resources](./quickstart-create-resources.md). -## Set up +* On your user account, the Owner or Contributor role for the resource group where the feature store is created. -### Prepare the notebook environment for development + If you choose to use a new resource group for this tutorial, you can easily delete all the resources by deleting the resource group. -> [!NOTE] -> This tutorial uses an Azure Machine Learning Spark notebook for development. +## Prepare the notebook environment -1. 
In the Azure Machine Learning studio environment, first select **Notebooks** in the left nav, and then select the **Samples** tab. Navigate to the **featurestore_sample** directory +This tutorial uses an Azure Machine Learning Spark notebook for development. - **Samples -> SDK v2 -> sdk -> python -> featurestore_sample** +1. In the Azure Machine Learning studio environment, select **Notebooks** on the left pane, and then select the **Samples** tab. - and then select **Clone**, as shown in this screenshot: +1. Browse to the *featurestore_sample* directory (select **Samples** > **SDK v2** > **sdk** > **python** > **featurestore_sample**), and then select **Clone**. - :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot showing selection of the featurestore_sample directory in Azure Machine Learning studio UI."::: + :::image type="content" source="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" lightbox="media/tutorial-get-started-with-feature-store/clone-featurestore-example-notebooks.png" alt-text="Screenshot that shows selection of the sample directory in Azure Machine Learning studio."::: -1. The **Select target directory** panel opens next. Select the User directory, in this case **testUser**, and then select **Clone**, as shown in this screenshot: +1. The **Select target directory** panel opens. Select the user directory (in this case, **testUser**), and then select **Clone**. 
- :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot showing selection of the target directory location in Azure Machine Learning studio UI for the featurestore_sample resource."::: + :::image type="content" source="media/tutorial-get-started-with-feature-store/select-target-directory.png" lightbox="media/tutorial-get-started-with-feature-store/select-target-directory.png" alt-text="Screenshot that shows selection of the target directory location in Azure Machine Learning studio for the sample resource."::: -1. To configure the notebook environment, you must upload the **conda.yml** file. Select **Notebooks** in the left nav, and then select the **Files** tab. Navigate to the **env** directory +1. To configure the notebook environment, you must upload the *conda.yml* file: - **Users -> testUser -> featurestore_sample -> project -> env** + 1. Select **Notebooks** on the left pane, and then select the **Files** tab. + 1. Browse to the *env* directory (select **Users** > **testUser** > **featurestore_sample** > **project** > **env**), and then select the *conda.yml* file. In this path, *testUser* is the user directory. + 1. Select **Download**. - and select the **conda.yml** file. In this navigation, **testUser** is the user directory. 
Select **Download**, as shown in this screenshot: + :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot that shows selection of the Conda YAML file in Azure Machine Learning studio."::: - :::image type="content" source="media/tutorial-get-started-with-feature-store/download-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/download-conda-file.png" alt-text="Screenshot showing selection of the conda.yml file in Azure Machine Learning studio UI."::: +1. In the Azure Machine Learning environment, open the notebook, and then select **Configure session**. -1. At the Azure Machine Learning environment, open the notebook, and select **Configure Session**, as shown in this screenshot: + :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot that shows selections for configuring a session for a notebook."::: - :::image type="content" source="media/tutorial-get-started-with-feature-store/open-configure-session.png" lightbox="media/tutorial-get-started-with-feature-store/open-configure-session.png" alt-text="Screenshot showing Open Configure Session for this notebook."::: +1. On the **Configure session** panel, select **Python packages**. -1. At the **Configure Session** panel, select **Python packages**. To upload the Conda file, select **Upload Conda file**, and **Browse** to the directory that hosts the Conda file. Select **conda.yml**, and then select **Open**, as shown in this screenshot: +1. Upload the Conda file: + 1. On the **Python packages** tab, select **Upload Conda file**. + 1. Browse to the directory that hosts the Conda file. + 1. Select **conda.yml**, and then select **Open**. 
- :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot showing the directory hosting the Conda file."::: + :::image type="content" source="media/tutorial-get-started-with-feature-store/open-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/open-conda-file.png" alt-text="Screenshot that shows the directory that hosts the Conda file."::: -1. Select **Apply**, as shown in this screenshot: +1. Select **Apply**. - :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot showing the Conda file upload."::: + :::image type="content" source="media/tutorial-get-started-with-feature-store/upload-conda-file.png" lightbox="media/tutorial-get-started-with-feature-store/upload-conda-file.png" alt-text="Screenshot that shows the Conda file upload."::: ## Start the Spark session Before you proceed with this article, make sure you cover these prerequisites: [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=root-dir)] -### [SDK Track](#tab/SDK-track) +### [SDK track](#tab/SDK-track) -Not applicable +Not applicable. -### [SDK and CLI Track](#tab/SDK-and-CLI-track) +### [SDK and CLI track](#tab/SDK-and-CLI-track) ### Set up the CLI -1. Install the Azure Machine Learning extension +1. Install the Azure Machine Learning extension. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)] -1. Authentication +1. Authenticate. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. 
Develop a feature set and register with managed feature store.ipynb?name=auth-cli)] -1. Set the default subscription +1. Set the default subscription. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)] > [!NOTE]-> Feature store Vs Project workspace: You'll use a feature store to reuse features across projects. You'll use a project workspace (an Azure Machine Learning workspace) to train and inference models, by leveraging features from feature stores. Many project workspaces can share and reuse the same feature store. +> You use a feature store to reuse features across projects. You use a project workspace (an Azure Machine Learning workspace) to train models and run inference, by taking advantage of features from feature stores. Many project workspaces can share and reuse the same feature store. -### [SDK Track](#tab/SDK-track) +### [SDK track](#tab/SDK-track) This tutorial uses two SDKs:-* The Feature Store CRUD SDK -* You use the same MLClient (package name azure-ai-ml) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace. As a result, this SDK is used for feature store CRUD operations for feature store, feature set, and feature store entity. -* The feature store core SDK - - This SDK (azureml-featurestore) is intended for feature set development and consumption. Later steps in this tutorial describe these operations: - - * Feature set specification development - * Feature data retrieval - * List and Get registered feature sets - * Generate and resolve feature retrieval specs - * Generate training and inference data using point-in-time joins +* *Feature store CRUD SDK* ++ You use the same `MLClient` (package name `azure-ai-ml`) SDK that you use with the Azure Machine Learning workspace. A feature store is implemented as a type of workspace.
As a result, this SDK is used for CRUD operations for feature stores, feature sets, and feature store entities. ++* *Feature store core SDK* ++ This SDK (`azureml-featurestore`) is for feature set development and consumption. Later steps in this tutorial describe these operations: ++ * Develop a feature set specification. + * Retrieve feature data. + * List or get a registered feature set. + * Generate and resolve feature retrieval specifications. + * Generate training and inference data by using point-in-time joins. ++This tutorial doesn't require explicit installation of those SDKs, because the earlier Conda YAML instructions cover this step. -This tutorial doesn't require explicit installation of those SDKs, because the earlier **conda YAML** instructions cover this step. +### [SDK and CLI track](#tab/SDK-and-CLI-track) -### [SDK and CLI Track](#tab/SDK-and-CLI-track) +This tutorial uses both the feature store core SDK and the CLI for CRUD operations. It uses the Python SDK only for feature set development and testing. This approach is useful for GitOps or CI/CD scenarios, where CLI/YAML is preferred. -This tutorial uses both the Feature store core SDK, and the CLI, for CRUD operations. It only uses the Python SDK for Feature set development and testing. This approach is useful for GitOps or CI / CD scenarios, where CLI / yaml is preferred. +Here are general guidelines: -* Use the CLI for CRUD operations on feature store, feature set, and feature store entities -* Feature store core SDK: This SDK (`azureml-featurestore`) is meant for feature set development and consumption. This tutorial covers these operations: +* Use the CLI for CRUD operations on feature stores, feature sets, and feature store entities. +* The feature store core SDK (`azureml-featurestore`) is for feature set development and consumption. 
This tutorial covers these operations: - * List / Get a registered feature set - * Generate / resolve a feature retrieval spec - * Execute a feature set definition, to generate a Spark dataframe - * Generate training with a point-in-time join + * List or get a registered feature set + * Generate or resolve a feature retrieval specification + * Execute a feature set definition, to generate a Spark DataFrame + * Generate training data by using point-in-time joins -This tutorial doesn't need explicit installation of these resources, because the instructions cover these steps. The **conda.yaml** file includes them in an earlier step. +This tutorial doesn't need explicit installation of these resources, because the instructions cover these steps. The *conda.yml* file includes them in an earlier step. ## Create a minimal feature store -1. Set feature store parameters -- Set the name, location, and other values for the feature store +1. Set feature store parameters, including name, location, and other values. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=fs-params)] -1. Create the feature store +1. Create the feature store. - ### [SDK Track](#tab/SDK-track) + ### [SDK track](#tab/SDK-track) [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs)] - ### [SDK and CLI Track](#tab/SDK-and-CLI-track) + ### [SDK and CLI track](#tab/SDK-and-CLI-track) [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)] -1. Initialize an Azure Machine Learning feature store core SDK client +1. Initialize a feature store core SDK client for Azure Machine Learning.
- As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features + As explained earlier in this tutorial, the feature store core SDK client is used to develop and consume features. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fs-core-sdk)] ## Prototype and develop a feature set -We'll build a feature set named `transactions` that has rolling, window aggregate-based features +In the following steps, you build a feature set named `transactions` that has rolling window aggregate-based features: -1. Explore the transactions source data +1. Explore the `transactions` source data. - > [!NOTE] - > This notebook uses sample data hosted in a publicly-accessible blob container. It can only be read into Spark with a `wasbs` driver. When you create feature sets using your own source data, host them in an adls gen2 account, and use an `abfss` driver in the data path. + This notebook uses sample data hosted in a publicly accessible blob container. It can be read into Spark only through a `wasbs` driver. When you create feature sets by using your own source data, host them in an Azure Data Lake Storage Gen2 account, and use an `abfss` driver in the data path. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=explore-txn-src-data)] -1. Locally develop the feature set -- A feature set specification is a self-contained feature set definition that you can locally develop and test. Here, we create these rolling window aggregate features: +1. Locally develop the feature set.
- * transactions three-day count - * transactions amount three-day sum - * transactions amount three-day avg - * transactions seven-day count - * transactions amount seven-day sum - * transactions amount seven-day avg + A feature set specification is a self-contained definition of a feature set that you can locally develop and test. Here, you create these rolling window aggregate features: - **Action:** + * `transactions three-day count` + * `transactions amount three-day sum` + * `transactions amount three-day avg` + * `transactions seven-day count` + * `transactions amount seven-day sum` + * `transactions amount seven-day avg` - - Review the feature transformation code file: `featurestore/featuresets/transactions/transformation_code/transaction_transform.py`. Note the rolling aggregation defined for the features. This is a spark transformer. + Review the feature transformation code file: *featurestore/featuresets/transactions/transformation_code/transaction_transform.py*. Note the rolling aggregation defined for the features. This is a Spark transformer. - See [feature store concepts](./concept-what-is-managed-feature-store.md) and **transformation concepts** to learn more about the feature set and transformations. + To learn more about the feature set and transformations, see [What is managed feature store?](./concept-what-is-managed-feature-store.md). [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=develop-txn-fset-locally)] -1. Export as a feature set spec +1. Export as a feature set specification. ++ To register the feature set specification with the feature store, you must save that specification in a specific format. - To register the feature set spec with the feature store, you must save that spec in a specific format. + Review the generated `transactions` feature set specification. 
Open this file from the file tree to see the specification: *featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml*. - **Action:** Review the generated `transactions` feature set spec: Open this file from the file tree to see the spec: `featurestore/featuresets/accounts/spec/FeaturesetSpec.yaml` + The specification contains these elements: - The spec contains these elements: - - 1. `source`: a reference to a storage resource. In this case, it's a parquet file in a blob storage resource. - 1. `features`: a list of features and their datatypes. If you provide transformation code (see the Day 2 section), the code must return a dataframe that maps to the features and datatypes. - 1. `index_columns`: the join keys required to access values from the feature set + * `source`: A reference to a storage resource. In this case, it's a Parquet file in a blob storage resource. + * `features`: A list of features and their datatypes. If you provide transformation code (see the "Day 2" section), the code must return a DataFrame that maps to the features and datatypes. + * `index_columns`: The join keys required to access values from the feature set. - To learn more about the spec, see [top level feature store entities document](./concept-top-level-entities-in-managed-feature-store.md) and the [feature set spec yaml reference](./reference-yaml-feature-set.md). + To learn more about the specification, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) and [CLI (v2) feature set YAML schema](./reference-yaml-feature-set.md). - Persisting the feature set spec offers another benefit: the feature set spec can be source controlled. + Persisting the feature set specification offers another benefit: the feature set specification can be source controlled. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. 
Develop a feature set and register with managed feature store.ipynb?name=dump-transactions-fs-spec)] -## Register a feature-store entity +## Register a feature store entity ++As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities include accounts and customers. Entities are typically created once and then reused across feature sets. To learn more, see [Understanding top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). ++### [SDK track](#tab/SDK-track) -As a best practice, entities help enforce use of the same join key definition across feature sets that use the same logical entities. Examples of entities can include accounts, customers, etc. Entities are typically created once, and then reused across feature sets. To learn more, see [feature store concepts](./concept-top-level-entities-in-managed-feature-store.md). +1. Initialize the feature store CRUD client. - ### [SDK Track](#tab/SDK-track) + As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. - 1. Initialize the Feature Store CRUD client + In this code sample, the client is scoped at feature store level. - As explained earlier in this tutorial, the MLClient is used for feature store asset CRUD (create, update, and delete). The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. 
Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at feature store level. + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)] - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=init-fset-crud-client)] +1. Register the `account` entity with the feature store. - 1. Register the `account` entity with the feature store + Create an `account` entity that has the join key `accountID` of type `string`. - Create an account entity that has the join key `accountID`, of type string. + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)] - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity)] +### [SDK and CLI track](#tab/SDK-and-CLI-track) - ### [SDK and CLI Track](#tab/SDK-and-CLI-track) +1. Initialize the feature store CRUD client. - 1. Initialize the Feature Store CRUD client + As explained earlier in this tutorial, `MLClient` is used for creating, reading, updating, and deleting a feature store asset. The notebook code cell sample shown here searches for the feature store that you created in an earlier step. Here, you can't reuse the same `ml_client` value that you used earlier in this tutorial, because it's scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. - As explained earlier in this tutorial, MLClient is used for feature store asset CRUD (create, update, and delete). 
The notebook code cell sample shown here searches for the feature store we created in an earlier step. Here, we can't reuse the same ml_client we used earlier in this tutorial, because the earlier ml_client is scoped at the resource group level. Proper scoping is a prerequisite for feature store creation. In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID`, of type string. + In this code sample, the client is scoped at the feature store level, and it registers the `account` entity with the feature store. Additionally, it creates an account entity that has the join key `accountID` of type `string`. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)] + [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)] - + ## Register the transaction feature set with the feature store -First, register a feature set asset with the feature store. You can then reuse that asset, and easily share it. Feature set asset registration offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities. +Use the following code to register a feature set asset with the feature store. You can then reuse that asset and easily share it. Registration of a feature set asset offers managed capabilities, including versioning and materialization. Later steps in this tutorial series cover managed capabilities. - ### [SDK Track](#tab/SDK-track) +### [SDK track](#tab/SDK-track) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. 
Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset)] - ### [SDK and CLI Track](#tab/SDK-and-CLI-track) +### [SDK and CLI track](#tab/SDK-and-CLI-track) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)] +[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)] - + ## Explore the feature store UI -* Open the [Azure Machine Learning global landing page](https://ml.azure.com/home). -* Select `Feature stores` in the left nav -* From this list of accessible feature stores, select the feature store you created earlier in this tutorial. +Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse through the feature store: -> [!NOTE] -> Feature store asset creation and updates can happen only through the SDK and CLI. You can use the UI to search or browse the feature store. +1. Open the [Azure Machine Learning global landing page](https://ml.azure.com/home). +1. Select **Feature stores** on the left pane. +1. From the list of accessible feature stores, select the feature store that you created earlier in this tutorial. -## Generate a training data dataframe using the registered feature set +## Generate a training data DataFrame by using the registered feature set -1. Load observation data +1. Load observation data. - Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource. 
Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Since we use it for training, it also has an appended target variable (**is_fraud**). + Observation data typically involves the core data used for training and inferencing. This data joins with the feature data to create the full training data resource. ++ Observation data is data captured during the event itself. Here, it has core transaction data, including transaction ID, account ID, and transaction amount values. Because you use it for training, it also has an appended target variable (**is_fraud**). [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=load-obs-data)] -1. Get the registered feature set, and list its features +1. Get the registered feature set, and list its features. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=get-txn-fset)] [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=print-txn-fset-sample-values)] -1. Select features, and generate training data -- Here, we select the features that become part of the training data. Then, we use the feature store SDK to generate the training data itself. +1. Select the features that become part of the training data. Then, use the feature store SDK to generate the training data itself. [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_only/1. Develop a feature set and register with managed feature store.ipynb?name=select-features-and-gen-training-data)] A point-in-time join appends the features to the training data. 
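The point-in-time join mentioned above can be made concrete with a small sketch. This is a pure-Python toy illustration of the semantics only, not the feature store SDK's implementation, and the `point_in_time_join` helper below is hypothetical: for each observation event, the join picks the latest feature value whose timestamp is at or before the event time, so no future information leaks into the training data.

```python
from datetime import datetime, timedelta

# Toy illustration of point-in-time join semantics (not the feature store SDK):
# each observation event sees only feature values recorded at or before it.

def point_in_time_join(observations, features):
    """observations: list of (account_id, event_time, label).
    features: list of (account_id, feature_time, value)."""
    joined = []
    for account_id, event_time, label in observations:
        # Candidate feature rows for this account, at or before the event time.
        candidates = [
            (ts, value)
            for acc, ts, value in features
            if acc == account_id and ts <= event_time
        ]
        # Take the most recent candidate, or None if no history exists yet.
        latest = max(candidates)[1] if candidates else None
        joined.append((account_id, event_time, latest, label))
    return joined

t0 = datetime(2023, 1, 1)
features = [
    ("A1", t0, 10.0),
    ("A1", t0 + timedelta(days=3), 25.0),  # future relative to the event below
]
observations = [("A1", t0 + timedelta(days=1), 0)]
print(point_in_time_join(observations, features))
# The event at day 1 sees only the day-0 feature value (10.0), never the day-3 one.
```

This mirrors how the `transactions` features are appended to observation events without leaking values computed after each event.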
-This tutorial built the training data with features from feature store. Optional: you can save the training data to storage for later use, or you can run model training on it directly. +This tutorial built the training data with features from the feature store. Optionally, you can save the training data to storage for later use, or you can run model training on it directly. -## Cleanup +## Clean up -The Tutorial #4 [clean up step](./tutorial-enable-recurrent-materialization-run-batch-inference.md#cleanup) describes how to delete the resources +The [fourth tutorial in the series](./tutorial-enable-recurrent-materialization-run-batch-inference.md#clean-up) describes how to delete the resources. ## Next steps -* [Part 2: enable materialization and back fill feature data](./tutorial-enable-materialization-backfill-data.md) -* Understand concepts: [feature store concepts](./concept-what-is-managed-feature-store.md), [top level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md) -* [Understand identity and access control for feature store](./how-to-setup-access-control-feature-store.md) -* [View feature store troubleshooting guide](./troubleshooting-managed-feature-store.md) -* Reference: [YAML reference](./reference-yaml-overview.md) +* Go to the next tutorial in the series: [Enable materialization and backfill feature data](./tutorial-enable-materialization-backfill-data.md). +* Learn about [feature store concepts](./concept-what-is-managed-feature-store.md) and [top-level entities in managed feature store](./concept-top-level-entities-in-managed-feature-store.md). +* Learn about [identity and access control for managed feature store](./how-to-setup-access-control-feature-store.md). +* View the [troubleshooting guide for managed feature store](./troubleshooting-managed-feature-store.md). +* View the [YAML reference](./reference-yaml-overview.md). |
machine-learning | Concept Automated Ml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-automated-ml.md | Classification is a common machine learning task. Classification is a type of su The main goal of classification models is to predict which categories new data will fall into based on learnings from its training data. Common classification examples include fraud detection, handwriting recognition, and object detection. Learn more and see an example at [Create a classification model with automated ML (v1)](../tutorial-first-experiment-automated-ml.md). -See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn) +See examples of classification and automated machine learning in these Python notebooks: [Fraud Detection](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb), [Marketing Prediction](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb), and [Newsgroup Data Classification](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn) ### Regression Similar to classification, 
regression tasks are also a common supervised learnin Different from classification where predicted output values are categorical, regression models predict numerical output values based on independent predictors. In regression, the objective is to help establish the relationship among those independent predictor variables by estimating how one variable impacts the others. For example, predicting automobile price based on features like gas mileage, safety rating, etc. Learn more and see an example of [regression with automated machine learning (v1)](how-to-auto-train-models-v1.md). -See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization), +See examples of regression and automated machine learning for predictions in these Python notebooks: [CPU Performance Prediction](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/regression-explanation-featurization), ### Time-series forecasting Advanced forecasting configuration includes: * rolling window aggregate features -See examples of regression and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb). 
+See examples of forecasting and automated machine learning for predictions in these Python notebooks: [Sales Forecasting](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb), [Demand Forecasting](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb), and [Forecasting GitHub's Daily Active Users](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb). ### Computer vision See the [how-to (v1)](how-to-configure-auto-train.md#ensemble-configuration) for With Azure Machine Learning, you can use automated ML to build a Python model and have it converted to the ONNX format. Once the models are in the ONNX format, they can be run on a variety of platforms and devices. Learn more about [accelerating ML models with ONNX](../concept-onnx.md). -See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms). +See how to convert to ONNX format [in this Jupyter notebook example](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-bank-marketing-all-features). Learn which [algorithms are supported in ONNX (v1)](../how-to-configure-auto-train.md#supported-algorithms). The ONNX runtime also supports C#, so you can use the model built automatically in your C# apps without any need for recoding or any of the network latencies that REST endpoints introduce. 
Learn more about [using an AutoML ONNX model in a .NET application with ML.NET](../how-to-use-automl-onnx-model-dotnet.md) and [inferencing ONNX models with the ONNX runtime C# API](https://onnxruntime.ai/docs/api/csharp-api.html). How-to articles provide additional detail into what functionality automated ML o ### Jupyter notebook samples -Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). +Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). ### Python SDK reference |
machine-learning | How To Auto Train Forecast | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-forecast.md | The table shows resulting feature engineering that occurs when window aggregatio ![target rolling window](../media/how-to-auto-train-forecast/target-roll.svg) -View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb). +View a Python code example applying the [target rolling window aggregate feature](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb). ### Short series handling mse = mean_squared_error( rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name]) ``` -In this sample, the step size for the rolling forecast is set to one which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). 
+In this sample, the step size for the rolling forecast is set to one which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). ### Prediction into the future -The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). 
+The [forecast_quantiles()](/python/api/azureml-train-automl-client/azureml.train.automl.model_proxy.modelproxy#forecast-quantiles-x-values--typing-any--y-values--typing-union-typing-any--nonetype-none--forecast-destination--typing-union-typing-any--nonetype-none--ignore-data-errors--boolfalse--azureml-data-abstract-dataset-abstractdataset) function allows specifications of when predictions should start, unlike the `predict()` method, which is typically used for classification and regression tasks. The forecast_quantiles() method by default generates a point forecast or a mean/median forecast which doesn't have a cone of uncertainty around it. Learn more in the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb). In the following example, you first replace all values in `y_pred` with `NaN`. The forecast origin is at the end of training data in this case. However, if you replaced only the second half of `y_pred` with `NaN`, the function would leave the numerical values in the first half unmodified, but forecast the `NaN` values in the second half. The function returns both the forecasted values and the aligned features. fitted_model.forecast_quantiles( test_dataset, label_query, forecast_destination=pd.Timestamp(2019, 1, 8)) ``` -You can calculate model metrics like, root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the models performance. See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example. +You can calculate model metrics like root mean squared error (RMSE) or mean absolute percentage error (MAPE) to help you estimate the model's performance. 
See the Evaluate section of the [Bike share demand notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) for an example. After the overall model accuracy has been determined, the most realistic next step is to use the model to forecast unknown future values. The following diagram shows the workflow for the many models solution. ![Many models concept diagram](../media/how-to-auto-train-forecast/many-models.svg) -The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example +The following code demonstrates the key parameters users need to set up their many models run. See the [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) for a many models forecasting example. ```python from azureml.train.automl.runtime._many_models.many_models_parameters import ManyModelsTrainParameters To further visualize this, the leaf levels of the hierarchy contain all the time The hierarchical time series solution is built on top of the Many Models Solution and shares a similar configuration setup. -The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb), for an end to end example. 
+The following code demonstrates the key parameters to set up your hierarchical time series forecasting runs. See the [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb) for an end-to-end example. ```python hts_parameters = HTSTrainParameters( ## Example notebooks -See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including: +See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml) for detailed code examples of advanced forecasting configuration including: -* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) -* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) -* [configurable lags](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) -* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) +* [holiday detection and featurization](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) +* [rolling-origin cross 
validation](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) +* [configurable lags](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) +* [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) ## Next steps |
machine-learning | How To Auto Train Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models.md | Automated ML supports model training for computer vision tasks like image classi To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. For more information, see [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md). - * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. + * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. > [!NOTE] > Only Python 3.7 and 3.8 are compatible with automated ML support for computer vision tasks. For a detailed description on task specific hyperparameters, please refer to [Hy If you want to use tiling, and want to control tiling behavior, the following parameters are available: `tile_grid_size`, `tile_overlap_ratio` and `tile_predictions_nms_thresh`. For more details on these parameters please check [Train a small object detection model using AutoML](../how-to-use-automl-small-object-detect.md). ## Example notebooks-Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models. 
+Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with the 'image-' prefix for samples specific to building computer vision models. ## Next steps |
machine-learning | How To Auto Train Models V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-models-v1.md | If you don't have an Azure subscription, create a free account before you begin. This article is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages, -* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment). +* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment). * Run `pip install azureml-opendatasets azureml-widgets` to get the required packages. ## Download and prepare data |
machine-learning | How To Auto Train Nlp Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models.md | You can seamlessly integrate with the [Azure Machine Learning data labeling](../ To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is pre-configured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md) for more information. - * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. + * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. [!INCLUDE [automl-sdk-version](../includes/machine-learning-automl-sdk-version.md)] Doing so, schedules distributed training of the NLP models and automatically sca ## Example notebooks See the sample notebooks for detailed code examples for each NLP task. 
-* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb) +* [Multi-class text classification](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multiclass/automl-nlp-text-classification-multiclass.ipynb) * [Multi-label text classification](-https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb) -* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb) +https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-multilabel/automl-nlp-text-classification-multilabel.ipynb) +* [Named entity recognition](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/automl-nlp-ner/automl-nlp-ner.ipynb) ## Next steps + Learn more about [how and where to deploy a model](../how-to-deploy-online-endpoints.md). |
machine-learning | How To Configure Auto Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-features.md | In order to invoke BERT, set `enable_dnn: True` in your automl_settings and use Automated ML takes the following steps for BERT. -1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb). +1. **Preprocessing and tokenization of all text columns**. For example, the "StringCast" transformer can be found in the final model's featurization summary. An example of how to produce the model's featurization summary can be found in [this notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/classification-text-dnn/auto-ml-classification-text-dnn.ipynb). 2. **Concatenate all text columns into a single text column**, hence the `StringConcatTransformer` in the final model. |
machine-learning | How To Configure Auto Train | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train.md | For this article you need, To install the SDK you can either, * Create a compute instance, which automatically installs the SDK and is preconfigured for ML workflows. See [Create and manage an Azure Machine Learning compute instance](../how-to-create-compute-instance.md) for more information. - * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. + * [Install the `automl` package yourself](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment), which includes the [default installation](/python/api/overview/azure/ml/install#default-install) of the SDK. [!INCLUDE [automl-sdk-version](../includes/machine-learning-automl-sdk-version.md)] Use data streaming algorithms <br> [(studio UI experiments)](../h Next determine where the model will be trained. An automated ML training experiment can run on the following compute options. - * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time, the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example. 
+ * **Choose a local compute**: If your scenario is about initial explorations or demos using small data and short trains (i.e. seconds or a couple of minutes per child run), training on your local computer might be a better choice. There is no setup time; the infrastructure resources (your PC or VM) are directly available. See [this notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb) for a local compute example. * **Choose a remote ML compute cluster**: If you are training with larger datasets like in production training creating models which need longer trains, remote compute will provide much better end-to-end time performance because `AutoML` will parallelize trains across the cluster's nodes. On a remote compute, the start-up time for the internal infrastructure will add around 1.5 minutes per child run, plus additional minutes for the cluster infrastructure if the VMs are not yet up and running. [Azure Machine Learning Managed Compute](../concept-compute-target.md#azure-machine-learning-compute-managed) is a managed service that enables you to train machine learning models on clusters of Azure virtual machines. Compute instance is also supported as a compute target. - * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). See this [GitHub site](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks. + * An **Azure Databricks cluster** in your Azure subscription. You can find more details in [Set up an Azure Databricks cluster for automated ML](how-to-configure-databricks-automl-environment.md). 
See this [GitHub site](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-databricks) for examples of notebooks with Azure Databricks. Consider these factors when choosing your compute target: |
machine-learning | How To Configure Databricks Automl Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-databricks-automl-environment.md | In AutoML config, when using Azure Databricks add the following parameters: ## ML notebooks that work with Azure Databricks Try it out:-+ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.** ++ While many sample notebooks are available, **only [these sample notebooks](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-databricks) work with Azure Databricks.** + Import these samples directly from your workspace. See below: ![Select Import](../media/how-to-configure-environment/azure-db-screenshot.png) |
machine-learning | How To Deploy And Where | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-and-where.md | For more information on `az ml model register`, see the [reference documentation You can register a model by providing the local path of the model. You can provide the path of either a folder or a single file on your local machine. <!-- python nb call -->-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=register-model-from-local-file-code)] To include multiple files in the model registration, set `model_path` to the path of a folder that contains the files. The two things you need to accomplish in your entry script are: For your initial deployment, use a dummy entry script that prints the data it receives. Save this file as `echo_score.py` inside of a directory called `source_dir`. This dummy script returns the data you send to it, so it doesn't use the model. But it is useful for testing that the scoring script is running. You can use any [Azure Machine Learning inference curated environments](../conce A minimal inference configuration can be written as: Save this file with the name `dummyinferenceconfig.json`. The following example demonstrates how to create a minimal environment with no pip dependencies, using the dummy scoring script you defined above. 
-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=inference-configuration-code)] For more information on environments, see [Create and manage environments for training and deployment](../how-to-use-environments.md). For more information, see the [deployment schema](reference-azure-machine-learni The following Python demonstrates how to create a local deployment configuration: -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deployment-configuration-code)] az ml model deploy -n myservice \ # [Python SDK](#tab/python) -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-code)] -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-print-logs)] For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice). 
curl -v -X POST -H "content-type:application/json" \ # [Python SDK](#tab/python) <!-- python nb call -->-[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-into-model-code)] curl -v -X POST -H "content-type:application/json" \ Now it's time to actually load your model. First, modify your entry script: Save this file as `score.py` inside of `source_dir`. Notice the use of the `AZUREML_MODEL_DIR` environment variable to locate your re [!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)] Save this file as `inferenceconfig.json` az ml model deploy -n myservice \ # [Python SDK](#tab/python) -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-code)] -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-model-print-logs)] For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice). 
curl -v -X POST -H "content-type:application/json" \ # [Python SDK](#tab/python) -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=send-post-request-code)] Change your deploy configuration to correspond to the compute target you've chos The options available for a deployment configuration differ depending on the compute target you choose. Save this file as `re-deploymentconfig.json`. For more information, see [this reference](reference-azure-machine-learning-cli. # [Python SDK](#tab/python) -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=deploy-model-on-cloud-code)] az ml service get-logs -n myservice \ # [Python SDK](#tab/python) -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-code)] -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=re-deploy-service-print-logs)] For more information, see the documentation for [Model.deploy()](/python/api/azureml-core/azureml.core.model.model#deploy-workspace--name--models--inference-config-none--deployment-config-none--deployment-target-none--overwrite-false-) and [Webservice](/python/api/azureml-core/azureml.core.webservice.webservice). 
For more information, see the documentation for [Model.deploy()](/python/api/azu When you deploy remotely, you may have key authentication enabled. The example below shows how to get your service key with Python in order to make an inference request. -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-web-service-code)] -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=call-remote-webservice-print-logs)] The following table describes the different service states: [!INCLUDE [cli v1](../includes/machine-learning-cli-v1.md)] -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/2.deploy-local-cli.ipynb?name=delete-resource-code)] ```azurecli-interactive az ml service delete -n myservice Read more about [deleting a webservice](/cli/azure/ml(v1)/computetarget/create#a # [Python SDK](#tab/python) -[!Notebook-python[] (~/azureml-examples-main/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)] +[!Notebook-python[] (~/azureml-examples-archive/v1/python-sdk/tutorials/deploy-local/1.deploy-local.ipynb?name=delete-resource-code)] To delete a deployed web service, use `service.delete()`. To delete a registered model, use `model.delete()`. |
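The key-authenticated request described in this row can be sketched with the standard library alone. The `scoring_uri` and `primary_key` values below are placeholders (in the v1 SDK they would come from `service.scoring_uri` and `service.get_keys()` on the deployed `Webservice` object), so only the request construction is shown:

```python
# Hedged sketch of a key-authenticated scoring request (standard library only).
# scoring_uri and primary_key are placeholders: in the v1 SDK they come from
# service.scoring_uri and service.get_keys() on the deployed Webservice object.
import json
import urllib.request

scoring_uri = "http://<your-service>.azurecontainer.io/score"  # placeholder
primary_key = "<primary-key>"                                  # placeholder

payload = json.dumps({"data": [[1.0, 2.0, 3.0, 4.0]]}).encode("utf-8")
request = urllib.request.Request(
    scoring_uri,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {primary_key}",
    },
)
# urllib.request.urlopen(request) would send it; omitted here because the URI
# is a placeholder.
print(request.get_header("Authorization"))
```

The input shape in `payload` depends entirely on your model; replace it with whatever your entry script's `run()` expects.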
machine-learning | How To Deploy Fpga Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-fpga-web-service.md | Before you can deploy to FPGAs, convert the model to the [ONNX](https://onnx.ai/ ### Containerize and deploy the model -Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image. +Next, create a Docker image from the converted model and all dependencies. This Docker image can then be deployed and instantiated. Supported deployment targets include Azure Kubernetes Service (AKS) in the cloud or an edge device such as [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md). You can also add tags and descriptions for your registered Docker image. ```python from azureml.core.image import Image Next, create a Docker image from the converted model and all dependencies. This #### Deploy to a local edge server -All [Azure Azure Stack Edge devices](../../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models). +All [Azure Stack Edge devices](../../databox-online/azure-stack-edge-overview.md) contain an FPGA for running the model. Only one model can be running on the FPGA at one time. To run a different model, just deploy a new container. Instructions and sample code can be found in [this Azure Sample](https://github.com/Azure-Samples/aml-hardware-accelerated-models). ### Consume the deployed model |
machine-learning | How To Inference Onnx Automl Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-inference-onnx-automl-image-models.md | arguments = ['--model_name', 'maskrcnn_resnet50_fpn', # enter the maskrcnn mode -Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory and submit the script. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory. +Download and keep the `ONNX_batch_model_generator_automl_for_images.py` file in the current directory and submit the script. Use [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) to submit the script `ONNX_batch_model_generator_automl_for_images.py` available in the [azureml-examples GitHub repository](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml), to generate an ONNX model of a specific batch size. In the following code, the trained model environment is used to submit this script to generate and save the ONNX model to the outputs directory. ```python script_run_config = ScriptRunConfig(source_directory='.', script='ONNX_batch_model_generator_automl_for_images.py', Every ONNX model has a predefined set of input and output formats. 
# [Multi-class image classification](#tab/multi-class) -This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass). +This example applies the model trained on the [fridgeObjects](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip) dataset with 134 images and 4 classes/labels to explain ONNX model inference. For more information on training an image classification task, see the [multi-class image classification notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass). ### Input format The output is an array of logits for all the classes/labels. # [Multi-label image classification](#tab/multi-label) -This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel). +This example uses the model trained on the [multi-label fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip) with 128 images and 4 classes/labels to explain ONNX model inference. 
For more information on model training for multi-label image classification, see the [multi-label image classification notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multilabel). ### Input format The output is an array of logits for all the classes/labels. # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn) -This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection). +This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains Faster R-CNN models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection). ### Input format The following table describes boxes, labels and scores returned for each sample # [Object detection with YOLO](#tab/object-detect-yolo) -This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. 
For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection). +This object detection example uses the model trained on the [fridgeObjects detection dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) of 128 images and 4 classes/labels to explain ONNX model inference. This example trains YOLO models to demonstrate inference steps. For more information on training object detection models, see the [object detection notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection). ### Input format Each cell in the list indicates box detections of a sample with shape `(n_boxes, # [Instance segmentation](#tab/instance-segmentation) -For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation). +For this instance segmentation example, you use the Mask R-CNN model that has been trained on the [fridgeObjects dataset](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip) with 128 images and 4 classes/labels to explain ONNX model inference. For more information on training of the instance segmentation model, see the [instance segmentation notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-instance-segmentation). 
>[!IMPORTANT] > Only Mask R-CNN is supported for instance segmentation tasks. The input and output formats are based on Mask R-CNN only. batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape batch, channel, height_onnx, width_onnx ``` -For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection). +For preprocessing required for YOLO, refer to [yolo_onnx_preprocessing_utils.py](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection). ```python import glob |
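Several of the tabs in this row note that the ONNX models return raw logits. A minimal, pure-Python sketch of the usual post-processing — softmax for multi-class (one label per image), per-label sigmoid for multi-label; the notebooks use equivalent NumPy/PyTorch calls:

```python
# Pure-Python post-processing for ONNX logits: softmax for multi-class,
# sigmoid per label for multi-label. Written without NumPy so the math is
# explicit; the notebooks use equivalent vectorized calls.
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(logits):
    return [1 / (1 + math.exp(-x)) for x in logits]

# 4 classes, matching the fridgeObjects datasets used above
probs = softmax([2.0, 1.0, 0.1, -1.0])
print([round(p, 3) for p in probs])
```

For multi-class, take `argmax` of the softmax output; for multi-label, threshold each sigmoid value (commonly at 0.5).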
machine-learning | How To Prepare Datasets For Automl Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-prepare-datasets-for-automl-images.md | If you already have a data labeling project and you want to use that data, you c ## Use conversion scripts -If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). +If you have labeled data in popular computer vision data formats, like VOC or COCO, [helper scripts](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/coco2jsonl.py) to generate JSONL files for training and validation data are available in [notebook examples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). If your data doesn't follow any of the previously mentioned formats, you can use your own script to generate JSON Lines files based on schemas defined in [Schema for JSONL files for AutoML image experiments](../reference-automl-images-schema.md). |
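If you write your own conversion script as this row suggests, the JSONL output itself is simple: one JSON object per line. A hedged sketch — the field names (`image_url`, `label`) loosely follow the multi-class image schema, but check the schema reference for the exact contract:

```python
# Hedged sketch: emit JSON Lines for image classification, one JSON object per
# line. Field names are illustrative; verify them against the JSONL schema doc.
import io
import json

records = [
    {"image_url": "AmlDatastore://data/train/img1.jpg", "label": "milk_bottle"},
    {"image_url": "AmlDatastore://data/train/img2.jpg", "label": "carton"},
]

buffer = io.StringIO()  # in practice: open("train_annotations.jsonl", "w")
for record in records:
    buffer.write(json.dumps(record) + "\n")

print(len(buffer.getvalue().splitlines()))  # one line per record
```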
machine-learning | How To Train Distributed Gpu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-distributed-gpu.md | Make sure your code follows these tips: ### Horovod example -* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod) +* [azureml-examples: TensorFlow distributed training using Horovod](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/tensorflow/mnist-distributed-horovod) ### DeepSpeed Make sure your code follows these tips: ### DeepSpeed example -* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/deepspeed/cifar) +* [azureml-examples: Distributed training with DeepSpeed on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/deepspeed/cifar) ### Environment variables from Open MPI run = Experiment(ws, 'experiment_name').submit(run_config) ### Pytorch per-process-launch example -- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/pytorch/cifar-distributed)+- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/pytorch/cifar-distributed) ### <a name="per-node-launch"></a> Using torch.distributed.launch (per-node-launch) run = Experiment(ws, 'experiment_name').submit(run_config) ### PyTorch per-node-launch example -- [azureml-examples: Distributed training with PyTorch on CIFAR-10](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/pytorch/cifar-distributed)+- [azureml-examples: Distributed training with PyTorch on 
CIFAR-10](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/pytorch/cifar-distributed) ### PyTorch Lightning TF_CONFIG='{ ### TensorFlow example -- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/workflows/train/tensorflow/mnist-distributed)+- [azureml-examples: Distributed TensorFlow training with MultiWorkerMirroredStrategy](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/workflows/train/tensorflow/mnist-distributed) ## <a name="infiniband"></a> Accelerating distributed GPU training with InfiniBand |
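In the per-process-launch pattern this row covers, each worker derives its identity from the Open MPI environment variables mentioned above. A minimal sketch — the `setdefault` values are demo placeholders for what `mpirun` would set on a real node:

```python
# Map the standard Open MPI environment variables onto the rank/world-size
# values a per-process launcher needs. The setdefault values are demo
# placeholders standing in for what mpirun sets on a real node.
import os

os.environ.setdefault("OMPI_COMM_WORLD_RANK", "0")        # global rank
os.environ.setdefault("OMPI_COMM_WORLD_SIZE", "4")        # total processes
os.environ.setdefault("OMPI_COMM_WORLD_LOCAL_RANK", "0")  # rank on this node

rank = int(os.environ["OMPI_COMM_WORLD_RANK"])
world_size = int(os.environ["OMPI_COMM_WORLD_SIZE"])
local_rank = int(os.environ["OMPI_COMM_WORLD_LOCAL_RANK"])

print(rank, world_size, local_rank)
```

A training script would typically pass these values to `torch.distributed.init_process_group` (or the framework's equivalent) rather than print them.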
machine-learning | How To Train Pytorch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-pytorch.md | ws = Workspace.from_config() ### Get the data -The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb). +The dataset consists of about 120 training images each for turkeys and chickens, with 100 validation images for each class. We'll download and extract the dataset as part of our training script `pytorch_train.py`. The images are a subset of the [Open Images v5 Dataset](https://storage.googleapis.com/openimages/web/https://docsupdatetracker.net/index.html). For more steps on creating a JSONL to train with your own data, see this [Jupyter notebook](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb). ### Prepare training script |
machine-learning | How To Train With Custom Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md | print(compute_target.get_status().serialize()) ## Configure your training job -For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning. +For this tutorial, use the training script *train.py* on [GitHub](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/workflows/train/fastai/pets/src/train.py). In practice, you can take any custom training script and run it, as is, with Azure Machine Learning. Create a `ScriptRunConfig` resource to configure your job for running on the desired [compute target](how-to-set-up-training-targets.md). |
machine-learning | How To Trigger Published Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-trigger-published-pipeline.md | published_pipeline = PublishedPipeline.get(ws, id="<pipeline-id-here>") published_pipeline.endpoint ``` -## Create a Logic App +## Create a logic app in Azure -Now create an [Azure Logic App](../../logic-apps/logic-apps-overview.md) instance. If you wish, [use an integration service environment (ISE)](../../logic-apps/connect-virtual-network-vnet-isolated-environment.md) and [set up a customer-managed key](../../logic-apps/customer-managed-keys-integration-service-environment.md) for use by your Logic App. --Once your Logic App has been provisioned, use these steps to configure a trigger for your pipeline: +Now create an [Azure Logic App](../../logic-apps/logic-apps-overview.md) instance. After your logic app is provisioned, use these steps to configure a trigger for your pipeline: 1. [Create a system-assigned managed identity](../../logic-apps/create-managed-service-identity.md) to give the app access to your Azure Machine Learning Workspace. |
machine-learning | How To Use Automl Small Object Detect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automl-small-object-detect.md | The following are the parameters you can use to control the tiling feature. ## Example notebooks -See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model. +See the [object detection sample notebook](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) for detailed code examples of setting up and training an object detection model. >[!NOTE] > All images in this article are made available in accordance with the permitted use section of the [MIT licensing agreement](https://choosealicense.com/licenses/mit/). |
machine-learning | Tutorial Auto Train Image Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-auto-train-image-models.md | You'll write code using the Python SDK in this tutorial and learn the following * Complete the [Quickstart: Get started with Azure Machine Learning](../quickstart-create-resources.md#create-the-workspace) if you don't already have an Azure Machine Learning workspace. -* Download and unzip the [**odFridgeObjects.zip*](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook. +* Download and unzip the [**odFridgeObjects.zip**](https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an XML file. Each XML file contains information on where its corresponding image file is located, and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection/auto-ml-image-object-detection.ipynb) section of the notebook. 
-This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages, +This tutorial is also available in the [azureml-examples repository on GitHub](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/image-object-detection) if you wish to run it in your own [local environment](how-to-configure-environment.md). To get the required packages, * Run `pip install azureml`-* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment) +* [Install the full `automl` client](https://github.com/Azure/azureml-examples/blob/v1-archive/v1/python-sdk/tutorials/automl-with-azureml/README.md#setup-using-a-local-conda-environment) ## Compute target setup In this automated machine learning tutorial, you did the following tasks: * [Learn how to set up AutoML to train computer vision models with Python](../how-to-auto-train-image-models.md). * [Learn how to configure incremental training on computer vision models](../how-to-auto-train-image-models.md#incremental-training-optional). * See [what hyperparameters are available for computer vision tasks](../reference-automl-images-hyperparameters.md).-* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with 'image-' prefix for samples specific to building computer vision models. 
+* Review detailed code examples and use cases in the [GitHub notebook repository for automated machine learning samples](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/automl-with-azureml). Please check the folders with the 'image-' prefix for samples specific to building computer vision models. > [!NOTE] > Use of the fridge objects dataset is available under the [MIT License](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE). |
machine-learning | Tutorial Pipeline Python Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md | The above code specifies a dataset that is based on the output of a pipeline ste The code that you've executed so far has created and controlled Azure resources. Now it's time to write code that does the first step in the domain. -If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/main/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`. +If you're following along with the example in the [Azure Machine Learning Examples repo](https://github.com/Azure/azureml-examples/tree/v1-archive/v1/python-sdk/tutorials/using-pipelines), the source file is already available as `keras-mnist-fashion/prepare.py`. If you're working from scratch, create a subdirectory called `keras-mnist-fashion/`. Create a new file, add the following code to it, and name the file `prepare.py`. Once the data has been converted from the compressed format to CSV files, it can With larger pipelines, it's a good practice to put each step's source code in a separate directory (`src/prepare/`, `src/train/`, and so on), but for this tutorial, just use or create the file `train.py` in the same `keras-mnist-fashion/` source directory. Most of this code should be familiar to ML developers: |
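The Fashion-MNIST files that `prepare.py` converts are stored in the idx format: a big-endian header (magic, count, rows, cols) followed by raw pixel bytes. A hedged sketch of that decode-to-CSV step — gzip decompression is omitted, and a tiny synthetic file stands in for the real dataset:

```python
# Hedged sketch of the kind of conversion prepare.py performs: decode an
# idx3-format image file (big-endian 16-byte header, then raw pixel bytes)
# into CSV rows, one image per line.
import io
import struct

def idx_images_to_csv(raw: bytes) -> str:
    magic, count, rows, cols = struct.unpack(">IIII", raw[:16])
    assert magic == 0x803, "not an idx3 image file"
    out = io.StringIO()
    pixels_per_image = rows * cols
    offset = 16
    for _ in range(count):
        image = raw[offset:offset + pixels_per_image]
        out.write(",".join(str(b) for b in image) + "\n")
        offset += pixels_per_image
    return out.getvalue()

# synthetic file: two 2x2 "images"
header = struct.pack(">IIII", 0x803, 2, 2, 2)
pixels = bytes([0, 1, 2, 3, 10, 11, 12, 13])
print(idx_images_to_csv(header + pixels))
```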
managed-grafana | How To Create Api Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-api-keys.md | -# Generate and manage Grafana API keys in Azure Managed Grafana +# Create and manage Grafana API keys in Azure Managed Grafana (Deprecated) -> [!NOTE] -> This document is deprecated as the API keys feature has been replaced by a new feature in Grafana 9.1. Go to [Service accounts](./how-to-service-accounts.md) to access the current recommended method to create and manage API keys. --> [!TIP] -> To switch to using service accounts, in Grafana instances created before the release of Grafana 9.1, go to **Configuration > API keys and select Migrate to service accounts now**. Select **Yes, migrate now**. Each existing API keys will be automatically migrated into a service account with a token. The service account will be created with the same permission as the API Key and current API keys will continue to work as before. +> [!IMPORTANT] +> This document is deprecated as the API keys feature has been replaced by [service accounts](./how-to-service-accounts.md) in Grafana 9.1. To switch to using service accounts, in Grafana instances created before the release of Grafana 9.1, go to **Configuration > API keys** and select **Migrate to service accounts now**. Select **Yes, migrate now**. Each existing API key will be automatically migrated into a service account with a token. The service account will be created with the same permissions as the API key, and current API keys will continue to work as before. In this guide, learn how to generate and manage API keys, and start making API calls to the Grafana server. Grafana API keys will enable you to create integrations between Azure Managed Grafana and other services. |
managed-grafana | How To Grafana Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-grafana-enterprise.md | Title: Subscribe to Grafana Enterprise -description: Activate Grafana Enterprise (preview) to access Grafana Enterprise plugins within Azure Managed Grafana +description: Activate Grafana Enterprise to access Grafana Enterprise plugins within Azure Managed Grafana -# Enable Grafana Enterprise (preview) +# Enable Grafana Enterprise -In this guide, learn how to activate the Grafana Enterprise (preview) add-on in Azure Managed Grafana, update your Grafana Enterprise plan, and access [Grafana Enterprise plugins](https://grafana.com/docs/plugins/). +In this guide, learn how to activate the Grafana Enterprise add-on in Azure Managed Grafana, update your Grafana Enterprise plan, and access [Grafana Enterprise plugins](https://grafana.com/docs/plugins/). The Grafana Enterprise plans offered through Azure Managed Grafana enable users to access Grafana Enterprise plugins to do more with Azure Managed Grafana. You can enable access to Grafana Enterprise plugins by selecting a Grafana Enter > [!NOTE] > The Grafana Enterprise monthly plan is a paid plan, owned and charged by Grafana Labs, through Azure Marketplace. Go to [Azure Managed Grafana pricing](https://azure.microsoft.com/pricing/details/managed-grafana/) for details. -> [!IMPORTANT] -> Grafana Enterprise is currently in preview within Azure Managed Grafana. -> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free). |
managed-grafana | Troubleshoot Managed Grafana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md | This issue can happen if: 1. Your account is a foreign account: the Grafana instance isn't registered in your home tenant. 1. If you recently addressed this problem and have been assigned a sufficient Grafana role, you may need to wait for some time before the cookie expires and gets refreshed. This process normally takes 5 minutes. If in doubt, delete all cookies or start a private browser session to force a fresh cookie with the new role information. +## Authorized users don't show up in Grafana Users configuration ++After you add a user to one of Managed Grafana's built-in RBAC roles, such as Grafana Viewer, you don't see that user listed on the Grafana **Configuration** page right away. This behavior is *by design*. Managed Grafana's RBAC roles are stored in Azure Active Directory (Azure AD). For performance reasons, Managed Grafana doesn't automatically synchronize users assigned to the built-in roles to every instance, and there's no notification for changes in RBAC assignments. Querying Azure AD periodically to get current assignments would add significant extra load to the service. ++There's no fix for this behavior in itself. After a user signs in to your Grafana instance, the user shows up in the **Users** tab under Grafana **Configuration**, along with the role that user has been assigned. + ## Azure Managed Grafana dashboard panel doesn't display any data One or several Managed Grafana dashboard panels show no data. |
managed-instance-apache-cassandra | Best Practice Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md | For more information refer to [Virtual Machine and disk performance](../virtual- ### Network performance -In most cases network performance is sufficient. However, if you are frequently streaming data (such as frequent horizontal scale-up/scale down) or there are huge ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mb/s, compare this to the byte-in/out in the metrics: +In most cases, network performance is sufficient. However, if you're frequently streaming data (such as during frequent horizontal scale-up/scale-down) or there are huge ingress/egress data movements, this can become a problem. You may need to evaluate the network performance of your SKU. For example, the `Standard_DS14_v2` SKU supports 12,000 Mb/s; compare this to the byte-in/out in the metrics: :::image type="content" source="./media/best-practice-performance/metrics-network.png" alt-text="Screenshot of network metrics." lightbox="./media/best-practice-performance/metrics-network.png" border="true"::: If you only see the network elevated for a small number of nodes, you might have ### Too many connected clients -Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this does not exceed tolerable limits. +Deployments should be planned and provisioned to support the maximum number of parallel requests required for the desired latency of an application. 
For a given deployment, introducing more load to the system above a minimum threshold increases overall latency. Monitor the number of connected clients to ensure this doesn't exceed tolerable limits. :::image type="content" source="./media/best-practice-performance/metrics-connections.png" alt-text="Screenshot of connected client metrics." lightbox="./media/best-practice-performance/metrics-connections.png" border="true"::: ### Disk space -In most cases, there is sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk and then reduces it when compaction is triggered. Hence it is important to review disk usage over longer periods to establish trends - like compaction unable to recoup space. +In most cases, there's sufficient disk space as default deployments are optimized for IOPS, which leads to low utilization of the disk. Nevertheless, we advise occasionally reviewing disk space metrics. Cassandra accumulates a lot of disk space and then reduces it when compaction is triggered. Hence it's important to review disk usage over longer periods to establish trends, such as compaction being unable to recoup space. > [!NOTE] > In order to ensure available space for compaction, disk utilization should be kept to around 50%. Our default formula assigns half the VM's memory to the JVM with an upper limit In most cases memory gets reclaimed effectively by the Java garbage collector, but especially if the CPU is often above 80% there aren't enough CPU cycles for the garbage collector left. So any CPU performance problems should be addressed before memory problems. -If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you are on a SKU with limited memory. 
In most cases, you will need to review your queries and client settings and reduce `fetch_size` along with what is chosen in `limit` within your CQL query. +If the CPU hovers below 70%, and the garbage collection isn't able to reclaim memory, you might need more JVM memory. This is especially the case if you're on a SKU with limited memory. In most cases, you'll need to review your queries and client settings and reduce `fetch_size` along with what is chosen in `limit` within your CQL query. If you indeed need more memory, you can: You might encounter this warning in the [CassandraLogs](monitor-clusters.md#crea `Writing large partition <table> (105.426MiB) to sstable <file>` -This indicates a problem in the data model. Here is a [stack overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed. +This indicates a problem in the data model. Here's a [stack overflow article](https://stackoverflow.com/questions/74024443/how-do-i-analyse-and-solve-writing-large-partition-warnings-in-cassandra) that goes into more detail. This can cause severe performance issues and needs to be addressed. ++## Specialized optimizations +### Compression +Cassandra allows the selection of an appropriate compression algorithm when a table is created (see [Compression](https://cassandra.apache.org/doc/latest/cassandra/operating/compression.html)). The default is LZ4, which is excellent +for throughput and CPU but consumes more space on disk. Using Zstd (Cassandra 4.0 and up) saves about 12% space with +minimal CPU overhead. ++### Optimizing memtable heap space +Our default is to use 1/4 of the JVM heap for [memtable_heap_space](https://cassandra.apache.org/doc/latest/cassandra/configuration/cass_yaml_file.html#memtable_heap_space) +in the cassandra.yaml. 
For write-oriented applications and/or on SKUs with small memory, +this can lead to frequent flushing and fragmented sstables, thus requiring more compaction. +In such cases, increasing it to at least 4048 might be beneficial but requires careful benchmarking +to make sure other operations (for example, reads) aren't affected. ## Next steps In this article, we laid out some best practices for optimal performance. You can now start working with the cluster: > [!div class="nextstepaction"]-> [Create a cluster using Azure Portal](create-cluster-portal.md) +> [Create a cluster using Azure Portal](create-cluster-portal.md) |
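To make the compression guidance above concrete, here's a minimal CQL sketch. The keyspace and table names are purely illustrative; `ZstdCompressor` and `chunk_length_in_kb` are the upstream Apache Cassandra option names, assumed available on Cassandra 4.0 and up:

```sql
-- Switch an existing table to Zstd compression (Cassandra 4.0+).
-- Trades a little CPU for roughly 12% less disk space than the LZ4 default.
ALTER TABLE ks.events
  WITH compression = {'class': 'ZstdCompressor'};

-- Or pick the algorithm up front when creating a table.
CREATE TABLE ks.events_by_day (
  day date,
  id timeuuid,
  payload text,
  PRIMARY KEY (day, id)
) WITH compression = {'class': 'ZstdCompressor', 'chunk_length_in_kb': 64};
```

Existing SSTables are rewritten with the new algorithm only as they're compacted; benchmark read latency after switching, since decompression cost varies by workload.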
managed-instance-apache-cassandra | Monitor Clusters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/monitor-clusters.md | Use the [Azure Monitor REST API](/rest/api/monitor/diagnosticsettings/createorup ## Audit whitelist > [!NOTE]-> This article contains references to the term *whitelist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. By default, audit logging creates a record for every login attempt and CQL query. The result can be rather overwhelming and increase overhead. You can use the audit whitelist feature in Cassandra 3.11 to set what operations *don't* create an audit record. The audit whitelist feature is enabled by default in Cassandra 3.11. To learn how to configure your whitelist, see [Role-based whitelist management](https://github.com/Ericsson/ecaudit/blob/release/c2.2/doc/role_whitelist_management.md). |
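The linked ecaudit guide manages the whitelist through CQL role options. A minimal sketch based on that documentation (the role name `svc_batch` and keyspace `myks` are hypothetical):

```sql
-- Exempt all data operations by this role on keyspace myks from audit records.
ALTER ROLE svc_batch WITH OPTIONS = { 'grant_audit_whitelist_for_all' : 'data/myks' };

-- Revoke the exemption again when full auditing should resume.
ALTER ROLE svc_batch WITH OPTIONS = { 'revoke_audit_whitelist_for_all' : 'data/myks' };
```

Whitelisting a noisy service account like this keeps audit volume manageable while still recording interactive logins and ad hoc queries.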
mariadb | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md | |
mariadb | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
migrate | Common Questions Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-appliance.md | The appliance can be deployed using a couple of methods: - The appliance can be deployed using a template for servers running in VMware or Hyper-V environment ([OVA template for VMware](how-to-set-up-appliance-vmware.md) or [VHD for Hyper-V](how-to-set-up-appliance-hyper-v.md)). - If you don't want to use a template, you can deploy the appliance for VMware or Hyper-V environment using a [PowerShell installer script](deploy-appliance-script.md). - In Azure Government, you should deploy the appliance using a PowerShell installer script. Refer to the steps of deployment [here](deploy-appliance-script-government.md).-- For physical or virtualized servers on-premises or any other cloud, you always deploy the appliance using a PowerShell installer script.Refer to the steps of deployment [here](how-to-set-up-appliance-physical.md).+- For physical or virtualized servers on-premises or any other cloud, you always deploy the appliance using a PowerShell installer script. Refer to the steps of deployment [here](how-to-set-up-appliance-physical.md). ## How does the appliance connect to Azure? |
migrate | Concepts Business Case Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md | Cost components for running on-premises servers. For TCO calculations, an annual | | | | | Compute | Hardware | Server Hardware (Host machines) | Total hardware acquisition cost is calculated using a cost per core linear regression formula: Cost per core = 16.232*(Hyperthreaded core: memory in GB ratio) + 113.87. Hyperthreaded cores = 2*(cores) | | Software - SQL Server licensing | License cost | Calculated per two core pack license pricing of 2019 Enterprise or Standard. |+| | SQL Server - Extended Security Update (ESU) | License cost | Calculated for 3 years after the end of support of the SQL Server license as follows:<br/><br/> ESU (Year 1) – 75% of the license cost <br/><br/> ESU (Year 2) – 100% of the license cost <br/><br/> ESU (Year 3) – 125% of the license cost <br/><br/> | | | | Software Assurance | Calculated per year as in settings. | | | Software - Windows Server licensing | License cost | Calculated per two core pack license pricing of Windows Server. |+| | Windows Server - Extended Security Update (ESU) | License cost | Calculated for 3 years after the end of support of the Windows Server license: <br/><br/> ESU (Year 1) – 75% of the license cost <br/><br/> ESU (Year 2) – 100% of the license cost <br/><br/> ESU (Year 3) – 125% of the license cost <br/><br/>| | | | Software Assurance | Calculated per year as in settings. |-| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost + support + management software cost) | License cost for vSphere Standard license + Production support for vSphere Standard license + Management software cost for VSphere Standard + production support cost of management software. 
_Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.| +| | Virtualization software for servers running in VMware environment | Virtualization Software (VMware license cost + support + management software cost) | License cost for vSphere Standard license + Production support for vSphere Standard license + Management software cost for vSphere Standard + production support cost of management software. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.| | | Virtualization software for servers running in Microsoft Hyper-V environment| Virtualization Software (management software cost + software assurance) | Management software cost for System Center + software assurance. _Not included- other hypervisor software cost_ or _Antivirus / Monitoring Agents_.| | Storage | Storage Hardware | | The total storage hardware acquisition cost is calculated by multiplying the total volume of storage attached by the per GB cost. Default is USD 2 per GB per month. | | | Storage Maintenance | | Default is 10% of storage hardware acquisition cost. | Cost components for running on-premises servers. For TCO calculations, an annual | **Operating Asset Expense (OPEX) (B)** | | | | | Network maintenance | Per year | | | | Storage maintenance | Per year | Power draw per Server, Average price per KW per month based on location. | |-| License Support | License support cost for virtualization + Windows Server + SQL Server + Linux OS | | VMware licenses aren't retained; Windows, SQL and Hyper-V management software licenses are retained based on AHUB option in Azure. | +| License Support | License support cost for virtualization + Windows Server + SQL Server + Linux OS + Windows Server extended security update (ESU) + SQL Server extended security update (ESU) | | VMware licenses aren't retained; Windows, SQL and Hyper-V management software licenses are retained based on AHUB option in Azure. 
| | Security | Per year | Per server annual security/protection cost. | | | Datacenter Admin cost | Number of people * hourly cost * 730 hours | Cost per hour based on location. | | |
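To illustrate how the formulas in the cost table above combine, here's a rough Python sketch. The function names are ours, not part of the product, and the assumption that ESU is computed from the annual license cost simply follows the year 1/2/3 percentages listed in the table:

```python
def hyperthreaded_cores(physical_cores: int) -> int:
    # Hyperthreaded cores = 2 * (cores)
    return 2 * physical_cores

def cost_per_core(physical_cores: int, memory_gb: float) -> float:
    # Cost per core = 16.232 * (hyperthreaded core : memory in GB ratio) + 113.87
    ratio = hyperthreaded_cores(physical_cores) / memory_gb
    return 16.232 * ratio + 113.87

def esu_costs(annual_license_cost: float) -> list[float]:
    # ESU is charged for 3 years after end of support:
    # 75% of license cost in year 1, 100% in year 2, 125% in year 3.
    return [0.75 * annual_license_cost,
            1.00 * annual_license_cost,
            1.25 * annual_license_cost]

# Example: a host with 8 physical cores and 64 GB RAM -> ratio 16/64 = 0.25.
print(round(cost_per_core(8, 64), 3))
print(esu_costs(1000.0))
```

Note the regression yields a *per-core* cost; total hardware acquisition cost would then scale it by the hyperthreaded core count.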
migrate | Concepts Dependency Visualization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-dependency-visualization.md | The differences between agentless visualization and agent-based visualization ar **Requirement** | **Agentless** | **Agent-based** | | -**Support** | Available for VMware VMs in general availability (GA). | In general availability (GA). +**Support** | Generally Available for VMware VMs, Hyper-V VMs, Physical servers, or servers running on other public clouds like AWS and GCP. | In general availability (GA). **Agent** | No agents needed on servers you want to analyze. | Agents required on each on-premises server that you want to analyze. **Log Analytics** | Not required. | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency analysis.<br/><br/> You associate a Log Analytics workspace with a project. The workspace must reside in the East US, Southeast Asia, or West Europe regions. The workspace must be in a region in which [Service Map is supported](../azure-monitor/vm/vminsights-configure-workspace.md#supported-regions). **Process** | Captures TCP connection data. After discovery, it gathers data at intervals of five minutes. | Service Map agents installed on a server gather data about TCP processes, and inbound/outbound connections for each process. |
migrate | Concepts Vmware Agentless Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-vmware-agentless-migration.md | Delta replication cycles are scheduled as follows: - First delta replication cycle is scheduled immediately after the initial replication cycle completes - Next delta replication cycles are scheduled according to the following logic: - min[max[(Previous delta replication cycle time/2), 1 hour], 12 hours] + min[max[1 hour, (Previous delta replication cycle time/2)], 12 hours] That is, the next delta replication cycle is scheduled no sooner than one hour and no later than 12 hours after the previous cycle completes. For example, if a VM takes four hours for a delta replication cycle, the next delta replication cycle is scheduled in two hours, and not in the next hour. |
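The min/max formula above is easy to misread, so here's a tiny Python sketch of the scheduling rule (the function name is ours, not part of the product):

```python
def next_delta_cycle_hours(previous_cycle_hours: float) -> float:
    # min[max[1 hour, (previous delta replication cycle time / 2)], 12 hours]
    return min(max(1.0, previous_cycle_hours / 2), 12.0)

# A 4-hour cycle schedules the next one in 2 hours, not in the next hour.
print(next_delta_cycle_hours(4))    # 2.0
# Fast cycles are clamped to at least 1 hour, slow ones to at most 12 hours.
print(next_delta_cycle_hours(0.5))  # 1.0
print(next_delta_cycle_hours(30))   # 12.0
```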
migrate | How To Automate Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-automate-migration.md | |
migrate | How To Create Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-assessment.md | Run an assessment as follows: :::image type="content" source="./media/tutorial-assess-vmware-azure-vm/assess-group.png" alt-text="Screenshot of adding VMs to a group."::: -1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**. +1. Select the appliance, and select the VMs you want to add to the group. Then select **Next**. We recommend that you prioritize migrations for servers in extended support/out of support. 1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment. An Azure VM assessment describes: :::image type="content" source="./media/how-to-create-assessment/assessment-summary.png" alt-text="Screenshot of an Assessment summary."::: +### Review support status ++The assessment summary displays the support status of the operating system licenses. ++1. Select the graph in the **Supportability** section to view a list of the assessed VMs. +2. The **Operating system license support status** column displays the support status of the operating system: whether it's in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that shows the type of support status, the duration of support, and the recommended steps to secure your workloads. + - To view the remaining duration of support, that is, the number of months for which the license is valid, +select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. ++ ### Review Azure readiness 1. In **Azure readiness**, verify whether servers are ready for migration to Azure. |
migrate | How To Create Azure Sql Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-create-azure-sql-assessment.md | Run an assessment as follows: 3. Review the assessment summary. You can also edit the assessment settings or recalculate the assessment. +### Review support status ++The assessment summary displays the support status of the database instance licenses. ++1. Select the graph in the **Supportability** section to view a list of the assessed VMs. +2. The **Database instance license support status** column displays the support status of the database instance: whether it's in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that shows the type of support status, the duration of support, and the recommended steps to secure your workloads. + - To view the remaining duration of support, that is, the number of months for which the license is valid, +select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. ++ ### Discovered entities This indicates the number of SQL servers, instances, and databases that were assessed in this assessment. |
migrate | How To View A Business Case | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md | There are four major reports that you need to review: - Estimated year on year cashflow savings based on the estimated migration completed that year. - Savings from unique Azure benefits like Azure Hybrid Benefit. - Discovery insights covering the scope of the business case.+ - Support status of the operating system and database licenses. - **On-premises vs Azure**: This report covers the breakdown of the total cost of ownership by cost categories and insights on savings. - **Azure IaaS**: This report covers the Azure and on-premises footprint of the servers and workloads recommended for migrating to Azure IaaS. - **Azure PaaS**: This report covers the Azure and on-premises footprint of the workloads recommended for migrating to Azure PaaS. As you plan to migrate to Azure in phases, this line chart shows your cashflow p - The future state cost shows how your net cashflow will be as you migrate some percentage to Azure per year as in the 'Azure cost' assumptions, while your infrastructure is growing 5% per year. ### Savings with Azure Hybrid Benefits-Currently, this card shows a static percentage of max savings you could get with Azure hybrid Benefits. +This card shows a static percentage of the maximum savings you could get with Azure Hybrid Benefit. ++### Savings with Extended security updates +This card shows the potential savings from extended security update (ESU) licenses: the cost of the ESU licenses required to keep running Windows Server and SQL Server securely on-premises after the end of support of their licenses. Extended security updates are offered at no additional cost on Azure. + ### Discovery insights-It covers the total severs scoped in the business case computation, virtualization distribution, utilization insights and distribution of servers based on workloads running on them. 
+It covers the total servers scoped in the business case computation, virtualization distribution, utilization insights, support status of the licenses, and distribution of servers based on workloads running on them. -### Utilization insights +#### Utilization insights It covers which servers are ideal for cloud, servers that can be decommissioned on-premises, and servers that can't be classified based on resource utilization/performance data: - Ideal for cloud: These servers are the best fit for migrating to Azure and comprise active and idle servers: - Active servers: These servers delivered business value by being on and had their CPU and memory utilization above 5% and network utilization above 2%. It covers which servers are ideal for cloud, servers that can be decommissioned - Zombie: The CPU, memory and network utilization were 0% with no performance data collection issues. - These servers were on but don't have adequate metrics available: - Unknown: Many servers can land in this section if the discovery is still ongoing or has some unaddressed discovery issues.+ -## On-premises vs Azure report +## On-premises vs Azure report It covers cost components for on-premises and Azure, savings, and insights to understand the savings better. :::image type="content" source="./media/how-to-view-a-business-case/comparison-inline.png" alt-text="Screenshot of on-premises and Azure comparison." lightbox="./media/how-to-view-a-business-case/comparison-expanded.png"::: It covers cost components for on-premises and Azure, savings, and insights to un **Azure tab** This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits.+- IaaS cost estimate: + - **Estimated cost by target**: This card includes the cost based on the target. 
+ - **Compute and license cost**: This card shows the comparison of compute and license cost when using Azure Hybrid Benefit and without Azure Hybrid Benefit. + - **Savings**: This card displays the estimated maximum savings when using Azure Hybrid Benefit and with extended security updates over a period of one year. - Azure VM: - **Estimated cost by savings options**: This card includes compute cost for Azure VMs. It is recommended that all idle servers are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance or 3 year Azure Savings Plan to maximize savings. - **Recommended VM family**: This card covers the VM sizes recommended. The ones marked Unknown are the VMs that have some readiness issues and no SKUs could be found for them. This section assumes instance to SQL Server on Azure VM migration recommendation - On-premises footprint of the servers recommended to be migrated to Azure IaaS. - Contribution of Zombie servers in the on-premises cost. - Distribution of servers by OS, virtualization, and activity state.+- Distribution by support status of OS licenses and OS versions. ## Azure PaaS report **Azure tab** This section contains the cost estimate by recommended target (Annual cost and also includes Compute, Storage, Network, labor components) and savings from Hybrid benefits.+- PaaS cost estimate: + - **Estimated cost by target**: This card includes the cost based on the target. + - **Compute and license cost**: This card shows the comparison of compute and license cost when using Azure Hybrid Benefit and without Azure Hybrid Benefit. + - **Savings**: This card displays the estimated maximum savings when using Azure Hybrid Benefit and with extended security updates over a period of one year. - Azure SQL: - **Estimated cost by savings options**: This card includes compute cost for Azure SQL MI. 
It is recommended that all idle SQL instances are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings. - **Distribution by recommended service tier** : This card covers the recommended service tier. This section contains the cost estimate by recommended target (Annual cost and a - On-premises footprint of the servers recommended to be migrated to Azure PaaS. - Contribution of Zombie SQL instances in the on-premises cost.+- Distribution by support status of OS licenses and OS versions. - Distribution of SQL instances by SQL version and activity state. + ## Next steps - [Learn more](concepts-business-case-calculation.md) about how business cases are calculated. |
migrate | Migrate Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-appliance.md | The following table summarizes the Azure Migrate appliance requirements for VMwa **Project limits** | An appliance can only be registered with a single project.<br> A single project can have multiple registered appliances. **Discovery limits** | An appliance can discover up to 10,000 servers running across multiple vCenter Servers.<br>A single appliance can connect to up to 10 vCenter Servers. **Supported deployment** | Deploy as new server running on vCenter Server using OVA template.<br><br> Deploy on an existing server running Windows Server 2022 using PowerShell installer script.-**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2140333)<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2022 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server. +**OVA template** | Download from project or from [here](https://go.microsoft.com/fwlink/?linkid=2191954).<br><br> Download size is 11.9 GB.<br><br> The downloaded appliance template comes with a Windows Server 2022 evaluation license, which is valid for 180 days.<br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance using OVA template, or you activate the operating system license of the appliance server. **OVA verification** | [Verify](tutorial-discover-vmware.md#verify-security) the OVA template downloaded from project by checking the hash values. 
**PowerShell script** | Refer to this [article](./deploy-appliance-script.md#set-up-the-appliance-for-vmware) on how to deploy an appliance using the PowerShell installer script.<br/><br/> **Hardware and network requirements** | The appliance should run on server with Windows Server 2022, 32-GB RAM, 8 vCPUs, around 80 GB of disk storage, and an external virtual switch.<br/> The appliance requires internet access, either directly or through a proxy.<br/><br/> If you deploy the appliance using OVA template, you need enough resources on the vCenter Server to create a server that meets the hardware requirements.<br/><br/> If you run the appliance on an existing server, make sure that it is running Windows Server 2022, and meets hardware requirements.<br/>_(Currently the deployment of appliance is only supported on Windows Server 2022.)_ |
migrate | Migrate Replication Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md | The replication appliance is deployed when you set up agent-based migration of V ## Appliance requirements -When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2022 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements. +When you set up the replication appliance using the OVA template provided in the Azure Migrate hub, the appliance runs Windows Server 2016 and complies with the support requirements. If you set up the replication appliance manually on a physical server, then make sure that it complies with the requirements. **Component** | **Requirement** | RAM | 16 GB Number of disks | Two: The OS disk and the process server cache disk. Free disk space (cache) | 600 GB **Software settings** |-Operating system | Windows Server 2022 or Windows Server 2012 R2 -License | The appliance comes with a Windows Server 2022 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM. +Operating system | Windows Server 2016 or Windows Server 2012 R2 +License | The appliance comes with a Windows Server 2016 evaluation license, which is valid for 180 days. <br>If the evaluation period is close to expiry, we recommend that you download and deploy a new appliance, or that you activate the operating system license of the appliance VM. Operating system locale | English (en-us) TLS | TLS 1.2 should be enabled. .NET Framework | .NET Framework 4.6 or later should be installed on the machine (with strong cryptography enabled). |
migrate | Migrate Servers To Azure Using Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-servers-to-azure-using-private-link.md | With discovery completed, you can begin replication of Hyper-V VMs to Azure. 1. In **Replication storage account**, select the Azure storage account in which replicated data will be stored in Azure. -1. Next, [**create a private endpoint for the storage account**](migrate-servers-to-azure-using-private-link.md#create-a-private-endpoint-for-the-storage-account-1) and [**grant permissions to the Recovery Services vault managed identity**](migrate-servers-to-azure-using-private-link.md#grant-access-permissions-to-the-recovery-services-vault) to access the storage account required by Azure Migrate. This is mandatory before you proceed. +1. Next, [**create a private endpoint for the storage account**](https://learn.microsoft.com/azure/migrate/migrate-servers-to-azure-using-private-link?pivots=agentlessvmware#create-a-private-endpoint-for-the-storage-account) and [**grant permissions to the Recovery Services vault managed identity**](https://learn.microsoft.com/azure/migrate/migrate-servers-to-azure-using-private-link?pivots=agentbased#grant-access-permissions-to-the-recovery-services-vault-1) to access the storage account required by Azure Migrate. This is mandatory before you proceed. - For Hyper-V VM migrations to Azure, if the replication storage account is of *Premium* type, you must select another storage account of *Standard* type for the cache storage account. In this case, you must create private endpoints for both the replication and cache storage account. |
migrate | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md | Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
migrate | Tutorial Assess Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-sql.md | Even when SQL Server credentials are not available, this report will provide rig 1. **Migrate all SQL databases to Azure SQL Database** In this strategy, you can see how you can migrate individual databases to Azure SQL Database and review the readiness and cost estimates. +### Review support status ++This indicates the support status of SQL servers, instances, and databases that were assessed in this assessment. ++The Supportability section displays the support status of the SQL licenses. +The Discovery details section gives a graphic representation of the number of discovered SQL instances and their SQL editions. ++1. Select the graph in the **Supportability** section to view a list of the assessed SQL instances. +2. The **Database instance license support status** column displays the support status of the database instance: whether it's in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that shows the type of support status, the duration of support, and the recommended steps to secure your workloads. + - To view the remaining duration of support, that is, the number of months for which the license is valid, +select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. ++ ### Review readiness You can review readiness reports for different migration strategies: |
migrate | Tutorial Assess Vmware Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-vmware-azure-vm.md | An assessment describes: To view an assessment: 1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to **Azure VM assessment**.-2. In **Assessments**, select an assessment to open it. As an example (estimations and costs for example only): -- ![Screenshot of Assessment summary.](./media/tutorial-assess-vmware-azure-vm/assessment-summary.png) --3. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment. - +2. In **Assessments**, select an assessment to open it. +4. Review the assessment summary. You can also edit the assessment properties, or recalculate the assessment. + - The Azure readiness graph displays the status of the VM. + - The Supportability section displays the distribution by OS license support status and the distribution by Windows Server version. + - The Savings option section displays the estimated savings on moving to Azure. ### Review readiness |
migrate | Tutorial Assess Webapps Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps-hyper-v.md | - Title: Tutorial to assess web apps for migration to Azure App Service for Hyper-V VMs -description: Learn how to create an assessment for Azure App Service for Hyper-V VMs in Azure Migrate ---- Previously updated : 06/29/2023-----# Tutorial: Assess ASP.NET web apps for migration to Azure App Service for Hyper-V VMs --As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity. -This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool. --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Run an assessment based on web apps configuration data. -> * Review an Azure App Service assessment --> [!NOTE] -> Tutorials show the quickest path for trying out a scenario, and use default options where possible. --## Prerequisites --- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess by using the Azure Migrate appliance; [follow this tutorial](tutorial-discover-hyper-v.md).-- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.--## Run an assessment --Run an assessment as follows: --1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**. 
-- :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Screenshot of Overview page for Azure Migrate."::: --2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure App Service**. -- :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Screenshot of dropdown to choose assessment type as Azure App Service."::: --3. In **Create assessment**, you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**. --4. Select **Edit** to review the assessment properties. -- :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Screenshot of Edit button from where assessment properties can be customized."::: --5. Here's what's included in Azure App Service assessment properties: -- **Property** | **Details** - | - **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. - **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans. - - In **Savings options (compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure compute cost. - - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources. - - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. 
Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. - - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage. - - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting is not applicable. - **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer. - **Currency** | The billing currency for your account. - **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. - **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings. --1. In **Create assessment**, select **Next**. -1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment. -1. In **Select or create a group**, select **Create New** and specify a group name. -1. Select the appliance, and select the servers that you want to add to the group. Select **Next**. -1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment. -1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh. 
-- :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Screenshot of Refresh discovery and assessment tool data."::: --1. Select the number next to Azure App Service assessment. -- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Screenshot of Navigating to created assessment."::: --1. Select the assessment name, which you wish to view. --## Review an assessment --**To view an assessment**: --1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Azure App Service assessment. -2. Select the assessment name, which you wish to view. -- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="Screenshot of App Service assessment overview."::: --3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment. --#### Azure App Service readiness --This indicates the distribution of the assessed web apps. You can drill down to understand the details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md). -You can also view the recommended App Service SKU and plan for migrating to Azure App Service. --#### Azure App Service cost details --An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses. --### Review readiness --1. Select **Azure App Service readiness**. -- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Screenshot of Azure App Service readiness details."::: --1. Review Azure App Service readiness column in table, for the assessed web apps: - 1. 
If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type. - 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance. - 1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance. - 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app. -1. Review the recommended SKU for the web apps, which is determined according to the matrix below: -- **Isolation required** | **Reserved instance** | **App Service plan/ SKU** - | | - Yes | Yes | I1 - Yes | No | I1 - No | Yes | P1v3 - No | No | P1v2 -- **Azure App Service readiness** | **Determine App Service SKU** | **Determine Cost estimates** - | | - Ready | Yes | Yes - Ready with conditions | Yes | Yes - Not ready | No | No - Unknown | No | No --1. Select the App Service plan link in the Azure App Service readiness table to see the App Service plan details such as compute resources and other web apps that are part of the same plan. --### Review cost estimates --The assessment summary shows the estimated monthly costs for hosting your web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). The apps that you add into this App Service plan run on the compute resources defined by your App Service plan. -To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. 
The number of web apps allocated to each plan instance is shown below. --**App Service plan** | **Web apps per App Service plan** - | -I1 | 8 -P1v2 | 8 -P1v3 | 16 ---## Next steps --- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).-- [Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated. |
migrate | Tutorial Assess Webapps Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps-physical.md | - Title: Tutorial to assess web apps for migration to Azure App Service for Physical machines -description: Learn how to create an assessment for Azure App Service for Physical machines in Azure Migrate ---- Previously updated : 06/29/2023-----# Tutorial: Assess ASP.NET web apps for migration to Azure App Service for Physical machines --As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity. -This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool. --In this tutorial, you learn how to: --> [!div class="checklist"] -> * Run an assessment based on web apps configuration data. -> * Review an Azure App Service assessment --> [!NOTE] -> Tutorials show the quickest path for trying out a scenario, and use default options where possible. --## Prerequisites --- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess by using the Azure Migrate appliance; [follow this tutorial](tutorial-discover-physical.md).-- If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article.--## Run an assessment --Run an assessment as follows: --1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**. 
-- :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Screenshot of Overview page for Azure Migrate."::: --2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure App Service**. -- :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Screenshot of Dropdown to choose assessment type as Azure App Service."::: --3. In **Create assessment**, you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**. --4. Select **Edit** to review the assessment properties. -- :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Screenshot of Edit button from where assessment properties can be customized."::: --5. Here's what's included in Azure App Service assessment properties: -- **Property** | **Details** - | - **Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify. - **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans. - - In **Savings options (compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure compute cost. - - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources. - - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. 
Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time. - - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage. - - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting is not applicable. - **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer. - **Currency** | The billing currency for your account. - **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%. - **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Leave the settings for reserved instances, and discount (%) properties with their default settings. --1. In **Create assessment**, select **Next**. -1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment. -1. In **Select or create a group**, select **Create New** and specify a group name. -1. Select the appliance, and select the servers that you want to add to the group. Select **Next**. -1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment. -1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh. 
-- :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Screenshot of Refresh discovery and assessment tool data."::: --1. Select the number next to Azure App Service assessment. -- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Screenshot of Navigation to created assessment."::: --1. Select the assessment name, which you wish to view. --## Review an assessment --**To view an assessment**: --1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Azure App Service assessment. -2. Select the assessment name, which you wish to view. -- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="Screenshot of App Service assessment overview."::: --3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment. --#### Azure App Service readiness --This indicates the distribution of the assessed web apps. You can drill down to understand the details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md). -You can also view the recommended App Service SKU and plan for migrating to Azure App Service. --#### Azure App Service cost details --An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses. --### Review readiness --1. Select **Azure App Service readiness**. -- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Screenshot of Azure App Service readiness details."::: --1. Review Azure App Service readiness column in table, for the assessed web apps: - 1. 
If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type. - 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance. - 1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance. - 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app. -1. Review the recommended SKU for the web apps, which is determined according to the matrix below: -- **Isolation required** | **Reserved instance** | **App Service plan/ SKU** - | | - Yes | Yes | I1 - Yes | No | I1 - No | Yes | P1v3 - No | No | P1v2 -- **Azure App Service readiness** | **Determine App Service SKU** | **Determine Cost estimates** - | | - Ready | Yes | Yes - Ready with conditions | Yes | Yes - Not ready | No | No - Unknown | No | No --1. Select the App Service plan link in the Azure App Service readiness table to see the App Service plan details such as compute resources and other web apps that are part of the same plan. --### Review cost estimates --The assessment summary shows the estimated monthly costs for hosting your web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). The apps that you add into this App Service plan run on the compute resources defined by your App Service plan. -To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. 
The number of web apps allocated to each plan instance is shown below. --**App Service plan** | **Web apps per App Service plan** - | -I1 | 8 -P1v2 | 8 -P1v3 | 16 ---## Next steps --- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md).-- [Learn more](concepts-azure-webapps-assessment-calculation.md) about how Azure App Service assessments are calculated. |
migrate | Tutorial Assess Webapps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md | -In this tutorial, you learn how to: +In this tutorial, you learn how to: > [!div class="checklist"] > * Run an assessment based on web apps configuration data. In this tutorial, you learn how to: ## Prerequisites - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/pricing/free-trial/) before you begin.-- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance, [follow this tutorial](tutorial-discover-vmware.md)+- Before you follow this tutorial to assess your web apps for migration to Azure App Service, make sure you've discovered the web apps you want to assess using the Azure Migrate appliance for [VMware](tutorial-discover-vmware.md), [Hyper-V](tutorial-discover-hyper-v.md), or [Physical servers](tutorial-discover-physical.md). - If you want to try out this feature in an existing project, ensure that you have completed the [prerequisites](how-to-discover-sql-existing-project.md) in this article. ## Run an assessment |
migrate | Tutorial Discover Hyper V | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-hyper-v.md | After discovery finishes, you can verify that the servers appear in the portal. 1. Open the Azure Migrate dashboard. 2. In **Azure Migrate - Servers** > **Azure Migrate: Discovery and assessment** page, click the icon that displays the count for **Discovered servers**. +#### View support status ++You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections. ++The **Operating system license support status** column displays the support status of the operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. ++To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. ++The **Database instances** section displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** column displays the support status of the database instance. Selecting the support status opens a pane on the right that provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. ++To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. 
++ ## Next steps - [Assess servers on Hyper-V environment](tutorial-assess-hyper-v.md) for migration to Azure VMs. |
migrate | Tutorial Discover Physical | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-physical.md | After discovery finishes, you can verify that the servers appear in the portal. 1. Open the Azure Migrate dashboard. 2. In **Azure Migrate - Servers** > **Azure Migrate: Discovery and assessment** page, select the icon that displays the count for **Discovered servers**. +#### View support status ++You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections. ++The **Operating system license support status** column displays the support status of the operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. ++To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. ++The **Database instances** section displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** column displays the support status of the database instance. Selecting the support status opens a pane on the right that provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. ++To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. 
++ ## Delete servers After the discovery has been initiated, you can delete any of the added servers from the appliance configuration manager by searching for the server name in the **Add discovery source** table and by selecting **Delete**. |
migrate | Tutorial Discover Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md | To start vCenter Server discovery, select **Start discovery**. After the discove :::image type="content" source="./media/tutorial-discover-vmware/discovery-assessment-tile.png" alt-text="Screenshot that shows how to refresh data in discovery and assessment tile."::: +Details such as OS license support status, inventory, and database instances are displayed. ++#### View support status ++You can gain deeper insights into the support posture of your environment from the **Discovered servers** and **Discovered database instances** sections. ++The **Operating system license support status** column displays the support status of the operating system, whether it is in mainstream support, extended support, or out of support. Selecting the support status opens a pane on the right that provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. ++To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. The **Support ends in** column displays the duration in months. ++The **Database instances** section displays the number of instances discovered by Azure Migrate. Select the number of instances to view the database instance details. The **Database instance license support status** column displays the support status of the database instance. Selecting the support status opens a pane on the right that provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. ++To view the remaining duration until end of support, that is, the number of months for which the license is valid, select **Columns** > **Support ends in** > **Submit**. 
The **Support ends in** column displays the duration in months. + + ## Next steps - Learn how to [assess servers to migrate to Azure VMs](./tutorial-assess-vmware-azure-vm.md). |
migrate | Tutorial Migrate Vmware Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md | |
migrate | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md | +## Update (August 2023) +- Azure Migrate now helps you gain deeper insights into the support posture of your IT estate by providing insights into Windows Server and SQL Server license support information. You can stay ahead of license support deadlines with *Support ends in* information that helps you understand the time left until the end of support for respective servers and databases. +- Azure Migrate also provides clear guidance regarding actionable steps that can be taken to secure servers and databases in extended support or out of support. +- Envision Extended Security Update (ESU) savings for out-of-support Windows Server and SQL Server licenses using the Azure Migrate Business case. + ## Update (July 2023) - Discover Azure Migrate from Operations Manager console: Operations Manager 2019 UR3 and later allows you to discover Azure Migrate from the console. You can now generate a complete inventory of your on-premises environment without an appliance. This can be used in Azure Migrate to assess machines at scale. [Learn more](https://support.microsoft.com/topic/discover-azure-migrate-for-operations-manager-04b33766-f824-4e99-9065-3109411ede63). - Public Preview: Upgrade your Windows OS during Migration using the Migration and modernization tool in your VMware environment. [Learn more](how-to-upgrade-windows.md). |
mysql | Concepts Backup Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md | The Backup and Restore blade in the Azure portal provides a complete list of the In Azure Database for MySQL, performing a restore creates a new server from the original server's backups. There are two types of restore available: - Point-in-time restore: is available with either backup redundancy option and creates a new server in the same region as your original server.-- Geo-restore: is available only if you configured your server for geo-redundant storage and it allows you to restore your server to the geo-paired region. Geo-restore to other regions is not supported currently. +- Geo-restore: is available only if you configured your server for geo-redundant storage. It allows you to restore your server to either the geo-paired region or any other supported Azure region where flexible server is available. Geo-restore to regions other than the paired region is currently in public preview. ++> [!NOTE] +> Universal Geo Restore (geo-restore to a region other than the paired region) in Azure Database for MySQL - Flexible Server is currently in **public preview**. The regions currently not supported for the universal geo-restore feature in public preview are "Brazil South", "USGov Virginia", and "West US 3". The estimated time for the recovery of the server depends on several factors: The estimated time of recovery depends on several factors including the database ## Geo-restore -You can restore a server to it's [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is not supported currently. +You can restore a server to its [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is currently in public preview. Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss. The estimated time for the recovery of the server depends on several factors: - Learn about [business continuity](./concepts-business-continuity.md) - Learn about [zone redundant high availability](./concepts-high-availability.md) - Learn about [backup and recovery](./concepts-backup-restore.md)+++ |
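The geo-restore flow described in the row above can also be driven from the Azure CLI. A minimal sketch, assuming your CLI version includes the `az mysql flexible-server geo-restore` command; all resource names and the target region below are placeholders, not values from the article:

```shell
# Restore the latest geo-redundant backup of an existing flexible server into a
# new server in another Azure region (non-paired target regions are in preview).
# myResourceGroup, my-restored-server, my-source-server, and eastus2 are
# hypothetical; substitute your own values.
az mysql flexible-server geo-restore \
  --resource-group myResourceGroup \
  --name my-restored-server \
  --source-server my-source-server \
  --location eastus2
```

Because backups are replicated to the other region asynchronously, the restored server can lag the source by up to an hour, matching the data-loss window noted above.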
mysql | Concepts Networking Public | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-networking-public.md | Granting permission to an IP address is called a firewall rule. If a connection You can consider enabling connections from all Azure data center IP addresses if a fixed outgoing IP address isn't available for your Azure service. -> [!IMPORTANT] -> The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users. +> [!IMPORTANT] +> - The **Allow public access from Azure services and resources within Azure** option configures the firewall to allow all connections from Azure, including connections from the subscriptions of other customers. When selecting this option, ensure your login and user permissions limit access to only authorized users. +> - You can create a maximum of 500 IP firewall rules. +> Learn how to enable and manage public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md). Consider the following points when access to the Microsoft Azure Database for My - Learn how to enable public access (allowed IP addresses) using the [Azure portal](how-to-manage-firewall-portal.md) or [Azure CLI](how-to-manage-firewall-cli.md) - Learn how to [use TLS](how-to-connect-tls-ssl.md)++ |
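The networking entry above describes public-access firewall rules as granted IP addresses, with a cap of 500 rules per server. As a rough illustration of the range check that a start/end IP firewall rule implies, here is a sketch using only the Python standard library; the function and rule names are illustrative, not part of any Azure API:

```python
from ipaddress import IPv4Address

# Hypothetical cap taken from the limit stated above (500 IP firewall rules).
MAX_RULES = 500

def is_allowed(client_ip, rules):
    """Return True if client_ip falls inside any (start_ip, end_ip) rule."""
    if len(rules) > MAX_RULES:
        raise ValueError("at most 500 IP firewall rules are allowed")
    ip = IPv4Address(client_ip)
    return any(IPv4Address(start) <= ip <= IPv4Address(end)
               for start, end in rules)

# One rule covering the 203.0.113.0/24 documentation block.
rules = [("203.0.113.0", "203.0.113.255")]
print(is_allowed("203.0.113.42", rules))   # inside the range
print(is_allowed("198.51.100.7", rules))   # outside the range
```

The "Allow public access from Azure services" option described above effectively bypasses this per-IP check for all Azure-originated traffic, which is why the note stresses limiting access through logins and user permissions.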
mysql | How To Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-azure-ad.md | |
mysql | How To Data Encryption Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-data-encryption-portal.md | In this tutorial, you learn how to: - Configure data encryption for restoration. - Configure data encryption for replica servers. + > [!NOTE] +> Azure key vault access configuration now supports two types of permission models - [Azure role-based access control](../../role-based-access-control/overview.md) and [Vault access policy](../../key-vault/general/assign-access-policy.md). This tutorial describes configuring data encryption for Azure Database for MySQL - Flexible Server using Vault access policy. However, you can choose to use Azure RBAC as the permission model to grant access to Azure Key Vault. To do so, you need any built-in or custom role that has the following three permissions, and assign it through "role assignments" using the Access control (IAM) tab in the key vault: a) KeyVault/vaults/keys/wrap/action b) KeyVault/vaults/keys/unwrap/action c) KeyVault/vaults/keys/read ## Prerequisites - An Azure account with an active subscription. After your Azure Database for MySQL - Flexible Server is encrypted with a custom - [Customer managed keys data encryption](concepts-customer-managed-key.md) - [Data encryption with Azure CLI](how-to-data-encryption-cli.md) |
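The note in the entry above lists the three key permissions an RBAC role needs (wrap, unwrap, read). As a sketch only, a custom role definition granting exactly those permissions might look like the JSON below; the role name and scope are placeholders, and the action strings are written in the full `Microsoft.KeyVault/...` resource-provider form (the note abbreviates them as `KeyVault/...`). Key operations belong under `DataActions` in the Key Vault RBAC model:

```python
import json

# Sketch of a custom Azure role definition with only the three key
# permissions the data-encryption note requires. Name and scope are
# hypothetical placeholders, not values from the original article.
role_definition = {
    "Name": "MySQL Flexible Server CMK Access",          # placeholder name
    "IsCustom": True,
    "Description": "Wrap, unwrap, and read keys for data encryption.",
    "Actions": [],
    "DataActions": [
        "Microsoft.KeyVault/vaults/keys/wrap/action",
        "Microsoft.KeyVault/vaults/keys/unwrap/action",
        "Microsoft.KeyVault/vaults/keys/read",
    ],
    "AssignableScopes": ["/subscriptions/<subscription-id>"],  # placeholder
}
print(json.dumps(role_definition, indent=2))
```

Such a definition could then be assigned to the server's managed identity through the key vault's Access control (IAM) tab, as the note describes.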
mysql | How To Move Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-move-regions.md | Title: Move Azure regions - Azure portal - Azure Database for MySQL - Flexible Server description: Move an Azure Database for MySQL - Flexible Server from one Azure region to another using the Azure portal.+++ Last updated : 08/23/2023 -- Previously updated : 04/08/2022 -#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region. # Move an Azure Database for MySQL - Flexible Server to another region by using the Azure portal There are various scenarios for moving an existing Azure Database for MySQL - Flexible Server from one region to another. For example, you might want to move a production server to another region as part of your disaster recovery planning. -You can use Azure Database for MySQL - Flexible Server's [geo-restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your flexible server. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region. +You can use Azure Database for MySQL - Flexible Server's [geo restore](concepts-backup-restore.md#geo-restore) feature to complete the move to another region. To do so, first ensure geo-redundancy is enabled for your flexible server. Next, trigger geo-restore for your geo-redundant server and move your server to the geo-paired region. -> [!NOTE] +> [!NOTE] > This article focuses on moving your server to a different region. If you want to move your server to a different resource group or subscription, refer to the [move](../../azure-resource-manager/management/move-resource-group-and-subscription.md) article. ## Prerequisites To move the Azure Database for MySQL - Flexible Server to the geo-paired region 1. 
In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. -2. Click **Overview** from the left panel. +1. Select **Overview** from the left panel. -3. From the overview page, click **Restore**. +1. From the overview page, select **Restore**. -4. Restore page will be shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. Geo-redundant restore option restores the server to Latest UTC Now timestamp and hence after selection of Geo-redundant restore, the point-in-time restore options cannot be selected simultaneously. +1. The Restore page is shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. The geo-redundant restore option restores the server to the latest UTC timestamp, so after you select geo-redundant restore, the point-in-time restore options can't be selected simultaneously. - :::image type="content" source="./media/how-to-restore-server-portal/georestore-flex.png" alt-text="Geo-restore option"::: + :::image type="content" source="./media/how-to-move-regions/geo-restore-flex.png" alt-text="Screenshot of Geo-restore option" lightbox="./media/how-to-move-regions/geo-restore-flex.png"::: - :::image type="content" source="./media/how-to-restore-server-portal/georestore-enabled-flex.png" alt-text="Enabling Geo-Restore"::: + :::image type="content" source="./media/how-to-move-regions/geo-restore-enabled-flex.png" alt-text="Screenshot of Enabling Geo-Restore" lightbox="./media/how-to-move-regions/geo-restore-enabled-flex.png"::: -5. Provide a new server name in the **Name** field in the Server details section. +1. 
Provide a new server name in the **Name** field in the Server details section. -6. When primary region is down, one cannot create geo-redundant servers in the respective geo-paired region as storage cannot be provisioned in the primary region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region. With the primary region down one can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restore as a locally redundant server to ensure business continuity. +1. When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, because storage can't be provisioned in the primary region. You must wait for the primary region to be up before provisioning geo-redundant servers in the geo-paired region. With the primary region down, you can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restoring as a locally redundant server to ensure business continuity. 
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-1.png" alt-text="Compute + Storage window"::: + :::image type="content" source="./media/how-to-move-regions/geo-restore-region-down-1.png" alt-text="Screenshot of Compute + Storage window" lightbox="./media/how-to-move-regions/geo-restore-region-down-1.png"::: - :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-2.png" alt-text="Disabling Geo-Redundancy"::: + :::image type="content" source="./media/how-to-move-regions/geo-restore-region-down-2.png" alt-text="Screenshot of Disabling Geo-Redundancy" lightbox="./media/how-to-move-regions/geo-restore-region-down-2.png"::: - :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-3.png" alt-text="Restoring as Locally redundant server"::: + :::image type="content" source="./media/how-to-move-regions/geo-restore-region-down-3.png" alt-text="Screenshot of Restoring as Locally redundant server" lightbox="./media/how-to-move-regions/geo-restore-region-down-3.png"::: -7. Select **Review + Create** to review your selections. +1. Select **Review + Create** to review your selections. -8. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes. +1. A notification is shown that the restore operation has been initiated. This operation may take a few minutes. -The new server created by geo-restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section. 
+The new server created by geo-restore has the same server admin sign-in name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section. ## Clean up source server In this tutorial, you moved an Azure Database for MySQL - Flexible Server from o - Learn more about [geo-restore](concepts-backup-restore.md#geo-restore) - Learn more about [Azure paired regions](overview.md#azure-regions) supported for Azure Database for MySQL - Flexible Server-- Learn more about [business continuity](concepts-business-continuity.md) options+- Learn more about [business continuity](concepts-business-continuity.md) options |
mysql | How To Restore Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-portal.md | Title: Restore an Azure Database for MySQL - Flexible Server with Azure portal. + Title: Restore MySQL - Flexible Server with Azure portal + description: This article describes how to perform restore operations in Azure Database for MySQL - Flexible Server through the Azure portal+++ Last updated : 08/22/2023 -- Previously updated : 07/26/2022 -# Point-in-time restore of a Azure Database for MySQL - Flexible Server using Azure portal +# Point-in-time restore of an Azure Database for MySQL - Flexible Server using Azure portal This article provides step-by-step procedure to perform point-in-time recoveries in flexible server using backups. Follow these steps to restore your flexible server using an earliest existing ba 1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. -2. Click **Overview** from the left panel. +1. Select **Overview** from the left panel. -3. From the overview page, click **Restore**. +1. From the overview page, select **Restore**. -4. Restore page will be shown with an option to choose between **Latest restore point** and Custom restore point. +1. Restore page is shown with an option to choose between **Latest restore point** and Custom restore point. -5. Select **Latest restore point**. +1. Select **Latest restore point**. -6. Provide a new server name in the **Restore to new server** field. +1. Provide a new server name in the **Restore to new server** field. - :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-latest.png" alt-text="Earliest restore time"::: + :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-latest.png" alt-text="Screenshot of earliest restore time." 
lightbox="./media/how-to-restore-server-portal/point-in-time-restore-latest.png"::: -7. Click **OK**. --8. A notification will be shown that the restore operation has been initiated. +1. Select **OK**. +1. A notification is shown that the restore operation has been initiated. ## Restore to a fastest restore point -Follow these steps to restore your flexible server using an existing full backup as the fastest restore point. +Follow these steps to restore your flexible server using an existing full backup as the fastest restore point. -1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. +1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. -2. Click **Overview** from the left panel. +1. Select **Overview** from the left panel. -3. From the overview page, click **Restore**. +1. From the overview page, select **Restore**. -4. Restore page will be shown with an option to choose between Latest restore point, Custom restore point and Fastest Restore Point. +1. Restore page is shown with an option to choose between Latest restore point, Custom restore point and Fastest Restore Point. -5. Select option **Select fastest restore point (Restore using full backup)**. +1. Select option **Select fastest restore point (Restore using full backup)**. -6. Select the desired full backup from the **Fastest Restore Point (UTC)** drop down list . - - :::image type="content" source="./media/how-to-restore-server-portal/fastest-restore-point.png" alt-text="Fastest Restore Point"::: +1. Select the desired full backup from the **Fastest Restore Point (UTC)** dropdown list. -7. Provide a new server name in the **Restore to new server** field. + :::image type="content" source="./media/how-to-restore-server-portal/fastest-restore-point.png" alt-text="Screenshot of Fastest Restore Point." 
lightbox="./media/how-to-restore-server-portal/fastest-restore-point.png"::: -8. Click **Review + Create**. +1. Provide a new server name in the **Restore to new server** field. -9. Post clicking **Create**, a notification will be shown that the restore operation has been initiated. +1. Select **Review + Create**. -## Restore from a full backup through the Backup and Restore blade +1. After you select **Create**, a notification is shown that the restore operation has been initiated. -Follow these steps to restore your flexible server using an existing full backup. +## Restore from a full backup through the Backup and Restore page -1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. +Follow these steps to restore your flexible server using an existing full backup. -2. Click **Backup and Restore** from the left panel. +1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. ++1. Select **Backup and Restore** from the left panel. -3. View Available Backups page will be shown with the option to restore from available full automated backups and on-demand backups taken for the server within the retention period. +1. View Available Backups page is shown with the option to restore from available full automated backups and on-demand backups taken for the server within the retention period. -4. Select the desired full backup from the list by clicking on corresponding **Restore** action. - :::image type="content" source="./media/how-to-restore-server-portal/view-available-backups.png" alt-text="View Available Backups"::: +1. Select the desired full backup from the list by selecting the corresponding **Restore** action. -5. Restore page will be shown with the Fastest Restore Point option selected by default and the desired full backup timestamp selected on the View Available backups page. 
:::image type="content" source="./media/how-to-restore-server-portal/view-available-backups.png" alt-text="Screenshot of view Available Backups." lightbox="./media/how-to-restore-server-portal/view-available-backups.png"::: -6. Provide a new server name in the **Restore to new server** field. +1. Restore page is shown with the Fastest Restore Point option selected by default and the desired full backup timestamp selected on the View Available backups page. -7. Click **Review + Create**. +1. Provide a new server name in the **Restore to new server** field. -8. Post clicking **Create**, a notification will be shown that the restore operation has been initiated. +1. Select **Review + Create**. +1. After you select **Create**, a notification is shown that the restore operation has been initiated. -## Geo-restore to latest restore point +## Geo restore to latest restore point 1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. -2. Click **Overview** from the left panel. +1. Select **Overview** from the left panel. -3. From the overview page, click **Restore**. +1. From the overview page, select **Restore**. -4. Restore page will be shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. +1. Restore page is shown with an option to choose **Geo-redundant restore**. If you have configured your server for geographically redundant backups, the server can be restored to the corresponding Azure paired region and the geo-redundant restore option can be enabled. 
The geo-redundant restore option restores the server to the latest UTC timestamp, so after you select geo-redundant restore, the point-in-time restore options can't be selected simultaneously. - :::image type="content" source="./media/how-to-restore-server-portal/georestore-flex.png" alt-text="Geo-restore option"::: + :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-flex.png" alt-text="Screenshot of Geo-restore option." lightbox="./media/how-to-restore-server-portal/geo-restore-flex.png"::: - :::image type="content" source="./media/how-to-restore-server-portal/georestore-enabled-flex.png" alt-text="Enabling Geo-Restore"::: + :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-enabled-flex.png" alt-text="Screenshot of enabling Geo-Restore." lightbox="./media/how-to-restore-server-portal/geo-restore-enabled-flex.png"::: -5. Provide a new server name in the **Name** field in the Server details section. + :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-flex-location-dropdown.png" alt-text="Screenshot of location dropdown." lightbox="./media/how-to-restore-server-portal/geo-restore-flex-location-dropdown.png"::: -6. When primary region is down, one cannot create geo-redundant servers in the respective geo-paired region as storage cannot be provisioned in the primary region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region. With the primary region down one can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restore as a locally redundant server to ensure business continuity. +1. Provide a new server name in the **Name** field in the Server details section. 
- :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-1.png" alt-text="Compute + Storage window"::: +1. When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, because storage can't be provisioned in the primary region. You must wait for the primary region to be up before provisioning geo-redundant servers in the geo-paired region. With the primary region down, you can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restoring as a locally redundant server to ensure business continuity. + :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-region-down-1.png" alt-text="Screenshot of Compute + Storage window." lightbox="./media/how-to-restore-server-portal/geo-restore-region-down-1.png"::: - :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-2.png" alt-text="Disabling Geo-Redundancy"::: + :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-region-down-2.png" alt-text="Screenshot of Disabling Geo-Redundancy." lightbox="./media/how-to-restore-server-portal/geo-restore-region-down-2.png"::: - :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-3.png" alt-text="Restoring as Locally redundant server"::: + :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-region-down-3.png" alt-text="Screenshot of Restoring as Locally redundant server." lightbox="./media/how-to-restore-server-portal/geo-restore-region-down-3.png"::: -7. Select **Review + Create** to review your selections. +1. Select **Review + Create** to review your selections. -8. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes. +1. 
A notification is shown that the restore operation has been initiated. This operation may take a few minutes. -The new server created by geo-restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section. +The new server created by geo restore has the same server admin sign-in name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section. -## Using restore to move a server from Public access to Private access +## Use restore to move a server from Public access to Private access Follow these steps to restore your flexible server using an earliest existing backup. 1. In the [Azure portal](https://portal.azure.com/), choose your flexible server that you want to restore the backup from. -2. From the overview page, click **Restore**. +1. From the overview page, select **Restore**. -3. Restore page will be shown with an option to choose between Geo-restore or Point-in-time restore options. +1. Restore page is shown with an option to choose between geo restore or point-in-time restore options. -4. Choose either **Geo-restore** or a **Point-in-time restore** option. +1. Choose either **Geo restore** or a **Point-in-time restore** option. -5. Provide a new server name in the **Restore to new server** field. +1. Provide a new server name in the **Restore to new server** field. 
- :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-private-dns-zone.png" alt-text="view overview"::: + :::image type="content" source="./media/how-to-restore-server-portal/point-in-time-restore-private-dns-zone.png" alt-text="Screenshot of view overview." lightbox="./media/how-to-restore-server-portal/point-in-time-restore-private-dns-zone.png"::: -6. Go to the **Networking** tab to configure networking settings. +1. Go to the **Networking** tab to configure networking settings. -7. In the **Connectivity method**, select **Private access (VNet Integration)**. Go to **Virtual Network** section, you can either select an already existing *virtual network* and *Subnet* that is delegated to *Microsoft.DBforMySQL/flexibleServers* or create a new one by clicking the *create virtual network* link. - > [!Note] - > Only virtual networks and subnets in the same region and subscription will be listed in the drop down. </br> - > The chosen subnet will be delegated to *Microsoft.DBforMySQL/flexibleServers*. It means that only Azure Database for MySQL - Flexible Servers can use that subnet.</br> +1. In the **Connectivity method**, select **Private access (VNet Integration)**. In the **Virtual Network** section, you can either select an existing *virtual network* and *Subnet* that is delegated to *Microsoft.DBforMySQL/flexibleServers* or create a new one by selecting the *create virtual network* link. + > [!NOTE] + > Only virtual networks and subnets in the same region and subscription are listed in the dropdown list. </br> + > The chosen subnet is delegated to *Microsoft.DBforMySQL/flexibleServers*. 
It means that only Azure Database for MySQL - Flexible Servers can use that subnet.</br> - :::image type="content" source="./media/how-to-manage-virtual-network-portal/vnet-creation.png" alt-text="Vnet configuration"::: + :::image type="content" source="./media/how-to-manage-virtual-network-portal/vnet-creation.png" alt-text="Screenshot of Vnet configuration." lightbox="./media/how-to-manage-virtual-network-portal/vnet-creation.png"::: -8. Create a new or Select an existing **Private DNS Zone**. - > [!NOTE] +1. Create a new or Select an existing **Private DNS Zone**. + > [!NOTE] > Private DNS zone names must end with `mysql.database.azure.com`. </br> > If you do not see the option to create a new private dns zone, please enter the server name on the **Basics** tab.</br> > After the flexible server is deployed to a virtual network and subnet, you cannot move it to Public access (allowed IP addresses).</br> - :::image type="content" source="./media/how-to-manage-virtual-network-portal/private-dns-zone.png" alt-text="dns configuration"::: -9. Select **Review + create** to review your flexible server configuration. -10. Select **Create** to provision the server. Provisioning can take a few minutes. --11. A notification will be shown that the restore operation has been initiated. + :::image type="content" source="./media/how-to-manage-virtual-network-portal/private-dns-zone.png" alt-text="Screenshot of dns configuration." lightbox="./media/how-to-manage-virtual-network-portal/private-dns-zone.png"::: +1. Select **Review + create** to review your flexible server configuration. +1. Select **Create** to provision the server. Provisioning can take a few minutes. +1. A notification is shown that the restore operation has been initiated. 
## Perform post-restore tasks After the restore is completed, you should perform the following tasks to get your users and applications back up and running: - If the new server is meant to replace the original server, redirect clients and client applications to the new server.-- Ensure appropriate VNet rules are in place for users to connect. These rules are not copied over from the original server.+- Ensure appropriate virtual network rules are in place for users to connect. These rules aren't copied over from the original server. - Ensure appropriate logins and database level permissions are in place. - Configure alerts as appropriate for the newly restored server. ## Next steps -Learn more about [business continuity](concepts-business-continuity.md) +- Learn more about [business continuity](concepts-business-continuity.md) |
mysql | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md | Last updated 05/24/2022 [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] -<iframe src="https://aka.ms/docs/player?id=492c7a41-5f0a-4482-828b-72be1b38e691" width="640" height="370"></iframe> +> [!VIDEO https://aka.ms/docs/player?id=492c7a41-5f0a-4482-828b-72be1b38e691] Azure Database for MySQL powered by the MySQL community edition is available in two deployment modes: |
mysql | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md | This article summarizes new releases and features in Azure Database for MySQL - > [!NOTE] > This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +## August 2023 + +- **Universal Geo Restore in Azure Database for MySQL - Flexible Server (Public Preview)** +The Universal Geo Restore feature allows you to restore a source server instance to an alternate region from the list of Azure-supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region results in unavailability of your database application, you can use this feature as a disaster recovery option to restore the server to an Azure-supported target region that is different from the source server region. [Learn more](concepts-backup-restore.md#restore) ++- **Generated Invisible Primary Key in Azure Database for MySQL - Flexible Server** +Azure Database for MySQL Flexible Server now supports generated invisible primary key (GIPK) mode for MySQL version 8.0. With this change, by default, the value of the server system variable "sql_generate_invisible_primary_key" is ON for all MySQL - Flexible Servers on MySQL 8.0. With GIPK mode ON, MySQL generates an invisible primary key for any InnoDB table that is newly created without an explicit primary key. 
Learn more about the GIPK mode: +[Generated Invisible Primary Keys](https://dev.mysql.com/doc/refman/8.0/en/create-table-gipks.html) +[Invisible Column Metadata](https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html#invisible-column-metadata) + ## July 2023 - **Autoscale IOPS in Azure Database for MySQL - Flexible Server (General Availability)** If you have questions about or suggestions for working with Azure Database for M - Learn more about [Azure Database for MySQL pricing](https://azure.microsoft.com/pricing/details/mysql/server/). - Browse the [public documentation](index.yml) for Azure Database for MySQL – Flexible Server. - Review details on [troubleshooting common migration errors](../howto-troubleshoot-common-errors.md). |
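The GIPK entry above can be made concrete with a small sketch. Per the linked MySQL 8.0 documentation, when `sql_generate_invisible_primary_key` is ON the server adds an invisible `my_row_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT INVISIBLE PRIMARY KEY` column to a new InnoDB table that declares no primary key. The function below is a naive illustration of that rule (a substring check, not a SQL parser, and not the server's actual implementation):

```python
# Column MySQL 8.0 generates in GIPK mode (from the MySQL reference manual).
GIPK_COLUMN = "my_row_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT INVISIBLE PRIMARY KEY"

def effective_primary_key(create_table_sql):
    """Illustrate the GIPK decision for a CREATE TABLE statement (naive check)."""
    if "PRIMARY KEY" in create_table_sql.upper():
        return "explicit primary key kept as declared"
    return "generated: " + GIPK_COLUMN

# A table without a primary key gets the invisible generated one;
# a table with an explicit primary key is left alone.
print(effective_primary_key("CREATE TABLE t1 (c1 INT, c2 VARCHAR(10))"))
print(effective_primary_key("CREATE TABLE t2 (id INT PRIMARY KEY, c VARCHAR(10))"))
```

Because the generated column is invisible, it doesn't appear in `SELECT *` results or plain `DESCRIBE` output unless referenced explicitly, which is why the entry links to the invisible-column metadata page.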
mysql | Migrate Single Flexible In Place Auto Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md | -In-place automigration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used < 10 GiB and no complex features enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. +**In-place automigration** from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with **Basic or General Purpose SKU**, data storage used **< 10 GiB** and **no complex features (CMK, AAD, Read Replica, Private Link) enabled**. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. -The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than 5 mins of downtime. It uses backup and restore technology for faster migration time. This migration removes the overhead to manually migrate your server and ensure you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. Following described are the key phases of the migration: +The in-place migration provides a highly resilient and self-healing offline migration experience during a planned maintenance window, with less than **5 mins** of downtime. It uses backup and restore technology for faster migration time. 
This migration removes the overhead of manually migrating your server and ensures you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows. The key phases of the migration are: -* Target Flexible Server is deployed, inheriting all feature set and properties (including server parameters and firewall rules) from source Single Server. Source Single Server is set to read-only and backup from source Single Server is copied to the target Flexible Server. -* DNS switch and cutover are performed successfully within the planned maintenance window with minimal downtime, allowing maintenance of the same connection string post-migration. Client applications seamlessly connect to the target flexible server without any user driven manual updates. In addition to both connection string formats (Single and Flexible Server) being supported on migrated Flexible Server, both username formats – username@server_name and username are also supported on the migrated Flexible Server. -* The migrated Flexible Server is online and can now be managed via Azure portal/CLI. Stopped Single Server is deleted post days set as it's Backup Retention Period. +* **Target Flexible Server is deployed**, inheriting all feature set and properties (including server parameters and firewall rules) from source Single Server. Source Single Server is set to read-only and backup from source Single Server is copied to the target Flexible Server. +* **DNS switch and cutover** are performed successfully within the planned maintenance window with minimal downtime, allowing maintenance of the same connection string post-migration. Client applications seamlessly connect to the target flexible server without any user-driven manual updates.
In addition to both connection string formats (Single and Flexible Server) being supported on migrated Flexible Server, both username formats – username@server_name and username are also supported on the migrated Flexible Server. +* The **migrated Flexible Server is online** and can now be managed via Azure portal/CLI. The stopped Single Server is deleted after the number of days set as its backup retention period. > [!NOTE]-> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB and no complex features enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. +> In-place migration is only for Single Server database workloads with Basic or GP SKU, data storage used < 10 GiB, and no complex features (CMK, AAD, Read Replica, Private Link) enabled. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure MySQL Import to migrate. ## Configure migration alerts and review migration schedule Servers eligible for in-place automigration are sent an advance notification by The following are the ways to check and configure automigration notifications: * Subscription owners for Single Servers scheduled for automigration receive an email notification.-* Configure service health alerts to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). -* Check the in-place migration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal).
+* Configure **service health alerts** to receive in-place migration schedule and progress notifications via email/SMS by following the steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). +* Check the in-place migration **notification on the Azure portal** by following the steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal). The following are the ways to review your migration schedule once you have received the in-place automigration notification: > [!NOTE] > The migration schedule will be locked 7 days prior to the scheduled migration window, after which you'll be unable to reschedule. -* The Single Server overview page for your instance displays a portal banner with information about your migration schedule. -* For Single Servers scheduled for automigration, a new Migration blade is lighted on the portal. You can review the migration schedule by navigating to the Migration blade of your Single Server instance. +* The **Single Server overview page** for your instance displays a portal banner with information about your migration schedule. +* For Single Servers scheduled for automigration, a new **Migration blade** is enabled on the portal. You can review the migration schedule by navigating to the Migration blade of your Single Server instance. * If you wish to defer the migration, you can defer by a month at a time by navigating to the Migration blade of your Single Server instance on the Azure portal and rescheduling the migration by selecting another migration window within a month.-* If your Single Server has General Purpose SKU, you have the other option to enable High Availability when reviewing the migration schedule. As High Availability can only be enabled during create time for a MySQL Flexible Server, it's highly recommended that you enable this feature when reviewing the migration schedule.
+* If your Single Server has **General Purpose SKU**, you also have the option to enable **High Availability** when reviewing the migration schedule. As High Availability can only be enabled during create time for a MySQL Flexible Server, it's highly recommended that you enable this feature when reviewing the migration schedule. ## Pre-requisite checks for in-place auto-migration -* The Single Server instance should be in ready state and should not be in stopped state during the planned maintenance window for automigration to take place. -* For Single Server instance with SSL enabled, ensure you have both certificates (BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string create a combined CA certificate before scheduled auto-migration by following steps [here](../single-server/concepts-certificate-rotation.md#create-a-combined-ca-certificate) to ensure business continuity post-migration. +* The Single Server instance should be in **ready state** and should not be in stopped state during the planned maintenance window for automigration to take place. +* For a Single Server instance with **SSL enabled**, ensure you have both certificates (**BaltimoreCyberTrustRoot & DigiCertGlobalRootG2 Root CA**) available in the trusted root store. Additionally, if you have the certificate pinned to the connection string, create a combined CA certificate before the scheduled auto-migration by following the steps [here](../single-server/concepts-certificate-rotation.md#create-a-combined-ca-certificate) to ensure business continuity post-migration. +* The MySQL engine doesn't guarantee any sort order if there is no 'ORDER BY' clause present in queries. Post in-place automigration, you may observe a change in the sort order. If preserving sort order is crucial, ensure your queries are updated to include an 'ORDER BY' clause before the scheduled in-place automigration.
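The combined CA certificate referenced above is, at its core, a concatenation of the two PEM-encoded root certificates into a single file. A minimal sketch in Python (the function name and file paths are illustrative, not part of any Azure tooling):

```python
from pathlib import Path

def combine_ca_certs(cert_paths, out_path):
    # Concatenate PEM-encoded CA certificates into one combined file,
    # preserving their order, one certificate block after another.
    combined = "\n".join(Path(p).read_text().strip() for p in cert_paths)
    Path(out_path).write_text(combined + "\n")
```

For example, `combine_ca_certs(["BaltimoreCyberTrustRoot.crt.pem", "DigiCertGlobalRootG2.crt.pem"], "combined-ca.crt.pem")` (assumed file names for the downloaded root CAs) produces a single file you can point your client's SSL CA option at.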
## How is the target MySQL Flexible Server auto-provisioned? Following described are the ways to review your migration schedule once you have | Memory Optimized | 32 | MemoryOptimized | Standard_E32ds_v4 | * The MySQL version, region, storage size, subscription, and resource group for the target Flexible Server are the same as those of the source Single Server.-*For Single Servers with less than 20 GiB storage, the storage size is set to 20 GiB as that is the minimum storage limit on Azure Database for MySQL - Flexible Server. +* For Single Servers with less than 20 GiB storage, the storage size is set to 20 GiB as that is the minimum storage limit on Azure Database for MySQL - Flexible Server. * Both username formats – username@server_name (Single Server) and username (Flexible Server) are supported on the migrated Flexible Server. * Both connection string formats – Single Server and Flexible Server are supported on the migrated Flexible Server. Copy the following properties from the source Single Server to target Flexible S **Q. How can I set up or view in-place migration alerts?** -**A.** +**A.** The following are the ways you can set up alerts: * Configure service health alerts to receive in-place migration schedule and progress notifications via email/SMS by following steps [here](../single-server/concepts-planned-maintenance-notification.md#to-receive-planned-maintenance-notification). * Check the in-place migration notification on the Azure portal by following steps [here](../single-server/concepts-planned-maintenance-notification.md#check-planned-maintenance-notification-from-azure-portal). Copy the following properties from the source Single Server to target Flexible S **Q.
What are some post-migration activities I need to perform?** -**A.** +**A.** The following are some post-migration activities: * Monitoring page settings (Alerts, Metrics, and Diagnostic settings) * Any Terraform/CLI scripts you host to manage your Single Server instance should be updated with Flexible Server references. |
mysql | Migrate Single Flexible Mysql Import Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-mysql-import-cli.md | iops | 500 | Number of IOPS to be allocated for the target Azure Database for My ## How long does MySQL Import take to migrate my Single Server instance? Below is the benchmarked performance based on storage size.+ | Single Server Storage Size | MySQL Import time | | - |:-:| | 1 GiB | 0 min 23 secs | Below is the benchmarked performance based on storage size. From the table above, as the storage size increases, the time required for data copying also increases, almost in a linear relationship. However, it's important to note that copy speed can be significantly impacted by network fluctuations. Therefore, the data provided here should be taken as a reference only. Below is the benchmarked performance based on a varying number of tables for 10 GiB storage size.+ | Number of tables in Single Server instance | MySQL Import time |- | - |:-:| + | - | :-: | | 100 | 4 min 24 secs | | 200 | 4 min 40 secs | | 800 | 4 min 52 secs | |
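The near-linear growth described above can be sketched as a piecewise-linear estimator. This is only a rough planning aid under assumed benchmark points: of the `(size_gib, seconds)` pairs below, only the 1 GiB figure comes from the table, the larger ones are illustrative placeholders, and real copy speed varies with network conditions.

```python
def estimate_import_seconds(size_gib, benchmarks):
    # Piecewise-linear interpolation over benchmarked (size_gib, seconds) points;
    # sizes beyond the last point are extrapolated from the final segment.
    pts = sorted(benchmarks)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if size_gib <= x1 or (x1, y1) == pts[-1]:
            return y0 + (y1 - y0) * (size_gib - x0) / (x1 - x0)
    return float(pts[0][1])

# Illustrative points: 1 GiB -> 23 s (from the table); larger sizes are assumptions.
benchmarks = [(1, 23), (10, 264), (100, 2400)]
```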
mysql | Concepts Connection Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md | MySQL offers standard database driver connectivity for using MySQL with applicat | PHP | Windows, Linux | [MySQL native driver for PHP - mysqlnd](https://dev.mysql.com/downloads/connector/php-mysqlnd/) | [Download](https://secure.php.net/downloads.php) | | ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) | | ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) |-| JDBC | Platform independent | [MySQL Connector/J 5.1 Developer Guide](https://dev.mysql.com/doc/connector-j/5.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) | +| JDBC | Platform independent | [MySQL Connector/J 8.1 Developer Guide](https://dev.mysql.com/doc/connector-j/8.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) | | Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) | | Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) | | C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | [Download](https://dev.mysql.com/downloads/connector/cpp/) | |
mysql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md | |
mysql | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md | |
network-watcher | Diagnose Vm Network Traffic Filtering Problem Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md | Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure CLI' + Title: 'Quickstart: Diagnose a VM traffic filter problem - Azure CLI' -description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher in Azure CLI. +description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using Azure Network Watcher IP flow verify in Azure CLI. Previously updated : 06/30/2023--#Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM. Last updated : 08/23/2023++#Customer intent: I want to diagnose a virtual machine (VM) network traffic filter using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM. # Quickstart: Diagnose a virtual machine network traffic filter problem using the Azure CLI -Azure allows and denies network traffic to and from a virtual machine based on its [effective security rules](network-watcher-security-group-view-overview.md). These security rules come from the network security groups applied to the virtual machine's network interface and subnet. +In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. 
You also learn how to use the [effective security rules](network-watcher-security-group-view-overview.md) for a network interface to determine why a security rule is allowing or denying traffic. -In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. If you don't have an Azure subscription, create a [free account](https://azure.m In this section, you create a virtual network and a subnet in the East US region. Then, you create a virtual machine in the subnet with a default network security group. -1. Before you can create a VM, you must create a resource group to contain the VM. Create a resource group with [az group create](/cli/azure/group). The following example creates a resource group named *myResourceGroup* in the *eastus* location: +1. Create a resource group using [az group create](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed. -```azurecli-interactive -az group create --name myResourceGroup --location eastus -``` + ```azurecli-interactive + # Create a resource group. + az group create --name 'myResourceGroup' --location 'eastus' + ``` -2. Create a VM with [az vm create](/cli/azure/vm). If SSH keys don't already exist in a default key location, the command creates them. To use a specific set of keys, use the `--ssh-key-value` option. The following example creates a VM named *myVm*: +1. Create a virtual network using [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). 
-```azurecli-interactive -az vm create \ - --resource-group myResourceGroup \ - --name myVm \ - --image UbuntuLTS \ - --generate-ssh-keys -``` + ```azurecli-interactive + # Create a virtual network and a subnet. + az network vnet create --resource-group 'myResourceGroup' --name 'myVNet' --subnet-name 'mySubnet' --subnet-prefixes 10.0.0.0/24 + ``` ++1. Create a default network security group using [az network nsg create](/cli/azure/network/nsg#az-network-nsg-create). ++ ```azurecli-interactive + # Create a default network security group. + az network nsg create --name 'myVM-nsg' --resource-group 'myResourceGroup' --location 'eastus' + ``` ++1. Create a virtual machine using [az vm create](/cli/azure/vm#az-vm-create). When prompted, enter a username and password. -The VM takes a few minutes to create. Don't continue with the remaining steps until the VM is created and the Azure CLI returns the output. + ```azurecli-interactive + # Create a Linux virtual machine using the latest Ubuntu 20.04 LTS image. + az vm create --resource-group 'myResourceGroup' --name 'myVM' --location 'eastus' --vnet-name 'myVNet' --subnet 'mySubnet' --public-ip-address '' --nsg 'myVM-nsg' --image 'Canonical:0001-com-ubuntu-server-focal:20_04-lts-gen2:latest' + ``` ## Test network communication using IP flow verify In this section, you use the IP flow verify capability of Network Watcher to test network communication to and from the virtual machine. -When you create a VM, Azure allows and denies network traffic to and from the VM, by default. You might override Azure's defaults later, allowing or denying additional types of traffic. To test whether traffic is allowed or denied to different destinations and from a source IP address, use the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command. +1. 
Use [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command to test outbound communication from **myVM** to **13.107.21.200** using IP flow verify (`13.107.21.200` is one of the public IP addresses used by `www.bing.com`): -Test outbound communication from the VM to one of the IP addresses for www.bing.com: -```azurecli-interactive -az network watcher test-ip-flow \ - --direction outbound \ - --local 10.0.0.4:60000 \ - --protocol TCP \ - --remote 13.107.21.200:80 \ - --vm myVm \ - --nic myVmVMNic \ - --resource-group myResourceGroup \ - --out table -``` + ```azurecli-interactive + # Start the IP flow verify session to test outbound flow to www.bing.com. + az network watcher test-ip-flow --direction 'outbound' --protocol 'TCP' --local '10.0.0.4:60000' --remote '13.107.21.200:80' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table' + ``` -After several seconds, the result returned informs you that access is allowed by a security rule named **DenyAllOutBound**. + After a few seconds, you get a similar output to the following example: -Test outbound communication from the VM to 172.31.0.100: + ```output + Access RuleName + -- + Allow defaultSecurityRules/AllowInternetOutBound + ``` -```azurecli-interactive -az network watcher test-ip-flow \ - --direction outbound \ - --local 10.0.0.4:60000 \ - --protocol TCP \ - --remote 172.31.0.100:80 \ - --vm myVm \ - --nic myVmVMNic \ - --resource-group myResourceGroup \ - --out table -``` + The test result indicates that access is allowed to **13.107.21.200** because of the default security rule **AllowInternetOutBound**. By default, Azure virtual machines can access the internet. -The result returned informs you that access is denied by a security rule named **DenyAllOutBound**. +1. Change **RemoteIPAddress** to **10.0.1.10** and repeat the test. **10.0.1.10** is a private IP address in **myVNet** address space. 
-Test inbound communication to the VM from 172.31.0.100: + ```azurecli-interactive + # Start the IP flow verify session to test outbound flow to 10.0.1.10. + az network watcher test-ip-flow --direction 'outbound' --protocol 'TCP' --local '10.0.0.4:60000' --remote '10.0.1.10:80' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table' + ``` -```azurecli-interactive -az network watcher test-ip-flow \ - --direction inbound \ - --local 10.0.0.4:80 \ - --protocol TCP \ - --remote 172.31.0.100:60000 \ - --vm myVm \ - --nic myVmVMNic \ - --resource-group myResourceGroup \ - --out table -``` + After a few seconds, you get a similar output to the following example: -The result returned informs you that access is denied because of a security rule named **DenyAllInBound**. Now that you know which security rules are allowing or denying traffic to or from a VM, you can determine how to resolve the problems. + ```output + Access RuleName + -- + Allow defaultSecurityRules/AllowVnetOutBound + ``` -## View details of a security rule + The result of the second test indicates that access is allowed to **10.0.1.10** because of the default security rule **AllowVnetOutBound**. By default, an Azure virtual machine can access all IP addresses in the address space of its virtual network. -To determine why the rules in the previous section are allowing or preventing communication, review the effective security rules for the network interface with the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command: +1. Change **RemoteIPAddress** to **10.10.10.10** and repeat the test. **10.10.10.10** is a private IP address that isn't in **myVNet** address space. -```azurecli-interactive -az network nic list-effective-nsg \ - --resource-group myResourceGroup \ - --name myVmVMNic -``` + ```azurecli-interactive + # Start the IP flow verify session to test outbound flow to 10.10.10.10. 
+ az network watcher test-ip-flow --direction 'outbound' --protocol 'TCP' --local '10.0.0.4:60000' --remote '10.10.10.10:80' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table' + ``` -The output includes the following text for the **AllowInternetOutbound** rule that allowed outbound access to www.bing.com in a previous step under [Test network communication using IP flow verify](#test-network-communication-using-ip-flow-verify) section: + After a few seconds, you get output similar to the following example: + + ```output + Access RuleName + -- + Deny defaultSecurityRules/DenyAllOutBound + ``` -```console -{ - "access": "Allow", - "additionalProperties": {}, - "destinationAddressPrefix": "Internet", - "destinationAddressPrefixes": [ - "Internet" - ], - "destinationPortRange": "0-65535", - "destinationPortRanges": [ - "0-65535" - ], - "direction": "Outbound", - "expandedDestinationAddressPrefix": [ - "1.0.0.0/8", - "2.0.0.0/7", - "4.0.0.0/6", - "8.0.0.0/7", - "11.0.0.0/8", - "12.0.0.0/6", - ... - ], - "expandedSourceAddressPrefix": null, - "name": "defaultSecurityRules/AllowInternetOutBound", - "priority": 65001, - "protocol": "All", - "sourceAddressPrefix": "0.0.0.0/0", - "sourceAddressPrefixes": [ - "0.0.0.0/0" - ], - "sourcePortRange": "0-65535", - "sourcePortRanges": [ - "0-65535" - ] -}, -``` + The result of the third test indicates that access is denied to **10.10.10.10** because of the default security rule **DenyAllOutBound**.
Additionally, there are no higher priority (lower number) rules shown in the previous output that override this rule. To deny outbound communication to an IP address, you could add a security rule with a higher priority, that denies port 80 outbound to the IP address. +1. Change **direction** to **inbound**, the local port to **80**, and the remote port to **60000**, and then repeat the test. -When you ran the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command to test outbound communication to 172.131.0.100 in [Test network communication using IP flow verify](#test-network-communication-using-ip-flow-verify) section, the output informed you that the **DenyAllOutBound** rule denied the communication. The **DenyAllOutBound** rule equates to the **DenyAllOutBound** rule listed in the following output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command: + ```azurecli-interactive + # Start the IP flow verify session to test inbound flow from 10.10.10.10. 
+ az network watcher test-ip-flow --direction 'inbound' --protocol 'TCP' --local '10.0.0.4:80' --remote '10.10.10.10:60000' --vm 'myVM' --nic 'myVmVMNic' --resource-group 'myResourceGroup' --out 'table' + ``` -```console -{ - "access": "Deny", - "additionalProperties": {}, - "destinationAddressPrefix": "0.0.0.0/0", - "destinationAddressPrefixes": [ - "0.0.0.0/0" - ], - "destinationPortRange": "0-65535", - "destinationPortRanges": [ - "0-65535" - ], - "direction": "Outbound", - "expandedDestinationAddressPrefix": null, - "expandedSourceAddressPrefix": null, - "name": "defaultSecurityRules/DenyAllOutBound", - "priority": 65500, - "protocol": "All", - "sourceAddressPrefix": "0.0.0.0/0", - "sourceAddressPrefixes": [ - "0.0.0.0/0" - ], - "sourcePortRange": "0-65535", - "sourcePortRanges": [ - "0-65535" - ] } + After a few seconds, you get output similar to the following example: + + ```output + Access RuleName + -- + Deny defaultSecurityRules/DenyAllInBound + ``` ++ The result of the fourth test indicates that access is denied from **10.10.10.10** because of the default security rule **DenyAllInBound**. By default, all access to an Azure virtual machine from outside the virtual network is denied. ++## View details of a security rule ++To determine why the rules in the previous section allow or deny communication, review the effective security rules for the network interface of the **myVM** virtual machine using the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command: ++```azurecli-interactive +# Get the effective security rules for the network interface of myVM. +az network nic list-effective-nsg --resource-group 'myResourceGroup' --name 'myVmVMNic' ```
The rule denies the outbound communication to 172.131.0.100 because the address is not within the **destinationAddressPrefix** of any of the other outbound rules in the output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command. To allow the outbound communication, you could add a security rule with a higher priority, that allows outbound traffic to port 80 at 172.131.0.100. +The returned output includes the following information for the **AllowInternetOutbound** rule that allowed outbound access to `www.bing.com`: -When you ran the [az network watcher test-ip-flow](/cli/azure/network/watcher#az-network-watcher-test-ip-flow) command in [Test network communication using IP flow verify](#test-network-communication-using-ip-flow-verify) section to test inbound communication from 172.131.0.100, the output informed you that the **DenyAllInBound** rule denied the communication. The **DenyAllInBound** rule equates to the **DenyAllInBound** rule listed in the following output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command: -```console +```output {- "access": "Deny", - "additionalProperties": {}, - "destinationAddressPrefix": "0.0.0.0/0", - "destinationAddressPrefixes": [ - "0.0.0.0/0" - ], - "destinationPortRange": "0-65535", - "destinationPortRanges": [ - "0-65535" - ], - "direction": "Inbound", - "expandedDestinationAddressPrefix": null, - "expandedSourceAddressPrefix": null, - "name": "defaultSecurityRules/DenyAllInBound", - "priority": 65500, - "protocol": "All", - "sourceAddressPrefix": "0.0.0.0/0", - "sourceAddressPrefixes": [ - "0.0.0.0/0" - ], - "sourcePortRange": "0-65535", - "sourcePortRanges": [ - "0-65535" - ] + "access": "Allow", + "destinationAddressPrefix": "Internet", + "destinationAddressPrefixes": [ + "Internet" + ], + "destinationPortRange": "0-65535", + "destinationPortRanges": [ + "0-65535" + ], + "direction": "Outbound", + 
"expandedDestinationAddressPrefix": [ + "1.0.0.0/8", + "2.0.0.0/7", + "4.0.0.0/9", + "4.144.0.0/12", + "4.160.0.0/11", + "4.192.0.0/10", + "5.0.0.0/8", + "6.0.0.0/7", + "8.0.0.0/7", + "11.0.0.0/8", + "12.0.0.0/8", + "13.0.0.0/10", + "13.64.0.0/11", + "13.104.0.0/13", + "13.112.0.0/12", + "13.128.0.0/9", + "14.0.0.0/7", + ... + ... + ... + "200.0.0.0/5", + "208.0.0.0/4" + ], + "name": "defaultSecurityRules/AllowInternetOutBound", + "priority": 65001, + "protocol": "All", + "sourceAddressPrefix": "0.0.0.0/0", + "sourceAddressPrefixes": [ + "0.0.0.0/0", + "0.0.0.0/0" + ], + "sourcePortRange": "0-65535", + "sourcePortRanges": [ + "0-65535" + ] }, ``` -The **DenyAllInBound** rule is applied because, as shown in the output, no other higher priority rule exists in the output from the [az network nic list-effective-nsg](/cli/azure/network/nic#az-network-nic-list-effective-nsg) command that allows port 80 inbound to the VM from 172.131.0.100. To allow the inbound communication, you could add a security rule with a higher priority that allows port 80 inbound from 172.131.0.100. +You can see in the output that address prefix **13.104.0.0/13** is among the address prefixes of **AllowInternetOutBound** rule. This prefix encompasses the IP address **13.107.21.200**, which you utilized to test outbound communication to `www.bing.com`. -The checks in this quickstart tested Azure configuration. If the checks return the expected results and you still have network problems, ensure that you don't have a firewall between your VM and the endpoint you're communicating with and that the operating system in your VM doesn't have a firewall that is allowing or denying communication. +Similarly, you can check the other rules to see the source and destination IP address prefixes under each rule. 
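The prefix check described above is easy to reproduce with Python's standard `ipaddress` module. The sketch below uses a few of the expanded destination prefixes from the effective-security-rules output to confirm which one covers the test address:

```python
import ipaddress

# A few of the expanded destination prefixes of AllowInternetOutBound taken from
# the output above; 13.107.21.200 is the www.bing.com address used in the tests.
prefixes = ["13.64.0.0/11", "13.104.0.0/13", "13.112.0.0/12", "13.128.0.0/9"]
ip = ipaddress.ip_address("13.107.21.200")
matches = [p for p in prefixes if ip in ipaddress.ip_network(p)]
print(matches)  # ['13.104.0.0/13']
```

Only **13.104.0.0/13** (second octet 104–111) contains the address, which is why the **AllowInternetOutBound** rule matched the outbound test to `www.bing.com`.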
## Clean up resources -When no longer needed, you can use [az group delete](/cli/azure/group) to remove the resource group and all of the resources it contains: +When no longer needed, use [az group delete](/cli/azure/group) to delete **myResourceGroup** resource group and all of the resources it contains: ```azurecli-interactive-az group delete --name myResourceGroup --yes +# Delete the resource group and all resources it contains. +az group delete --name 'myResourceGroup' --yes ``` ## Next steps In this quickstart, you created a VM and diagnosed inbound and outbound network traffic filters. You learned that network security group rules allow or deny traffic to and from a VM. Learn more about [security rules](../virtual-network/network-security-groups-overview.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json) and how to [create security rules](../virtual-network/manage-network-security-group.md?toc=%2fazure%2fnetwork-watcher%2ftoc.json#create-a-security-rule). -Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem-cli.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-cli.md). +Even with the proper network traffic filters in place, communication to a virtual machine can still fail, due to routing configuration. To learn how to diagnose virtual machine routing problems, see [Diagnose a virtual machine network routing problem](diagnose-vm-network-routing-problem-powershell.md). To diagnose outbound routing, latency, and traffic filtering problems with one tool, see [Troubleshoot connections with Azure Network Watcher](network-watcher-connectivity-powershell.md). |
network-watcher | Diagnose Vm Network Traffic Filtering Problem Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md | Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure PowerShell' + Title: 'Quickstart: Diagnose a VM traffic filter problem - Azure PowerShell' -description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher in Azure PowerShell. +description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using Azure Network Watcher IP flow verify in Azure PowerShell. Previously updated : 07/17/2023--#Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM. Last updated : 08/23/2023++#Customer intent: I want to diagnose a virtual machine (VM) network traffic filter using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM. # Quickstart: Diagnose a virtual machine network traffic filter problem using Azure PowerShell -Azure allows and denies network traffic to and from a virtual machine based on its [effective security rules](network-watcher-security-group-view-overview.md). These security rules come from the network security groups applied to the virtual machine's network interface and subnet. --In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. 
+In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. You also learn how to use the [effective security rules](network-watcher-security-group-view-overview.md) for a network interface to determine why a security rule is allowing or denying traffic. :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem-powershell/ip-flow-verify-quickstart-diagram.png" alt-text="Diagram shows the resources created in Network Watcher quickstart."::: In this section, you use the IP flow verify capability of Network Watcher to tes 1. Change **Direction** to **Inbound**, the **LocalPort** to **80**, and the **RemotePort** to **60000**, and then repeat the test. ```azurepowershell-interactive- # Start the IP flow verify session to test outbound flow to 10.10.10.10. + # Start the IP flow verify session to test inbound flow from 10.10.10.10. Test-AzNetworkWatcherIPFlow -Location 'eastus' -TargetVirtualMachineId $vm.Id -Direction 'Inbound' -Protocol 'TCP' -RemoteIPAddress '10.10.10.10' -RemotePort '60000' -LocalIPAddress '10.0.0.4' -LocalPort '80' ``` Similarly, you can check the other rules to see the source and destination IP ad When no longer needed, use [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) to delete the resource group and all of the resources it contains: ```azurepowershell-interactive-# Delete the resource group and all resources it contains. +# Delete the resource group and all resources it contains. Remove-AzResourceGroup -Name 'myResourceGroup' -Force ``` |
network-watcher | Diagnose Vm Network Traffic Filtering Problem | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md | Title: 'Quickstart: Diagnose a VM network traffic filter problem - Azure portal' + Title: 'Quickstart: Diagnose a VM traffic filter problem - Azure portal' -description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher in the Azure portal. +description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using Azure Network Watcher IP flow verify in the Azure portal. Previously updated : 07/17/2023--#Customer intent: I need to diagnose and troubleshoot a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM. Last updated : 08/23/2023+#Customer intent: I want to diagnose a virtual machine (VM) network traffic filter using IP flow verify to know which security rule is denying the traffic and causing the communication problem to the VM. # Quickstart: Diagnose a virtual machine network traffic filter problem using the Azure portal -Azure allows and denies network traffic to and from a virtual machine based on its [effective security rules](network-watcher-security-group-view-overview.md). These security rules come from the network security groups applied to the virtual machine's network interface and subnet. --In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the cause of a communication failure and learn how you can resolve it. 
+In this quickstart, you deploy a virtual machine and use Network Watcher [IP flow verify](network-watcher-ip-flow-verify-overview.md) to test the connectivity to and from different IP addresses. Using the IP flow verify results, you determine the security rule that's blocking the traffic and causing the communication failure and learn how you can resolve it. You also learn how to use the [effective security rules](network-watcher-security-group-view-overview.md) for a network interface to determine why a security rule is allowing or denying traffic. :::image type="content" source="./media/diagnose-vm-network-traffic-filtering-problem/ip-flow-verify-quickstart-diagram.png" alt-text="Diagram shows the resources created in Network Watcher quickstart."::: |
network-watcher | Monitor Vm Communication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/monitor-vm-communication.md | Title: 'Tutorial: Monitor network communication between two VMs - Azure portal'+ description: In this tutorial, learn how to monitor network communication between two Azure virtual machines with Azure Network Watcher's connection monitor capability. + Previously updated : 07/17/2023-- -# Customer intent: I need to monitor the communication between two virtual machines in Azure. If the communication fails, I need to be alerted and I want to know why it failed, so that I can resolve the problem. Last updated : 08/24/2023+#CustomerIntent: As an Azure administrator, I want to monitor the communication between two virtual machines in Azure so I can be alerted and take action if the communication fails. I also want to know why the communication failed, so that I can resolve the problem. # Tutorial: Monitor network communication between two virtual machines using the Azure portal In this tutorial, you learn how to: > * Monitor communication between the two virtual machines > * Diagnose a communication problem between the two virtual machines ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. + ## Prerequisites -- An Azure account with an active subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.+- An Azure account with an active subscription. ## Sign in to Azure |
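At its core, connection monitor repeatedly checks whether one endpoint can reach another over the network and alerts you when the check fails. The minimal sketch below illustrates that underlying idea with a plain TCP reachability probe against a throwaway local listener; it is an illustration of the concept only, not the Azure service or its API.

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: probe a local listener on an ephemeral port, then probe again after
# the listener is gone, which simulates a communication failure.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

print(tcp_reachable("127.0.0.1", port))  # True: the listener accepts connections
listener.close()
print(tcp_reachable("127.0.0.1", port))  # False: nothing is listening anymore
```

A monitoring loop that runs a probe like this on a schedule and raises an alert on a `False` result is, conceptually, what the connection monitor capability automates between two VMs.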
network-watcher | Required Rbac Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md | Title: Azure RBAC permissions required to use Azure Network Watcher capabilities description: Learn which Azure role-based access control (Azure RBAC) permissions are required to use Azure Network Watcher capabilities.- + Previously updated : 04/03/2023-- Last updated : 08/18/2023 # Azure role-based access control permissions required to use Network Watcher capabilities Azure role-based access control (Azure RBAC) enables you to assign only the spec | Microsoft.Network/networkWatchers/write | Create or update a network watcher | | Microsoft.Network/networkWatchers/delete | Delete a network watcher | -## NSG flow logs +## Flow logs | Action | Description | | | - | | Microsoft.Network/networkWatchers/configureFlowLog/action | Configure a flow Log | | Microsoft.Network/networkWatchers/queryFlowLogStatus/action | Query status for a flow log |+Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action | Fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account | ## Connection troubleshoot Microsoft.Network/networkWatchers/packetCaptures/queryStatus/read | View the sta Network Watcher capabilities also require the following actions: -| Action(s) | Description | -| | - | -| Microsoft.Authorization/\*/Read | Used to fetch Azure role assignments and policy definitions | -| Microsoft.Resources/subscriptions/resourceGroups/Read | Used to enumerate all the resource groups in a subscription | -| Microsoft.Storage/storageAccounts/Read | Used to get the properties for the specified storage account | -| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> 
Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action| Used to fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account | -| Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Used to log in to the VM, do a packet capture and upload it to storage account| -| Microsoft.Compute/virtualMachines/extensions/Read </br> Microsoft.Compute/virtualMachines/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary | -| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write| Used to access virtual machine scale sets, do packet captures and upload them to storage account| -| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Used to check if Network Watcher extension is present, and install if necessary | -| Microsoft.Insights/alertRules/* | Used to set up metric alerts | -| Microsoft.Support/* | Used to create and update support tickets from Network Watcher | +| Action(s) | Description | +| | - | +| Microsoft.Authorization/\*/Read | Fetch Azure role assignments and policy definitions | +| Microsoft.Resources/subscriptions/resourceGroups/Read | Enumerate all the resource groups in a subscription | +| Microsoft.Storage/storageAccounts/Read | Get the properties for the specified storage account | +| Microsoft.Storage/storageAccounts/listServiceSas/Action, </br> Microsoft.Storage/storageAccounts/listAccountSas/Action, <br> Microsoft.Storage/storageAccounts/listKeys/Action | Used to fetch shared access signatures (SAS) enabling [secure access to storage account](../storage/common/storage-sas-overview.md) and write to the storage account | +| Microsoft.Compute/virtualMachines/Read, </br> Microsoft.Compute/virtualMachines/Write| Log in 
to the VM, do a packet capture and upload it to storage account | +| Microsoft.Compute/virtualMachines/extensions/Read, </br> Microsoft.Compute/virtualMachines/extensions/Write | Check if Network Watcher extension is present, and install if necessary | +| Microsoft.Compute/virtualMachineScaleSets/Read, </br> Microsoft.Compute/virtualMachineScaleSets/Write | Access virtual machine scale sets, do packet captures and upload them to storage account | +| Microsoft.Compute/virtualMachineScaleSets/extensions/Read, </br> Microsoft.Compute/virtualMachineScaleSets/extensions/Write| Check if Network Watcher extension is present, and install if necessary | +| Microsoft.Insights/alertRules/* | Set up metric alerts | +| Microsoft.Support/* | Create and update support tickets from Network Watcher | |
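Several actions in the tables above use wildcards (for example `Microsoft.Authorization/*/Read` and `Microsoft.Insights/alertRules/*`), where `*` matches any sequence of characters in the action string. The sketch below shows one way to check a concrete operation against such patterns using Python's `fnmatch`; the granted list is hypothetical, and this is a simplified illustration, not the actual Azure RBAC evaluation engine (which also handles `NotActions`, scopes, and data actions).

```python
from fnmatch import fnmatch

# Hypothetical set of granted action patterns from a role definition.
granted = [
    "Microsoft.Authorization/*/Read",
    "Microsoft.Insights/alertRules/*",
    "Microsoft.Storage/storageAccounts/Read",
]

def is_allowed(operation):
    # Compare case-insensitively, mirroring the case-insensitive action names;
    # fnmatch's '*' spans '/' separators, as Azure action wildcards do.
    return any(fnmatch(operation.lower(), pattern.lower()) for pattern in granted)

print(is_allowed("Microsoft.Insights/alertRules/Write"))               # True
print(is_allowed("Microsoft.Storage/storageAccounts/listKeys/Action")) # False
```

In the second call, `listKeys/Action` is not covered by any granted pattern, so a flow-log configuration needing that action would fail until the role grants it.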
network-watcher | Traffic Analytics Schema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md | Title: Traffic analytics schema and data aggregation description: Learn about schema and data aggregation in Azure Network Watcher traffic analytics to analyze flow logs. --- Previously updated : 04/11/2023 -++ Last updated : 08/25/2023+#CustomerIntent: As an administrator, I want to learn about the traffic analytics schema so I can easily use the queries and understand their output. # Schema and data aggregation in Azure Network Watcher traffic analytics Traffic analytics is a cloud-based solution that provides visibility into user a ## Data aggregation +# [**NSG flow logs**](#tab/nsg) + - All flow logs at a network security group between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t` are captured at one-minute intervals as blobs in a storage account. - Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes.-- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol` (TCP or UDP) (Note: source port is excluded for aggregation) are clubbed into a single flow by traffic analytics.-- This single record is decorated (details in the section below) and ingested in Log Analytics by traffic analytics. This process can take up to 1 hour max.+- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation). 
+- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. This process can take up to 1 hour. - `FlowStartTime_t` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`.-- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Log Analytics user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob.+- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen by the network security group, but in Azure Monitor logs, user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob. ++# [**VNet flow logs (preview)**](#tab/vnet) ++- All flow logs between `FlowIntervalStartTime` and `FlowIntervalEndTime` are captured at one-minute intervals as blobs in a storage account. +- Default processing interval of traffic analytics is 60 minutes, meaning that every hour, traffic analytics picks blobs from the storage account for aggregation. However, if a processing interval of 10 minutes is selected, traffic analytics will instead pick blobs from the storage account every 10 minutes. +- Flows that have the same `Source IP`, `Destination IP`, `Destination port`, `NSG name`, `NSG rule`, `Flow Direction`, and `Transport layer protocol (TCP or UDP)` are clubbed into a single flow by traffic analytics (Note: source port is excluded for aggregation). +- This single record is decorated (details in the section below) and ingested in Azure Monitor logs by traffic analytics. 
This process can take up to 1 hour. +- `FlowStartTime` field indicates the first occurrence of such an aggregated flow (same four-tuple) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. +- For any resource in traffic analytics, the flows indicated in the Azure portal are total flows seen, but in Azure Monitor logs, user sees only the single, reduced record. To see all the flows, use the `blob_id` field, which can be referenced from storage. The total flow count for that record matches the individual flows seen in the blob. ++ The following query helps you look at all subnets interacting with non-Azure public IPs in the last 30 days. TableWithBlobId The previous query constructs a URL to access the blob directly. The URL with placeholders is as follows: ```-https://{saName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{nsgName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json +https://{storageAccountName}@insights-logs-networksecuritygroupflowevent/resoureId=/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroup}/PROVIDERS/MICROSOFT.NETWORK/NETWORKSECURITYGROUPS/{networkSecurityGroupName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json ``` ## Traffic analytics schema +Traffic analytics is built on top of Azure Monitor logs, so you can run custom queries on data decorated by traffic analytics and set alerts. ++### NSG flow logs ++The following table lists the fields in the schema and what they signify for NSG flow logs. ++| Field | Format | Comments | +| -- | | -- | +| **TableName** | AzureNetworkAnalytics_CL | Table for traffic analytics data. | +| **SubType_s** | FlowLog | Subtype for the flow logs. Use only **FlowLog**, other values of **SubType_s** are for internal use. | +| **FASchemaVersion_s** | 2 | Schema version. 
Doesn't reflect NSG flow log version. | +| **TimeProcessed_t** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. | +| **FlowIntervalStartTime_t** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). | +| **FlowIntervalEndTime_t** | Date and time in UTC | Ending time of the flow log processing interval. | +| **FlowStartTime_t** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. This flow gets aggregated based on aggregation logic. | +| **FlowEndTime_t** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). | +| **FlowType_s** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. | +| **SrcIP_s** | Source IP address | Blank in AzurePublic and ExternalPublic flows. | +| **DestIP_s** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. | +| **VMIP_s** | IP of the VM | Used for AzurePublic and ExternalPublic flows. | +| **DestPort_d** | Destination Port | Port at which traffic is incoming. | +| **L4Protocol_s** | - T <br> - U | Transport Protocol. T = TCP <br> U = UDP. | +| **L7Protocol_s** | Protocol Name | Derived from destination port. | +| **FlowDirection_s** | - I = Inbound <br> - O = Outbound | Direction of the flow: in or out of network security group per flow log. 
| +| **FlowStatus_s** | - A = Allowed <br> - D = Denied | Status of flow whether allowed or denied by the network security group per flow log. | +| **NSGList_s** | \<SUBSCRIPTIONID\>/\<RESOURCEGROUP_NAME\>/\<NSG_NAME\> | Network security group associated with the flow. | +| **NSGRules_s** | \<Index value 0>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | Network security group rule that allowed or denied this flow. | +| **NSGRule_s** | NSG_RULENAME | Network security group rule that allowed or denied this flow. | +| **NSGRuleType_s** | - User Defined <br> - Default | The type of network security group rule used by the flow. | +| **MACAddress_s** | MAC Address | MAC address of the NIC at which the flow was captured. | +| **Subscription_s** | Subscription of the Azure virtual network / network interface / virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). | +| **Subscription1_s** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. | +| **Subscription2_s** | Subscription ID | Subscription ID of virtual network/ network interface / virtual machine that the destination IP in the flow belongs to. | +| **Region_s** | Azure region of virtual network / network interface / virtual machine that the IP in the flow belongs to. | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). | +| **Region1_s** | Azure Region | Azure region of virtual network / network interface / virtual machine that the source IP in the flow belongs to. | +| **Region2_s** | Azure Region | Azure region of virtual network that the destination IP in the flow belongs to. 
| +| **NIC_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the VM sending or receiving the traffic. | +| **NIC1_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the source IP in the flow. | +| **NIC2_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. | +| **VM_s** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | Virtual Machine associated with the Network interface NIC_s. | +| **VM1_s** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual Machine associated with the source IP in the flow. | +| **VM2_s** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual Machine associated with the destination IP in the flow. | +| **Subnet_s** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the NIC_s. | +| **Subnet1_s** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the Source IP in the flow. | +| **Subnet2_s** | \<ResourceGroup_Name\>/<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the Destination IP in the flow. | +| **ApplicationGateway1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the Source IP in the flow. | +| **ApplicationGateway2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the Destination IP in the flow. | +| **ExpressRouteCircuit1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is sent from site via ExpressRoute. | +| **ExpressRouteCircuit2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is received from cloud by ExpressRoute. | +| **ExpressRouteCircuitPeeringType_s** | - AzurePrivatePeering <br> - AzurePublicPeering <br> - MicrosoftPeering | ExpressRoute peering type involved in the flow. 
| +| **LoadBalancer1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the Source IP in the flow. | +| **LoadBalancer2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the Destination IP in the flow. | +| **LocalNetworkGateway1_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the Source IP in the flow. | +| **LocalNetworkGateway2_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the Destination IP in the flow. | +| **ConnectionType_s** | - VNetPeering <br> - VpnGateway <br> - ExpressRoute | The connection type. | +| **ConnectionName_s** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ConnectionName\> | The connection name. For flow type P2S, it is formatted as \<gateway name\>_\<VPN Client IP\>. | +| **ConnectingVNets_s** | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks are populated here. | +| **Country_s** | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field share the same country code. | +| **AzureRegion_s** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field share the Azure region. | +| **AllowedInFlows_d** | | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. | +| **DeniedInFlows_d** | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). | +| **AllowedOutFlows_d** | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). 
| +| **DeniedOutFlows_d** | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). | +| **FlowCount_d** | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. | +| **InboundPackets_d** | Represents packets sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. | +| **OutboundPackets_d** | Represents packets sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. | +| **InboundBytes_d** | Represents bytes sent from the destination to the source of the flow | Populated only for Version 2 of NSG flow log schema. | +| **OutboundBytes_d** | Represents bytes sent from the source to the destination of the flow | Populated only for Version 2 of NSG flow log schema. | +| **CompletedFlows_d**| | Populated with nonzero value only for Version 2 of NSG flow log schema. | +| **PublicIPs_s** | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. | +| **SrcPublicIPs_s** | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. | +| **DestPublicIPs_s** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. | +| **IsFlowCapturedAtUDRHop_b** | - True <br> - False | If the flow was captured at a UDR hop, the value is True. | + > [!IMPORTANT]-> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing need to parse the `FlowDirection` field so that queries are simpler. 
These are changes in the updated schema: +> The traffic analytics schema was updated on August 22, 2019. The new schema provides source and destination IPs separately, removing the need to parse the `FlowDirection` field so that queries are simpler. The updated schema had the following changes: > > - `FASchemaVersion_s` updated from 1 to 2. > - Deprecated fields: `VMIP_s`, `Subscription_s`, `Region_s`, `NSGRules_s`, `Subnet_s`, `VM_s`, `NIC_s`, `PublicIPs_s`, `FlowCount_d` > - New fields: `SrcPublicIPs_s`, `DestPublicIPs_s`, `NSGRule_s`-> -> Deprecated fields are available until November 2022. -> -Traffic analytics is built on top of Log Analytics, so you can run custom queries on data decorated by traffic analytics and set alerts on the same. +### VNet flow logs (preview) -The following table lists the fields in the schema and what they signify. +The following table lists the fields in the schema and what they signify for VNet flow logs. | Field | Format | Comments | | -- | | -- |-| TableName | AzureNetworkAnalytics_CL | Table for traffic analytics data. | -| SubType_s | FlowLog | Subtype for the flow logs. Use only "FlowLog", other values of SubType_s are for internal workings of the product. | -| FASchemaVersion_s | 2 | Schema version. Doesn't reflect NSG flow log version. | -| TimeProcessed_t | Date and Time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. | -| FlowIntervalStartTime_t | Date and Time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). | -| FlowIntervalEndTime_t | Date and Time in UTC | Ending time of the flow log processing interval. | -| FlowStartTime_t | Date and Time in UTC | First occurrence of the flow (which will get aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. This flow gets aggregated based on aggregation logic. 
| -| FlowEndTime_t | Date and Time in UTC | Last occurrence of the flow (which will get aggregated) in the flow log processing interval between `FlowIntervalStartTime_t` and `FlowIntervalEndTime_t`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as `B` in the raw flow record). | -| FlowType_s | * IntraVNet <br> * InterVNet <br> * S2S <br> * P2S <br> * AzurePublic <br> * ExternalPublic <br> * MaliciousFlow <br> * Unknown Private <br> * Unknown | Definition in notes below the table. | -| SrcIP_s | Source IP address | Will be blank in case of AzurePublic and ExternalPublic flows. | -| DestIP_s | Destination IP address | Will be blank in case of AzurePublic and ExternalPublic flows. | -| VMIP_s | IP of the VM | Used for AzurePublic and ExternalPublic flows. | -| DestPort_d | Destination Port | Port at which traffic is incoming. | -| L4Protocol_s | * T <br> * U | Transport Protocol. T = TCP <br> U = UDP. | -| L7Protocol_s | Protocol Name | Derived from destination port. | -| FlowDirection_s | * I = Inbound<br> * O = Outbound | Direction of the flow in/out of NSG as per flow log. | -| FlowStatus_s | * A = Allowed by NSG Rule <br> * D = Denied by NSG Rule | Status of flow allowed/blocked by NSG as per flow log. | -| NSGList_s | \<SUBSCRIPTIONID>\/<RESOURCEGROUP_NAME>\/<NSG_NAME> | Network Security Group (NSG) associated with the flow. | -| NSGRules_s | \<Index value 0)>\|\<NSG_RULENAME>\|\<Flow Direction>\|\<Flow Status>\|\<FlowCount ProcessedByRule> | NSG rule that allowed or denied this flow. | -| NSGRule_s | NSG_RULENAME | NSG rule that allowed or denied this flow. | -| NSGRuleType_s | * User Defined * Default | The type of NSG Rule used by the flow. | -| MACAddress_s | MAC Address | MAC address of the NIC at which the flow was captured. 
| -| Subscription_s | Subscription of the Azure virtual network/ network interface/ virtual machine is populated in this field | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). | -| Subscription1_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. | -| Subscription2_s | Subscription ID | Subscription ID of virtual network/ network interface/ virtual machine to which the destination IP in the flow belongs to. | -| Region_s | Azure region of virtual network/ network interface/ virtual machine to which the IP in the flow belongs to | Applicable only for FlowType = S2S, P2S, AzurePublic, ExternalPublic, MaliciousFlow, and UnknownPrivate flow types (flow types where only one side is Azure). | -| Region1_s | Azure Region | Azure region of virtual network/ network interface/ virtual machine to which the source IP in the flow belongs to. | -| Region2_s | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs to. | -| NIC_s | \<resourcegroup_Name>\/\<NetworkInterfaceName> | NIC associated with the VM sending or receiving the traffic. | -| NIC1_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the source IP in the flow. | -| NIC2_s | <resourcegroup_Name>/\<NetworkInterfaceName> | NIC associated with the destination IP in the flow. | -| VM_s | <resourcegroup_Name>\/\<NetworkInterfaceName> | Virtual Machine associated with the Network interface NIC_s. | -| VM1_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the source IP in the flow. | -| VM2_s | <resourcegroup_Name>/\<VirtualMachineName> | Virtual Machine associated with the destination IP in the flow. | -| Subnet_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the NIC_s. 
| -| Subnet1_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Source IP in the flow. | -| Subnet2_s | <ResourceGroup_Name>/<VNET_Name>/\<SubnetName> | Subnet associated with the Destination IP in the flow. | -| ApplicationGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Source IP in the flow. | -| ApplicationGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<ApplicationGatewayName> | Application gateway associated with the Destination IP in the flow. | -| LoadBalancer1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Source IP in the flow. | -| LoadBalancer2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LoadBalancerName> | Load balancer associated with the Destination IP in the flow. | -| LocalNetworkGateway1_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Source IP in the flow. | -| LocalNetworkGateway2_s | \<SubscriptionID>/\<ResourceGroupName>/\<LocalNetworkGatewayName> | Local network gateway associated with the Destination IP in the flow. | -| ConnectionType_s | Possible values are VNetPeering, VpnGateway, and ExpressRoute | Connection Type. | -| ConnectionName_s | \<SubscriptionID>/\<ResourceGroupName>/\<ConnectionName> | Connection Name. For flow type P2S, it will be formatted as \<gateway name\>_\<VPN Client IP\>. | -| ConnectingVNets_s | Space separated list of virtual network names | In case of hub and spoke topology, hub virtual networks will be populated here. | -| Country_s | Two letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs_s field will share the same country code. | -| AzureRegion_s | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs_s field will share the Azure region. 
| -| AllowedInFlows_d | | Count of inbound flows that were allowed. This represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. | -| DeniedInFlows_d | | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). | -| AllowedOutFlows_d | | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). | -| DeniedOutFlows_d | | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). | -| FlowCount_d | Deprecated. Total flows that matched the same four-tuple. In case of flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. | -| InboundPackets_d | Represents packets sent from the destination to the source of the flow | This field is only populated for Version 2 of NSG flow log schema. | -| OutboundPackets_d | Represents packets sent from the source to the destination of the flow | This field is only populated for Version 2 of NSG flow log schema. | -| InboundBytes_d | Represents bytes sent from the destination to the source of the flow | This field is only populated Version 2 of NSG flow log schema. | -| OutboundBytes_d | Represents bytes sent from the source to the destination of the flow | This field is only populated Version 2 of NSG flow log schema. | -| CompletedFlows_d | | This field is only populated with nonzero value for Version 2 of NSG flow log schema. | -| PublicIPs_s | <PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. | -| SrcPublicIPs_s | <SOURCE_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. 
| -| DestPublicIPs_s | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. | +| **TableName** | NTANetAnalytics | Table for traffic analytics data. | +| **SubType** | FlowLog | Subtype for the flow logs. Use only **FlowLog**, other values of **SubType** are for internal use. | +| **FASchemaVersion** | 3 | Schema version. Doesn't reflect NSG flow log version. | +| **TimeProcessed** | Date and time in UTC | Time at which the traffic analytics processed the raw flow logs from the storage account. | +| **FlowIntervalStartTime** | Date and time in UTC | Starting time of the flow log processing interval (time from which flow interval is measured). | +| **FlowIntervalEndTime**| Date and time in UTC | Ending time of the flow log processing interval. | +| **FlowStartTime** | Date and time in UTC | First occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. This flow gets aggregated based on aggregation logic. | +| **FlowEndTime** | Date and time in UTC | Last occurrence of the flow (which gets aggregated) in the flow log processing interval between `FlowIntervalStartTime` and `FlowIntervalEndTime`. In terms of flow log v2, this field contains the time when the last flow with the same four-tuple started (marked as **B** in the raw flow record). | +| **FlowType** | - IntraVNet <br> - InterVNet <br> - S2S <br> - P2S <br> - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow <br> - Unknown Private <br> - Unknown | See [Notes](#notes) for definitions. | +| **SrcIP** | Source IP address | Blank in AzurePublic and ExternalPublic flows. | +| **DestIP** | Destination IP address | Blank in AzurePublic and ExternalPublic flows. | +| **TargetResourceId** | ResourceGroupName/ResourceName | The ID of the resource at which flow logging and traffic analytics is enabled. 
| +| **TargetResourceType** | VirtualNetwork/Subnet/NetworkInterface | Type of resource at which flow logging and traffic analytics is enabled (virtual network, subnet, NIC, or network security group).| +| **FlowLogResourceId** | ResourceGroupName/NetworkWatcherName/FlowLogName | The resource ID of the flow log. | +| **DestPort** | Destination Port | Port at which traffic is incoming. | +| **L4Protocol** | - T <br> - U | Transport Protocol. **T** = TCP <br> **U** = UDP | +| **L7Protocol** | Protocol Name | Derived from destination port. | +| **FlowDirection** | - **I** = Inbound <br> - **O** = Outbound | Direction of the flow: in or out of the network security group per flow log. | +| **FlowStatus** | - **A** = Allowed <br> - **D** = Denied | Status of flow: allowed or denied by network security group per flow log. | +| **NSGList** | \<SUBSCRIPTIONID\>/\<RESOURCEGROUP_NAME\>/\<NSG_NAME\> | Network security group associated with the flow. | +| **NSGRule** | NSG_RULENAME | Network security group rule that allowed or denied the flow. | +| **NSGRuleType** | - User Defined <br> - Default | The type of network security group rule used by the flow. | +| **MACAddress** | MAC Address | MAC address of the NIC at which the flow was captured. | +| **SrcSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the source IP in the flow belongs to. | +| **DestSubscription** | Subscription ID | Subscription ID of virtual network / network interface / virtual machine that the destination IP in the flow belongs to. | +| **SrcRegion** | Azure Region | Azure region of virtual network / network interface / virtual machine to which the source IP in the flow belongs. | +| **DestRegion** | Azure Region | Azure region of virtual network to which the destination IP in the flow belongs. | +| **SrcNIC** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the source IP in the flow. 
| +| **DestNIC** | \<resourcegroup_Name\>/\<NetworkInterfaceName\> | NIC associated with the destination IP in the flow. | +| **SrcVM** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the source IP in the flow. | +| **DestVM** | \<resourcegroup_Name\>/\<VirtualMachineName\> | Virtual machine associated with the destination IP in the flow. | +| **SrcSubnet** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the source IP in the flow. | +| **DestSubnet** | \<ResourceGroup_Name\>/\<VirtualNetwork_Name\>/\<SubnetName\> | Subnet associated with the destination IP in the flow. | +| **SrcApplicationGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the source IP in the flow. | +| **DestApplicationGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ApplicationGatewayName\> | Application gateway associated with the destination IP in the flow. | +| **SrcExpressRouteCircuit** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is sent from site via ExpressRoute. | +| **DestExpressRouteCircuit** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ExpressRouteCircuitName\> | ExpressRoute circuit ID - when flow is received from cloud by ExpressRoute. | +| **ExpressRouteCircuitPeeringType** | - AzurePrivatePeering <br> - AzurePublicPeering <br> - MicrosoftPeering | ExpressRoute peering type involved in the flow. | +| **SrcLoadBalancer** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the source IP in the flow. | +| **DestLoadBalancer** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LoadBalancerName\> | Load balancer associated with the destination IP in the flow. | +| **SrcLocalNetworkGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the source IP in the flow. 
| +| **DestLocalNetworkGateway** | \<SubscriptionID\>/\<ResourceGroupName\>/\<LocalNetworkGatewayName\> | Local network gateway associated with the destination IP in the flow. | +| **ConnectionType** | - VNetPeering <br> - VpnGateway <br> - ExpressRoute | The connection type. | +| **ConnectionName** | \<SubscriptionID\>/\<ResourceGroupName\>/\<ConnectionName\> | The connection name. For flow type P2S, it's formatted as \<GatewayName>_\<VPNClientIP> | +| **ConnectingVNets** | Space separated list of virtual network names. | In hub and spoke topology, hub virtual networks are populated here. | +| **Country** | Two-letter country code (ISO 3166-1 alpha-2) | Populated for flow type ExternalPublic. All IP addresses in PublicIPs field share the same country code. | +| **AzureRegion** | Azure region locations | Populated for flow type AzurePublic. All IP addresses in PublicIPs field share the Azure region. | +| **AllowedInFlows**| - | Count of inbound flows that were allowed, which represents the number of flows that shared the same four-tuple inbound to the network interface at which the flow was captured. | +| **DeniedInFlows** | - | Count of inbound flows that were denied. (Inbound to the network interface at which the flow was captured). | +| **AllowedOutFlows** | - | Count of outbound flows that were allowed (Outbound to the network interface at which the flow was captured). | +| **DeniedOutFlows** | - | Count of outbound flows that were denied (Outbound to the network interface at which the flow was captured). | +| **FlowCount** | Deprecated. Total flows that matched the same four-tuple. In flow types ExternalPublic and AzurePublic, count includes the flows from various PublicIP addresses as well. | - | +| **PacketsDestToSrc** | Represents packets sent from the destination to the source of the flow | Populated only for the Version 2 of NSG flow log schema. 
| +| **PacketsSrcToDest** | Represents packets sent from the source to the destination of the flow | Populated only for the Version 2 of NSG flow log schema. | +| **BytesDestToSrc** | Represents bytes sent from the destination to the source of the flow | Populated only for the Version 2 of NSG flow log schema. | +| **BytesSrcToDest** | Represents bytes sent from the source to the destination of the flow | Populated only for the Version 2 of NSG flow log schema. | +| **CompletedFlows** | - | Populated with nonzero value only for the Version 2 of NSG flow log schema. | +| **SrcPublicIPs** | \<SOURCE_PUBLIC_IP\>\|\<FLOW_STARTED_COUNT\>\|\<FLOW_ENDED_COUNT\>\|\<OUTBOUND_PACKETS\>\|\<INBOUND_PACKETS\>\|\<OUTBOUND_BYTES\>\|\<INBOUND_BYTES\> | Entries separated by bars. | +| **DestPublicIPs** | <DESTINATION_PUBLIC_IP>\|\<FLOW_STARTED_COUNT>\|\<FLOW_ENDED_COUNT>\|\<OUTBOUND_PACKETS>\|\<INBOUND_PACKETS>\|\<OUTBOUND_BYTES>\|\<INBOUND_BYTES> | Entries separated by bars. | +| **FlowEncryption** | - Encrypted <br>- Unencrypted <br>- Unsupported hardware <br>- Software not ready <br>- Drop due to no encryption <br>- Discovery not supported <br>- Destination on same host <br>- Fall back to no encryption. | Encryption level of flows. | +| **IsFlowCapturedAtUDRHop** | - True <br> - False | If the flow was captured at a UDR hop, the value is True. | ++> [!NOTE] +> *NTANetAnalytics* in VNet flow logs replaces *AzureNetworkAnalytics_CL* used in NSG flow logs. ## Public IP details schema Traffic analytics provides WHOIS data and geographic location for all public IPs The following table details public IP schema: +# [**NSG flow logs**](#tab/nsg) + | Field | Format | Comments | | -- | | -- |-| TableName | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. | -| SubType_s | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. | -| FASchemaVersion_s | 2 | Schema version. 
It doesn't reflect NSG flow log version. | -| FlowIntervalStartTime_t | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). | -| FlowIntervalEndTime_t | Date and Time in UTC | End time of the flow log processing interval. | -| FlowType_s | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | Definition in notes below the table. | -| IP | Public IP | Public IP whose information is provided in the record. | -| Location | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). | -| PublicIPDetails | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. | -| ThreatType | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). | -| ThreatDescription | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. | -| DNSDomain | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. | +| **TableName** | AzureNetworkAnalyticsIPDetails_CL | Table that contains traffic analytics IP details data. | +| **SubType_s** | FlowLog | Subtype for the flow logs. **Use only "FlowLog"**, other values of SubType_s are for internal workings of the product. | +| **FASchemaVersion_s** | 2 | Schema version. Doesn't reflect NSG flow log version. | +| **FlowIntervalStartTime_t** | Date and Time in UTC | Start time of the flow log processing interval (time from which flow interval is measured). 
| +| **FlowIntervalEndTime_t** | Date and Time in UTC | End time of the flow log processing interval. | +| **FlowType_s** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. | +| **IP** | Public IP | Public IP whose information is provided in the record. | +| **Location** | Location of the IP | - For Azure Public IP: Azure region of virtual network/network interface/virtual machine to which the IP belongs OR Global for IP [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - For External Public IP and Malicious IP: 2-letter country code where IP is located (ISO 3166-1 alpha-2). | +| **PublicIPDetails** | Information about IP | - For AzurePublic IP: Azure Service owning the IP or Microsoft virtual public IP for [168.63.129.16](../virtual-network/what-is-ip-address-168-63-129-16.md). <br> - ExternalPublic/Malicious IP: WhoIS information of the IP. | +| **ThreatType** | Threat posed by malicious IP | **For Malicious IPs only**: One of the threats from the list of currently allowed values (described in the next table). | +| **ThreatDescription** | Description of the threat | **For Malicious IPs only**: Description of the threat posed by the malicious IP. | +| **DNSDomain** | DNS domain | **For Malicious IPs only**: Domain name associated with this IP. | ++# [**VNet flow logs (preview)**](#tab/vnet) ++| Field | Format | Comments | +| -- | | -- | +| **TableName**| NTAIpDetails | Table that contains traffic analytics IP details data. | +| **SubType**| FlowLog | Subtype for the flow logs. Use only **FlowLog**. Other values of SubType are for internal workings of the product. | +| **FASchemaVersion** | 2 | Schema version. Doesn't reflect NSG flow Log version. | +| **FlowIntervalStartTime**| Date and time in UTC | Start time of the flow log processing interval (the time from which flow interval is measured). 
| +| **FlowIntervalEndTime**| Date and time in UTC | End time of the flow log processing interval. | +| **FlowType** | - AzurePublic <br> - ExternalPublic <br> - MaliciousFlow | See [Notes](#notes) for definitions. | +| **IP**| Public IP | Public IP whose information is provided in the record. | +| **PublicIPDetails** | Information about IP | **For AzurePublic IP**: Azure Service owning the IP or **Microsoft Virtual Public IP** for the IP 168.63.129.16. <br> **ExternalPublic/Malicious IP**: WhoIS information of the IP. | +| **ThreatType** | Threat posed by malicious IP | *For Malicious IPs only*. One of the threats from the list of currently allowed values. For more information, see [Notes](#notes). | +| **DNSDomain** | DNS domain | *For Malicious IPs only*. Domain name associated with this IP. | +| **ThreatDescription** | Description of the threat | *For Malicious IPs only*. Description of the threat posed by the malicious IP. | +| **Location** | Location of the IP | **For Azure Public IP**: Azure region of virtual network / network interface / virtual machine to which the IP belongs or Global for IP 168.63.129.16. <br> **For External Public IP and Malicious IP**: two-letter country code (ISO 3166-1 alpha-2) where IP is located. | ++> [!NOTE] +> *NTAIPDetails* in VNet flow logs replaces *AzureNetworkAnalyticsIPDetails_CL* used in NSG flow logs. ++ List of threat types: | Phishing | Indicators relating to a phishing campaign. | | Proxy | Indicator of a proxy service. | | PUA | Potentially Unwanted Application. |-| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or will require manual interpretation. `WatchList` should typically not be used by partners submitting data into the system. | +| WatchList | A generic bucket into which indicators are placed when it can't be determined exactly what the threat is or requires manual interpretation. 
`WatchList` should typically not be used by partners submitting data into the system. | ## Notes -- In case of `AzurePublic` and `ExternalPublic` flows, customer-owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to log analytics workspace is minimal. (This field will be deprecated soon and you should be using SrcIP_s and DestIP_s depending on whether the virtual machine was the source or the destination in the flow).+- In case of `AzurePublic` and `ExternalPublic` flows, customer-owned Azure virtual machine IP is populated in `VMIP_s` field, while the Public IP addresses are populated in the `PublicIPs_s` field. For these two flow types, you should use `VMIP_s` and `PublicIPs_s` instead of `SrcIP_s` and `DestIP_s` fields. For AzurePublic and ExternalPublic IP addresses, we aggregate further, so that the number of records ingested to Log Analytics workspace is minimal. (This field will be deprecated soon and you should be using SrcIP_s and DestIP_s depending on whether the virtual machine was the source or the destination in the flow). - Some field names are appended with `_s` or `_d`, which don't signify source and destination but indicate the data types *string* and *decimal* respectively. - Based on the IP addresses involved in the flow, we categorize the flows into the following flow types: - `IntraVNet`: Both IP addresses in the flow reside in the same Azure virtual network. 
List of threat types: ## Next Steps -- To learn more about traffic analytics, see [Azure Network Watcher Traffic analytics](traffic-analytics.md).-- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to traffic analytics frequently asked questions.--+- To learn more about traffic analytics, see [Traffic analytics overview](traffic-analytics.md). +- See [Traffic analytics FAQ](traffic-analytics-faq.yml) for answers to the most frequently asked questions about traffic analytics. |
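The bar-separated format of the `PublicIPs_s`, `SrcPublicIPs_s`, and `DestPublicIPs_s` fields described in the schema above can be unpacked programmatically. The sketch below is illustrative only: it assumes a single entry in the documented field order (IP, flow-started count, flow-ended count, outbound/inbound packets, outbound/inbound bytes), and the `PublicIpEntry` helper name and sample values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PublicIpEntry:
    """One bar-separated entry from a PublicIPs_s-style field (hypothetical helper)."""
    ip: str
    flows_started: int
    flows_ended: int
    outbound_packets: int
    inbound_packets: int
    outbound_bytes: int
    inbound_bytes: int

def parse_public_ip_entry(entry: str) -> PublicIpEntry:
    # Field order follows the schema table:
    # <IP>|<FLOW_STARTED_COUNT>|<FLOW_ENDED_COUNT>|<OUTBOUND_PACKETS>|<INBOUND_PACKETS>|<OUTBOUND_BYTES>|<INBOUND_BYTES>
    ip, *counts = entry.split("|")
    return PublicIpEntry(ip, *map(int, counts))

e = parse_public_ip_entry("203.0.113.7|4|2|100|50|12000|6000")
print(e.ip, e.flows_started, e.inbound_bytes)
```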
network-watcher | Vnet Flow Logs Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-cli.md | + + Title: Manage VNet flow logs - Azure CLI ++description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using the Azure CLI. ++++ Last updated : 08/16/2023++++# Create, change, enable, disable, or delete VNet flow logs using the Azure CLI ++> [!IMPORTANT] +> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md). ++In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using the Azure CLI. You can learn how to manage a VNet flow log using [PowerShell](vnet-flow-logs-powershell.md). ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- Insights provider. For more information, see [Register Insights provider](#register-insights-provider). ++- A virtual network. If you need to create a virtual network, see [Create a virtual network using the Azure CLI](../virtual-network/quick-create-cli.md). ++- An Azure storage account. 
If you need to create a storage account, see [Create a storage account using the Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli). ++- Bash environment in [Azure Cloud Shell](https://shell.azure.com) or the Azure CLI installed locally. To learn more about using Bash in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - Bash](../cloud-shell/quickstart.md). ++ - If you choose to install and use Azure CLI locally, this article requires the Azure CLI version 2.39.0 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure. ++## Register insights provider ++*Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure if the *Microsoft.Insights* provider is registered, use [az provider register](/cli/azure/provider#az-provider-register) to register it. ++```azurecli-interactive +# Register Microsoft.Insights provider. +az provider register --namespace Microsoft.Insights +``` ++## Enable VNet flow logs ++Use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log. ++```azurecli-interactive +# Create a VNet flow log. +az network watcher flow-log create --location eastus --resource-group myResourceGroup --name myVNetFlowLog --vnet myVNet --storage-account myStorageAccount +``` ++## Enable VNet flow logs and traffic analytics ++Use [az monitor log-analytics workspace create](/cli/azure/monitor/log-analytics/workspace#az-monitor-log-analytics-workspace-create) to create a traffic analytics workspace, and then use [az network watcher flow-log create](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-create) to create a VNet flow log that uses it. ++```azurecli-interactive +# Create a traffic analytics workspace. 
+az monitor log-analytics workspace create --name myWorkspace --resource-group myResourceGroup --location eastus ++# Create a VNet flow log. +az network watcher flow-log create --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --workspace myWorkspace --interval 10 --traffic-analytics true +``` ++## List all flow logs in a region ++Use [az network watcher flow-log list](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-list) to list all flow log resources in a particular region in your subscription. ++```azurecli-interactive +# Get all flow logs in East US region. +az network watcher flow-log list --location eastus --out table +``` ++## View VNet flow log resource ++Use [az network watcher flow-log show](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-show) to see details of a flow log resource. ++```azurecli-interactive +# Get the flow log details. +az network watcher flow-log show --name myVNetFlowLog --resource-group NetworkWatcherRG --location eastus +``` ++## Download a flow log ++To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). 
++VNet flow log files saved to a storage account follow the logging path shown in the following example: ++``` +https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json +``` ++## Disable traffic analytics on flow log resource ++To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [az network watcher flow-log update](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-update). ++```azurecli-interactive +# Update the VNet flow log. +az network watcher flow-log update --location eastus --name myVNetFlowLog --resource-group myResourceGroup --vnet myVNet --storage-account myStorageAccount --traffic-analytics false +``` ++## Delete a VNet flow log resource ++To delete a VNet flow log resource, use [az network watcher flow-log delete](/cli/azure/network/watcher/flow-log#az-network-watcher-flow-log-delete). ++```azurecli-interactive +# Delete the VNet flow log. +az network watcher flow-log delete --name myVNetFlowLog --location eastus +``` ++## Next steps ++- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md). +- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md). |
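The logging path template shown in the download section above can also be assembled programmatically. The following is a sketch only: the storage account, subscription ID, region, flow log name, and MAC address are hypothetical placeholders, and the zero-padding and uppercasing mimic the example path rather than a documented guarantee.

```python
from datetime import datetime

def flow_log_blob_url(storage_account: str, subscription_id: str, region: str,
                      flow_log_name: str, mac_address: str, hour: datetime) -> str:
    # Mirrors the documented template:
    # https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/...
    return (
        f"https://{storage_account}.blob.core.windows.net/"
        "insights-logs-flowlogflowevent/flowLogResourceID=/"
        f"SUBSCRIPTIONS/{subscription_id.upper()}/RESOURCEGROUPS/NETWORKWATCHERRG/"
        "PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/"
        f"NETWORKWATCHER_{region.upper()}/FLOWLOGS/{flow_log_name.upper()}/"
        f"y={hour.year}/m={hour.month:02d}/d={hour.day:02d}/h={hour.hour:02d}/m=00/"
        f"macAddress={mac_address}/PT1H.json"
    )

url = flow_log_blob_url("mystorageaccount", "00000000-0000-0000-0000-000000000000",
                        "eastus", "myVNetFlowLog", "000D3AF87856",
                        datetime(2023, 8, 16, 9))
print(url)
```

A URL built this way can then be fetched with any blob client, or the same blob can be downloaded through Azure Storage Explorer as noted in the article.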
network-watcher | Vnet Flow Logs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md | + + Title: VNet flow logs (preview) ++description: Learn about VNet flow logs feature of Azure Network Watcher. ++++ Last updated : 08/16/2023+++# VNet flow logs (preview) ++> [!IMPORTANT] +> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Virtual network (VNet) flow logs is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through a virtual network. Flow data is sent to Azure Storage from where you can access it and export it to any visualization tool, security information and event management (SIEM) solution, or intrusion detection system (IDS) of your choice. Network Watcher VNet flow logs capability overcomes some of the existing limitations of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md). ++## Why use flow logs? ++It's vital to monitor, manage, and know your network so that you can protect and optimize it. You may need to know the current state of the network, who's connecting, and where users are connecting from. You may also need to know which ports are open to the internet, what network behavior is expected, what network behavior is irregular, and when sudden rises in traffic happen. ++Flow logs are the source of truth for all network activity in your cloud environment. 
Whether you're in a startup that's trying to optimize resources or a large enterprise that's trying to detect intrusion, flow logs can help. You can use them for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions, and more. ++## Common use cases ++#### Network monitoring ++- Identify unknown or undesired traffic. +- Monitor traffic levels and bandwidth consumption. +- Filter flow logs by IP and port to understand application behavior. +- Export flow logs to analytics and visualization tools of your choice to set up monitoring dashboards. ++#### Usage monitoring and optimization ++- Identify top talkers in your network. +- Combine with GeoIP data to identify cross-region traffic. +- Understand traffic growth for capacity forecasting. +- Use data to remove overly restrictive traffic rules. ++#### Compliance ++- Use flow data to verify network isolation and compliance with enterprise access rules. ++#### Network forensics and security analysis ++- Analyze network flows from compromised IPs and network interfaces. +- Export flow logs to any SIEM or IDS tool of your choice. ++## VNet flow logs compared to NSG flow logs ++Both VNet flow logs and [NSG flow logs](network-watcher-nsg-flow-logging-overview.md) record IP traffic, but they differ in their behavior and capabilities. VNet flow logs simplify the scope of traffic monitoring by allowing you to enable logging at [virtual networks](../virtual-network/virtual-networks-overview.md), ensuring that traffic through all supported workloads within a virtual network is recorded. VNet flow logs also avoid the need to enable multi-level flow logging, such as in cases of [NSG flow logs](network-watcher-nsg-flow-logging-overview.md#best-practices) where network security groups are configured at both the subnet and NIC. 
++In addition to existing support to identify allowed/denied traffic by [network security group rules](../virtual-network/network-security-groups-overview.md), VNet flow logs support identification of traffic allowed/denied by [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). VNet flow logs also support evaluating the encryption status of your network traffic in scenarios where [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md) is enabled. ++## How logging works ++Key properties of VNet flow logs include: ++- Flow logs operate at Layer 4 of the Open Systems Interconnection (OSI) model and record all IP flows going through a virtual network. +- Logs are collected at 1-minute intervals through the Azure platform and don't affect your Azure resources or network traffic. +- Logs are written in the JSON (JavaScript Object Notation) format. +- Each log record contains the network interface (NIC) the flow applies to, 5-tuple information, traffic direction, flow state, encryption state, and throughput information. +- All traffic flows in your network are evaluated against the applicable [network security group rules](../virtual-network/network-security-groups-overview.md) or [Azure Virtual Network Manager security admin rules](../virtual-network-manager/concept-security-admins.md). For more information, see [Log format](#log-format). ++## Log format ++VNet flow logs have the following properties: ++- `time`: Time in UTC when the event was logged. +- `flowLogVersion`: Version of the flow log schema. +- `flowLogGUID`: The resource GUID of the FlowLog resource. +- `macAddress`: MAC address of the network interface where the event was captured. +- `category`: Category of the event. The category is always `FlowLogFlowEvent`. +- `flowLogResourceID`: Resource ID of the FlowLog resource. +- `targetResourceID`: Resource ID of the target resource associated with the FlowLog resource. 
+- `operationName`: Always `FlowLogFlowEvent`. +- `flowRecords`: Collection of flow records. + - `flows`: Collection of flows. This property has multiple entries for different ACLs. + - `aclID`: Identifier of the resource evaluating traffic, either a network security group or Virtual Network Manager. For cases like traffic denied by encryption, this value is `unspecified`. + - `flowGroups`: Collection of flow records at a rule level. + - `rule`: Name of the rule that allowed or denied the traffic. For traffic denied due to encryption, this value is `unspecified`. + - `flowTuples`: String that contains multiple properties for the flow tuple in a comma-separated format: + - `Time Stamp`: Time stamp of when the flow occurred in UNIX epoch format. + - `Source IP`: Source IP address. + - `Destination IP`: Destination IP address. + - `Source port`: Source port. + - `Destination port`: Destination port. + - `Protocol`: Layer 4 protocol of the flow expressed in IANA-assigned values. + - `Flow direction`: Direction of the traffic flow. Valid values are `I` for inbound and `O` for outbound. + - `Flow state`: State of the flow. Possible states are: + - `B`: Begin, when a flow is created. No statistics are provided. + - `C`: Continuing for an ongoing flow. Statistics are provided at 5-minute intervals. + - `E`: End, when a flow is terminated. Statistics are provided. + - `D`: Deny, when a flow is denied. + - `Flow encryption`: Encryption state of the flow. Possible values are: + - `X`: Encrypted. + - `NX`: Unencrypted. + - `NX_HW_NOT_SUPPORTED`: Unsupported hardware. + - `NX_SW_NOT_READY`: Software not ready. + - `NX_NOT_ACCEPTED`: Drop due to no encryption. + - `NX_NOT_SUPPORTED`: Discovery not supported. + - `NX_LOCAL_DST`: Destination on same host. + - `NX_FALLBACK`: Fall back to no encryption. + - `Packets sent`: Total number of packets sent from source to destination since the last update. 
+ - `Bytes sent`: Total number of packet bytes sent from source to destination since the last update. Packet bytes include the packet header and payload. + - `Packets received`: Total number of packets sent from destination to source since the last update. + - `Bytes received`: Total number of packet bytes sent from destination to source since the last update. Packet bytes include the packet header and payload. ++Traffic in your virtual networks is Unencrypted (NX) by default. For encrypted traffic, enable [virtual network encryption](../virtual-network/virtual-network-encryption-overview.md). ++`Flow encryption` has the following possible encryption statuses: ++| Encryption Status | Description | +| -- | -- | +| `X` | **Connection is encrypted**. Encryption is configured and the platform has encrypted the connection. | +| `NX` | **Connection is unencrypted**. This event is logged in two scenarios: <br> - When encryption isn't configured. <br> - When an encrypted virtual machine communicates with an endpoint that lacks encryption (such as an internet endpoint). | +| `NX_HW_NOT_SUPPORTED` | **Unsupported hardware**. Encryption is configured, but the virtual machine is running on a host that doesn't support encryption. This is usually because the FPGA isn't attached to the host or is faulty. Report this issue to Microsoft for investigation. | +| `NX_SW_NOT_READY` | **Software not ready**. Encryption is configured, but the software component (GFT) in the host networking stack isn't ready to process encrypted connections. This issue can happen when the virtual machine is booting for the first time, restarting, or being redeployed. It can also happen when there's an update to the networking components on the host where the virtual machine is running. In all these scenarios, the packet is dropped. 
The issue should be temporary and encryption should start working once either the virtual machine is fully up and running or the software update on the host is complete. If the issue is seen for longer durations, report it to Microsoft for investigation. | +| `NX_NOT_ACCEPTED` | **Drop due to no encryption**. Encryption is configured on both source and destination endpoints with a drop-on-unencrypted policy. If traffic can't be encrypted, the packet is dropped. | +| `NX_NOT_SUPPORTED` | **Discovery not supported**. Encryption is configured, but the encryption session wasn't established, as discovery isn't supported in the host networking stack. In this case, the packet is dropped. If you encounter this issue, report it to Microsoft for investigation. | +| `NX_LOCAL_DST` | **Destination on same host**. Encryption is configured, but the source and destination virtual machines are running on the same Azure host. In this case, the connection isn't encrypted by design. | +| `NX_FALLBACK` | **Fall back to no encryption**. Encryption is configured with the allow unencrypted policy for both source and destination endpoints. Encryption was attempted but ran into an issue. In this case, the connection is allowed but isn't encrypted. For example, the virtual machine initially landed on a node that supports encryption, but that support was later disabled. | +++## Sample log record ++The following example of VNet flow logs shows multiple records that follow the property list described earlier. 
++```json +{ + "records": [ + { + "time": "2022-09-14T09:00:52.5625085Z", + "flowLogVersion": 4, + "flowLogGUID": "abcdef01-2345-6789-0abc-def012345678", + "macAddress": "00224871C205", + "category": "FlowLogFlowEvent", + "flowLogResourceID": "/SUBSCRIPTIONS/00000000-0000-0000-0000-000000000000/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG", + "targetResourceID": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet", + "operationName": "FlowLogFlowEvent", + "flowRecords": { + "flows": [ + { + "aclID": "00000000-1234-abcd-ef00-c1c2c3c4c5c6", + "flowGroups": [ + { + "rule": "DefaultRule_AllowInternetOutBound", + "flowTuples": [ + "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0", + "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580", + "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0", + "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569", + "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0", + "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569", + "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0", + "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108", + "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0", + "1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466" + ] + } + ] + }, + { + "aclID": "01020304-abcd-ef00-1234-102030405060", + "flowGroups": [ + { + "rule": "BlockHighRiskTCPPortsFromInternet", + "flowTuples": [ + "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0", + "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0" + ] + }, + { + "rule": "Internet", + "flowTuples": [ + "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0", + "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0", 
+ "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0", + "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0", + "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0", + "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0", + "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0" + ] + } + ] + } + ] + } + } + ] +} ++``` +## Log tuple and bandwidth calculation +++Here's an example bandwidth calculation for flow tuples from a TCP conversation between **185.170.185.105:35370** and **10.2.0.4:23**: ++`1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,` +`1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880` +`1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072` ++For continuation (`C`) and end (`E`) flow states, byte and packet counts are aggregate counts from the time of the previous flow's tuple record. In the example conversation, the total number of packets transferred is 1021+52+8005+47 = 9125. The total number of bytes transferred is 588096+29952+4610880+27072 = 5256000. ++## Considerations for VNet flow logs ++### Storage account ++- **Location**: The storage account used must be in the same region as the virtual network. +- **Performance tier**: Currently, only standard-tier storage accounts are supported. +- **Self-managed key rotation**: If you change or rotate the access keys to your storage account, VNet flow logs stop working. To fix this problem, you must disable and then re-enable VNet flow logs. ++### Cost ++VNet flow logging is billed on the volume of logs produced. High traffic volume can result in large-flow log volume and the associated costs. ++Pricing of VNet flow logs doesn't include the underlying costs of storage. Using the retention policy feature with VNet flow logs means incurring separate storage costs for extended periods of time. 
++If you want to retain data forever and don't want to apply any retention policy, set retention days to 0. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). ++## Pricing ++VNet flow logs aren't currently billed. In the future, VNet flow logs will be charged per gigabyte of "Network Logs Collected" and will include a free tier of 5 GB/month per subscription. If traffic analytics is enabled with VNet flow logs, then existing traffic analytics pricing is applicable. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/). ++## Availability ++VNet flow logs is available in the following regions during the preview: ++- East US 2 EUAP +- Central US EUAP +- West Central US +- East US +- East US 2 +- West US +- West US 2 ++To sign up for access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup). ++## Next steps ++- To learn how to create, change, enable, disable, or delete VNet flow logs, see [PowerShell](vnet-flow-logs-powershell.md) or [Azure CLI](vnet-flow-logs-cli.md) VNet flow logs articles. +- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md) and [Traffic analytics schema](traffic-analytics-schema.md). +- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md). +++ |
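The flow tuple format and the bandwidth arithmetic described in the overview article above can be checked with a short script. This is an illustrative sketch only, not part of any documented tooling; the field labels in `FIELDS` are informal names for the documented comma-separated order, and the helper names are hypothetical.

```python
# Parse VNet flow log flowTuples entries (13 comma-separated fields, as
# documented in the Log format section) and aggregate bandwidth.
# Illustrative sketch; field labels are informal, not an official schema.
FIELDS = [
    "timestamp", "source_ip", "destination_ip", "source_port",
    "destination_port", "protocol", "direction", "state", "encryption",
    "packets_sent", "bytes_sent", "packets_received", "bytes_received",
]

def parse_flow_tuple(raw: str) -> dict:
    record = dict(zip(FIELDS, raw.split(",")))
    # Begin (B) records carry no statistics, so the counters may be empty.
    for key in ("packets_sent", "bytes_sent", "packets_received", "bytes_received"):
        record[key] = int(record[key]) if record[key] else 0
    return record

def conversation_totals(tuples: list) -> tuple:
    # Sum the per-record aggregate counters over a whole conversation.
    records = [parse_flow_tuple(t) for t in tuples]
    packets = sum(r["packets_sent"] + r["packets_received"] for r in records)
    total_bytes = sum(r["bytes_sent"] + r["bytes_received"] for r in records)
    return packets, total_bytes

# The TCP conversation from the bandwidth calculation example:
conversation = [
    "1493763938,185.170.185.105,10.2.0.4,35370,23,6,I,B,NX,,,,",
    "1493695838,185.170.185.105,10.2.0.4,35370,23,6,I,C,NX,1021,588096,8005,4610880",
    "1493696138,185.170.185.105,10.2.0.4,35370,23,6,I,E,NX,52,29952,47,27072",
]
print(conversation_totals(conversation))  # (9125, 5256000)
```

The totals reproduce the documented figures: 9,125 packets and 5,256,000 bytes.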
network-watcher | Vnet Flow Logs Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-powershell.md | + + Title: Manage VNet flow logs - PowerShell ++description: Learn how to create, change, enable, disable, or delete Azure Network Watcher VNet flow logs using Azure PowerShell. ++++ Last updated : 08/16/2023++++# Create, change, enable, disable, or delete VNet flow logs using Azure PowerShell ++> [!IMPORTANT] +> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Virtual network flow logging is a feature of Azure Network Watcher that allows you to log information about IP traffic flowing through an Azure virtual network. For more information about virtual network flow logging, see [VNet flow logs overview](vnet-flow-logs-overview.md). ++In this article, you learn how to create, change, enable, disable, or delete a VNet flow log using Azure PowerShell. You can learn how to manage a VNet flow log using the [Azure CLI](vnet-flow-logs-cli.md). ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- Insights provider. For more information, see [Register Insights provider](#register-insights-provider). ++- A virtual network. If you need to create a virtual network, see [Create a virtual network using PowerShell](../virtual-network/quick-create-powershell.md). ++- An Azure storage account. 
If you need to create a storage account, see [Create a storage account using PowerShell](../storage/common/storage-account-create.md?tabs=azure-powershell). ++- PowerShell environment in [Azure Cloud Shell](https://shell.azure.com) or Azure PowerShell installed locally. To learn more about using PowerShell in Azure Cloud Shell, see [Azure Cloud Shell Quickstart - PowerShell](../cloud-shell/quickstart-powershell.md). ++ - If you choose to install and use PowerShell locally, this article requires Azure PowerShell version 7.4.0 or later. Run `Get-InstalledModule -Name Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). Run `Connect-AzAccount` to sign in to Azure. ++## Register insights provider ++The *Microsoft.Insights* provider must be registered to successfully log traffic in a virtual network. If you aren't sure if the *Microsoft.Insights* provider is registered, use [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) to register it. ++```azurepowershell-interactive +# Register Microsoft.Insights provider. +Register-AzResourceProvider -ProviderNamespace Microsoft.Insights +``` ++## Enable VNet flow logs ++Use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log. ++```azurepowershell-interactive +# Place the virtual network configuration into a variable. +$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup +# Place the storage account configuration into a variable. +$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup ++# Create a VNet flow log. 
+New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id +``` ++## Enable VNet flow logs and traffic analytics ++Use [New-AzOperationalInsightsWorkspace](/powershell/module/az.operationalinsights/new-azoperationalinsightsworkspace) to create a traffic analytics workspace, and then use [New-AzNetworkWatcherFlowLog](/powershell/module/az.network/new-aznetworkwatcherflowlog) to create a VNet flow log that uses it. ++```azurepowershell-interactive +# Place the virtual network configuration into a variable. +$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup +# Place the storage account configuration into a variable. +$storageAccount = Get-AzStorageAccount -Name myStorageAccount -ResourceGroupName myResourceGroup ++# Create a traffic analytics workspace and place its configuration into a variable. +$workspace = New-AzOperationalInsightsWorkspace -Name myWorkspace -ResourceGroupName myResourceGroup -Location EastUS ++# Create a VNet flow log. +New-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id -EnableTrafficAnalytics -TrafficAnalyticsWorkspaceId $workspace.ResourceId -TrafficAnalyticsInterval 10 +``` ++## List all flow logs in a region ++Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to list all flow log resources in a particular region in your subscription. ++```azurepowershell-interactive +# Get all flow logs in East US region. 
+Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG | format-table Name +``` ++## View VNet flow log resource ++Use [Get-AzNetworkWatcherFlowLog](/powershell/module/az.network/get-aznetworkwatcherflowlog) to see details of a flow log resource. ++```azurepowershell-interactive +# Get the flow log details. +Get-AzNetworkWatcherFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -Name myVNetFlowLog +``` ++## Download a flow log ++To access and download VNet flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md). ++VNet flow log files saved to a storage account follow the logging path shown in the following example: ++``` +https://{storageAccountName}.blob.core.windows.net/insights-logs-flowlogflowevent/flowLogResourceID=/SUBSCRIPTIONS/{subscriptionID}/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_{Region}/FLOWLOGS/{FlowlogResourceName}/y={year}/m={month}/d={day}/h={hour}/m=00/macAddress={macAddress}/PT1H.json +``` ++## Disable traffic analytics on flow log resource ++To disable traffic analytics on the flow log resource and continue to generate and save VNet flow logs to a storage account, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog). ++```azurepowershell-interactive +# Place the virtual network configuration into a variable. +$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup +# Place the storage account configuration into a variable. +$storageAccount = Get-AzStorageAccount -Name mynwstorageaccount -ResourceGroupName Storage ++# Update the VNet flow log. 
+Set-AzNetworkWatcherFlowLog -Enabled $true -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id +``` ++## Disable VNet flow logging ++To disable a VNet flow log without deleting it so you can re-enable it later, use [Set-AzNetworkWatcherFlowLog](/powershell/module/az.network/set-aznetworkwatcherflowlog). ++```azurepowershell-interactive +# Place the virtual network configuration into a variable. +$vnet = Get-AzVirtualNetwork -Name myVNet -ResourceGroupName myResourceGroup +# Place the storage account configuration into a variable. +$storageAccount = Get-AzStorageAccount -Name mynwstorageaccount -ResourceGroupName Storage ++# Disable the VNet flow log. +Set-AzNetworkWatcherFlowLog -Enabled $false -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG -StorageId $storageAccount.Id -TargetResourceId $vnet.Id +``` ++## Delete a VNet flow log resource ++To delete a VNet flow log resource, use [Remove-AzNetworkWatcherFlowLog](/powershell/module/az.network/remove-aznetworkwatcherflowlog). ++```azurepowershell-interactive +# Delete the VNet flow log. +Remove-AzNetworkWatcherFlowLog -Name myVNetFlowLog -NetworkWatcherName NetworkWatcher_eastus -ResourceGroupName NetworkWatcherRG +``` ++## Next steps ++- To learn about traffic analytics, see [Traffic analytics](traffic-analytics.md). +- To learn how to use Azure built-in policies to audit or enable traffic analytics, see [Manage traffic analytics using Azure Policy](traffic-analytics-policy-portal.md). |
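The logging path shown in the Download a flow log section above can be assembled programmatically when you need to fetch a specific hour's `PT1H.json` blob. The helper below is a hypothetical sketch: zero-padded date segments and upper-cased resource ID segments are assumptions based on the example path, so verify against your own storage account before relying on it.

```python
def vnet_flow_log_blob_url(storage_account, subscription_id, region,
                           flow_log_name, year, month, day, hour, mac_address):
    """Build the PT1H.json blob URL for one hour of VNet flow logs.

    Hypothetical helper following the documented path layout. The
    zero-padding of date segments and the upper-casing of resource ID
    segments are assumptions drawn from the sample path, not a spec.
    """
    return (
        f"https://{storage_account}.blob.core.windows.net/"
        "insights-logs-flowlogflowevent/flowLogResourceID=/"
        f"SUBSCRIPTIONS/{subscription_id.upper()}/"
        "RESOURCEGROUPS/NETWORKWATCHERRG/"
        "PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/"
        f"NETWORKWATCHER_{region.upper()}/FLOWLOGS/{flow_log_name.upper()}/"
        f"y={year}/m={month:02d}/d={day:02d}/h={hour:02d}/m=00/"
        f"macAddress={mac_address}/PT1H.json"
    )

url = vnet_flow_log_blob_url("mystorageaccount",
                             "00000000-0000-0000-0000-000000000000",
                             "eastus", "myVNetFlowLog", 2023, 8, 16, 9,
                             "00224871C205")
print(url)
```

You can paste the resulting URL into Azure Storage Explorer or an authenticated download tool to retrieve that hour's log file.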
networking | Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md | This section describes networking services in Azure that help protect your netwo [Azure DDoS Protection](../../ddos-protection/manage-ddos-protection.md) provides countermeasures against the most sophisticated DDoS threats. The service provides enhanced DDoS mitigation capabilities for your application and resources deployed in your virtual networks. Additionally, customers using Azure DDoS Protection have access to DDoS Rapid Response support to engage DDoS experts during an active attack. +Azure DDoS Protection consists of two tiers: ++- [DDoS Network Protection](../../ddos-protection/ddos-protection-overview.md#ddos-network-protection) Azure DDoS Network Protection, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. +- [DDoS IP Protection](../../ddos-protection/ddos-protection-overview.md#ddos-ip-protection) DDoS IP Protection is a pay-per-protected IP model. DDoS IP Protection contains the same core engineering features as DDoS Network Protection, but will differ in the following value-added + ### <a name="privatelink"></a>Azure Private Link |
networking | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md | Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
networking | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
notification-hubs | Create Notification Hub Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-portal.md | A namespace contains one or more notification hubs, so type a name for the hub i 1. Review the [**Availability Zones**](./notification-hubs-high-availability.md#zone-redundant-resiliency) option. If you chose a region that has availability zones, the check box is selected by default. Availability Zones is a paid feature, so an additional fee is added to your tier. > [!NOTE]- > Availability zones, and the ability to edit cross region disaster recovery options, are public preview features. Availability Zones is available for an additional cost; however, you will not be charged while the feature is in preview. For more information, see [High availability for Azure Notification Hubs](./notification-hubs-high-availability.md). + > The Availability Zones feature is currently in public preview. Availability Zones is available for an additional cost; however, you will not be charged while the feature is in preview. For more information, see [High availability for Azure Notification Hubs](./notification-hubs-high-availability.md). 1. Choose a **Disaster recovery** option: **None**, **Paired recovery region**, or **Flexible recovery region**. If you choose **Paired recovery region**, the failover region is displayed. If you select **Flexible recovery region**, use the drop-down to choose from a list of recovery regions. |
notification-hubs | Notification Hubs High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-high-availability.md | Last updated 07/17/2023 Android, Windows, etc.) from any back-end (cloud or on-premises). This article describes the configuration options to achieve the availability characteristics required by your solution. For more information about our SLA, see the [Notification Hubs SLA][]. > [!NOTE]-> The following features are available in preview: +> The following feature is available in preview: >-> - Ability to edit your cross region disaster recovery options > - Availability zones >-> If you're not participating in the preview, your failover region defaults to one of the [Azure paired regions][]. -> -> Availability zones support will incur an additional cost on top of existing tier pricing. You will not be charged to preview the feature. Once it becomes generally available, you will automatically be billed. +> Availability zones support will incur an additional cost on top of existing tier pricing. You will not be charged to preview the feature. Once it becomes generally available, you are automatically billed. Notification Hubs offers two availability configurations: metadata are replicated across data centers in the availability zone. In the eve New availability zones are being added regularly. 
The following regions currently support availability zones: -| Americas | Europe | Africa | Asia Pacific | -||-|-|--| -| West US 3 | West Europe | South Africa North | Australia East | -| East US 2 | France Central | | East Asia | -| West US 2 | Poland Central | | Qatar | -| Canada Central| UK South | | India Central | -| | North Europe | | | -| | Sweden Central | | | +| Americas | Europe | Africa | Asia Pacific | +||-|-|--| +| West US 3 | West Europe | South Africa North | Australia East | +| East US 2 | France Central | | East Asia | +| West US 2 | Poland Central | | Qatar | +| Canada Central| UK South | | India Central | +| | North Europe | | Japan East | +| | Sweden Central | | Korea Central | +| | Norway East | | | +| | Germany West Central | | | ### Enable availability zones |
openshift | Azure Redhat Openshift Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md | +## Version 4.12 - August 2023 ++We're pleased to announce the launch of OpenShift 4.12 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.12](https://docs.openshift.com/container-platform/4.12/release_notes/ocp-4-12-release-notes.html). ++> [!NOTE] +> Starting with ARO version 4.12, the support lifecycle for new versions will be set to 14 months from the day of general availability. That means that the end date for support of each version will no longer be dependent on the previous version (as shown in the table above for version 4.12.) This does not affect support for the previous version; two generally available (GA) minor versions of Red Hat OpenShift Container Platform will continue to be supported. +> + ## Update - June 2023 - Removed dependencies on service endpoints |
openshift | Howto Create A Storageclass | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-a-storageclass.md | Title: Create an Azure Files StorageClass on Azure Red Hat OpenShift 4 description: Learn how to create an Azure Files StorageClass on Azure Red Hat OpenShift Previously updated : 10/16/2020 Last updated : 08/28/2023 keywords: aro, openshift, az aro, red hat, cli, azure file In this article, youΓÇÖll create a StorageClass for Azure Red Hat OpenShift 4 th > * Setup the prerequisites and install the necessary tools > * Create an Azure Red Hat OpenShift 4 StorageClass with the Azure File provisioner -If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +If you choose to install and use the CLI locally, this tutorial requires that you're running the Azure CLI version 2.6.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). ## Before you begin Deploy an Azure Red Hat OpenShift 4 cluster into your subscription, see [Create ### Set up Azure storage account -This step will create a resource group outside of the Azure Red Hat OpenShift (ARO) clusterΓÇÖs resource group. This resource group will contain the Azure Files shares that are created by Azure Red Hat OpenShiftΓÇÖs dynamic provisioner. +This step creates a resource group outside of the Azure Red Hat OpenShift (ARO) clusterΓÇÖs resource group. This resource group contains the Azure Files shares that created Azure Red Hat OpenShiftΓÇÖs dynamic provisioner. 
```azurecli AZURE_FILES_RESOURCE_GROUP=aro_azure_files az role assignment create --role Contributor --scope /subscriptions/mySubscripti ### Set ARO cluster permissions -The OpenShift persistent volume binder service account will require the ability to read secrets. Create and assign an OpenShift cluster role to achieve this. +The OpenShift persistent volume binder service account requires the ability to read secrets. Create and assign an OpenShift cluster role to achieve this. ```azurecli ARO_API_SERVER=$(az aro list --query "[?contains(name,'$CLUSTER')].[apiserverProfile.url]" -o tsv) oc adm policy add-cluster-role-to-user azure-secret-reader system:serviceaccount ## Create StorageClass with Azure Files provisioner -This step will create a StorageClass with an Azure Files provisioner. Within the StorageClass manifest, the details of the storage account are required so that the ARO cluster knows to look at a storage account outside of the current resource group. +This step creates a StorageClass with an Azure Files provisioner. Within the StorageClass manifest, the details of the storage account are required so that the ARO cluster knows to look at a storage account outside of the current resource group. -During storage provisioning, a secret named by secretName is created for the mounting credentials. In a multi-tenancy context, it is strongly recommended to set the value for secretNamespace explicitly, otherwise the storage account credentials may be read by other users. +During storage provisioning, a secret named by secretName is created for the mounting credentials. In a multi-tenancy context, it's strongly recommended to set the value for secretNamespace explicitly, otherwise the storage account credentials may be read by other users. 
```bash
cat << EOF >> azure-storageclass-azure-file.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
-provisioner: kubernetes.io/azure-file
+provisioner: file.csi.azure.com
mountOptions:
- dir_mode=0777
- file_mode=0777
EOF
oc create -f azure-storageclass-azure-file.yaml
```
-Mount options for Azure Files will generally be dependent on the workload that you are deploying and the requirements of the application. Specifically for Azure files, there are additional parameters that you should consider using. +Mount options for Azure Files will generally be dependent on the workload that you're deploying and the requirements of the application. Specifically for Azure files, there are other parameters that you should consider using. Mandatory parameters: - "mfsymlinks" to map symlinks to a form the client can use |
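A quick way to exercise the new StorageClass is to request a volume from it. The following sketch writes a PersistentVolumeClaim manifest that references the `azure-file` class created above; the claim name and 5Gi size are illustrative assumptions, and applying it requires a logged-in `oc` session.

```shell
# Generate a PVC manifest that requests storage from the azure-file class.
# The claim name and size below are illustrative, not from the article.
cat << EOF > azure-file-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azure-file
  resources:
    requests:
      storage: 5Gi
EOF
# Apply against the cluster (needs an authenticated oc session):
# oc create -f azure-file-pvc.yaml
```

Azure Files supports `ReadWriteMany`, which is the usual reason to prefer it over disk-backed classes when several pods must share one volume.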
openshift | Openshift Service Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/openshift-service-definitions.md | No monitoring of these private network connections is provided by Red Hat SRE. M Azure Red Hat OpenShift customers can specify their own DNS servers. For more information, see [Configure custom DNS for your Azure Red Hat OpenShift cluster](./howto-custom-dns.md). +### Container Network Interface ++Azure Red Hat OpenShift comes with OVN (Open Virtual Network) as the Container Network Interface (CNI). Replacing the CNI is not a supported operation. For more information, see [OVN-Kubernetes network provider for Azure Red Hat OpenShift clusters](concepts-ovn-kubernetes.md). + ## Storage The following sections provide information about Azure Red Hat OpenShift storage. |
openshift | Support Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md | When a new minor version is introduced, the oldest minor version is deprecated a ## Release and deprecation process -You can reference upcoming version releases and deprecations on the Azure Red Hat OpenShift Release Calendar. +You can reference upcoming version releases and deprecations on the [Azure Red Hat OpenShift release calendar](#azure-red-hat-openshift-release-calendar). For new minor versions of Red Hat OpenShift Container Platform: * The Azure Red Hat OpenShift SRE team publishes a pre-announcement with the planned date of a new version release, and respective old version deprecation, in the [Azure Red Hat OpenShift Release notes](https://github.com/Azure/OpenShift/releases) at least 30 days prior to removal. See the following guide for the [past Red Hat OpenShift Container Platform (upst |4.9|November 2021| February 1 2022|4.11 GA| |4.10|March 2022| June 21 2022|4.12 GA| |4.11|August 2022| March 2 2023|4.13 GA|+|4.12|January 2023| August 19 2023|October 19 2024| +> [!IMPORTANT] +> Starting with ARO version 4.12, the support lifecycle for new versions will be set to 14 months from the day of general availability. That means that the end date for support of each version will no longer be dependent on the previous version (as shown in the table above for version 4.12.) This does not affect support for the previous version; two generally available (GA) minor versions of Red Hat OpenShift Container Platform will continue to be supported, as [explained previously](#red-hat-openshift-container-platform-version-support-policy). +> ## FAQ **What happens when a user upgrades an OpenShift cluster with a minor version that is not supported?** |
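The 14-month rule described above can be checked mechanically. This sketch (assuming GNU `date`) computes the end-of-support date from a GA date, reproducing the 4.12 row of the table:

```shell
# End of support for an ARO minor version is 14 months after its GA date
# (policy starting with version 4.12). Requires GNU date.
aro_support_end() {
  date -d "$1 +14 months" +%Y-%m-%d
}

# ARO 4.12 reached GA on 2023-08-19:
aro_support_end 2023-08-19   # prints 2024-10-19
```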
operator-nexus | Concepts Network Fabric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-network-fabric.md | Key capabilities offered in Azure Operator Nexus Network Fabric: * **Network Policy Automation** - Automating the management of consistent network policies across the fabric to ensure security, performance, and access controls are enforced uniformly. -* **Networking features built for Operators** - Support for unique features like multicast, SCTP, and jumbo frames. +* **Networking features built for Operators** - Support for unique features like multicast, SCTP, and jumbo frames. |
operator-nexus | Concepts Nexus Kubernetes Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-kubernetes-cluster.md | to learn about Kubernetes. ## Nexus Kubernetes cluster -Nexus Kubernetes cluster (NAKS) is an Operator Nexus version of AKS for on-premises use. It is optimized to automate creation of containers to +Nexus Kubernetes cluster (NKS) is an Operator Nexus version of Kubernetes for on-premises use. It is optimized to automate creation of containers to run tenant network function workloads. Like any Kubernetes cluster, Nexus Kubernetes cluster has two |
operator-nexus | Concepts Observability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-observability.md | -<! IMG ![ Operator Nexus Logging, Monitoring and Alerting (LMA) Framework](Docs/media/log-monitoring-analytics-framework.png) IMG > :::image type="content" source="media/log-monitoring-analytics-framework.png" alt-text="Screenshot of Operator Nexus Logging, Monitoring and Alerting (LMA) Framework."::: - Figure: Operator Nexus Logging, Monitoring and Alerting (LMA) Framework The key highlights of Operator Nexus observability framework are: The logs from Operator Nexus platform are stored in the following tables: The 'InsightMetrics' table in the Logs section contains the metrics collected from Bare Metal Machines and the undercloud Kubernetes cluster. In addition, a few selected metrics collected from the undercloud can be observed by opening the Metrics tab from the Azure Monitor menu. -<! IMG ![Azure Monitor Metrics Selection](Docs/media/azure-monitor-metrics-selection.png) IMG > :::image type="content" source="media/azure-monitor-metrics-selection.png" alt-text="Screenshot of Azure Monitor Metrics Selection."::: Figure: Azure Monitor Metrics Selection You can use the sample Azure Resource Manager alarm templates for [Operator Nexu ## Log Analytics Workspace -A [LAW](../azure-monitor/logs/log-analytics-workspace-overview.md) +A [Log Analytics Workspace (LAW)](../azure-monitor/logs/log-analytics-workspace-overview.md) is a unique environment to log data from Azure Monitor and other Azure services. Each workspace has its own data repository and configuration but may combine data from multiple services. Each workspace consists of multiple data tables. |
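As a sketch of how the 'InsightMetrics' table mentioned above can be queried outside the portal, the snippet below composes a Kusto query and shows (commented out) the CLI call that would run it. The workspace GUID is a placeholder, and `az monitor log-analytics query` requires the `log-analytics` CLI extension and a signed-in session.

```shell
# Placeholder - substitute your Log Analytics Workspace customer ID (GUID).
WORKSPACE_ID="00000000-0000-0000-0000-000000000000"

# Average each metric collected from the Bare Metal Machines and the
# undercloud Kubernetes cluster over the last hour.
QUERY='InsightMetrics | where TimeGenerated > ago(1h) | summarize avg(Val) by Name'

# Uncomment with an authenticated Azure CLI session:
# az monitor log-analytics query --workspace "$WORKSPACE_ID" \
#   --analytics-query "$QUERY" --output table
echo "$QUERY"
```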
operator-nexus | Concepts Resource Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-resource-types.md | -<! IMG ![Resource Types](Docs/media/resource-types.png) IMG > :::image type="content" source="media/resource-types.png" alt-text="Screenshot of Resource Types."::: Figure: Resource model The Operator Nexus Cluster (or Instance) platform components include the infrast ### Network Fabric Controller -Network Fabric Controller (NFC) is an Operator Nexus resource which runs in your subscription in your desired resource group and [Virtual Network](../virtual-network/virtual-networks-overview.md). The Network Fabric Controller acts as a bridge between the Azure control plane and your on-premises infrastructure to manage the lifecycle and configuration of the Network Devices in a Network Fabric instance. +Network Fabric Controller (NFC) is an Operator Nexus resource that runs in your subscription in your desired resource group and [Virtual Network](../virtual-network/virtual-networks-overview.md). The Network Fabric Controller acts as a bridge between the Azure control plane and your on-premises infrastructure to manage the lifecycle and configuration of the Network Devices in a Network Fabric instance. -The Network Fabric Controller achieves this by establishing a private connectivity channel between your Azure environment and on-premises using [Azure ExpressRoute](../expressroute/expressroute-introduction.md) and other supporting resources which are deployed in a managed resource group. The NFC is typically the first resource which you would create to establish this connectivity to bootstrap and configure your management and workload networks. 
+The Network Fabric Controller achieves this by establishing a private connectivity channel between your Azure environment and on-premises using [Azure ExpressRoute](../expressroute/expressroute-introduction.md) and other supporting resources which are deployed in a managed resource group. The NFC is typically the first resource that you would create to establish this connectivity to bootstrap and configure your management and workload networks. The Network Fabric Controller enables you to manage all the Network resources within your Operator Nexus instance like Network Fabric, Network Racks, Network Devices, Isolation Domains, Route Policies, etc. You can manage the lifecycle of a Network Fabric Controller via Azure using any ### Network Fabric -Network Fabric (NF) resource is a representation of your on-premises network topology in Azure. Every Network Fabric must be associated to and controlled by a Network Fabric Controller which is deployed in the same Azure region. You can associate multiple Network Fabric resources per Network Fabric Controller, see [Nexus Limits and Quotas](./reference-limits-and-quotas.md). A single deployment of the infrastructure is considered a Network Fabric instance. +Network Fabric (NF) resource is a representation of your on-premises network topology in Azure. Every Network Fabric must be associated with and controlled by a Network Fabric Controller that is deployed in the same Azure region. You can associate multiple Network Fabric resources per Network Fabric Controller, see [Nexus Limits and Quotas](./reference-limits-and-quotas.md). A single deployment of the infrastructure is considered a Network Fabric instance. Operator Nexus allows you to create Network Fabrics based on specific SKU types, where each SKU represents the number of network racks and compute servers in each rack deployed on-premises. 
-Each Network Fabric resource can contain a collection of network racks, network devices, isolation domains for their interconnections. Once a Network Fabric is created and you've validated that your network devices are connected, then it can be Provisioned. Provisioning a Network Fabric is the process of bootstrapping the Network Fabric instance to get the management network up. +Each Network Fabric resource can contain a collection of network racks, network devices, and isolation domains for their interconnections. Once a Network Fabric is created and you've validated that your network devices are connected, then it can be Provisioned. Provisioning a Network Fabric is the process of bootstrapping the Network Fabric instance to get the management network up. You can manage the lifecycle of a Network Fabric via Azure using any of the supported interfaces - Azure CLI, REST API, etc. See [how to create and provision a Network Fabric](./howto-configure-network-fabric.md) to learn more. ### Network racks -Network Rack resource is a representation of your on-premises Racks from the networking perspective. The number of network racks in an Operator Nexus instance depends on the Network Fabric SKU which was chosen while creation. +Network Rack resource is a representation of your on-premises racks from the networking perspective. The number of network racks in an Operator Nexus instance depends on the Network Fabric SKU that was chosen during creation. -Each network rack consists of Network Devices which are part of that rack. For example - Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, Network Packet Brokers (NPB). +Each network rack consists of Network Devices that are part of that rack. For example - Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, and Network Packet Brokers (NPB). 
-The Network Rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other Racks via Network to Network Interconnect (NNI) resource. +The Network Rack also models the connectivity to the operator's Physical Edge switches (PEs) and the ToRs on the other racks via Network to Network Interconnect (NNI) resource. -The lifecycle of Network Rack resources is tied to the Network Fabric resource. The Network Racks are automatically created when you create the Network Fabric and the number of racks depends on the SKU which was chosen. When the Network Fabric resource is deleted, all the associated Network Racks are also deleted along with it. +The lifecycle of Network Rack resources is tied to the Network Fabric resource. The Network Racks are automatically created when you create the Network Fabric and the number of racks depends on the SKU that was chosen. When the Network Fabric resource is deleted, all the associated Network Racks are also deleted along with it. ### Network devices -Network Devices represent the Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, Network Packet Brokers (NPB) which are deployed as part of the Network Fabric instance. Each Network Device resource is associated to a specific Network Rack where it is deployed. +Network Devices represent the Customer Edge (CE) routers, Top of Rack (ToR) Switches, Management Switches, and Network Packet Brokers (NPB) which are deployed as part of the Network Fabric instance. Each Network Device resource is associated with a specific Network Rack where it is deployed. -Each network device resource has a SKU, Role, Host Name, and Serial Number as properties, and can have multiple network interfaces associated. Network Interfaces contain the IPv4 and IPv6 addresses, physical identifier, interface type, and the associated connections. 
Network Interfaces also has the administrativeState property which indicates whether the interface is enabled or disabled. +Each network device resource has a SKU, Role, Host Name, and Serial Number as properties, and can have multiple network interfaces associated. Network Interfaces contain the IPv4 and IPv6 addresses, physical identifier, interface type, and the associated connections. Network Interfaces also have the `administrativeState` property that indicates whether the interface is enabled or disabled. -The lifecycle of the Network Interface depends on the Network Device and can exist as long as the parent network device resource exists. However, you can perform certain operations on a network interface resource like enable/disable the administrativeState via Azure using any of the supported interfaces - Azure CLI, REST API, etc. +The lifecycle of the Network Interface depends on the Network Device and can exist as long as the parent network device resource exists. However, you can perform certain operations on a network interface resource like enable/disable the `administrativeState` via Azure using any of the supported interfaces - Azure CLI, REST API, etc. The lifecycle of the Network Device resources depends on the network rack resource and will exist as long as the parent Network Fabric resource exists. However, before provisioning the Network Fabric, you can perform certain operations on a network device like setting a custom hostname and updating the serial number of the device via Azure using any of the supported interfaces - Azure CLI, REST API, etc. ### Isolation domains -Isolation Domains enable east-west or north-south connectivity across Operator Nexus instance. They provide the required network connectivity between infrastructure components and also workload components. In principle, there are two types of networks which are established by isolation domains - management network and workload or tenant network. 
+Isolation Domains enable east-west or north-south connectivity across Operator Nexus instance. They provide the required network connectivity between infrastructure components and also workload components. In principle, there are two types of networks that are established by isolation domains - management network and workload or tenant network. -Management network is the private connectivity that enables communication between the Network Fabric instance which is deployed on-premises and Azure Virtual Network. You can create workload or tenant networks to enable communication between the workloads which are deployed across the Operator Nexus instance. +A management network provides private connectivity that enables communication between the Network Fabric instance that is deployed on-premises and Azure Virtual Network. You can create workload or tenant networks to enable communication between the workloads that are deployed across the Operator Nexus instance. -Each isolation domain is associated to a specific Network Fabric resource and has the option to be enabled/disabled. Only when an isolation domain is enabled, it's configured on the network devices and the configuration is removed once the isolation domain is removed. +Each isolation domain is associated with a specific Network Fabric resource and has the option to be enabled/disabled. Only when an isolation domain is enabled, it's configured on the network devices, and the configuration is removed once the isolation domain is removed. Primarily, there are two types of isolation domains: There are two types of Layer 3 networks that you can create: * Internal Network * External Network -Internal networks enable layer 3 east-west connectivity across racks within the Operator Nexus instance and external networks enable layer 3 north-south connectivity from the Operator Nexus instance to networks outside the instance. 
A Layer 3 isolation domain must be configured with at least one internal network and external networks are optional. +Internal networks enable layer 3 east-west connectivity across racks within the Operator Nexus instance and external networks enable layer 3 north-south connectivity from the Operator Nexus instance to networks outside the instance. A Layer 3 isolation domain must be configured with at least one internal network; external networks are optional. ### Cluster manager -The Cluster Manager (CM) is hosted on Azure and manages the lifecycle of all on-premises clusters. +The Cluster Manager (CM) is hosted on Azure and manages the lifecycle of all on-premises infrastructure (also referred to as infra clusters). Like NFC, a CM can manage multiple Operator Nexus instances. The CM and the NFC are hosted in the same Azure subscription. -### Cluster +### Infrastructure Cluster -The Cluster (or Compute Cluster) resource models a collection of racks, bare metal machines, storage, and networking. -Each cluster is mapped to the on-premises Network Fabric. A cluster provides a holistic view of the deployed compute capacity. -Cluster capacity examples include the number of vCPUs, the amount of memory, and the amount of storage space. A cluster is also the basic unit for compute and storage upgrades. +The Infrastructure Cluster (or Compute Cluster or infra cluster) resource models a collection of racks, bare metal machines, storage, and networking. +Each infra cluster is mapped to the on-premises Network Fabric. The cluster provides a holistic view of the deployed compute capacity. +Infra cluster capacity examples include the number of vCPUs, the amount of memory, and the amount of storage space. A cluster is also the basic unit for compute and storage upgrades. ### Rack -The Rack (or a compute rack) resource represents the compute servers (Bare Metal Machines), management servers, management switch and ToRs. 
The Rack is created, updated or deleted as part of the Cluster lifecycle management. +The Rack (or a compute rack) resource represents the compute servers (Bare Metal Machines), management servers, management switches, and ToRs. The Rack is created, updated, or deleted as part of the infra cluster lifecycle management. ### Storage appliance -Storage Appliances represent storage arrays used for persistent data storage in the Operator Nexus instance. All user and consumer data is stored in these appliances local to your premises. This local storage complies with some of the most stringent local data storage requirements. +Storage Appliances represent storage arrays used for persistent data storage in the Operator Nexus instance. All user and consumer data is stored in these local on-premises appliances. This local storage complies with some of the most stringent local data storage requirements. ### Bare Metal Machine -Bare Metal Machines represent the physical servers in a rack. They're lifecycle managed by the Cluster Manager. +Bare Metal Machines represent the physical servers in a rack. They are lifecycle managed by the Cluster Manager. Bare Metal Machines are used by workloads to host Virtual Machines and Kubernetes clusters. ## Workload components You can use VMs to host your Virtualized Network Function (VNF) workloads. ### Nexus Kubernetes cluster -Nexus Kubernetes cluster is Azure Kubernetes Service cluster modified to run on your on-premises Operator Nexus instance. The Nexus Kubernetes cluster is designed to host your Containerized Network Function (CNF) workloads. +Nexus Kubernetes cluster is a Kubernetes cluster modified to run on your on-premises Operator Nexus instance. The Nexus Kubernetes cluster is designed to host your Containerized Network Function (CNF) workloads. |
operator-nexus | Concepts Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-security.md | + + Title: "Azure Operator Nexus: Security concepts" +description: Security overview for Azure Operator Nexus ++++ Last updated : 08/14/2023++++# Azure Operator Nexus security ++Azure Operator Nexus is designed and built to both detect and defend against +the latest security threats and comply with the strict requirements of government +and industry security standards. Two cornerstones form the foundation of its +security architecture: ++* **Security by default** - Security resiliency is an inherent part of the platform with little to no configuration changes needed to use it securely. +* **Assume breach** - The underlying assumption is that any system can be compromised, and as such the goal is to minimize the impact of a security breach if one occurs. ++Azure Operator Nexus realizes the above by leveraging Microsoft cloud-native security tools that give you the ability to improve your cloud security posture while allowing you to protect your operator workloads. ++## Platform-wide protection via Microsoft Defender for Cloud ++[Microsoft Defender for Cloud](../defender-for-cloud/defender-for-cloud-introduction.md) is a cloud-native application protection platform (CNAPP) that provides the security capabilities needed to harden your resources, manage your security posture, protect against cyberattacks, and streamline security management. These are some of the key features of Defender for Cloud that apply to the Azure Operator Nexus platform: ++* **Vulnerability assessment for virtual machines and container registries** - Easily enable vulnerability assessment solutions to discover, manage, and resolve vulnerabilities. View, investigate, and remediate the findings directly from within Defender for Cloud. +* **Hybrid cloud security** - Get a unified view of security across all your on-premises and cloud workloads. 
Apply security policies and continuously assess the security of your hybrid cloud workloads to ensure compliance with security standards. Collect, search, and analyze security data from multiple sources, including firewalls and other partner solutions. +* **Threat protection alerts** - Advanced behavioral analytics and the Microsoft Intelligent Security Graph provide an edge over evolving cyberattacks. Built-in behavioral analytics and machine learning can identify attacks and zero-day exploits. Monitor networks, machines, Azure Storage and cloud services for incoming attacks and post-breach activity. Streamline investigation with interactive tools and contextual threat intelligence. +* **Compliance assessment against a variety of security standards** - Defender for Cloud continuously assesses your hybrid cloud environment to analyze the risk factors according to the controls and best practices in Azure Security Benchmark. When you enable the advanced security features, you can apply a range of other industry standards, regulatory standards, and benchmarks according to your organization's needs. Add standards and track your compliance with them from the regulatory compliance dashboard. +* **Container security features** - Benefit from vulnerability management and real-time threat protection on your containerized environments. ++There are enhanced security options that let you protect your on-premises host servers as well as the Kubernetes clusters that run your operator workloads. These options are described below. ++## Bare metal machine host operating system protection via Microsoft Defender for Endpoint ++Azure Operator Nexus bare-metal machines (BMMs), which host the on-premises infrastructure compute servers, are protected when you elect to enable the [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) solution. 
Microsoft Defender for Endpoint provides preventative antivirus (AV), endpoint detection and response (EDR), and vulnerability management capabilities. ++You have the option to enable Microsoft Defender for Endpoint protection once you have selected and activated a [Microsoft Defender for Servers](../defender-for-cloud/tutorial-enable-servers-plan.md) plan, as Defender for Servers plan activation is a prerequisite for Microsoft Defender for Endpoint. Once enabled, the Microsoft Defender for Endpoint configuration is managed by the platform to ensure optimal security and performance, and to reduce the risk of misconfigurations. ++## Kubernetes cluster workload protection via Microsoft Defender for Containers ++On-premises Kubernetes clusters that run your operator workloads are protected when you elect to enable the Microsoft Defender for Containers solution. [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) provides run-time threat protection for clusters and Linux nodes as well as cluster environment hardening against misconfigurations. ++You have the option to enable Defender for Containers protection within Defender for Cloud by activating the Defender for Containers plan. ++## Cloud security is a shared responsibility ++It is important to understand that in a cloud environment, security is a [shared responsibility](../security/fundamentals/shared-responsibility.md) between you and the cloud provider. The responsibilities vary depending on the type of cloud service your workloads run on, whether it is Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS), as well as where the workloads are hosted - within the cloud provider's or your own on-premises datacenters. ++Azure Operator Nexus workloads run on servers in your datacenters, so you are in control of changes to your on-premises environment. 
Microsoft periodically makes new platform releases available that contain security and other updates. You must then decide when to apply these releases to your environment as appropriate for your organization's business needs. |
operator-nexus | How To Route Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/how-to-route-policy.md | -Route policies provides Operators the capability to allow or deny routes in regards to Layer 3 isolation domains in Network Fabric. +Route policies provide Operators the capability to allow or deny routes in regards to Layer 3 isolation domains in Network Fabric. With route policies, routes are tagged with certain attributes via community values and extended community values when they're distributed via Border Gateway Protocol (BGP). Expected output: ## IP extended community -The `IPExtendedCommunity`resource allows operators to manipulate routes based on route targets. Operators use it to specify conditions and actions for adding/removing routes as they're propagated up-stream/down-stream or tag them with specific extended community values. The operator must create an ARM resource of the type `I`PExtendedCommunityList` by providing a list of community values and specific properties. ExtendedCommunityLists are used in specifying match conditions and the action properties for route policies. +The `IPExtendedCommunity` resource allows operators to manipulate routes based on route targets. Operators use it to specify conditions and actions for adding/removing routes as they're propagated up-stream/down-stream or tag them with specific extended community values. The operator must create an ARM resource of the type `IPExtendedCommunityList` by providing a list of community values and specific properties. ExtendedCommunityLists are used in specifying match conditions and the action properties for route policies. ### Parameters for IP extended community |
operator-nexus | Howto Azure Operator Nexus Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-azure-operator-nexus-prerequisites.md | To get started with Operator Nexus, you need to create a Network Fabric Controll in your target Azure region. Each NFC is associated with a CM in the same Azure region and your subscription.-The NFC/CM pair lifecycle manages up to 32 Azure Operator Nexus instances deployed in your sites connected to this Azure region. -You'll need to complete the prerequisites before you can deploy the Operator Nexus first NFC and CM pair. -In subsequent deployments of Operator Nexus, you can skip to creating the NFC and CM. +You'll need to complete the prerequisites before you can deploy the first Operator Nexus NFC and CM pair. +In subsequent deployments of Operator Nexus, you will only need to create the NFC and CM after the [quota](./reference-limits-and-quotas.md#network-fabric) of supported Operator Nexus instances has been reached. ## Resource Provider Registration In subsequent deployments of Operator Nexus, you can skip to creating the NFC an - Microsoft.Resources ## Dependent Azure resources setup+ - Establish [ExpressRoute](/azure/expressroute/expressroute-introduction) connectivity from your on-premises network to an Azure Region: - ExpressRoute circuit [creation and verification](/azure/expressroute/expressroute-howto-circuit-portal-resource-manager) |
operator-nexus | Howto Cluster Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-cluster-manager.md | -The Cluster Manager is deployed in the operator's Azure subscription to manage the lifecycle of Operator Nexus Clusters. +The Cluster Manager is deployed in the operator's Azure subscription to manage the lifecycle of Operator Nexus Infrastructure Clusters. ## Before you begin |
operator-nexus | Howto Configure Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md | The metrics generated from the logging data are available in [Azure Monitor metr ## Create a Cluster -The Cluster resource represents an on-premises deployment of the platform +The Infrastructure Cluster resource represents an on-premises deployment of the platform within the Cluster Manager. All other platform-specific resources are dependent upon it for their lifecycle. |
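Once created, an Infrastructure Cluster can be inspected from the CLI. The sketch below assumes the `networkcloud` Azure CLI extension; the resource group name is a placeholder, and the `az` call is commented out since it needs a signed-in session.

```shell
# Placeholder resource group that would hold the Cluster resource.
RESOURCE_GROUP="myNexusClusterRG"

# Uncomment with an authenticated Azure CLI session and the
# networkcloud extension installed:
# az networkcloud cluster list --resource-group "$RESOURCE_GROUP" --output table
echo "az networkcloud cluster list --resource-group $RESOURCE_GROUP"
```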
operator-nexus | Howto Install Cli Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md | Example output: Name Version -- - monitor-control-service 0.2.0-connectedmachine 0.5.1 -connectedk8s 1.3.20 +connectedmachine 0.6.0 +connectedk8s 1.4.0 k8s-extension 1.4.2-networkcloud 1.0.0b2 +networkcloud 1.0.0 k8s-configuration 1.7.0-managednetworkfabric 3.1.0 +managednetworkfabric 3.2.0 customlocation 0.1.3 ssh 2.0.1 ``` |
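To reach the extension versions listed in the diff above, each extension can be added or upgraded with `az extension add`. The loop below only prints the commands so it stays inert without the Azure CLI installed; drop the `echo` to execute them. The name:version pairs mirror the table and are the only assumptions.

```shell
# Extensions and the minimum versions shown in the updated table.
for spec in networkcloud:1.0.0 managednetworkfabric:3.2.0 connectedk8s:1.4.0 connectedmachine:0.6.0; do
  name="${spec%%:*}"
  min="${spec##*:}"
  # Remove the echo to actually install/upgrade (requires Azure CLI).
  echo az extension add --upgrade --name "$name" "# expect >= $min"
done
```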
operator-nexus | Howto Run Instance Readiness Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md | A service principal with the following role assignments. The supplemental script * `Contributor` - For creating and manipulating resources * `Storage Blob Data Contributor` - For reading from and writing to the storage blob container-* `Azure ARC Kubernetes Admin` - For ARC enrolling the NAKS cluster +* `Azure ARC Kubernetes Admin` - For ARC enrolling the NKS cluster Additionally, the script creates the necessary security group, and adds the service principal to the security group. If the security group exists, it adds the service principal to the existing security group. |
operator-nexus | Howto Set Up Defender For Cloud Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-set-up-defender-for-cloud-security.md | + + Title: "Azure Operator Nexus: How to set up the Defender for Cloud security environment" +description: Learn how to enable and configure Defender for Cloud security plan features on your Operator Nexus subscription. ++++ Last updated : 08/18/2023++++# Set up the Defender for Cloud security environment on your Operator Nexus subscription ++This guide provides you with instructions on how to enable Microsoft Defender for Cloud and activate and configure some of its enhanced security plan options that can be used to secure your Operator Nexus bare metal compute servers and workloads. ++## Before you begin ++To aid your understanding of Defender for Cloud and its many security features, there's a wide variety of material available on the [Microsoft Defender for Cloud documentation](https://learn.microsoft.com/azure/defender-for-cloud/) site that you might find helpful. ++## Prerequisites ++To successfully complete the actions in this guide: +- You must have an Azure Operator Nexus subscription. +- You must have a deployed Azure Arc-connected Operator Nexus instance running in your on-premises environment. +- You must use an Azure portal user account in your subscription with Owner, Contributor or Reader role. ++## Enable Defender for Cloud ++Enabling Microsoft Defender for Cloud on your Operator Nexus subscription is simple and immediately gives you access to its free included security features. To turn on Defender for Cloud: ++1. Sign in to [Azure portal](https://portal.azure.com). +2. In the search box at the top, enter “Defender for Cloud.” +3. Select Microsoft Defender for Cloud under Services. ++When the Defender for Cloud [overview page](../defender-for-cloud/overview-page.md) opens, you have successfully activated Defender for Cloud on your subscription. 
The overview page is an interactive dashboard user experience that provides a comprehensive view of your Operator Nexus security posture. It displays security alerts, coverage information, and much more. Using this dashboard, you can assess the security of your workloads and identify and mitigate risks. ++After activating Defender for Cloud, you have the option to enable Defender for Cloud’s enhanced security features that provide important server and workload protections: +- [Defender for Servers](../defender-for-cloud/tutorial-enable-servers-plan.md) +- [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) – made available through Defender for Servers +- [Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) + +## Set up a Defender for Servers plan to protect your bare metal servers ++To take advantage of the added security protection of your on-premises bare metal machine (BMM) compute servers that's provided by Microsoft Defender for Endpoint, you can enable and configure a [Defender for Servers plan](../defender-for-cloud/plan-defender-for-servers-select-plan.md) on your Operator Nexus subscription. ++### Prerequisites ++- Defender for Cloud must be enabled on your subscription. ++To set up a Defender for Servers plan: +1. [Turn on the Defender for Servers plan feature](../defender-for-cloud/tutorial-enable-servers-plan.md#enable-the-defender-for-servers-plan) under Defender for Cloud. +2. [Select one of the Defender for Servers plans](../defender-for-cloud/tutorial-enable-servers-plan.md#select-a-defender-for-servers-plan). +3. While on the *Defender plans* page, click the Settings link for Servers under the “Monitoring coverage” column. The *Settings & monitoring* page will open. + * Ensure that **Log Analytics agent/Azure Monitor agent** is set to Off. + * Ensure that **Endpoint protection** is set to Off. 
+ :::image type="content" source="media/security/nexus-defender-for-servers-plan-settings.png" alt-text="Screenshot of Defender for Servers plan settings for Operator Nexus." lightbox="media/security/nexus-defender-for-servers-plan-settings.png"::: + * Click Continue to save any changed settings. ++### Operator Nexus-specific requirement for enabling Defender for Endpoint + +> [!IMPORTANT] +> In Operator Nexus, Microsoft Defender for Endpoint is enabled on a per-cluster basis rather than across all clusters at once, which is the default behavior when the Endpoint protection setting is enabled in Defender for Servers. To request Endpoint protection to be turned on in one or more of your on-premises workload clusters you will need to open a Microsoft Support ticket, and the Support team will subsequently perform the enablement actions. You must have a Defender for Servers plan active in your subscription prior to opening a ticket. ++Once Defender for Endpoint is enabled by Microsoft Support, its configuration is managed by the platform to ensure optimal security and performance, and to reduce the risk of misconfigurations. ++## Set up the Defender for Containers plan to protect your Azure Kubernetes Service cluster workloads ++You can protect the on-premises Kubernetes clusters that run your operator workloads by enabling and configuring the [Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) plan on your subscription. ++### Prerequisites ++- Defender for Cloud must be enabled on your subscription. ++To set up the Defender for Containers plan: ++1. Turn on the [Defender for Containers plan feature](../defender-for-cloud/tutorial-enable-containers-azure.md#enable-the-defender-for-containers-plan) under Defender for Cloud. +2. While on the *Defender plans* page, click the Settings link for Containers under the “Monitoring coverage” column. The *Settings & monitoring* page will open. 
+ * Ensure that **DefenderDaemonSet** is set to Off. + * Ensure that **Azure Policy for Kubernetes** is set to Off. + :::image type="content" source="media/security/nexus-defender-for-containers-plan-settings.png" alt-text="Screenshot of Defender for Containers plan settings for Operator Nexus." lightbox="media/security/nexus-defender-for-containers-plan-settings.png"::: + * Click Continue to save any changed settings. |
operator-nexus | Howto Use Azure Policy For Aks Cluster Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-azure-policy-for-aks-cluster-security.md | + + Title: "Azure Operator Nexus: How to use Azure Policy to protect on-premises Azure Kubernetes Service clusters" +description: Learn how to assign Azure built-in policies or create custom policies to secure your Operator Nexus on-premises Azure Kubernetes Service (AKS) clusters. ++++ Last updated : 08/18/2023++++# Use Azure Policy to secure your Azure Kubernetes Service (AKS) clusters ++You can set up extra security protections for your Operator Nexus Arc-connected on-premises Azure Kubernetes Service (AKS) clusters using Azure Policy. With Azure Policy, you assign a single policy or group of related policies (called an initiative or policy set) to one or more of your clusters. Individual policies can be either built-in or custom policy definitions that you create. ++This guide provides information on how to apply policy definitions to your clusters and verify those assignments are being enforced. ++## Before you begin ++If you're new to Azure Policy, here are some helpful resources that you can use to become more familiar with Azure Policy, how it can be used to secure your AKS clusters, and the built-in policy definitions that are available for you to use for AKS resource protection: ++- [Azure Policy documentation](https://learn.microsoft.com/azure/governance/policy/) +- [Understand Azure Policy for Kubernetes clusters](../governance/policy/concepts/policy-for-kubernetes.md) +- [Azure Policy Built-in definitions for AKS](../aks/policy-reference.md) ++## Prerequisites ++- One or more on-premises AKS clusters that are Arc-connected to Azure. ++ > [!NOTE] + > Operator Nexus does not require you to install the Azure Policy add-on for AKS in your clusters since the extension is automatically installed during AKS cluster deployment. 
++- A user account in your subscription with the appropriate role: + * A [Resource Policy Contributor](../role-based-access-control/built-in-roles.md#resource-policy-contributor) or Owner can view, create, assign, and disable policies. + * A Contributor or Reader can view policies and policy assignments. ++## Apply and validate policies against your AKS clusters ++The process for assigning a policy or initiative to your AKS clusters and validating the assignment is as follows: ++1. Determine whether there is an existing [built-in AKS policy](../aks/policy-reference.md) or initiative that is suitable for your security requirements. +2. Sign in to the [Azure portal](https://portal.azure.com) to perform the appropriate type of policy or initiative assignment on your Operator Nexus subscription based on your research in Step 1. + * If a built-in policy or initiative exists, you can assign it using the instructions [here](../aks/use-azure-policy.md?source=recommendations#assign-a-built-in-policy-definition-or-initiative). + * Otherwise, you can create and assign a [custom policy definition](../aks/use-azure-policy.md?source=recommendations#create-and-assign-a-custom-policy-definition). ++3. [Validate](../aks/use-azure-policy.md?source=recommendations#validate-an-azure-policy-is-running) that the policy or initiative has been applied to your clusters. |
operator-nexus | Quickstarts Kubernetes Cluster Deployment Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-cli.md | Before you run the commands, you need to set several variables to define the con | SERVICE_CIDR | The network range for the Kubernetes services in the cluster, in CIDR notation. | | DNS_SERVICE_IP | The IP address for the Kubernetes DNS service. | - Once you've defined these variables, you can run the Azure CLI command to create the cluster. Add the ```--debug``` flag at the end to provide more detailed output for troubleshooting purposes. To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example: ```bash RESOURCE_GROUP="myResourceGroup"-LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')" -SUBSCRIPTION_ID="$(az account show -o tsv --query id)" +SUBSCRIPTION_ID="<Azure subscription ID>" +LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION_ID -o tsv)" CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>" CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>" CNI_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>" POD_CIDR="10.244.0.0/16" SERVICE_CIDR="10.96.0.0/16" DNS_SERVICE_IP="10.96.0.10" ```+ > [!IMPORTANT] > It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, CNI_ARM_ID, and AAD_ADMIN_GROUP_OBJECT_ID with your actual values before running these commands. 
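Since the quickstart above stresses replacing the `<...>` placeholders before running the create command, a small guard can catch any that were missed. This is a sketch of our own, not part of the official article; `check_placeholders` is a hypothetical helper.

```shell
#!/usr/bin/env bash
# Sketch: fail fast if any named variable is unset or still carries an
# unreplaced <placeholder>. Helper name and messages are our own.
check_placeholders() {
  local status=0 name value
  for name in "$@"; do
    value="${!name}"                     # bash indirect expansion
    if [[ -z "$value" || "$value" == *"<"* ]]; then
      echo "ERROR: $name is unset or still contains a placeholder" >&2
      status=1
    fi
  done
  return "$status"
}

# Example: this value still has a placeholder, so the check reports it.
CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/rg/providers/Microsoft.NetworkCloud/cloudServicesNetworks/csn"
check_placeholders CSN_ARM_ID || echo "fix the variables before calling az networkcloud"
```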
After defining these variables, you can create the Kubernetes cluster by executi ```azurecli az networkcloud kubernetescluster create \name "${CLUSTER_NAME}" \resource-group "${RESOURCE_GROUP}" \subscription "${SUBSCRIPTION_ID}" \extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \location "${LOCATION}" \kubernetes-version "${K8S_VERSION}" \aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \admin-username "${ADMIN_USERNAME}" \ssh-key-values "${SSH_PUBLIC_KEY}" \control-plane-node-configuration \+ --name "${CLUSTER_NAME}" \ + --resource-group "${RESOURCE_GROUP}" \ + --subscription "${SUBSCRIPTION_ID}" \ + --extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \ + --location "${LOCATION}" \ + --kubernetes-version "${K8S_VERSION}" \ + --aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \ + --admin-username "${ADMIN_USERNAME}" \ + --ssh-key-values "${SSH_PUBLIC_KEY}" \ + --control-plane-node-configuration \ count="${CONTROL_PLANE_COUNT}" \ vm-sku-name="${CONTROL_PLANE_VM_SIZE}" \initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \network-configuration \+ --initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \ + --network-configuration \ cloud-services-network-id="${CSN_ARM_ID}" \ cni-network-id="${CNI_ARM_ID}" \ pod-cidrs="[${POD_CIDR}]" \ After a few minutes, the command completes and returns information about the clu [!INCLUDE [quickstart-cluster-connect](./includes/kubernetes-cluster/quickstart-cluster-connect.md)] ## Add an agent pool+ The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ```az networkcloud kubernetescluster agentpool create``` command. 
The following example creates an agent pool named ```myNexusAKSCluster-nodepool-2```: You can also use the default values for some of the variables, as shown in the following example: AGENT_POOL_VM_SIZE="NC_M4_v1" AGENT_POOL_COUNT="1" AGENT_POOL_MODE="User" ```+ After defining these variables, you can add an agent pool by executing the following Azure CLI command: ```azurecli az networkcloud kubernetescluster agentpool create \ --name "${AGENT_POOL_NAME}" \ --kubernetes-cluster-name "${CLUSTER_NAME}" \ --resource-group "${RESOURCE_GROUP}" \+ --subscription "${SUBSCRIPTION_ID}" \ --extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \ --count "${AGENT_POOL_COUNT}" \ --mode "${AGENT_POOL_MODE}" \ |
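Before invoking `az networkcloud kubernetescluster agentpool create`, the inputs can be sanity-checked. A minimal sketch, assuming the `System`/`User` modes shown in the quickstart; the helper itself is hypothetical.

```shell
#!/usr/bin/env bash
# Sketch: validate agent pool mode and node count before calling the CLI.
# `System` and `User` are the modes used in the quickstart examples.
valid_agent_pool() {
  local mode="$1" count="$2"
  case "$mode" in
    System|User) ;;
    *) echo "invalid mode: $mode" >&2; return 1 ;;
  esac
  [[ "$count" =~ ^[1-9][0-9]*$ ]] || { echo "invalid count: $count" >&2; return 1; }
}

valid_agent_pool "User" "1" && echo "agent pool parameters look valid"
```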
operator-nexus | Quickstarts Tenant Workload Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-tenant-workload-deployment.md | To define these variables, use the following set commands and replace the exampl ```bash # Azure parameters RESOURCE_GROUP="myResourceGroup"-SUBSCRIPTION="$(az account show -o tsv --query id)" +SUBSCRIPTION="<Azure subscription ID>" CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"-LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')" +LOCATION="$(az group show --name $RESOURCE_GROUP --query location --subscription $SUBSCRIPTION -o tsv)" # VM parameters VM_NAME="myNexusVirtualMachine" |
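The change above replaces the derived subscription ID with a literal placeholder, so a quick format check can confirm the value you paste in is a well-formed GUID before later commands use it. The regex and function name below are our own, not from the article.

```shell
#!/usr/bin/env bash
# Sketch: check that a pasted subscription ID looks like a GUID (8-4-4-4-12 hex).
is_guid() {
  [[ "$1" =~ ^[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}$ ]]
}

SUBSCRIPTION="12345678-1234-1234-1234-123456789abc"   # example value
is_guid "$SUBSCRIPTION" && echo "SUBSCRIPTION format looks valid"
```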
operator-nexus | Reference Limits And Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-limits-and-quotas.md | The creation of the Network Cloud specific resources is subject to the following | Racks | Up to BOM-specified Compute Racks per Nexus Cluster | | Bare Metal Machines | Up to BOM-specified BareMetal machines per Rack | | Storage Appliances | Up to BOM-specified Storage appliances per Nexus Cluster instance |-| NAKS Cluster | Depends on selection of VM flavor and number of nodes per NAKS cluster | +| NKS Cluster | Depends on selection of VM flavor and number of nodes per NKS cluster | | Layer 2 Networks | 3500 per Nexus instance | | Layer 3 Networks | 200 per Nexus instance | | Trunked Networks | 3500 per Nexus instance | The table here briefly mentions other Azure resources that are necessary. Howeve | Resource Type | Notes | | - | -| | Subscription | [Subscription limits](../azure-resource-manager/management/azure-subscription-service-limits.md) |-| Resource Group | [Resource Group Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). There's a max limit for RG per subscription. Operators need to make appropriate consideration for how they want to manage Resource Groups for NAKS clusters vs Virtual machines per Nexus instance. | +| Resource Group | [Resource Group Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#resource-group-limits). There's a max limit for RG per subscription. Operators need to make appropriate consideration for how they want to manage Resource Groups for NKS clusters vs Virtual machines per Nexus instance. | | VM Flavors | Customer generally has VM flavor quota in each region within subscription. You need to ensure that you can still create VMs per the requirements. 
| | AKS Clusters | [AKS Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-kubernetes-service-limits) | | Virtual Networks | [Virtual Network Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) | The table here briefly mentions other Azure resources that are necessary. Howeve | Load Balancers (Standard) | [Load Balancer Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#load-balancer) | | Public IP Address (Standard) | [Public IP Address Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#publicip-address) | | Azure Monitor Metrics | [Azure Monitor Limits](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-monitor-limits) |-| Log Analytics Workspace | [Log Analytics Workspace Limits](../azure-monitor/service-limits.md#log-analytics-workspaces) | +| Log Analytics Workspace | [Log Analytics Workspace Limits](../azure-monitor/service-limits.md#log-analytics-workspaces) | |
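When sizing a deployment against the limits above, a small arithmetic check can flag overruns early. The limits wired in below come from the table (for example, 200 Layer 3 networks and 3500 Layer 2 networks per Nexus instance); the helper itself is our own sketch.

```shell
#!/usr/bin/env bash
# Sketch: compare planned resource counts against per-instance limits
# taken from the quota table above. Function and labels are our own.
within_quota() {
  local planned="$1" limit="$2" label="$3"
  if (( planned > limit )); then
    echo "ERROR: $label: planned $planned exceeds limit $limit" >&2
    return 1
  fi
  echo "$label: $planned of $limit used"
}

within_quota 180 200   "Layer 3 Networks"
within_quota 1200 3500 "Layer 2 Networks"
```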
operator-nexus | Reference Near Edge Compute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-compute.md | Azure Operator Nexus offers a group of on-premises cloud solutions. One of the o In a near-edge environment (also known as an instance), the compute servers (also known as bare-metal machines) represent the physical machines on the rack. They run the CBL-Mariner operating system and provide support for running high-performance workloads. -## Available SKUs +<!-- ## Available SKUs The Azure Operator Nexus offering is built with the following compute nodes for near-edge instances (the nodes that run the actual customer workloads). | SKU | Description | | -- | -- | | Dell R750 | Compute node for near edge |+--> ## Compute connectivity |
operator-nexus | Reference Near Edge Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-storage.md | This table lists the characteristics of the storage appliance. | Number of maximum I/O operations supported per second <br>(with 80/20 read/write ratio) | 250K+ (4K) <br>150K+ (16K) | | Number of I/O operations supported per volume per second | 50K+ | | Maximum I/O latency supported | 10 ms |-| Nominal failover time supported | 10 s | +| Nominal failover time supported | 10 s | |
orbital | License Spacecraft | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md | To initiate the spacecraft licensing process, you'll need: - A spacecraft object that corresponds to the spacecraft in orbit or slated for launch. The links in this object must match all current and planned filings. - A list of ground stations that you wish to use to communicate with your satellite. -## Step 1 - Initiate the request +## Step 1: Initiate the request The process starts by initiating the licensing request via the Azure portal. The process starts by initiating the licensing request via the Azure portal. 1. Click next to Review + Create. 1. Click Create. -## Step 2 - Provide more details +## Step 2: Provide more details When the request is generated, our regulatory team will investigate the request and determine if more detail is required. If so, a customer support representative will reach out to you with a regulatory intake form. You'll need to input information regarding relevant filings, call signs, orbital parameters, link details, antenna details, points of contact, etc. Fill out all relevant fields in this form as it helps speed up the process. When you're done entering information, email this form back to the customer support representative. -## Step 3 - Await feedback from our regulatory team +## Step 3: Await feedback from our regulatory team Based on the details provided in the steps above, our regulatory team will make an assessment on time and cost to onboard your spacecraft to all requested ground stations. This step will take a few weeks to execute. Once the determination is made, we'll confirm the cost with you and ask you to authorize before proceeding. -## Step 4 - Azure Orbital requests the relevant licensing +## Step 4: Azure Orbital requests the relevant licensing Upon authorization, you will be billed the fees associated with each relevant ground station. 
Our regulatory team will seek the relevant licenses to enable your spacecraft to communicate with the desired ground stations. Refer to the following table for an estimated timeline for execution: Upon authorization, you will be billed the fees associated with each relevant gr | -- | - | | - | - | - | | Onboarding Timeframe | 3-6 months | 3-6 months | 3-6 months | <1 month | 3-6 months | -## Step 5 - Spacecraft is authorized +## Step 5: Spacecraft is authorized Once the licenses are in place, the spacecraft object will be updated by Azure Orbital to represent the licenses held at the specified ground stations. To understand how the authorizations are applied, see [Spacecraft Object](./spacecraft-object.md). |
orbital | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview.md | With Azure Orbital Ground Station, you can focus on your missions by off-loading Azure Orbital Ground Station uses Microsoft’s global infrastructure and low-latency global network along with an expansive partner ecosystem of ground station networks, cloud modems, and "Telemetry, Tracking, & Control" (TT&C) functions. ## Earth Observation with Azure Orbital Ground Station For a full end-to-end solution to manage fleet operations and "Telemetry, Tracki * Direct data ingestion into Azure * Marketplace integration with third-party data processing and image calibration services * Integrated cloud modems for X and S bands- * Global reach through integrated third-party networks + * Global reach through first-party and integrated third-party networks + ## Links to learn more - [Overview, features, security, and FAQ](https://azure.microsoft.com/products/orbital/#layout-container-uid189e) |
orbital | Satellite Imagery With Orbital Ground Station | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md | +> [!NOTE] +> NASA has deprecated support of the DRL software used to process Aqua satellite imagery. Please see: [DRL Current Status](https://directreadout.sci.gsfc.nasa.gov/home.html). Steps 2, 3, and 4 of this tutorial are no longer relevant but presented for informational purposes only. + This article is a comprehensive walk-through showing how to use the [Azure Orbital Ground Station (AOGS)](https://azure.microsoft.com/services/orbital/) to capture and process satellite imagery. It introduces the AOGS and its core concepts and shows how to schedule contacts. The article also steps through an example in which we collect and process NASA Aqua satellite data in an Azure virtual machine (VM) using NASA-provided tools. Aqua is a polar-orbiting spacecraft launched by NASA in 2002. Data from all science instruments aboard Aqua is downlinked to the Earth using direct broadcast over the X-band in near real-time. More information about Aqua can be found on the [Aqua Project Science](https://aqua.nasa.gov/) website. |
postgresql | Concepts Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md | You can choose from the following categories of enhanced metrics: |Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||-|**Sessions By State** (Preview)|`sessions_by_state` |Count|Overall state of the back ends. |State|No| -|**Sessions By WaitEventType** (Preview)|`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the back end is waiting.|Wait Event Type|No| -|**Oldest Backend** (Preview) |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest back end (irrespective of the state).|Doesn't apply|No| -|**Oldest Query** (Preview) |`longest_query_time_sec`|Seconds|Age in seconds of the longest query that's currently running. |Doesn't apply|No| -|**Oldest Transaction** (Preview) |`longest_transaction_time_sec`|Seconds|Age in seconds of the longest transaction (including idle transactions).|Doesn't apply|No| -|**Oldest xmin** (Preview)|`oldest_backend_xmin`|Count|The actual value of the oldest `xmin`. If `xmin` isn't increasing, it indicates that there are some long-running transactions that can potentially hold dead tuples from being removed. |Doesn't apply|No| -|**Oldest xmin Age** (Preview)|`oldest_backend_xmin_age`|Count|Age in units of the oldest `xmin`. Indicates how many transactions passed since the oldest `xmin`. |Doesn't apply|No| +|**Sessions By State** |`sessions_by_state` |Count|Overall state of the back ends. |State|No| +|**Sessions By WaitEventType** |`sessions_by_wait_event_type` |Count|Sessions by the type of event for which the back end is waiting.|Wait Event Type|No| +|**Oldest Backend** |`oldest_backend_time_sec` |Seconds|Age in seconds of the oldest back end (irrespective of the state).|Doesn't apply|No| +|**Oldest Query** |`longest_query_time_sec`|Seconds|Age in seconds of the longest query that's currently running. 
|Doesn't apply|No| +|**Oldest Transaction** |`longest_transaction_time_sec`|Seconds|Age in seconds of the longest transaction (including idle transactions).|Doesn't apply|No| +|**Oldest xmin** |`oldest_backend_xmin`|Count|The actual value of the oldest `xmin`. If `xmin` isn't increasing, it indicates that there are some long-running transactions that can potentially hold dead tuples from being removed. |Doesn't apply|No| +|**Oldest xmin Age** |`oldest_backend_xmin_age`|Count|Age in units of the oldest `xmin`. Indicates how many transactions passed since the oldest `xmin`. |Doesn't apply|No| #### Database |Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||-|**Backends** (Preview) |`numbackends`|Count|Number of back ends that are connected to this database.|DatabaseName|No| -|**Deadlocks** (Preview)|`deadlocks` |Count|Number of deadlocks that are detected in this database.|DatabaseName|No| -|**Disk Blocks Hit** (Preview)|`blks_hit` |Count|Number of times disk blocks were found already in the buffer cache, so that a read wasn't necessary.|DatabaseName|No| -|**Disk Blocks Read** (Preview) |`blks_read`|Count|Number of disk blocks that were read in this database.|DatabaseName|No| -|**Temporary Files** (Preview)|`temp_files` |Count|Number of temporary files that were created by queries in this database. |DatabaseName|No| -|**Temporary Files Size** (Preview) |`temp_bytes` |Bytes|Total amount of data that's written to temporary files by queries in this database. |DatabaseName|No| -|**Total Transactions** (Preview) |`xact_total` |Count|Number of total transactions that executed in this database. 
|DatabaseName|No| -|**Transactions Committed** (Preview) |`xact_commit`|Count|Number of transactions in this database that have been committed.|DatabaseName|No| -|**Transactions Rolled back** (Preview) |`xact_rollback`|Count|Number of transactions in this database that have been rolled back.|DatabaseName|No| -|**Tuples Deleted** (Preview) |`tup_deleted`|Count|Number of rows that were deleted by queries in this database. |DatabaseName|No| -|**Tuples Fetched** (Preview) |`tup_fetched`|Count|Number of rows that were fetched by queries in this database. |DatabaseName|No| -|**Tuples Inserted** (Preview)|`tup_inserted` |Count|Number of rows that were inserted by queries in this database.|DatabaseName|No| -|**Tuples Returned** (Preview)|`tup_returned` |Count|Number of rows that were returned by queries in this database.|DatabaseName|No| -|**Tuples Updated** (Preview) |`tup_updated`|Count|Number of rows that were updated by queries in this database. |DatabaseName|No| +|**Backends** |`numbackends`|Count|Number of back ends that are connected to this database.|DatabaseName|No| +|**Deadlocks** |`deadlocks` |Count|Number of deadlocks that are detected in this database.|DatabaseName|No| +|**Disk Blocks Hit** |`blks_hit` |Count|Number of times disk blocks were found already in the buffer cache, so that a read wasn't necessary.|DatabaseName|No| +|**Disk Blocks Read** |`blks_read`|Count|Number of disk blocks that were read in this database.|DatabaseName|No| +|**Temporary Files** |`temp_files` |Count|Number of temporary files that were created by queries in this database. |DatabaseName|No| +|**Temporary Files Size** |`temp_bytes` |Bytes|Total amount of data that's written to temporary files by queries in this database. |DatabaseName|No| +|**Total Transactions** |`xact_total` |Count|Number of total transactions that executed in this database. 
|DatabaseName|No| +|**Transactions Committed** |`xact_commit`|Count|Number of transactions in this database that have been committed.|DatabaseName|No| +|**Transactions Rolled back** |`xact_rollback`|Count|Number of transactions in this database that have been rolled back.|DatabaseName|No| +|**Tuples Deleted** |`tup_deleted`|Count|Number of rows that were deleted by queries in this database. |DatabaseName|No| +|**Tuples Fetched** |`tup_fetched`|Count|Number of rows that were fetched by queries in this database. |DatabaseName|No| +|**Tuples Inserted** |`tup_inserted` |Count|Number of rows that were inserted by queries in this database.|DatabaseName|No| +|**Tuples Returned** |`tup_returned` |Count|Number of rows that were returned by queries in this database.|DatabaseName|No| +|**Tuples Updated** |`tup_updated`|Count|Number of rows that were updated by queries in this database. |DatabaseName|No| #### Logical replication |Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||-|**Max Logical Replication Lag** (Preview)|`logical_replication_delay_in_bytes`|Bytes|Maximum lag across all logical replication slots.|Doesn't apply|Yes | +|**Max Logical Replication Lag** |`logical_replication_delay_in_bytes`|Bytes|Maximum lag across all logical replication slots.|Doesn't apply|Yes | #### Replication |Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||-|**Max Physical Replication Lag** (Preview)|`physical_replication_delay_in_bytes`|Bytes|Maximum lag across all asynchronous physical replication slots.|Doesn't apply|Yes | -|**Read Replica Lag** (Preview)|`physical_replication_delay_in_seconds`|Seconds|Read replica lag in seconds. |Doesn't apply|Yes | +|**Max Physical Replication Lag** |`physical_replication_delay_in_bytes`|Bytes|Maximum lag across all asynchronous physical replication slots.|Doesn't apply|Yes | +|**Read Replica Lag** |`physical_replication_delay_in_seconds`|Seconds|Read replica lag in seconds. 
|Doesn't apply|Yes | #### Saturation Autovacuum metrics can be used to monitor and tune autovacuum performance for Az |Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||-|**Analyze Counter User Tables** (Preview)|`analyze_count_user_tables`|Count|Number of times user-only tables have been manually analyzed in this database. |DatabaseName|No | -|**AutoAnalyze Counter User Tables** (Preview)|`autoanalyze_count_user_tables`|Count|Number of times user-only tables have been analyzed by the autovacuum daemon in this database. |DatabaseName|No | -|**AutoVacuum Counter User Tables** (Preview) |`autovacuum_count_user_tables` |Count|Number of times user-only tables have been vacuumed by the autovacuum daemon in this database. |DatabaseName|No | -|**Estimated Dead Rows User Tables** (Preview)|`n_dead_tup_user_tables` |Count|Estimated number of dead rows for user-only tables in this database. |DatabaseName|No | -|**Estimated Live Rows User Tables** (Preview)|`n_live_tup_user_tables` |Count|Estimated number of live rows for user-only tables in this database. |DatabaseName|No | -|**Estimated Modifications User Tables** (Preview)|`n_mod_since_analyze_user_tables`|Count|Estimated number of rows that were modified since user-only tables were last analyzed. |DatabaseName|No | -|**User Tables Analyzed** (Preview) |`tables_analyzed_user_tables`|Count|Number of user-only tables that have been analyzed in this database.
|DatabaseName|No | -|**User Tables AutoAnalyzed** (Preview) |`tables_autoanalyzed_user_tables`|Count|Number of user-only tables that have been analyzed by the autovacuum daemon in this database.|DatabaseName|No | -|**User Tables AutoVacuumed** (Preview) |`tables_autovacuumed_user_tables`|Count|Number of user-only tables that have been vacuumed by the autovacuum daemon in this database.|DatabaseName|No | -|**User Tables Counter** (Preview)|`tables_counter_user_tables` |Count|Number of user-only tables in this database.|DatabaseName|No | -|**User Tables Vacuumed** (Preview) |`tables_vacuumed_user_tables`|Count|Number of user-only tables that have been vacuumed in this database. |DatabaseName|No | -|**Vacuum Counter User Tables** (Preview) |`vacuum_count_user_tables` |Count|Number of times user-only tables have been manually vacuumed in this database (not counting `VACUUM FULL`).|DatabaseName|No | +|**Analyze Counter User Tables** |`analyze_count_user_tables`|Count|Number of times user-only tables have been manually analyzed in this database. |DatabaseName|No | +|**AutoAnalyze Counter User Tables** |`autoanalyze_count_user_tables`|Count|Number of times user-only tables have been analyzed by the autovacuum daemon in this database. |DatabaseName|No | +|**AutoVacuum Counter User Tables** |`autovacuum_count_user_tables` |Count|Number of times user-only tables have been vacuumed by the autovacuum daemon in this database. |DatabaseName|No | +|**Estimated Dead Rows User Tables** |`n_dead_tup_user_tables` |Count|Estimated number of dead rows for user-only tables in this database. |DatabaseName|No | +|**Estimated Live Rows User Tables** |`n_live_tup_user_tables` |Count|Estimated number of live rows for user-only tables in this database. |DatabaseName|No | +|**Estimated Modifications User Tables** |`n_mod_since_analyze_user_tables`|Count|Estimated number of rows that were modified since user-only tables were last analyzed. 
|DatabaseName|No | +|**User Tables Analyzed** |`tables_analyzed_user_tables`|Count|Number of user-only tables that have been analyzed in this database. |DatabaseName|No | +|**User Tables AutoAnalyzed** |`tables_autoanalyzed_user_tables`|Count|Number of user-only tables that have been analyzed by the autovacuum daemon in this database.|DatabaseName|No | +|**User Tables AutoVacuumed** |`tables_autovacuumed_user_tables`|Count|Number of user-only tables that have been vacuumed by the autovacuum daemon in this database.|DatabaseName|No | +|**User Tables Counter** |`tables_counter_user_tables` |Count|Number of user-only tables in this database.|DatabaseName|No | +|**User Tables Vacuumed** |`tables_vacuumed_user_tables`|Count|Number of user-only tables that have been vacuumed in this database. |DatabaseName|No | +|**Vacuum Counter User Tables** |`vacuum_count_user_tables` |Count|Number of times user-only tables have been manually vacuumed in this database (not counting `VACUUM FULL`).|DatabaseName|No | ### Considerations for using autovacuum metrics You can use PgBouncer metrics to monitor the performance of the PgBouncer proces |Display name|Metric ID|Unit|Description|Dimension|Default enabled| |||||||-|**Active client connections** (Preview) |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL connection. |DatabaseName|No | -|**Waiting client connections** (Preview)|`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL connection to service them.|DatabaseName|No | -|**Active server connections** (Preview) |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL that are in use by a client connection. |DatabaseName|No | -|**Idle server connections** (Preview) |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL that are idle and ready to service a new client connection. 
|DatabaseName|No | -|**Total pooled connections** (Preview)|`total_pooled_connections`|Count|Current number of pooled connections. |DatabaseName|No | -|**Number of connection pools** (Preview)|`num_pools` |Count|Total number of connection pools. |DatabaseName|No | +|**Active client connections** |`client_connections_active` |Count|Connections from clients that are associated with an Azure Database for PostgreSQL connection. |DatabaseName|No | +|**Waiting client connections** |`client_connections_waiting`|Count|Connections from clients that are waiting for an Azure Database for PostgreSQL connection to service them.|DatabaseName|No | +|**Active server connections** |`server_connections_active` |Count|Connections to Azure Database for PostgreSQL that are in use by a client connection. |DatabaseName|No | +|**Idle server connections** |`server_connections_idle` |Count|Connections to Azure Database for PostgreSQL that are idle and ready to service a new client connection. |DatabaseName|No | +|**Total pooled connections** |`total_pooled_connections`|Count|Current number of pooled connections. |DatabaseName|No | +|**Number of connection pools** |`num_pools` |Count|Total number of connection pools. |DatabaseName|No | ### Considerations for using the PgBouncer metrics Is-db-alive is a database server availability metric for Azure Postgres Flexibl |Display Name |Metric ID |Unit |Description |Dimension |Default enabled| |-|-|-|--|||-|**Database Is Alive** (Preview) |`is_db_alive` |Count |Indicates if the database is up or not |N/a |Yes | +|**Database Is Alive** |`is_db_alive` |Count |Indicates if the database is up or not |N/a |Yes | #### Considerations when using the Database availability metrics -- Aggregating this metric with `MAX()` will allow customers to determine weather the server has been up or down in the last minute.+- Aggregating this metric with `MAX()` will allow customers to determine whether the server has been up or down in the last minute.
- Customers have the option to further aggregate these metrics at any desired frequency (5m, 10m, 30m, etc.) to suit their alerting requirements and avoid false positives. - Other possible aggregations are `AVG()` and `MIN()`. |
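The availability-metric aggregation described in this row can be sketched in code. The following is an illustrative sketch only (the sample data and client-side aggregation are hypothetical; in Azure Monitor, the `MAX()` aggregation is configured on the metric or alert rule itself): per-minute `MAX()` of `is_db_alive` flags each down minute, while a coarser window masks short blips, reducing false positives at the cost of slower detection.

```python
# Illustrative sketch of aggregating the is_db_alive metric (1 = up, 0 = down)
# with MAX() over fixed windows. Hypothetical data; Azure Monitor performs this
# aggregation server-side, not in client code like this.

def max_aggregate(samples, window):
    """Aggregate one-per-minute samples with MAX() over windows of `window` minutes."""
    return [max(samples[i:i + window]) for i in range(0, len(samples), window)]

# Ten minutes of samples with a three-minute outage at minutes 4-6.
samples = [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]

# Per-minute MAX(): each down minute is reported.
print(max_aggregate(samples, 1))  # [1, 1, 1, 1, 0, 0, 0, 1, 1, 1]

# 5-minute MAX(): a transient failure inside a window is masked,
# trading detection latency for fewer false positives.
print(max_aggregate(samples, 5))  # [1, 1]
```

With `MIN()` instead of `MAX()`, the same windows would report a window as down if any sample in it failed, which is the stricter alternative the row mentions.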
postgresql | Concepts Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md | Here are some concepts to be familiar with when you're using virtual networks wi * **Private DNS zone integration**. Azure private DNS zone integration allows you to resolve the private DNS within the current virtual network or any in-region peered virtual network where the private DNS zone is linked. ### Using a private DNS zone -If you use the Azure portal or the Azure CLI to create flexible servers with a virtual network, a new private DNS zone is automatically provisioned for each server in your subscription by using the server name that you provided. +[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution. ++When using private network access with Azure virtual network, providing the private DNS zone information is mandatory to enable DNS resolution. For new Azure Database for PostgreSQL Flexible Server creation using private network access, private DNS zones must be used while configuring flexible servers with private access. +For new Azure Database for PostgreSQL Flexible Server creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring flexible servers with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json).
If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating flexible servers, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription. If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, create private DNS zones that end with `.postgres.database.azure.com`. Use those zones while configuring flexible servers with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name can't be the name you use for one of your flexible servers or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md). -When using private network access with Azure virtual network, providing the private DNS zone information is mandatory across various interfaces, including API, ARM, and Terraform. Therefore, for new Azure Database for PostgreSQL Flexible Server creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring flexible servers with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating flexible servers, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription. 
Using the Azure portal, CLI, or ARM, you can also change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL - Flexible Server to another private DNS zone that exists in the same or a different subscription. > [!IMPORTANT] > The ability to change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL - Flexible Server to another private DNS zone is currently disabled for servers with the High Availability feature enabled. +After you create a private DNS zone in Azure, you'll need to [link](../../dns/private-dns-virtual-network-links.md) a virtual network to it. Once linked, resources hosted in that virtual network can access the private DNS zone. + > [!IMPORTANT] + > We no longer validate virtual network link presence on server creation for Azure Database for PostgreSQL - Flexible Server with private networking. When creating a server through the portal, we give customers the choice to create the link during server creation via the *"Link Private DNS Zone your virtual network"* checkbox in the Azure portal. ### Integration with a custom DNS server |
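The private DNS zone naming rules summarized in this row's change details (zones must end with `.postgres.database.azure.com`, and in the `[name].postgres.database.azure.com` form the name can't be the name of one of your flexible servers) can be sketched as a small validation helper. This is a hypothetical illustration, not part of any Azure SDK; the function and server names are made up.

```python
# Hypothetical helper illustrating the documented private DNS zone naming
# rules for Azure Database for PostgreSQL - Flexible Server private access.

REQUIRED_SUFFIX = ".postgres.database.azure.com"

def validate_private_dns_zone(zone_name: str, existing_server_names: set) -> bool:
    """Return True if zone_name is acceptable per the documented rules."""
    if not zone_name.endswith(REQUIRED_SUFFIX):
        return False  # zone must end with .postgres.database.azure.com
    prefix = zone_name[: -len(REQUIRED_SUFFIX)]
    if not prefix:
        return False  # the bare suffix alone isn't a usable zone name
    labels = prefix.split(".")
    # Single-label form [name].postgres.database.azure.com: the name must not
    # collide with a flexible server name, or provisioning shows an error.
    if len(labels) == 1 and labels[0] in existing_server_names:
        return False
    return True

servers = {"myflexserver"}
print(validate_private_dns_zone("team1.postgres.database.azure.com", servers))         # True
print(validate_private_dns_zone("myflexserver.postgres.database.azure.com", servers))  # False
print(validate_private_dns_zone("db.contoso.com", servers))                            # False
```

The two-label form `[name1].[name2].postgres.database.azure.com` always passes the collision check, matching the article's statement that only the single-label form can conflict with a server name.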
postgresql | Concepts Single To Flexible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md | The following table shows the time for performing offline migrations for databas > [!IMPORTANT] > In order to perform faster migrations, pick a higher SKU for your flexible server. You can always change the SKU to match the application needs post migration. +## Migration of users/roles, ownerships and privileges +Along with data migration, the tool automatically provides the following built-in capabilities: +- Migration of users/roles present on your source server to the target server. +- Migration of ownership of all the database objects on your source server to the target server. +- Migration of permissions of database objects on your source server such as GRANTS/REVOKES to the target server. ++> [!NOTE] +> This functionality is enabled only for flexible servers in the **North Europe** region. It will be enabled for flexible servers in other Azure regions soon. In the meantime, you can follow the steps mentioned in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md#migrate-the-roles) to perform user/roles migration. + ## Limitations - You can have only one active migration to your flexible server. - You can select a max of eight databases in one migration attempt. If you have more than eight databases, you must wait for the first migration to be complete before initiating another migration for the rest of the databases. Support for migration of more than eight databases in a single migration will be introduced later. - The source and target server must be in the same Azure region. Cross-region migrations are not supported.-- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details, firewall rules, users, roles and permissions.
In the later part of the document, we point you to docs that can help you perform the migration of users, roles and firewall rules from single server to flexible server.+- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details and firewall rules. - The migration tool shows the number of tables copied from source to target server. You need to validate the data in the target server post migration. - The tool only migrates user databases and not system databases like template_0, template_1, azure_sys and azure_maintenance. +> [!NOTE] +> The following limitations are applicable only for flexible servers on which the migration of users/roles functionality is enabled. ++- AAD users present on your source server will not be migrated to the target server. To mitigate this limitation, manually create all AAD users on your target server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before triggering a migration. If AAD users are not created on the target server, the migration will fail with an appropriate error message. +- If the target flexible server uses the SCRAM-SHA-256 password encryption method, connections to the flexible server using the users/roles from the single server will fail since the passwords are encrypted using the md5 algorithm. To mitigate this limitation, please choose the option **MD5** for the **password_encryption** server parameter on your flexible server. +- Though the ownership of database objects such as tables, views, and sequences is copied to the target server, the owner of the database in your target server will be the migration user of your target server. The limitation can be mitigated by executing the following command: ++```sql + ALTER DATABASE <dbname> OWNER TO <user>; +``` + Make sure the user executing the above command is a member of the role to which ownership is being assigned.
This limitation will be fixed in the upcoming releases of the migration tool to match the database owners on your source server. ## Experience Get started with the Single to Flex migration tool by using any of the following methods: For calculating the total downtime to perform offline migration of production se > [!NOTE] > The size of databases is not the right metric for validation. The source server might have bloats/dead tuples which can bump up the size on the source server. Also, the storage containers used in single and flexible servers are completely different. It is completely normal to have size differences between source and target servers. If there is an issue in the first three steps of validation, it indicates a problem with the migration. -- **Migration of server settings** - The users, roles/privileges, server parameters, firewall rules (if applicable), tags, alerts need to be manually copied from single server to flexible server. Users and roles are migrated from Single to Flexible server by following the steps listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md).+- **Migration of server settings** - The server parameters, firewall rules (if applicable), tags, and alerts need to be manually copied from the single server to the flexible server. - **Changing connection strings** - Post successful validation, applications should change their connection strings to point to the flexible server. This activity is coordinated with the application team to make changes to all the references of connection strings pointing to the single server. Note that in the flexible server the user parameter in the connection string no longer needs to be in the **username@servername** format. You should just use the **user=username** format for this parameter in the connection string. For example The changes to this server parameter would require a server restart to come into Use the **Save and Restart** option and wait for the postgresql server to restart.
++##### Create AAD users on target server +> [!NOTE] +> This prerequisite is applicable only for flexible servers on which the migration of users/roles functionality is enabled. ++Execute the following query on your source server to get the list of AAD users. +```sql +SELECT r.rolname + FROM + pg_roles r + JOIN pg_auth_members am ON r.oid = am.member + JOIN pg_roles m ON am.roleid = m.oid + WHERE + m.rolname IN ( + 'azure_ad_admin', + 'azure_ad_user', + 'azure_ad_mfa' + ); +``` +Create the AAD users on your target flexible server using this [link](../flexible-server/how-to-manage-azure-ad-users.md) before creating a migration. + ### Migration Once the pre-migration steps are complete, you're ready to carry out the migration of the production databases of your single server. At this point, you've finalized the day and time of production migration along with a planned downtime for your applications. In summary, the Single to Flexible migration tool will migrate a table in parall - Once the migration is complete, verify the data on your flexible server and make sure it's an exact copy of the single server. - Post verification, enable the HA option as needed on your flexible server. - Change the SKU of the flexible server to match the application needs. This change needs a database server restart.-- Migrate users and roles from single to flexible servers. This step can be done by creating users on flexible servers and providing them with suitable privileges or by using the steps that are listed in this [doc](../single-server/how-to-upgrade-using-dump-and-restore.md). - If you've changed any server parameters from their default values in single server, copy those server parameter values to the flexible server. - Copy other server settings like tags, alerts, firewall rules (if applicable) from single server to flexible server. - Make changes to your application to point the connection strings to flexible server. |
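The change details above note that on the flexible server the user parameter in the connection string drops the **username@servername** form. A hypothetical sketch (the connection strings and helper are illustrative, not part of the migration tool) of rewriting a libpq-style keyword/value connection string when repointing applications:

```python
# Illustrative sketch: rewrite the user parameter of a libpq-style connection
# string from Single Server's username@servername form to the plain username
# form expected by Flexible Server. Connection strings here are hypothetical.

def rewrite_user_param(conn_str: str) -> str:
    parts = []
    for kv in conn_str.split():
        key, _, value = kv.partition("=")
        if key == "user" and "@" in value:
            value = value.split("@", 1)[0]  # drop the @servername suffix
        parts.append(f"{key}={value}")
    return " ".join(parts)

single = "host=myserver.postgres.database.azure.com user=appuser@myserver dbname=app sslmode=require"
print(rewrite_user_param(single))
# host=myserver.postgres.database.azure.com user=appuser dbname=app sslmode=require
```

In the real cutover you'd also update the `host` value to the flexible server's FQDN as part of the same connection-string change.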
postgresql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md | |
postgresql | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md | |
private-5g-core | Azure Stack Edge Virtual Machine Sizing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-virtual-machine-sizing.md | The following table contains information about the VMs that Azure Private 5G Cor | AP5GC Cluster Control Plane VM | Standard_F4s_v1 | 4 | 4 | Ephemeral - 128 | Control Plane of the Kubernetes cluster used for AP5GC | | AP5GC Cluster Node VM | Standard_F16s_HPN | 16 | 32 | Ephemeral - 128 </br> Persistent - 102 GB | AP5GC workload node | | Control plane upgrade reserve | | 4 | 4 | 0 | Used by ASE during upgrade of the control plane VM |-| **Total requirements** | | **24** | **44** | **Ephemeral - 336** </br> **Persistent - 102** </br> **Total - 438** | | +| **Total requirements** | | **28** | **44** | **Ephemeral - 336** </br> **Persistent - 102** </br> **Total - 438** | | ## Remaining usable resource on Azure Stack Edge Pro |
private-5g-core | Commission Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/commission-cluster.md | Run the following commands at the PowerShell prompt, specifying the object ID yo ```powershell Invoke-Command -Session $minishellSession -ScriptBlock {Set-HcsKubeClusterArcInfo -CustomLocationsObjectId *object ID*}++Invoke-Command -Session $minishellSession -ScriptBlock {Enable-HcsAzureKubernetesService -f} ``` Once you've run this command, you should see an updated option in the local UI - **Kubernetes** becomes **Kubernetes (Preview)** as shown in the following image. :::image type="content" source="media/commission-cluster/commission-cluster-kubernetes-preview.png" alt-text="Screenshot of configuration menu, with Kubernetes (Preview) highlighted."::: -Select the **This Kubernetes cluster is for Azure Private 5G Core or SAP Digital Manufacturing Cloud workloads** checkbox. -- If you go to the Azure portal and navigate to your **Azure Stack Edge** resource, you should see an **Azure Kubernetes Service** option. You'll set up the Azure Kubernetes Service in [Start the cluster and set up Arc](#start-the-cluster-and-set-up-arc). :::image type="content" source="media/commission-cluster/commission-cluster-ase-resource.png" alt-text="Screenshot of Azure Stack Edge resource in the Azure portal. Azure Kubernetes Service (PREVIEW) is shown under Edge services in the left menu."::: The Azure Private 5G Core private mobile network requires a custom location and 1. Create the Network Function Operator Kubernetes extension: ```azurecli- Add-Content -Path $TEMP_FILE -Value @" + cat > $TEMP_FILE <<EOF { "helm.versions": "v3", "Microsoft.CustomLocation.ServiceAccount": "azurehybridnetwork-networkfunction-operator", The Azure Private 5G Core private mobile network requires a custom location and "helm.release-namespace": "azurehybridnetwork", "managed-by": "helm" }- "@ + EOF ``` ```azurecli |
private-5g-core | Complete Private Mobile Network Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md | In this how-to guide, you'll carry out each of the tasks you need to complete be ## Get access to Azure Private 5G Core for your Azure subscription -Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://aka.ms/privateMECMSP). +Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://forms.office.com/r/4Q1yNRakXe). ## Choose the core technology type (5G or 4G) |
private-5g-core | Gather Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md | You must already have an AP5GC site deployed to collect diagnostics. 1. Copy the contents of the **URL** field in the **Container properties** view. 1. Create a [User-assigned identity](../active-directory/managed-identities-azure-resources/overview.md) and assign it to the storage account created above with the **Storage Blob Data Contributor** role. > [!TIP]- > Make sure the same User-assigned identity is used during site creation. + > You may have already created and associated a user-assigned identity when creating the site. 1. Navigate to the **Packet core control plane** resource for the site.-1. Select **Identity** under **Settings** on the left side menu. -1. Toggle **Modify user assigned managed identity?** to **Yes** and select **+ Add**. -1. In the **Add user assigned managed identity** select the user-signed managed identity you created. +1. Select **Identity** under **Settings** in the left side menu. 1. Select **Add**.-1. Select **Next**. -1. Select **Create**. +1. Select the user-assigned managed identity you created and select **Add**. ## Gather diagnostics for a site |
private-5g-core | Upgrade Packet Core Arm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-arm-template.md | In this step, you'll roll back your packet core using a REST API request. Follow If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced. -> [!NOTE] -> You can roll back your packet core instance to version [PMN-2211-0](azure-private-5g-core-release-notes-2211.md) or later. - 1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information). 1. Perform a [rollback POST request](/rest/api/mobilenetwork/packet-core-control-planes/rollback?tabs=HTTP). |
private-5g-core | Upgrade Packet Core Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/upgrade-packet-core-azure-portal.md | If you encountered issues after the upgrade, you can roll back the packet core i If any of the configuration you set while your packet core instance was running a newer version isn't supported in the version that you want to roll back to, you'll need to revert to the previous configuration before you're able to perform a rollback. Check the packet core release notes for information on when new features were introduced. -> [!NOTE] -> You can roll back your packet core instance to version [PMN-2211-0](azure-private-5g-core-release-notes-2211.md) or later. - 1. Ensure you have a backup of your deployment information. If you need to back up again, follow [Back up deployment information](#back-up-deployment-information). 1. Navigate to the **Packet Core Control Plane** resource that you want to roll back as described in [View the current packet core version](#view-the-current-packet-core-version). 1. Select **Rollback version**. If any of the configuration you set while your packet core instance was running :::image type="content" source="media/upgrade-packet-core-azure-portal/confirm-rollback.png" alt-text="Screenshot of the Azure portal showing the Confirm rollback field in the Rollback packet core screen."::: 1. Select **Roll back packet core**.-1. Azure will now redeploy the packet core instance at the new software version. You can check the latest status of the rollback by looking at the **Packet core installation state** field. The **Packet Core Control Plane** resource's overview page will refresh every 20 seconds, and you can select **Refresh** to trigger a manual update. The **Packet core installation state** field will show as **RollingBack** during the rollback and update to **Installed** when the process completes. +1. 
Azure will now redeploy the packet core instance at the previous software version. You can check the latest status of the rollback by looking at the **Packet core installation state** field. The **Packet Core Control Plane** resource's overview page will refresh every 20 seconds, and you can select **Refresh** to trigger a manual update. The **Packet core installation state** field will show as **RollingBack** during the rollback and update to **Installed** when the process completes. 1. Follow the steps in [Restore backed up deployment information](#restore-backed-up-deployment-information) to reconfigure your deployment. 1. Follow the steps in [Verify upgrade](#verify-upgrade) to check if the rollback was successful. |
private-link | Inspect Traffic With Azure Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/inspect-traffic-with-azure-firewall.md | Title: 'Use Azure Firewall to inspect traffic destined to a private endpoint' + Title: 'Azure Firewall scenarios to inspect traffic destined to a private endpoint' -description: Learn how you can inspect traffic destined to a private endpoint using Azure Firewall. +description: Learn about different scenarios to inspect traffic destined to a private endpoint using Azure Firewall. - Previously updated : 04/27/2023+ Last updated : 08/14/2023 -# Use Azure Firewall to inspect traffic destined to a private endpoint +# Azure Firewall scenarios to inspect traffic destined to a private endpoint > [!NOTE] > If you want to secure traffic to private endpoints in Azure Virtual WAN using secured virtual hub, see [Secure traffic destined to private endpoints in Azure Virtual WAN](../firewall-manager/private-link-inspection-secure-virtual-hub.md). If your security requirements require client traffic to services exposed via pri The same considerations as in scenario 2 above apply. In this scenario, there aren't virtual network peering charges. For more information about how to configure your DNS servers to allow on-premises workloads to access private endpoints, see [on-premises workloads using a DNS forwarder](./private-endpoint-dns.md#on-premises-workloads-using-a-dns-forwarder). -## Prerequisites --* An Azure subscription. --* A Log Analytics workspace. --See, [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md) to create a workspace if you don't have one in your subscription. --## Sign in to Azure --Sign in to the [Azure portal](https://portal.azure.com). --## Create a VM --In this section, you create a virtual network and subnet to host the VM used to access your private link resource. An Azure SQL database is used as the example service. 
--## Virtual networks and parameters --Create three virtual networks and their corresponding subnets to: --* Contain the Azure Firewall used to restrict communication between the VM and the private endpoint. --* Host the VM that is used to access your private link resource. --* Host the private endpoint. --Replace the following parameters in the steps with the following information: --### Azure Firewall network --| Parameter | Value | -|--|-| -| **\<resource-group-name>** | myResourceGroup | -| **\<virtual-network-name>** | myAzFwVNet | -| **\<region-name>** | South Central US | -| **\<IPv4-address-space>** | 10.0.0.0/16 | -| **\<subnet-name>** | AzureFirewallSubnet | -| **\<subnet-address-range>** | 10.0.0.0/24 | --### Virtual machine network --| Parameter | Value | -|--|-| -| **\<resource-group-name>** | myResourceGroup | -| **\<virtual-network-name>** | myVMVNet | -| **\<region-name>** | South Central US | -| **\<IPv4-address-space>** | 10.1.0.0/16 | -| **\<subnet-name>** | VMSubnet | -| **\<subnet-address-range>** | 10.1.0.0/24 | --### Private endpoint network --| Parameter | Value | -|--|-| -| **\<resource-group-name>** | myResourceGroup | -| **\<virtual-network-name>** | myPEVNet | -| **\<region-name>** | South Central US | -| **\<IPv4-address-space>** | 10.2.0.0/16 | -| **\<subnet-name>** | PrivateEndpointSubnet | -| **\<subnet-address-range>** | 10.2.0.0/24 | ---10. Repeat steps 1 to 9 to create the virtual networks for hosting the virtual machine and private endpoint resources. --### Create virtual machine --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Compute** > **Virtual machine**. --2. In **Create a virtual machine - Basics**, enter or select this information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. You created this resource group in the previous section. 
| - | **Instance details** | | - | Virtual machine name | Enter **myVM**. | - | Region | Select **(US) South Central US**. | - | Availability options | Leave the default **No infrastructure redundancy required**. | - | Image | Select **Ubuntu Server 18.04 LTS - Gen1**. | - | Size | Select **Standard_B2s**. | - | **Administrator account** | | - | Authentication type | Select **Password**. | - | Username | Enter a username of your choosing. | - | Password | Enter a password of your choosing. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/linux/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| - | Confirm Password | Reenter password. | - | **Inbound port rules** | | - | Public inbound ports | Select **None**. | ---3. Select **Next: Disks**. --4. In **Create a virtual machine - Disks**, leave the defaults and select **Next: Networking**. --5. In **Create a virtual machine - Networking**, select this information: -- | Setting | Value | - | - | -- | - | Virtual network | Select **myVMVNet**. | - | Subnet | Select **VMSubnet (10.1.0.0/24)**.| - | Public IP | Leave the default **(new) myVm-ip**. | - | Public inbound ports | Select **Allow selected ports**. | - | Select inbound ports | Select **SSH**.| - || --6. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. --7. When you see the **Validation passed** message, select **Create**. ---## Deploy the Firewall --1. On the Azure portal menu or from the **Home** page, select **Create a resource**. --2. Type **firewall** in the search box and press **Enter**. --3. Select **Firewall** and then select **Create**. --4. On the **Create a Firewall** page, use the following table to configure the firewall: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. 
| - | Resource group | Select **myResourceGroup**. | - | **Instance details** | | - | Name | Enter **myAzureFirewall**. | - | Region | Select **South Central US**. | - | Availability zone | Leave the default **None**. | - | Choose a virtual network | Select **Use Existing**. | - | Virtual network | Select **myAzFwVNet**. | - | Public IP address | Select **Add new** and in Name enter **myFirewall-ip**. | - | Forced tunneling | Leave the default **Disabled**. | - ||| -5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. --6. When you see the **Validation passed** message, select **Create**. --## Enable firewall logs --In this section, you enable the logs on the firewall. --1. In the Azure portal, select **All resources** in the left-hand menu. --2. Select the firewall **myAzureFirewall** in the list of resources. --3. Under **Monitoring** in the firewall settings, select **Diagnostic settings** --4. Select **+ Add diagnostic setting** in the Diagnostic settings. --5. In **Diagnostics setting**, enter or select this information: -- | Setting | Value | - | - | -- | - | Diagnostic setting name | Enter **myDiagSetting**. | - | Category details | | - | log | Select **AzureFirewallApplicationRule** and **AzureFirewallNetworkRule**. | - | Destination details | Select **Send to Log Analytics**. | - | Subscription | Select your subscription. | - | Log Analytics workspace | Select your Log Analytics workspace. | --6. Select **Save**. --## Create Azure SQL database --In this section, you create a private SQL Database. --1. On the upper-left side of the screen in the Azure portal, select **Create a resource** > **Databases** > **SQL Database**. --2. In **Create SQL Database - Basics**, enter or select this information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. 
You created this resource group in the previous section.| - | **Database details** | | - | Database name | Enter **mydatabase**. | - | Server | Select **Create new** and enter the following information. | - | Server name | Enter **mydbserver**. If this name is taken, enter a unique name. | - | Server admin sign in | Enter a name of your choosing. | - | Password | Enter a password of your choosing. | - | Confirm Password | Reenter password | - | Location | Select **(US) South Central US**. | - | Want to use SQL elastic pool | Leave the default **No**. | - | Compute + storage | Leave the default **General Purpose Gen5, 2 vCores, 32 GB Storage**. | - ||| --3. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. --4. When you see the **Validation passed** message, select **Create**. --## Create private endpoint --In this section, you create a private endpoint for the Azure SQL database in the previous section. --1. In the Azure portal, select **All resources** in the left-hand menu. --2. Select the Azure SQL server **mydbserver** in the list of services. If you used a different server name, choose that name. --3. In the server settings, select **Private endpoint connections** under **Security**. --4. Select **+ Private endpoint**. --5. In **Create a private endpoint**, enter or select this information in the **Basics** tab: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. | - | **Instance details** | | - | Name | Enter **SQLPrivateEndpoint**. | - | Region | Select **(US) South Central US.** | --6. Select the **Resource** tab or select **Next: Resource** at the bottom of the page. --7. In the **Resource** tab, enter or select this information: -- | Setting | Value | - | - | -- | - | Connection method | Select **Connect to an Azure resource in my directory**. 
| - | Subscription | Select your subscription. | - | Resource type | Select **Microsoft.Sql/servers**. | - | Resource | Select **mydbserver** or the name of the server you created in the previous step. - | Target subresource | Select **sqlServer**. | --8. Select the **Configuration** tab or select **Next: Configuration** at the bottom of the page. --9. In the **Configuration** tab, enter or select this information: -- | Setting | Value | - | - | -- | - | **Networking** | | - | Virtual network | Select **myPEVnet**. | - | Subnet | Select **PrivateEndpointSubnet**. | - | **Private DNS integration** | | - | Integrate with private DNS zone | Select **Yes**. | - | Subscription | Select your subscription. | - | Private DNS zones | Leave the default **privatelink.database.windows.net**. | --10. Select the **Review + create** tab or select **Review + create** at the bottom of the page. --11. Select **Create**. --12. After the endpoint is created, select **Firewalls and virtual networks** under **Security**. --13. In **Firewalls and virtual networks**, select **Yes** next to **Allow Azure services and resources to access this server**. --14. Select **Save**. --## Connect the virtual networks using virtual network peering --In this section, we connect virtual networks **myVMVNet** and **myPEVNet** to **myAzFwVNet** using peering. There isn't direct connectivity between **myVMVNet** and **myPEVNet**. --1. In the portal's search bar, enter **myAzFwVNet**. --2. Select **Peerings** under **Settings** menu and select **+ Add**. --3. In **Add Peering** enter or select the following information: -- | Setting | Value | - | - | -- | - | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myVMVNet**. | - | **Peer details** | | - | Virtual network deployment model | Leave the default **Resource Manager**. | - | I know my resource ID | Leave unchecked. | - | Subscription | Select your subscription. | - | Virtual network | Select **myVMVNet**. 
| - | Name of the peering from remote virtual network to myAzFwVNet | Enter **myVMVNet-to-myAzFwVNet**. | - | **Configuration** | | - | **Configure virtual network access settings** | | - | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. | - | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. | - | **Configure forwarded traffic settings** | | - | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. | - | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. | - | **Configure gateway transit settings** | | - | Allow gateway transit | Leave unchecked | --4. Select **OK**. --5. Select **+ Add**. --6. In **Add Peering** enter or select the following information: -- | Setting | Value | - | - | -- | - | Name of the peering from myAzFwVNet to remote virtual network | Enter **myAzFwVNet-to-myPEVNet**. | - | **Peer details** | | - | Virtual network deployment model | Leave the default **Resource Manager**. | - | I know my resource ID | Leave unchecked. | - | Subscription | Select your subscription. | - | Virtual network | Select **myPEVNet**. | - | Name of the peering from remote virtual network to myAzFwVNet | Enter **myPEVNet-to-myAzFwVNet**. | - | **Configuration** | | - | **Configure virtual network access settings** | | - | Allow virtual network access from myAzFwVNet to remote virtual network | Leave the default **Enabled**. | - | Allow virtual network access from remote virtual network to myAzFwVNet | Leave the default **Enabled**. | - | **Configure forwarded traffic settings** | | - | Allow forwarded traffic from remote virtual network to myAzFwVNet | Select **Enabled**. | - | Allow forwarded traffic from myAzFwVNet to remote virtual network | Select **Enabled**. | - | **Configure gateway transit settings** | | - | Allow gateway transit | Leave unchecked | --7. Select **OK**. 
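This hub-and-spoke peering only works because the three virtual network address spaces don't overlap. A quick, illustrative sanity check of the plan above, using Python's standard `ipaddress` module:

```python
import ipaddress

# Address spaces from this article's parameter tables.
vnets = {
    "myAzFwVNet": ipaddress.ip_network("10.0.0.0/16"),  # hub (Azure Firewall)
    "myVMVNet":   ipaddress.ip_network("10.1.0.0/16"),  # spoke (virtual machine)
    "myPEVNet":   ipaddress.ip_network("10.2.0.0/16"),  # spoke (private endpoint)
}

# Virtual network peering requires non-overlapping address spaces.
names = list(vnets)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        assert not vnets[a].overlaps(vnets[b]), f"{a} overlaps {b}"
print("no overlapping address spaces")
```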
--## Link the virtual networks to the private DNS zone --In this section, we link virtual networks **myVMVNet** and **myAzFwVNet** to the **privatelink.database.windows.net** private DNS zone. This zone was created when we created the private endpoint. --The link is required for the VM and firewall to resolve the FQDN of database to its private endpoint address. Virtual network **myPEVNet** was automatically linked when the private endpoint was created. -->[!NOTE] ->If you don't link the VM and firewall virtual networks to the private DNS zone, both the VM and firewall will still be able to resolve the SQL Server FQDN. They will resolve to its public IP address. --1. In the portal's search bar, enter **privatelink.database**. --2. Select **privatelink.database.windows.net** in the search results. --3. Select **Virtual network links** under **Settings**. --4. Select **+ Add** --5. In **Add virtual network link** enter or select the following information: -- | Setting | Value | - | - | -- | - | Link name | Enter **Link-to-myVMVNet**. | - | **Virtual network details** | | - | I know the resource ID of virtual network | Leave unchecked. | - | Subscription | Select your subscription. | - | Virtual network | Select **myVMVNet**. | - | **CONFIGURATION** | | - | Enable auto registration | Leave unchecked. | --6. Select **OK**. --## Configure an application rule with SQL FQDN in Azure Firewall --In this section, configure an application rule to allow communication between **myVM** and the private endpoint for SQL Server **mydbserver.database.windows.net**. --This rule allows communication through the firewall that we created in the previous steps. --1. In the portal's search bar, enter **myAzureFirewall**. --2. Select **myAzureFirewall** in the search results. --3. Select **Rules** under **Settings** in the **myAzureFirewall** overview. --4. Select the **Application rule collection** tab. --5. Select **+ Add application rule collection**. --6. 
In **Add application rule collection** enter or select the following information: -- | Setting | Value | - | - | -- | - | Name | Enter **SQLPrivateEndpoint**. | - | Priority | Enter **100**. | - | Action | Enter **Allow**. | - | **Rules** | | - | **FQDN tags** | | - | Name | Leave blank. | - | Source type | Leave the default **IP address**. | - | Source | Leave blank. | - | FQDN tags | Leave the default **0 selected**. | - | **Target FQDNs** | | - | Name | Enter **SQLPrivateEndpoint**. | - | Source type | Leave the default **IP address**. | - | Source | Enter **10.1.0.0/16**. | - | Protocol: Port | Enter **mssql:1433**. | - | Target FQDNs | Enter **mydbserver.database.windows.net**. | --7. Select **Add**. --## Route traffic between the virtual machine and private endpoint through Azure Firewall --We didn't create a virtual network peering directly between virtual networks **myVMVNet** and **myPEVNet**. The virtual machine **myVM** doesn't have a route to the private endpoint we created. --In this section, we create a route table with a custom route. --The route sends traffic from the **myVM** subnet to the address space of virtual network **myPEVNet**, through the Azure Firewall. --1. On the Azure portal menu or from the **Home** page, select **Create a resource**. --2. Type **route table** in the search box and press **Enter**. --3. Select **Route table** and then select **Create**. --4. On the **Create Route table** page, use the following table to configure the route table: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. | - | **Instance details** | | - | Region | Select **South Central US**. | - | Name | Enter **VMsubnet-to-AzureFirewall**. | - | Propagate gateway routes | Select **No**. | --5. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration. --6. 
When you see the **Validation passed** message, select **Create**. --7. Once the deployment completes select **Go to resource**. --8. Select **Routes** under **Settings**. --9. Select **+ Add**. --10. On the **Add route** page, enter, or select this information: -- | Setting | Value | - | - | -- | - | Route name | Enter **myVMsubnet-to-privateendpoint**. | - | Address prefix | Enter **10.2.0.0/16**. | - | Next hop type | Select **Virtual appliance**. | - | Next hop address | Enter **10.0.0.4**. | --11. Select **OK**. --12. Select **Subnets** under **Settings**. --13. Select **+ Associate**. --14. On the **Associate subnet** page, enter or select this information: -- | Setting | Value | - | - | -- | - | Virtual network | Select **myVMVNet**. | - | Subnet | Select **VMSubnet**. | --15. Select **OK**. --## Connect to the virtual machine from your client computer --Connect to the VM **myVm** from the internet as follows: --1. In the portal's search bar, enter **myVm-ip**. --2. Select **myVM-ip** in the search results. --3. Copy or write down the value under **IP address**. --4. If you're using Windows 10, run the following command using PowerShell. For other Windows client versions, use an SSH client like [Putty](https://www.putty.org/): --* Replace **username** with the admin username you entered during VM creation. --* Replace **IPaddress** with the IP address from the previous step. -- ```bash - ssh username@IPaddress - ``` --5. Enter the password you defined when creating **myVm** --## Access SQL Server privately from the virtual machine --In this section, you connect privately to the SQL Database using the private endpoint. --1. Enter `nslookup mydbserver.database.windows.net` - - You receive a message similar to the following output: -- ```output - Server: 127.0.0.53 - Address: 127.0.0.53#53 -- Non-authoritative answer: - mydbserver.database.windows.net canonical name = mydbserver.privatelink.database.windows.net. 
- Name: mydbserver.privatelink.database.windows.net - Address: 10.2.0.4 - ``` --2. Install [SQL Server command-line tools](/sql/linux/quickstart-install-connect-ubuntu#tools). --3. Run the following command to connect to the SQL Server. Use the server admin and password you defined when you created the SQL Server in the previous steps. --* Replace **\<ServerAdmin>** with the admin username you entered during the SQL server creation. --* Replace **\<YourPassword>** with the admin password you entered during SQL server creation. -- ```bash - sqlcmd -S mydbserver.database.windows.net -U '<ServerAdmin>' -P '<YourPassword>' - ``` -4. A SQL command prompt is displayed on successful sign in. Enter **exit** to exit the **sqlcmd** tool. --5. Close the connection to **myVM** by entering **exit**. --## Validate the traffic in Azure Firewall logs --1. In the Azure portal, select **All Resources** and select your Log Analytics workspace. --2. Select **Logs** under **General** in the Log Analytics workspace page. --3. Select the blue **Get Started** button. --4. In the **Example queries** window, select **Firewalls** under **All Queries**. --5. Select the **Run** button under **Application rule log data**. --6. In the log query output, verify **mydbserver.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **RuleCollection**. --## Clean up resources --When you're done using the resources, delete the resource group and all of the resources it contains: --1. Enter **myResourceGroup** in the **Search** box at the top of the portal and select **myResourceGroup** from the search results. --1. Select **Delete resource group**. --1. Enter **myResourceGroup** for **TYPE THE RESOURCE GROUP NAME** and select **Delete**. - ## Next steps -In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall. 
+In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall. -You connected to the VM and securely communicated to the database through Azure Firewall using private link. +For a tutorial on how to configure Azure Firewall to inspect traffic destined to a private endpoint, see [Tutorial: Inspect private endpoint traffic with Azure Firewall](tutorial-inspect-traffic-azure-firewall.md). To learn more about private endpoints, see [What is Azure Private Endpoint?](private-endpoint-overview.md). |
+In this article, you explored different scenarios that you can use to restrict traffic between a virtual machine and a private endpoint using Azure Firewall. -You connected to the VM and securely communicated to the database through Azure Firewall using private link. +For a tutorial on how to configure Azure Firewall to inspect traffic destined to a private endpoint, see [Tutorial: Inspect private endpoint traffic with Azure Firewall](tutorial-inspect-traffic-azure-firewall.md). To learn more about private endpoints, see [What is Azure Private Endpoint?](private-endpoint-overview.md). |
private-link | Private Endpoint Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md | A private-link resource is the destination target of a specified private endpoin | Azure Data Factory | Microsoft.DataFactory/factories | dataFactory | | Azure Data Explorer | Microsoft.Kusto/clusters | cluster | | Azure Database for MariaDB | Microsoft.DBforMariaDB/servers | mariadbServer |-| Azure Database for MySQL | Microsoft.DBforMySQL/servers | mysqlServer | +| Azure Database for MySQL - Single Server | Microsoft.DBforMySQL/servers | mysqlServer | +| Azure Database for MySQL - Flexible Server | Microsoft.DBforMySQL/flexibleServers | mysqlServer | | Azure Database for PostgreSQL - Single server | Microsoft.DBforPostgreSQL/servers | postgresqlServer | | Azure Device Provisioning Service | Microsoft.Devices/provisioningServices | iotDps | | Azure IoT Hub | Microsoft.Devices/IotHubs | iotHub | For complete, detailed information about recommendations to configure DNS for pr ## Limitations -The following information lists the known limitations to the use of private endpoints: +The following information lists the known limitations to the use of private endpoints: ++### Static IP address ++| Limitation | Description | +| | | +| Static IP address configuration currently unsupported. | **Azure Kubernetes Service (AKS)** </br> **Azure Application Gateway** </br> **HDInsight**. | ### Network security group |
private-link | Tutorial Inspect Traffic Azure Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-inspect-traffic-azure-firewall.md | + + Title: 'Tutorial: Inspect private endpoint traffic with Azure Firewall' +description: Learn how to inspect private endpoint traffic with Azure Firewall. +++++ Last updated : 08/15/2023+++# Tutorial: Inspect private endpoint traffic with Azure Firewall ++Azure Private Endpoint is the fundamental building block for Azure Private Link. Private endpoints enable Azure resources deployed in a virtual network to communicate privately with private link resources. ++Private endpoints allow resources access to the private link service deployed in a virtual network. Access to the private endpoint through virtual network peering and on-premises network connections extend the connectivity. ++You may need to inspect or block traffic from clients to the services exposed via private endpoints. Complete this inspection by using [Azure Firewall](../firewall/overview.md) or a third-party network virtual appliance. ++For more information and scenarios that involve private endpoints and Azure Firewall, see [Azure Firewall scenarios to inspect traffic destined to a private endpoint](inspect-traffic-with-azure-firewall.md). ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> * Create a virtual network and bastion host for the test virtual machine. +> * Create the private endpoint virtual network. +> * Create a test virtual machine. +> * Deploy Azure Firewall. +> * Create an Azure SQL database. +> * Create a private endpoint for Azure SQL. +> * Create a network peer between the private endpoint virtual network and the test virtual machine virtual network. +> * Link the virtual networks to a private DNS zone. +> * Configure application rules in Azure Firewall for Azure SQL. +> * Route traffic between the test virtual machine and Azure SQL through Azure Firewall. 
+> * Test the connection to Azure SQL and validate in Azure Firewall logs. ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Prerequisites ++- An Azure account with an active subscription. ++- A Log Analytics workspace. For more information about the creation of a log analytics workspace, see [Create a Log Analytics workspace in the Azure portal](../azure-monitor/logs/quick-create-workspace.md). ++## Sign in to the Azure portal ++Sign in to the [Azure portal](https://portal.azure.com). +++++## Deploy Azure Firewall ++1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results. ++1. In **Firewalls**, select **+ Create**. ++1. Enter or select the following information in the **Basics** tab of **Create a firewall**: ++ | Setting | Value | + ||| + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Name | Enter **firewall**. | + | Region | Select **East US 2**. | + | Availability zone | Select **None**. | + | Firewall SKU | Select **Standard**. | + | Firewall management | Select **Use a Firewall Policy to manage this firewall**. | + | Firewall policy | Select **Add new**. </br> Enter **firewall-policy** in **Policy name**. </br> Select **East US 2** in region. </br> Select **OK**. | + | Choose a virtual network | Select **Create new**. | + | Virtual network name | Enter **vnet-firewall**. | + | Address space | Enter **10.2.0.0/16**. | + | Subnet address space | Enter **10.2.1.0/26**. | + | Public IP address | Select **Add new**. </br> Enter **public-ip-firewall** in **Name**. </br> Select **OK**. | ++1. Select **Review + create**. ++1. Select **Create**. ++Wait for the firewall deployment to complete before you continue. 
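The subnet values entered above can be sanity-checked locally. Azure Firewall requires a dedicated subnet named **AzureFirewallSubnet** with a prefix of at least /26; a minimal sketch with the standard library, using the values from this tutorial:

```python
import ipaddress

# Values entered in the firewall creation blade above.
vnet_firewall = ipaddress.ip_network("10.2.0.0/16")    # vnet-firewall address space
firewall_subnet = ipaddress.ip_network("10.2.1.0/26")  # AzureFirewallSubnet

# The firewall subnet must sit inside the virtual network's address space...
assert firewall_subnet.subnet_of(vnet_firewall)
# ...and AzureFirewallSubnet must be a /26 or larger.
assert firewall_subnet.prefixlen <= 26
print(f"{firewall_subnet} fits in {vnet_firewall} and meets the /26 minimum")
```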
++## Enable firewall logs ++In this section, you enable the firewall logs and send them to the log analytics workspace. ++> [!NOTE] +> You must have a log analytics workspace in your subscription before you can enable firewall logs. For more information, see [Prerequisites](#prerequisites). ++1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results. ++1. Select **firewall**. ++1. In **Monitoring** select **Diagnostic settings**. ++1. Select **+ Add diagnostic setting**. ++1. In **Diagnostic setting** enter or select the following information: ++ | Setting | Value | + ||| + | Diagnostic setting name | Enter **diagnostic-setting-firewall**. | + | **Logs** | | + | Categories | Select **Azure Firewall Application Rule (Legacy Azure Diagnostics)** and **Azure Firewall Network Rule (Legacy Azure Diagnostics)**. | + | **Destination details** | | + | Destination | Select **Send to Log Analytics workspace**. | + | Subscription | Select your subscription. | + | Log Analytics workspace | Select your log analytics workspace. | ++1. Select **Save**. ++## Create an Azure SQL database ++1. In the search box at the top of the portal, enter **SQL**. Select **SQL databases** in the search results. ++1. In **SQL databases**, select **+ Create**. ++1. In the **Basics** tab of **Create SQL Database**, enter or select the following information: ++ | Setting | Value | + ||| + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Database details** | | + | Database name | Enter **sql-db**. | + | Server | Select **Create new**. </br> Enter **sql-server-1** in **Server name** (Server names must be unique, replace **sql-server-1** with a unique value). </br> Select **(US) East US 2** in **Location**. </br> Select **Use SQL authentication**. </br> Enter a server admin sign-in and password. </br> Select **OK**. | + | Want to use SQL elastic pool? | Select **No**. 
| + | Workload environment | Leave the default of **Production**. | + | **Backup storage redundancy** | | + | Backup storage redundancy | Select **Locally redundant backup storage**. | ++1. Select **Next: Networking**. ++1. In the **Networking** tab of **Create SQL Database**, enter or select the following information: ++ | Setting | Value | + ||| + | **Network connectivity** | | + | Connectivity method | Select **Private endpoint**. | + | **Private endpoints** | | + | Select **+Add private endpoint**. | | + | **Create private endpoint** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | Location | Select **East US 2**. | + | Name | Enter **private-endpoint-sql**. | + | Target subresource | Select **SqlServer**. | + | **Networking** | | + | Virtual network | Select **vnet-private-endpoint**. | + | Subnet | Select **subnet-private-endpoint**. | + | **Private DNS integration** | | + | Integrate with private DNS zone | Select **Yes**. | + | Private DNS zone | Leave the default of **privatelink.database.windows.net**. | ++1. Select **OK**. ++1. Select **Review + create**. ++1. Select **Create**. ++## Connect virtual networks with virtual network peering ++In this section, you connect the virtual networks with virtual network peering. The networks **vnet-1** and **vnet-private-endpoint** are connected to **vnet-firewall**. There isn't direct connectivity between **vnet-1** and **vnet-private-endpoint**. ++1. In the search box at the top of the portal, enter **Virtual networks**. Select **Virtual networks** in the search results. ++1. Select **vnet-firewall**. ++1. In **Settings** select **Peerings**. ++1. In **Peerings** select **+ Add**. ++1. In **Add peering**, enter or select the following information: ++ | Setting | Value | + ||| + | **This virtual network** | | + | Peering link name | Enter **vnet-firewall-to-vnet-1**. | + | Traffic to remote virtual network | Select **Allow (default)**. 
| + | Traffic forwarded from remote virtual network | Select **Allow (default)**. | + | Virtual network gateway or Route Server | Select **None (default)**. | + | **Remote virtual network** | | + | Peering link name | Enter **vnet-1-to-vnet-firewall**. | + | Virtual network deployment model | Select **Resource manager**. | + | Subscription | Select your subscription. | + | Virtual network | Select **vnet-1**. | + | Traffic to remote virtual network | Select **Allow (default)**. | + | Traffic forwarded from remote virtual network | Select **Allow (default)**. | + | Virtual network gateway or Route Server | Select **None (default)**. | ++1. Select **Add**. ++1. In **Peerings** select **+ Add**. ++1. In **Add peering**, enter or select the following information: ++ | Setting | Value | + ||| + | **This virtual network** | | + | Peering link name | Enter **vnet-firewall-to-vnet-private-endpoint**. | + | Traffic to remote virtual network | Select **Allow (default)**. | + | Traffic forwarded from remote virtual network | Select **Allow (default)**. | + | Virtual network gateway or Route Server | Select **None (default)**. | + | **Remote virtual network** | | + | Peering link name | Enter **vnet-private-endpoint-to-vnet-firewall**. | + | Virtual network deployment model | Select **Resource manager**. | + | Subscription | Select your subscription. | + | Virtual network | Select **vnet-private-endpoint**. | + | Traffic to remote virtual network | Select **Allow (default)**. | + | Traffic forwarded from remote virtual network | Select **Allow (default)**. | + | Virtual network gateway or Route Server | Select **None (default)**. | ++1. Select **Add**. ++1. Verify the **Peering status** displays **Connected** for both network peers. ++## Link the virtual networks to the private DNS zone ++The private DNS zone created during the private endpoint creation in the previous section must be linked to the **vnet-1** and **vnet-firewall** virtual networks. ++1. 
In the search box at the top of the portal, enter **Private DNS zone**. Select **Private DNS zones** in the search results. ++1. Select **privatelink.database.windows.net**. ++1. In **Settings** select **Virtual network links**. ++1. Select **+ Add**. ++1. In **Add virtual network link**, enter or select the following information: ++ | Setting | Value | + ||| + | **Virtual network link** | | + | Virtual network link name | Enter **link-to-vnet-1**. | + | Subscription | Select your subscription. | + | Virtual network | Select **vnet-1 (test-rg)**. | + | Configuration | Leave the default of unchecked for **Enable auto registration**. | ++1. Select **OK**. ++1. Select **+ Add**. ++1. In **Add virtual network link**, enter or select the following information: ++ | Setting | Value | + ||| + | **Virtual network link** | | + | Virtual network link name | Enter **link-to-vnet-firewall**. | + | Subscription | Select your subscription. | + | Virtual network | Select **vnet-firewall (test-rg)**. | + | Configuration | Leave the default of unchecked for **Enable auto registration**. | ++1. Select **OK**. ++## Create route between vnet-1 and vnet-private-endpoint ++A network link between **vnet-1** and **vnet-private-endpoint** doesn't exist. You must create a route to allow traffic to flow between the virtual networks through Azure Firewall. ++The route sends traffic from **vnet-1** to the address space of virtual network **vnet-private-endpoint**, through the Azure Firewall. ++1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results. ++1. Select **+ Create**. ++1. In the **Basics** tab of **Create Route table**, enter or select the following information: ++ | Setting | Value | + ||| + | **Project details** | | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Region | Select **East US 2**. | + | Name | Enter **vnet-1-to-vnet-firewall**. 
| + | Propagate gateway routes | Leave the default of **Yes**. | ++1. Select **Review + create**. ++1. Select **Create**. ++1. In the search box at the top of the portal, enter **Route tables**. Select **Route tables** in the search results. ++1. Select **vnet-1-to-vnet-firewall**. ++1. In **Settings** select **Routes**. ++1. Select **+ Add**. ++1. In **Add route**, enter or select the following information: ++ | Setting | Value | + ||| + | Route name | Enter **subnet-1-to-subnet-private-endpoint**. | + | Destination type | Select **IP Addresses**. | + | Destination IP addresses/CIDR ranges | Enter **10.1.0.0/16**. | + | Next hop type | Select **Virtual appliance**. | + | Next hop address | Enter **10.2.1.4**. | ++1. Select **Add**. ++1. In **Settings**, select **Subnets**. ++1. Select **+ Associate**. ++1. In **Associate subnet**, enter or select the following information: ++ | Setting | Value | + ||| + | Virtual network | Select **vnet-1(test-rg)**. | + | Subnet | Select **subnet-1**. | ++1. Select **OK**. ++## Configure an application rule in Azure Firewall ++Create an application rule to allow communication from **vnet-1** to the private endpoint of the Azure SQL server **sql-server-1.database.windows.net**. Replace **sql-server-1** with the name of your Azure SQL server. + +1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall Policies** in the search results. ++1. In **Firewall Policies**, select **firewall-policy**. ++1. In **Settings** select **Application rules**. ++1. Select **+ Add a rule collection**. ++1. In **Add a rule collection**, enter or select the following information: ++ | Setting | Value | + ||| + | Name | Enter **rule-collection-sql**. | + | Rule collection type | Leave the selection of **Application**. | + | Priority | Enter **100**. | + | Rule collection action | Select **Allow**. | + | Rule collection group | Leave the default of **DefaultApplicationRuleCollectionGroup**. 
| + | **Rules** | | + | **Rule 1** | | + | Name | Enter **SQLPrivateEndpoint**. | + | Source type | Select **IP Address**. | + | Source | Enter **10.0.0.0/16**. | + | Protocol | Enter **mssql:1433**. | + | Destination type | Select **FQDN**. | + | Destination | Enter **sql-server-1.database.windows.net**. | ++1. Select **Add**. ++## Test connection to Azure SQL from virtual machine ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **vm-1**. ++1. In **Operations** select **Bastion**. ++1. Enter the username and password for the virtual machine. ++1. Select **Connect**. ++1. To verify name resolution of the private endpoint, enter the following command in the terminal window: ++ ```bash + nslookup sql-server-1.database.windows.net + ``` ++ You receive a message similar to the following example. The IP address returned is the private IP address of the private endpoint. ++ ```output + Server: 127.0.0.53 + Address: 127.0.0.53#53 ++ Non-authoritative answer: + sql-server-8675.database.windows.net canonical name = sql-server-8675.privatelink.database.windows.net. + Name: sql-server-8675.privatelink.database.windows.net + Address: 10.1.0.4 + ``` ++1. Install the SQL server command line tools from [Install the SQL Server command-line tools sqlcmd and bcp on Linux](/sql/linux/sql-server-linux-setup-tools). Proceed with the next steps after the installation is complete. ++1. Use the following commands to connect to the SQL server you created in the previous steps. ++ * Replace **\<server-admin>** with the admin username you entered during the SQL server creation. ++ * Replace **\<admin-password>** with the admin password you entered during SQL server creation. ++ * Replace **sql-server-1** with the name of your SQL server. ++ ```bash + sqlcmd -S sql-server-1.database.windows.net -U '<server-admin>' -P '<admin-password>' + ``` ++1. A SQL command prompt is displayed on successful sign in.
Enter **exit** to exit the **sqlcmd** tool. ++## Validate traffic in the Azure Firewall logs ++1. In the search box at the top of the portal, enter **Log Analytics**. Select **Log Analytics** in the search results. ++1. Select your log analytics workspace. In this example, the workspace is named **log-analytics-workspace**. ++1. In the General settings, select **Logs**. ++1. In the **Queries** search box, enter **Application rule**. In the returned results under **Network**, select the **Run** button for **Application rule log data**. ++1. In the log query output, verify **sql-server-1.database.windows.net** is listed under **FQDN** and **SQLPrivateEndpoint** is listed under **Rule**. +++## Next steps ++Advance to the next article to learn how to use a private endpoint with Azure Private Resolver: +> [!div class="nextstepaction"] +> [Create a private endpoint DNS infrastructure with Azure Private Resolver for an on-premises workload](tutorial-dns-on-premises-private-resolver.md) |
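The portal-based log validation above can also be scripted. A hedged sketch using the Azure CLI `log-analytics` extension; the workspace GUID is a placeholder, and the query assumes the workspace receives legacy `AzureDiagnostics` firewall logs (resource-specific tables such as `AZFWApplicationRule` would need a different query):

```shell
# Query Azure Firewall application-rule log entries for the SQL FQDN.
# <workspace-guid> is the customer ID of log-analytics-workspace (a placeholder).
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "AzureDiagnostics
    | where Category == 'AzureFirewallApplicationRule'
    | where msg_s has 'sql-server-1.database.windows.net'
    | project TimeGenerated, msg_s
    | take 10" \
  --output table
```

If the rule is matching, the returned entries reference the **SQLPrivateEndpoint** rule and the SQL server FQDN.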
public-multi-access-edge-compute-mec | Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md | Title: Key concepts for Azure public MEC description: Learn about important concepts for Azure public multi-access edge compute (MEC). --++ Last updated 11/22/2022 |
quotas | Quotas Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/quotas/quotas-overview.md | Title: Quotas overview description: Learn how to view quotas and request increases in the Azure portal. Previously updated : 07/22/2022 Last updated : 08/17/2023 -The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings?view=o365-worldwide). +The concept of quotas is designed to help protect customers from things like inaccurately resourced deployments and mistaken consumption. For Azure, it helps minimize risks from deceptive or inappropriate consumption and unexpected demand. Quotas are set and enforced in the scope of the [subscription](/microsoft-365/enterprise/subscriptions-licenses-accounts-and-tenants-for-microsoft-cloud-offerings). ## Quotas or limits? Different entry points, data views, actions, and programming options are availab | Option | Azure portal | Quota APIs | Support API | |||||-| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota API](/rest/api/reserved-vm-instances/quotaapi) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location.
| The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. | +| Summary | The portal provides a customer-friendly user interface for accessing quota information.<br><br>From [Azure Home](https://portal.azure.com/#home), **Quotas** is a centralized location to directly view quotas and quota usage and request quota increases.<br><br>From the Subscriptions page, **Quotas + usage** offers quick access to requesting quota increases for a given subscription.| The [Azure Quota Service REST API](/rest/api/quota) programmatically provides the ability to get current quota limits, find current usage, and request quota increases by subscription, resource provider, and location. | The [Azure Support REST API](/rest/api/support/) enables customers to create service quota support tickets programmatically. | | Availability | All customers | All customers | All customers with unified, premier, professional direct support plans | | Which to choose? | Useful for customers desiring a central location and an efficient visual interface for viewing and managing quotas. Provides quick access to requesting quota increases. | Useful for customers who want granular and programmatic control of quota management for adjustable quotas. Intended for end to end automation of quota usage validation and quota increase requests through APIs. | Customers who want end to end automation of support request creation and management. Provides an alternative path to Azure portal for requests. | | Providers supported | All providers | Compute, Machine Learning | All providers | |
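To illustrate the Quota APIs column above, here's a hedged sketch that calls the Microsoft.Quota provider through `az rest`; the subscription ID is a placeholder, and the `api-version` shown is an assumption that may need updating:

```shell
# List current Compute quota limits for a region via the Azure Quota REST API.
SUB="<subscription-id>"
SCOPE="subscriptions/$SUB/providers/Microsoft.Compute/locations/eastus"
az rest --method get \
  --url "https://management.azure.com/$SCOPE/providers/Microsoft.Quota/quotas?api-version=2023-02-01"
```

A matching `.../providers/Microsoft.Quota/usages` call returns current usage, which you can compare against the limits before requesting an increase.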
reliability | Reliability Guidance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md | Azure reliability guidance contains the following: | **Products** | | | |[Azure Cosmos DB](../cosmos-db/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|-[Azure Database for PostgreSQL - Flexible Server](reliability-postgre-flexible.md)| +[Azure Database for PostgreSQL - Flexible Server](reliability-postgresql-flexible-server.md)| [Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| Azure reliability guidance contains the following: [Azure Storage Mover](reliability-azure-storage-mover.md)| [Azure Virtual Machine Scale Sets](../virtual-machines/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Virtual Machines](reliability-virtual-machines.md)|+[Azure Virtual Machines Image Builder](reliability-image-builder.md)| [Azure Virtual Network](../vpn-gateway/create-zone-redundant-vnet-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure VPN Gateway](../vpn-gateway/about-zone-redundant-vnet-gateways.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |
reliability | Reliability Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-image-builder.md | + + Title: Reliability in Azure Image Builder +description: Find out about reliability in Azure Image Builder ++++++ Last updated : 08/22/2023+++# Reliability in Azure Image Builder (AIB) ++This article describes reliability support in Azure Image Builder. Azure Image Builder doesn't currently support availability zones; however, it does support [cross-regional resiliency with disaster recovery](#disaster-recovery-cross-region-failover). +++Azure Image Builder (AIB) is a regional service with a cluster that serves single regions. The AIB regional setup keeps data and resources within the regional boundary. The AIB service doesn't fail over its cluster or SQL database in region-down scenarios. +++For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). +++## Disaster recovery: cross-region failover ++If a region-wide disaster occurs, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). ++To ensure fast and easy recovery for Azure Image Builder (AIB), it's recommended that you run an image template in region pairs or multiple regions when designing your AIB solution. You should also replicate resources from the start when you're setting up your image templates. +++### Cross-region disaster recovery in multi-region geography ++When a regional disaster occurs, Microsoft is responsible for outage detection, notifications, and support for AIB. However, you're responsible for setting up disaster recovery for the control (service side) and data planes.
+++ #### Outage detection, notification, and management ++Microsoft sends a notification if there's an outage in the Azure Image Builder (AIB) Service. One common outage symptom is image templates getting 500 errors when attempting to run. You can review Azure Image Builder outage notifications and status updates through [support request management.](../azure-portal/supportability/how-to-manage-azure-support-request.md) +++#### Set up disaster recovery and outage detection ++You're responsible for setting up disaster recovery for your Azure Image Builder (AIB) environment, as there isn't a region failover at the AIB service side. You need to configure both the control plane (service side) and data plane. ++It's recommended that you create an AIB resource in another nearby region, into which you can replicate your resources. For more information, see the [supported regions](../virtual-machines/image-builder-overview.md#regions) and what resources are included in an [AIB creation](/azure/virtual-machines/image-builder-overview#how-it-works). ++### Single-region geography disaster recovery ++In the case of a disaster in a single-region geography, you still need to get an image template resource from that region even when the region isn't available. You can either maintain a copy of an image template locally or use [Azure Resource Graph](../governance/resource-graph/index.yml) from the Azure portal to get an image template resource. ++To get an image template resource using Resource Graph from the Azure portal: ++1. Go to the search bar in Azure portal and search for *resource graph explorer*. ++ ![Screenshot of Azure Resource Graph Explorer in the portal.](../virtual-machines//media/image-builder-reliability/resource-graph-explorer-portal.png#lightbox) ++1. Use the search bar on the far left to search resource by type and name to see how the details give you properties of the image template.
The *See details* option on the bottom right shows the image template's properties attribute and tags separately. Template name, location, ID, and tenant ID can be used to get the correct image template resource. ++ ![Screenshot of using Azure Resource Graph Explorer search.](../virtual-machines//media/image-builder-reliability/resource-graph-explorer-search.png#lightbox) +++### Capacity and proactive disaster recovery resiliency ++Microsoft and its customers operate under the [shared responsibility model](./business-continuity-management-program.md#shared-responsibility-model). In customer-enabled DR (customer-responsible services), you're responsible for addressing DR for any service you deploy and control. To ensure that recovery is proactive, you should always pre-deploy secondaries. Without pre-deployed secondaries, there's no guarantee of capacity at time of impact. ++When planning where to replicate a template, consider: ++- AIB region availability: + - Choose [AIB supported regions](../virtual-machines//image-builder-overview.md#regions) close to your users. + - AIB continually expands into new regions. +- Azure paired regions: + - For your geographic area, choose two regions paired together. + - Recovery efforts are prioritized for paired regions when needed. ++## Additional guidance ++For details about how your data is processed, see the Azure Image Builder [data residency](../virtual-machines//linux/image-builder-json.md#data-residency) information. +++## Next steps ++- [Reliability in Azure](../reliability/overview.md) +- [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md) +- [Azure Image Builder overview](../virtual-machines//image-builder-overview.md) |
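The Resource Graph lookup described in the steps above can also be scripted, which is useful when the home region (and its portal views) is unavailable. A minimal sketch, assuming the Azure CLI `resource-graph` extension is installed:

```shell
# List image template resources across subscriptions so template details
# (name, location, ID, tenant ID) stay recoverable during a regional outage.
az graph query -q "Resources
  | where type =~ 'microsoft.virtualmachineimages/imagetemplates'
  | project name, location, id, tenantId"
```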
reliability | Reliability Postgre Flexible | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgre-flexible.md | - Title: Reliability and high availability in Azure Database for PostgreSQL - Flexible Server -description: Find out about reliability and high availability in Azure Database for PostgreSQL - Flexible Server ----- Previously updated : 08/04/2023---<!--#Customer intent: I want to understand reliability support in Azure Database for PostgreSQL - Flexible Server so that I can respond to and/or avoid failures in order to minimize downtime and data loss. --> --# High availability (Reliability) in Azure Database for PostgreSQL - Flexible Server ----This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). --Azure Database for PostgreSQL: Flexible Server offers high availability support by provisioning physically separate primary and standby replica either within the same availability zone (zonal) or across availability zones (zone-redundant). This high availability model is designed to ensure that committed data is never lost in the case of failures. The model is also designed so that the database doesn't become a single point of failure in your software architecture. For more information on high availability and availability zone support, see [Availability zone support](#availability-zone-support). ---## Availability zone support ---Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md#azure-services-with-availability-zone-support) for high availability configurations. 
Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events. --- **Zone-redundant**. Zone redundant high availability deploys a standby replica in a different zone with automatic failover capability. Zone redundancy provides the highest level of availability, but requires you to configure application redundancy across zones. For that reason, choose zone redundancy when you want protection from availability zone level failures and when latency across the availability zones is acceptable. -- You can choose the region and the availability zones for both primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with a similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs, or WAL) are stored on locally redundant storage (LRS) within each availability zone, automatically storing **three** data copies. A zone-redundant configuration provides physical isolation of the entire stack between primary and standby servers. -- :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Pictures illustrating redundant high availability architecture."::: --- **Zonal**. Choose a zonal deployment when you want to achieve the highest level of availability within a single availability zone, but with the lowest network latency. You can choose the region and the availability zone to deploy your primary database server. A standby replica server is *automatically* provisioned and managed in the *same* availability zone - with similar compute, storage, and network configuration - as the primary server. A zonal configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events.
Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. -- :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Pictures illustrating zonal high availability architecture."::: - -->[!NOTE] ->Both zonal and zone-redundant deployment models architecturally behave the same. Various discussions in the following sections apply to both unless called out otherwise. --### Prerequisites --**Zone redundancy:** --- The **zone-redundancy** option is only available in [regions that support availability zones](../postgresql/flexible-server/overview.md#azure-regions).--- Zone redundancy is **not** supported for:-- - Azure Database for PostgreSQL - Single Server SKU. - - Burstable compute tier. - - Regions with single-zone availability. --**Zonal:** --- The **zonal** deployment option is available in all [Azure regions](../postgresql/flexible-server/overview.md#azure-regions) where you can deploy Flexible Server. ---### High availability features --* A standby replica is deployed in the same VM configuration - including vCores, storage, network settings - as the primary server. --* You can add availability zone support for an existing database server. --* You can remove the standby replica by disabling high availability. --* You can choose availability zones for your primary and standby database servers for zone-redundant availability. --* Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time. --* In zone-redundant and zonal models, automatic backups are performed periodically from the primary database server. At the same time, the transaction logs are continuously archived in the backup storage from the standby replica.
If the region supports availability zones, backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS). --* Clients always connect to the end hostname of the primary database server. --* Any changes to the server parameters are also applied to the standby replica. --* Ability to restart the server to pick up any static server parameter changes. - -* Periodic maintenance activities such as minor version upgrades happen on the standby first, and then the service is failed over to reduce downtime. --### High availability limitations --* Due to synchronous replication to the standby server, especially with a zone-redundant configuration, applications can experience elevated write and commit latency. --* Standby replica cannot be used for read queries. --* Depending on the workload and activity on the primary server, the failover process might take longer than 120 seconds due to the recovery involved at the standby replica before it can be promoted. --* The standby server typically recovers WAL files at 40 MB/s. If your workload exceeds this limit, you may encounter extended time for the recovery to complete either during the failover or after establishing a new standby. --* Configuring for availability zones induces some latency to writes and commits; there's no impact on read queries. The performance impact varies depending on your workload. As a general guideline, the impact on writes and commits can be around 20-30%. --* Restarting the primary database server also restarts the standby replica. --* Configuring extra standbys is not supported. --* Customer-initiated management tasks can't be scheduled during the managed maintenance window. --* Planned events such as scale compute and scale storage happen on the standby first and then on the primary server. Currently, the server doesn't fail over for these planned operations.
--* If logical decoding or logical replication is configured with an availability-configured Flexible Server, in the event of a failover to the standby server, the logical replication slots are not copied over to the standby server. --* Configuring availability zones between private (VNET) and public access isn't supported. You must configure availability zones within a VNET (spanned across availability zones within a region) or public access. --* Availability zones are configured only within a single region. Availability zones cannot be configured across regions. --### SLA --- **Zone-redundant** model offers uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql).--- **Zonal** model offers uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql).--### Create an Azure Database for PostgreSQL - Flexible Server with availability zone enabled --To learn how to create an Azure Database for PostgreSQL - Flexible Server for high availability with availability zones, see [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal). --### Availability zone redeployment and migration --To learn how to enable or disable high availability configuration in your flexible server in both zone-redundant and zonal deployment models see [Manage high availability in Flexible Server](../postgresql/flexible-server/how-to-manage-high-availability-portal.md). ---### High availability components and workflow --#### Transaction completion --Application transaction-triggered writes and commits are first logged to the WAL on the primary server. They are then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server storage, the primary server acknowledges write completion. Only then are the writes confirmed to the application. The extra round-trip adds latency to your application.
The percentage of impact depends on the application. This acknowledgment process does not wait for the logs to be applied to the standby server. The standby server is permanently in recovery mode until it is promoted. --#### Health check --Flexible server health monitoring periodically checks for both the primary and standby health. If, after multiple pings, health monitoring detects that a primary server isn't reachable, the service then initiates an automatic failover to the standby server. The health monitoring algorithm is based on multiple data points to avoid false positive situations. --#### Failover modes --Flexible server supports two failover modes, [**Planned failover**](#planned-failover) and [**Unplanned failover**](#unplanned-failover). In both modes, once the replication is severed, the standby server runs the recovery before being promoted as a primary and opens for read/write. With automatic DNS entries updated with the new primary server endpoint, applications can connect to the server using the same endpoint. A new standby server is established in the background, so that your application can maintain connectivity. ---#### High availability status --The health of primary and standby servers is continuously monitored, and appropriate actions are taken to remediate issues, including triggering a failover to the standby server. The table below lists the possible high availability statuses: --| **Status** | **Description** | -| - | | -| **Initializing** | In the process of creating a new standby server. | -| **Replicating Data** | After the standby is created, it is catching up with the primary. | -| **Healthy** | Replication is in steady state and healthy. | -| **Failing Over** | The database server is in the process of failing over to the standby. | -| **Removing Standby** | In the process of deleting standby server. | -| **Not Enabled** | Zone redundant high availability is not enabled.
| -- ->[!NOTE] -> You can enable high availability during server creation or at a later time as well. If you are enabling or disabling high availability during the post-create stage, operating when the primary server activity is low is recommended. --#### Steady-state operations --PostgreSQL client applications are connected to the primary server using the DB server name. Application reads are served directly from the primary server. At the same time, commits and writes are confirmed to the application only after the log data is persisted on both the primary server and the standby replica. Due to this extra round-trip, applications can expect elevated latency for writes and commits. You can monitor the health of the high availability on the portal. ---1. Clients connect to the flexible server and perform write operations. -2. Changes are replicated to the standby site. -3. Primary receives an acknowledgment. -4. Writes/commits are acknowledged. --#### Point-in-time restore of high availability servers --For flexible servers configured with high availability, log data is replicated in real-time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates, are replicated to the standby replica. So, you cannot use standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from the backup. Using a flexible server's point-in-time restore capability, you can restore to the time before the error occurred. A new database server is restored as a single-zone flexible server with a new user-provided server name for databases configured with high availability. 
You can use the restored server for a few use cases: --- You can use the restored server for production and optionally enable zone-redundant high availability.--- If you want to restore an object, export it from the restored database server and import it to your production database server.-- If you want to clone your database server for testing and development purposes or to restore for any other purposes, you can perform the point-in-time restore.--To learn how to do a point-in-time restore of a flexible server, see [Point-in-time restore of a flexible server](/azure/postgresql/flexible-server/how-to-restore-server-portal). --### Failover Support --#### Planned failover --Planned downtime events include Azure scheduled periodic software updates and minor version upgrades. You can also use a planned failover to return the primary server to a preferred availability zone. When configured in high availability, these operations are first applied to the standby replica while the applications continue to access the primary server. Once the standby replica is updated, primary server connections are drained, and a failover is triggered, which activates the standby replica to be the primary with the same database server name. Client applications have to reconnect with the same database server name to the new primary server and can resume their operations. A new standby server is established in the same zone as the old primary. --For other user-initiated operations such as scale-compute or scale-storage, the changes are applied on the standby first, followed by the primary. Currently, the service is not failed over to the standby, and hence while the scale operation is carried out on the primary server, applications will encounter a short downtime. --You can also use this feature to fail over to the standby server with reduced downtime. For example, after an unplanned failover, your primary server could be in a different availability zone than your application.
You want to bring the primary server back to the previous zone to colocate with your application. --When executing this feature, the standby server is first prepared to ensure it is caught up with recent transactions, allowing the application to continue performing reads/writes. The standby is then promoted, and the connections to the primary are severed. Your application can continue to write to the primary while a new standby server is established in the background. The following are the steps involved with planned failover. --| **Step** | **Description** | **App downtime expected?** | - | - | | -- | - | 1 | Wait for the standby server to have caught-up with the primary. | No | - | 2 | Internal monitoring system initiates the failover workflow. | No | - | 3 | Application writes are blocked when the standby server is close to the primary log sequence number (LSN). | Yes | - | 4 | Standby server is promoted to be an independent server. | Yes | - | 5 | DNS record is updated with the new standby server's IP address. | Yes | - | 6 | Application to reconnect and resume its read/write with new primary | No | - | 7 | A new standby server in another zone is established. | No | - | 8 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No | - | 9 | A steady state between the primary and the standby server is established. | No | - | 10 | Planned failover process is complete. | No | --Application downtime starts at step #3 and can resume operation post step #5. The rest of the steps happen in the background without impacting application writes and commits. --->[!TIP] ->With flexible server, you can optionally schedule Azure-initiated maintenance activities by choosing a 60-minute window on a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that window. 
If you don't choose a custom window, a system-allocated one-hour window between 11 PM and 7 AM local time is selected for your server. - ->These Azure-initiated maintenance activities are also performed on the standby replica for flexible servers that are configured with availability zones. ---For a list of possible planned downtime events, see [Planned downtime events](/azure/postgresql/flexible-server/concepts-business-continuity#planned-downtime-events) --#### Unplanned failover --Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention. --For information on unplanned failovers and downtime, including possible scenarios, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation). ---#### Failover testing (forced failover) --With a forced failover, you can simulate an unplanned outage scenario while running your production workload and observe your application downtime. You can also use a forced failover when your primary server becomes unresponsive. --A forced failover brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process up to the last committed data, it is promoted to be the primary server. DNS records are updated, and your application can connect to the promoted primary server.
Your application can continue to write to the primary while a new standby server is established in the background, which doesn't impact the uptime. --The following are the steps during forced failover: -- | **Step** | **Description** | **App downtime expected?** | - | - | | -- | - | 1 | Primary server is stopped shortly after receiving the failover request. | Yes | - | 2 | Application encounters downtime as the primary server is down. | Yes | - | 3 | Internal monitoring system detects the failure and initiates a failover to the standby server. | Yes | - | 4 | Standby server enters recovery mode before being fully promoted as an independent server. | Yes | - | 5 | The failover process waits for the standby recovery to complete. | Yes | - | 6 | Once the server is up, the DNS record is updated with the same hostname but using the standby's IP address. | Yes | - | 7 | Application can reconnect to the new primary server and resume the operation. | No | - | 8 | A standby server in the preferred zone is established. | No | - | 9 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No | - | 10 | A steady state between the primary and the standby server is established. | No | - | 11 | Forced failover process is complete. | No | --Application downtime is expected to start after step #1 and persists until step #6 is completed. The rest of the steps happen in the background without impacting the application writes and commits. -->[!Important] ->The end-to-end failover process includes (a) failing over to the standby server after the primary failure and (b) establishing a new standby server in a steady state. As your application incurs downtime until the failover to the standby is complete, **please measure the downtime from your application/client perspective** instead of the overall end-to-end failover process. 
---#### Considerations while performing forced failovers --* The overall end-to-end operation time may be seen as longer than the actual downtime experienced by the application. -- >[!IMPORTANT] - > Always observe the downtime from the application perspective! --* Don't perform immediate, back-to-back failovers. Wait for at least 15-20 minutes between failovers, allowing the new standby server to be fully established. --* It's recommended that you perform a forced failover during a low-activity period to reduce downtime. ---### Zone-down experience --**Zonal**. To recover from a zone-level failure, you can [perform point-in-time restore](#point-in-time-restore-of-high-availability-servers) using the backup. You can choose a custom restore point with the latest time to restore the latest data. A new flexible server is deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. --For more information on point-in-time restore, see [Backup and restore in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-backup-restore). --**Zone-redundant**. Flexible server is automatically failed over to the standby server within 60-120s with zero data loss. ---## Configurations without availability zones --Although it's not recommended, you can configure your flexible server without high availability enabled. For flexible servers configured without high availability, the service provides locally redundant storage with three copies of data, zone-redundant backup (in regions where it is supported), and built-in server resiliency to automatically restart a crashed server and relocate the server to another physical node. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration.
During planned or unplanned failover events, if the server goes down, the service maintains the availability of the servers using the following automated procedure: --1. A new compute Linux VM is provisioned. -2. The storage with data files is mapped to the new virtual machine -3. PostgreSQL database engine is brought online on the new virtual machine. --The picture below shows the transition between VM and storage failure. ---## Disaster recovery: cross-region failover --In the case of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). --Flexible server provides features that protect data and mitigates downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO). For example, your business-critical database requires stricter uptime than a test database. --### Cross-region disaster recovery in multi-region geography --#### Geo-redundant backup and restore --Geo-redundant backup and restore provides the ability to restore your server in a different region in the event of a disaster. It also provides at least 99.99999999999999 percent (16 nines) durability of backup objects over a year. --Geo-redundant backup can be configured only at the time of server creation. 
When the server is configured with geo-redundant backup, the backup data and transaction logs are copied to the paired region asynchronously through storage replication. --For more information on geo-redundant backup and restore, see [geo-redundant backup and restore](/azure/postgresql/flexible-server/concepts-backup-restore#geo-redundant-backup-and-restore). --#### Read replicas --Cross-region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Read replicas are supported in general purpose and memory optimized compute tiers. --For more information on read replica features and considerations, see [Read replicas](/azure/postgresql/flexible-server/concepts-read-replicas). --#### Outage detection, notification, and management --If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server is provisioned and recovered to the last available data that was copied to this region. --You can also use cross-region read replicas. In the event of a region failure, you can perform a disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure. --For more information on unplanned downtime mitigation as well as recovery after regional disaster, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation). ------## Next steps -> [!div class="nextstepaction"] -> [Azure Database for PostgreSQL documentation](/azure/postgresql/) --> [!div class="nextstepaction"] -> [Reliability in Azure](availability-zones-overview.md) |
reliability | Reliability Postgresql Flexible Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-postgresql-flexible-server.md | + + Title: Reliability and high availability in PostgreSQL - Flexible Server +titleSuffix: Azure Database for PostgreSQL - Flexible Server +description: Find out about reliability and high availability in Azure Database for PostgreSQL - Flexible Server +++ Last updated : 08/24/2023++++ - references_regions + - subject-reliability +++<!--#Customer intent: I want to understand reliability support in Azure Database for PostgreSQL - Flexible Server so that I can respond to and/or avoid failures in order to minimize downtime and data loss. --> ++# High availability (Reliability) in Azure Database for PostgreSQL - Flexible Server +++This article describes high availability in Azure Database for PostgreSQL - Flexible Server, which includes [availability zones](#availability-zone-support) and [cross-region resiliency with disaster recovery](#disaster-recovery-cross-region-failover). For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). ++Azure Database for PostgreSQL - Flexible Server offers high availability support by provisioning a physically separate primary server and standby replica, either within the same availability zone (zonal) or across availability zones (zone-redundant). This high availability model is designed to ensure that committed data is never lost in the case of failures. The model is also designed so that the database doesn't become a single point of failure in your software architecture. For more information on high availability and availability zone support, see [Availability zone support](#availability-zone-support).
++## Availability zone support +++Azure Database for PostgreSQL - Flexible Server supports both [zone-redundant and zonal models](availability-zones-service-support.md#azure-services-with-availability-zone-support) for high availability configurations. Both high availability configurations enable automatic failover capability with zero data loss during both planned and unplanned events. ++- **Zone-redundant**. Zone redundant high availability deploys a standby replica in a different zone with automatic failover capability. Zone redundancy provides the highest level of availability, but requires you to configure application redundancy across zones. For that reason, choose zone redundancy when you want protection from availability zone level failures and when latency across the availability zones is acceptable. ++ You can choose the region and the availability zones for both primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with a similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs, a.k.a WAL) are stored on locally redundant storage (LRS) within each availability zone, automatically storing **three** data copies. A zone-redundant configuration provides physical isolation of the entire stack between primary and standby servers. ++ :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png" alt-text="Pictures illustrating redundant high availability architecture." lightbox="../postgresql/flexible-server/media/business-continuity/concepts-zone-redundant-high-availability-architecture.png"::: ++- **Zonal**. Choose a zonal deployment when you want to achieve the highest level of availability within a single availability zone, but with the lowest network latency. 
You can choose the region and the availability zone to deploy your primary database server. A standby replica server is *automatically* provisioned and managed in the *same* availability zone - with similar compute, storage, and network configuration - as the primary server. A zonal configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. ++ :::image type="content" source="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png" alt-text="Pictures illustrating zonal high availability architecture." lightbox="../postgresql/flexible-server/media/business-continuity/concepts-same-zone-high-availability-architecture.png"::: ++> [!NOTE] +> Both zonal and zone-redundant deployment models architecturally behave the same. Various discussions in the following sections apply to both unless called out otherwise. ++### Prerequisites ++**Zone redundancy:** ++- The **zone-redundancy** option is only available in [regions that support availability zones](../postgresql/flexible-server/overview.md#azure-regions). ++- Zone redundancy is **not** supported for: ++ - Azure Database for PostgreSQL - Single Server SKU. + - Burstable compute tier. + - Regions with single-zone availability. ++**Zonal:** ++- The **zonal** deployment option is available in all [Azure regions](../postgresql/flexible-server/overview.md#azure-regions) where you can deploy Flexible Server. ++### High availability features ++- A standby replica is deployed in the same VM configuration - including vCores, storage, network settings - as the primary server. ++- You can add availability zone support for an existing database server.
++- You can remove the standby replica by disabling high availability. ++- You can choose availability zones for your primary and standby database servers for zone-redundant availability. ++- Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time. ++- In zone-redundant and zonal models, automatic backups are performed periodically from the primary database server. At the same time, the transaction logs are continuously archived in the backup storage from the standby replica. If the region supports availability zones, backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS). ++- Clients always connect to the endpoint hostname of the primary database server. ++- Any changes to the server parameters are also applied to the standby replica. ++- You can restart the server to pick up any static server parameter changes. ++- Periodic maintenance activities such as minor version upgrades happen on the standby first, and then the service fails over to the standby to reduce downtime. ++### High availability limitations ++- Due to synchronous replication to the standby server, especially with a zone-redundant configuration, applications can experience elevated write and commit latency. ++- The standby replica can't be used for read queries. ++- Depending on the workload and activity on the primary server, the failover process might take longer than 120 seconds due to the recovery involved at the standby replica before it can be promoted. ++- The standby server typically recovers WAL files at 40 MB/s. If your workload exceeds this limit, you may encounter extended time for the recovery to complete either during the failover or after establishing a new standby. ++- Configuring for availability zones induces some latency for writes and commits; there's no impact on read queries. The performance impact varies depending on your workload.
As a general guideline, the impact on writes and commits can be around 20-30%. ++- Restarting the primary database server also restarts the standby replica. ++- Configuring an extra standby isn't supported. ++- Customer-initiated management tasks can't be scheduled during the managed maintenance window. ++- Planned events such as compute scaling and storage scaling happen on the standby first and then on the primary server. Currently, the server doesn't fail over for these planned operations. ++- If logical decoding or logical replication is configured on a flexible server configured with high availability, then in the event of a failover to the standby server, the logical replication slots aren't copied over to the standby server. ++- Configuring availability zones between private (VNET) and public access isn't supported. You must configure availability zones within a VNET (spanned across availability zones within a region) or public access. ++- Availability zones are configured only within a single region. Availability zones can't be configured across regions. ++### SLA ++- The **zone-redundant** model offers an uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql). ++- The **zonal** model offers an uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql). ++### Create an Azure Database for PostgreSQL - Flexible Server with availability zone enabled ++To learn how to create an Azure Database for PostgreSQL - Flexible Server for high availability with availability zones, see [Quickstart: Create an Azure Database for PostgreSQL - Flexible Server in the Azure portal](/azure/postgresql/flexible-server/quickstart-create-server-portal).
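The limitations section above notes that a standby typically replays WAL at about 40 MB/s. As a rough, illustrative sketch (the helper function and backlog figure are assumptions for illustration, not part of the service), you can estimate how long a standby needs to catch up on a given WAL backlog:

```python
def wal_catchup_seconds(backlog_mb: float, recovery_rate_mb_s: float = 40.0) -> float:
    """Approximate time for a standby to replay a WAL backlog.

    The 40 MB/s default reflects the typical recovery rate noted in the
    limitations above; the backlog is whatever your workload produced.
    """
    if recovery_rate_mb_s <= 0:
        raise ValueError("recovery rate must be positive")
    return backlog_mb / recovery_rate_mb_s

# Example: a 12 GiB WAL backlog takes roughly five minutes to replay.
print(wal_catchup_seconds(12 * 1024))  # 307.2
```

If your workload can generate WAL faster than the standby replays it, budget extra time for failover and for establishing a new standby.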
++ ++### Availability zone redeployment and migration ++To learn how to enable or disable high availability configuration in your flexible server in both zone-redundant and zonal deployment models, see [Manage high availability in Flexible Server](../postgresql/flexible-server/how-to-manage-high-availability-portal.md). ++### High availability components and workflow ++#### Transaction completion ++Application transaction-triggered writes and commits are first logged to the WAL on the primary server. The WAL is then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server storage, the standby acknowledges the write completion to the primary server, and only then is the write confirmed to the application. This extra round-trip adds latency to your application. The percentage of impact depends on the application. This acknowledgment process doesn't wait for the logs to be applied to the standby server. The standby server is permanently in recovery mode until it's promoted. ++#### Health check ++Flexible server health monitoring periodically checks the health of both the primary and standby servers. After multiple pings, if health monitoring detects that a primary server isn't reachable, the service then initiates an automatic failover to the standby server. The health monitoring algorithm is based on multiple data points to avoid false positive situations. ++#### Failover modes ++Flexible server supports two failover modes, [**Planned failover**](#planned-failover) and [**Unplanned failover**](#unplanned-failover). In both modes, once the replication is severed, the standby server runs recovery before it's promoted to primary and opened for read/write. With automatic DNS entries updated with the new primary server endpoint, applications can connect to the server using the same endpoint. A new standby server is established in the background, so that your application can maintain connectivity.
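Because failover preserves the server's DNS endpoint, client applications only need to reconnect to the same hostname, typically with a retry loop. A minimal, driver-agnostic sketch (the `connect_with_retry` helper is hypothetical; `connect` is any zero-argument callable you supply, for example a `functools.partial` around `psycopg2.connect`):

```python
import time

def connect_with_retry(connect, attempts=10, base_delay=1.0, max_delay=30.0):
    """Call `connect()` until it succeeds, with exponential backoff.

    The server's DNS name is unchanged by failover, so the client simply
    reconnects to the same endpoint once the standby is promoted.
    """
    delay = base_delay
    last_error = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as exc:  # in real code, catch your driver's OperationalError
            last_error = exc
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
    raise last_error
```

Size the retry budget so it comfortably exceeds the expected failover window (roughly 60-120 seconds for zone-redundant configurations, per this article).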
++ ++#### High availability status ++The health of primary and standby servers is continuously monitored, and appropriate actions are taken to remediate issues, including triggering a failover to the standby server. The table below lists the possible high availability statuses: ++| **Status** | **Description** | +| | | +| **Initializing** | In the process of creating a new standby server. | +| **Replicating Data** | After the standby is created, it's catching up with the primary. | +| **Healthy** | Replication is in steady state and healthy. | +| **Failing Over** | The database server is in the process of failing over to the standby. | +| **Removing Standby** | In the process of deleting standby server. | +| **Not Enabled** | Zone redundant high availability isn't enabled. | ++> [!NOTE] +> You can enable high availability during server creation or at a later time. If you enable or disable high availability after the server is created, it's recommended to do so when activity on the primary server is low. ++#### Steady-state operations ++PostgreSQL client applications are connected to the primary server using the DB server name. Application reads are served directly from the primary server. At the same time, commits and writes are confirmed to the application only after the log data is persisted on both the primary server and the standby replica. Due to this extra round-trip, applications can expect elevated latency for writes and commits. You can monitor the health of your high availability configuration in the portal. +++1. Clients connect to the flexible server and perform write operations. +1. Changes are replicated to the standby site. +1. Primary receives an acknowledgment. +1. Writes/commits are acknowledged. ++#### Point-in-time restore of high availability servers ++For flexible servers configured with high availability, log data is replicated in real-time to the standby server.
Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates - are replicated to the standby replica. So, you can't use the standby to recover from such logical errors. To recover from such errors, you have to perform a point-in-time restore from the backup. Using a flexible server's point-in-time restore capability, you can restore to the time before the error occurred. For databases configured with high availability, the restored server is deployed as a single-zone flexible server with a new, user-provided server name. You can use the restored server for a few use cases: ++- You can use the restored server for production and optionally enable zone-redundant high availability. ++- If you want to restore an object, export it from the restored database server and import it to your production database server. +- If you want to clone your database server for testing and development purposes or to restore for any other purposes, you can perform the point-in-time restore. ++To learn how to do a point-in-time restore of a flexible server, see [Point-in-time restore of a flexible server](/azure/postgresql/flexible-server/how-to-restore-server-portal). ++### Failover support ++#### Planned failover ++Planned downtime events include Azure-scheduled periodic software updates and minor version upgrades. You can also use a planned failover to return the primary server to a preferred availability zone. When the server is configured with high availability, these operations are first applied to the standby replica while the applications continue to access the primary server. Once the standby replica is updated, primary server connections are drained, and a failover is triggered, which activates the standby replica to be the primary with the same database server name. Client applications have to reconnect with the same database server name to the new primary server and can resume their operations.
A new standby server is established in the same zone as the old primary. ++For other user-initiated operations such as scale-compute or scale-storage, the changes are applied on the standby first, followed by the primary. Currently, the service isn't failed over to the standby, and hence while the scale operation is carried out on the primary server, applications encounter a short downtime. ++You can also use this feature to fail over to the standby server with reduced downtime. For example, after an unplanned failover, your primary server could be in a different availability zone than your application. You want to bring the primary server back to the previous zone to colocate with your application. ++When executing this feature, the standby server is first prepared to ensure it's caught up with recent transactions, allowing the application to continue performing reads/writes. The standby is then promoted, and the connections to the primary are severed. Your application can continue to write to the primary while a new standby server is established in the background. The following steps are involved in a planned failover: ++| **Step** | **Description** | **App downtime expected?** | + | | | | + | 1 | Wait for the standby server to catch up with the primary. | No | + | 2 | Internal monitoring system initiates the failover workflow. | No | + | 3 | Application writes are blocked when the standby server is close to the primary log sequence number (LSN). | Yes | + | 4 | Standby server is promoted to be an independent server. | Yes | + | 5 | DNS record is updated with the new standby server's IP address. | Yes | + | 6 | Application reconnects and resumes its reads/writes with the new primary. | No | + | 7 | A new standby server in another zone is established. | No | + | 8 | Standby server starts to recover logs (from Azure Blob storage) that it missed during its establishment. | No | + | 9 | A steady state between the primary and the standby server is established.
| No | + | 10 | Planned failover process is complete. | No | ++Application downtime starts at step #3, and the application can resume operations after step #5. The rest of the steps happen in the background without affecting application writes and commits. ++> [!TIP] +> With flexible server, you can optionally schedule Azure-initiated maintenance activities by choosing a 60-minute window on a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that window. If you don't choose a custom window, a system-allocated one-hour window between 11 pm - 7 am local time is selected for your server. +> These Azure-initiated maintenance activities are also performed on the standby replica for flexible servers that are configured with availability zones. ++For a list of possible planned downtime events, see [Planned downtime events](/azure/postgresql/flexible-server/concepts-business-continuity#planned-downtime-events). ++#### Unplanned failover ++Unplanned downtimes can occur as a result of unforeseen disruptions such as an underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime can't be avoided, flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention. ++For information on unplanned failovers and downtime, including possible scenarios, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation).
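Step 3 of the planned failover above blocks writes once the standby is close to the primary's log sequence number (LSN). PostgreSQL LSNs are hexadecimal `X/Y` pairs; on a reachable server you could obtain them with the standard `pg_current_wal_lsn()` and `pg_last_wal_replay_lsn()` functions. A small sketch of computing the lag (the helper names are illustrative, not part of the service):

```python
def lsn_to_bytes(lsn: str) -> int:
    """Convert a PostgreSQL LSN such as '16/B374D848' into an absolute byte position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def replication_lag_bytes(primary_lsn: str, standby_lsn: str) -> int:
    """Bytes of WAL the standby still has to receive and replay."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)

# Example: the standby trails the primary by 0x10000 bytes (64 KiB).
print(replication_lag_bytes("0/3010000", "0/3000000"))  # 65536
```

On the server itself, `pg_wal_lsn_diff()` performs the same subtraction in SQL.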
++ ++#### Failover testing (forced failover) ++With a forced failover, you can simulate an unplanned outage scenario while running your production workload and observe your application downtime. You can also use a forced failover when your primary server becomes unresponsive. ++A forced failover brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process up to the last committed transaction, it's promoted to be the primary server. DNS records are updated, and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background, which doesn't impact the uptime. ++The following are the steps during forced failover: ++ | **Step** | **Description** | **App downtime expected?** | + | | | | + | 1 | Primary server is stopped shortly after receiving the failover request. | Yes | + | 2 | Application encounters downtime as the primary server is down. | Yes | + | 3 | Internal monitoring system detects the failure and initiates a failover to the standby server. | Yes | + | 4 | Standby server enters recovery mode before being fully promoted as an independent server. | Yes | + | 5 | The failover process waits for the standby recovery to complete. | Yes | + | 6 | Once the server is up, the DNS record is updated with the same hostname but using the standby's IP address. | Yes | + | 7 | Application can reconnect to the new primary server and resume the operation. | No | + | 8 | A standby server in the preferred zone is established. | No | + | 9 | Standby server starts to recover logs (from Azure Blob storage) that it missed during its establishment. | No | + | 10 | A steady state between the primary and the standby server is established. | No | + | 11 | Forced failover process is complete.
| No | ++Application downtime is expected to start after step #1 and persists until step #6 is completed. The rest of the steps happen in the background without affecting the application writes and commits. ++> [!IMPORTANT] +> The end-to-end failover process includes (a) failing over to the standby server after the primary failure and (b) establishing a new standby server in a steady state. As your application incurs downtime until the failover to the standby is complete, **please measure the downtime from your application/client perspective** instead of the overall end-to-end failover process. ++#### Considerations while performing forced failovers ++- The overall end-to-end operation time may be seen as longer than the actual downtime experienced by the application. ++ > [!IMPORTANT] + > Always observe the downtime from the application perspective! ++- Don't perform immediate, back-to-back failovers. Wait for at least 15-20 minutes between failovers, allowing the new standby server to be fully established. ++- It's recommended that you perform a forced failover during a low-activity period to reduce downtime. ++### Zone-down experience ++**Zonal**. To recover from a zone-level failure, you can [perform point-in-time restore](#point-in-time-restore-of-high-availability-servers) using the backup. You can choose a custom restore point with the latest time to restore the latest data. A new flexible server is deployed in another nonaffected zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. ++For more information on point-in-time restore, see [Backup and restore in Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/concepts-backup-restore). ++**Zone-redundant**. Flexible server is automatically failed over to the standby server within 60-120 seconds with zero data loss.
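The guidance above is to measure downtime from the application/client perspective. One simple way is a probe loop that times the gap between the first failed and the first subsequent successful trivial query. A minimal sketch (the `probe` callable, for example one that runs `SELECT 1` against your server, is an assumption you supply):

```python
import time

def measure_downtime(probe, interval=1.0, clock=time.monotonic, sleep=time.sleep):
    """Time a failover from the client's perspective.

    `probe` is a zero-argument callable returning True when a trivial
    query (such as SELECT 1) succeeds. The function waits for the first
    failure, then returns the seconds elapsed until the next success.
    """
    while probe():              # steady state: wait for the outage to begin
        sleep(interval)
    started = clock()           # first observed failure
    while not probe():          # poll until the promoted standby answers
        sleep(interval)
    return clock() - started    # downtime as the application experienced it
```

Run this from the same network location as your application so DNS caching and connection behavior match what real clients see.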
++ ++## Configurations without availability zones ++Although it's not recommended, you can configure your flexible server without high availability enabled. For flexible servers configured without high availability, the service provides locally redundant storage with three copies of data, zone-redundant backup (in regions where it's supported), and built-in server resiliency to automatically restart a crashed server and relocate the server to another physical node. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration. During planned or unplanned failover events, if the server goes down, the service maintains the availability of the servers using the following automated procedure: ++1. A new compute Linux VM is provisioned. +1. The storage with data files is mapped to the new virtual machine. +1. PostgreSQL database engine is brought online on the new virtual machine. ++The picture below shows the transition between VM and storage failure. +++## Disaster recovery: cross-region failover ++Azure can provide protection from region-wide or large-geography disasters through disaster recovery that makes use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). ++Flexible server provides features that protect data and mitigate downtime for your mission-critical databases during planned and unplanned downtime events. Built on top of the Azure infrastructure that offers robust resiliency and availability, flexible server offers business continuity features that provide fault protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - the recovery time objective (RTO), and data loss exposure - the recovery point objective (RPO).
For example, your business-critical database requires stricter uptime than a test database. ++### Cross-region disaster recovery in multi-region geography ++#### Geo-redundant backup and restore ++Geo-redundant backup and restore provide the ability to restore your server in a different region in the event of a disaster. It also provides at least 99.99999999999999 percent (16 nines) durability of backup objects over a year. ++Geo-redundant backup can be configured only at the time of server creation. When the server is configured with geo-redundant backup, the backup data and transaction logs are copied to the paired region asynchronously through storage replication. ++For more information on geo-redundant backup and restore, see [geo-redundant backup and restore](/azure/postgresql/flexible-server/concepts-backup-restore#geo-redundant-backup-and-restore). ++#### Read replicas ++Cross-region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. Read replicas are supported in general purpose and memory optimized compute tiers. ++For more information on read replica features and considerations, see [Read replicas](/azure/postgresql/flexible-server/concepts-read-replicas). ++#### Outage detection, notification, and management ++If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server is provisioned and recovered to the last available data that was copied to this region. ++You can also use cross-region read replicas. In the event of a region failure, you can perform a disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure, when the RPO can be close to the replication lag at the time of failure. 
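The RPO reasoning above can be sketched as a small decision helper. This is illustrative only: the function name and the 300-second default are assumptions, and the replication-lag value would come from your own monitoring rather than any specific Azure metric:

```python
def promotion_decision(lag_seconds, rpo_target_seconds=300):
    """Decide whether promoting a cross-region read replica stays within
    the recovery point objective (expected data loss <= target)."""
    if lag_seconds <= rpo_target_seconds:
        return "promote"   # expected data loss is within the RPO target
    return "evaluate"      # lag exceeds RPO; promoting now loses more data

print(promotion_decision(30))    # healthy replica, small lag -> promote
print(promotion_decision(900))   # severe regional failure, large lag -> evaluate
```

The point of the sketch is that under a severe regional failure the effective RPO approaches the replication lag at the time of failure, so the lag, not the nominal 5-minute figure, drives the decision.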
++For more information on unplanned downtime mitigation and recovery after regional disaster, see [Unplanned downtime mitigation](/azure/postgresql/flexible-server/concepts-business-continuity#unplanned-downtime-mitigation). ++## Next steps ++> [!div class="nextstepaction"] +> [Azure Database for PostgreSQL documentation](/azure/postgresql/) ++> [!div class="nextstepaction"] +> [Reliability in Azure](availability-zones-overview.md) |
remote-rendering | Custom Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/custom-models/custom-models.md | Title: Interfaces and custom models -description: Add view controllers and ingest custom models to be rendered by Azure Remote Rendering +description: Add view controllers and ingest custom models to render them with Azure Remote Rendering Last updated 06/15/2020 In this tutorial, you learn how to: ## Get started with the Mixed Reality Toolkit (MRTK) -The Mixed Reality Toolkit (MRTK) is a cross-platform toolkit for building mixed reality experiences. We'll use MRTK 2.5.1 for its interaction and visualization features. +The Mixed Reality Toolkit (MRTK) is a cross-platform toolkit for building mixed reality experiences. We use MRTK 2.8.3 for its interaction and visualization features. -To add MRTK, follow the [Required steps](https://microsoft.github.io/MixedRealityToolkit-Unity/version/releases/2.5.1/Documentation/Installation.html#required) listed in the [MRTK Installation Guide](https://microsoft.github.io/MixedRealityToolkit-Unity/version/releases/2.5.1/Documentation/Installation.html). --Those steps are: - - Even though it says "latest", please use version 2.5.1 from the MRTK release page. - - We only use the *Foundation* package in this tutorial. The *Extensions*, *Tools*, and *Examples* packages are not required. - - You should have done this step already in the first chapter, but now is a good time to double check! - - You can add MRTK to a new scene and re-add your coordinator and model objects/scripts, or you can add MRTK to your existing scene using the *Mixed Reality Toolkit -> Add to Scene and Configure* menu command. +The [official guide](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr) to import MRTK contains some steps we don't need to do. 
Only these three steps are necessary: + - Import the 'Mixed Reality Toolkit/Mixed Reality Toolkit Foundation' version 2.8.3 into your project through the Mixed Reality Feature Tool ([Import MRTK](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr#import-the-mrtk-unity-foundation-package)). + - Run the configuration wizard of MRTK ([Configure MRTK](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr#configure-the-unity-project)). + - Add MRTK to the current scene ([Add to scene](/training/modules/learn-mrtk-tutorials/1-5-exercise-configure-resources?tabs=openxr#create-the-scene-and-configure-mrtk)). Use the *ARRMixedRealityToolkitConfigurationProfile* here instead of the suggested profile in the tutorial. ## Import assets used by this tutorial -Starting in this chapter, we'll implement a simple [model-view-controller pattern](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) for much of the material covered. The *model* part of the pattern is the Azure Remote Rendering specific code and the state management related to Azure Remote Rendering. The *view* and *controller* parts of the pattern are implemented using MRTK assets and some custom scripts. It is possible to use the *model* in this tutorial without the *view-controller* implemented here. This separation allows you to easily integrate the code found in this tutorial into your own application where it will take over the *view-controller* part of the design pattern. +Starting in this chapter, we'll implement a basic [model-view-controller pattern](https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) for much of the material covered. The *model* part of the pattern is the Azure Remote Rendering specific code and the state management related to Azure Remote Rendering. The *view* and *controller* parts of the pattern are implemented using MRTK assets and some custom scripts. 
It's possible to use the *model* in this tutorial without the *view-controller* implemented here. This separation allows you to easily integrate the code found in this tutorial into your own application where it takes over the *view-controller* part of the design pattern. -With the introduction of MRTK, there are a number of scripts, prefabs, and assets that can now be added to the project to support interactions and visual feedback. These assets, referred to as the **Tutorial Assets**, are bundled into a [Unity Asset Package](https://docs.unity3d.com/Manual/AssetPackages.html), which is included in the [Azure Remote Rendering GitHub](https://github.com/Azure/azure-remote-rendering) in '\Unity\TutorialAssets\TutorialAssets.unitypackage'. +With the introduction of MRTK, there are multiple scripts, prefabs, and assets that can now be added to the project to support interactions and visual feedback. These assets, referred to as the **Tutorial Assets**, are bundled into a [Unity Asset Package](https://docs.unity3d.com/Manual/AssetPackages.html), which is included in the [Azure Remote Rendering GitHub](https://github.com/Azure/azure-remote-rendering) in '\Unity\TutorialAssets\TutorialAssets.unitypackage'. 1. Clone or download the git repository [Azure Remote Rendering](https://github.com/Azure/azure-remote-rendering); if downloading, extract the zip to a known location. 1. In your Unity project, choose *Assets -> Import Package -> Custom Package*.-1. In the file explorer, navigate to the directory where you cloned or unzipped the Azure Remote Rendering repository, then select the .unitypackage found in **Unity -> TutorialAssets -> TutorialAssets.unitypackage** +1. In the file explorer, navigate to the directory where you cloned or unzipped the Azure Remote Rendering repository, then select the `.unitypackage` found in **Unity -> TutorialAssets -> TutorialAssets.unitypackage** 1. Select the **Import** button to import the contents of the package into your project. 1. 
In the Unity Editor, select *Mixed Reality Toolkit -> Utilities -> Upgrade MRTK Standard Shader for Lightweight Render Pipeline* from the top menu bar and follow the prompts to upgrade the shader. -Once MRTK and the Tutorial Assets are included in the project, we'll switch the MRTK profile to one more suitable for the tutorial. +Once MRTK and the Tutorial Assets are set up, double-check that the correct profile is selected. 1. Select the **MixedRealityToolkit** GameObject in the scene hierarchy. 1. In the Inspector, under the **MixedRealityToolkit** component, switch the configuration profile to *ARRMixedRealityToolkitConfigurationProfile*. 1. Press *Ctrl+S* to save your changes. -This will configure MRTK, primarily, with the default HoloLens 2 profiles. The provided profiles are pre-configured in the following ways: +This step configures MRTK, primarily, with the default HoloLens 2 profiles. The provided profiles are preconfigured in the following ways: - Turn off the profiler (Press 9 to toggle it on/off, or say "Show/Hide Profiler" on device). - Turn off the eye gaze cursor. - Enable Unity mouse clicks, so you can click MRTK UI elements with the mouse instead of the simulated hand. ## Add the App Menu -Most of the view controllers in this tutorial operate against abstract base classes instead of against concrete classes. This pattern provides more flexibility and allows us to provide the view controllers for you, while still helping you learn the Azure Remote Rendering code. For simplicity, the **RemoteRenderingCoordinator** class does not have an abstract class provided and its view controller operates directly against the concrete class. +Most of the view controllers in this tutorial operate against abstract base classes instead of against concrete classes. This pattern provides more flexibility and allows us to provide the view controllers for you, while still helping you learn the Azure Remote Rendering code. 
For simplicity, the **RemoteRenderingCoordinator** class doesn't have an abstract class provided and its view controller operates directly against the concrete class. -You can now add the prefab **AppMenu** to the scene, for visual feedback of the current session state. This view controller will "unlock" more sub menu view controllers as we implement and integrate more ARR features into the scene. For now, the **AppMenu** will have a visual indication of the ARR state and present the modal panel that the user uses to authorize the application to connect to ARR. +You can now add the prefab **AppMenu** to the scene, for visual feedback of the current session state. The **AppMenu** also presents the modal panel that the user uses to authorize the application to connect to ARR. 1. Locate the **AppMenu** prefab in *Assets/RemoteRenderingTutorial/Prefabs/AppMenu* 1. Drag the **AppMenu** prefab into the scene.-1. You'll likely see a dialog for **TMP Importer**, since this is the first time we're including *Text Mesh Pro* assets in the scene. Follow the prompts to **Import TMP Essentials**. Then close the importer dialog, the examples and extras are not needed. +1. If you see a dialog for **TMP Importer**, follow the prompts to **Import TMP Essentials**. Then close the importer dialog, as the examples and extras aren't needed. 1. The **AppMenu** is configured to automatically hook up and provide the modal for consenting to connecting to a Session, so we can remove the bypass placed earlier. On the **RemoteRenderingCoordinator** GameObject, remove the bypass for authorization we implemented previously, by pressing the '-' button on the **On Requesting Authorization** event. ![Remove bypass](./media/remove-bypass-event.png). You can now add the prefab **AppMenu** to the scene, for visual feedback of the 1. Test the view controller by pressing **Play** in the Unity Editor. 1. 
In the Editor, now that MRTK is configured, you can use the WASD keys to change the position of your view, and hold the right mouse button while moving the mouse to change your view direction. Try "driving" around the scene a bit to get a feel for the controls. 1. On device, you can raise your palm up to summon the **AppMenu**; in the Unity Editor, use the hotkey 'M'.-1. If you've lost sight of the menu, press the 'M' key to summon the menu. The menu will be placed near the camera for easy interaction. -1. The authorization will now show as a request to the right of the **AppMenu**, from now on, you'll use this to authorize the app to manage remote rendering sessions. +1. If you've lost sight of the menu, press the 'M' key to summon the menu. The menu is placed near the camera for easy interaction. +1. The **AppMenu** presents a UI element for authorization to its right. From now on, you should use this UI element to authorize the app to manage remote rendering sessions. ![UI authorize](./media/authorize-request-ui.png) You can now add the prefab **AppMenu** to the scene, for visual feedback of the ## Manage model state -Now we'll implement a new script, **RemoteRenderedModel** that is for tracking state, responding to events, firing events, and configuration. Essentially, **RemoteRenderedModel** stores the remote path for the model data in `modelPath`. It will listen for state changes in the **RemoteRenderingCoordinator** to see if it should automatically load or unload the model it defines. The GameObject that has the **RemoteRenderedModel** attached to it will be the local parent for the remote content. +We need a new script called **RemoteRenderedModel** that tracks state, responds to events, fires events, and holds configuration. Essentially, **RemoteRenderedModel** stores the remote path for the model data in `modelPath`. 
It listens for state changes in the **RemoteRenderingCoordinator** to see if it should automatically load or unload the model it defines. The GameObject that has the **RemoteRenderedModel** attached to it is the local parent for the remote content. -Notice that the **RemoteRenderedModel** script implements **BaseRemoteRenderedModel**, included from **Tutorial Assets**. This will allow the remote model view controller to bind with your script. +Notice that the **RemoteRenderedModel** script implements **BaseRemoteRenderedModel**, included from **Tutorial Assets**. This connection allows the remote model view controller to bind with your script. 1. Create a new script named **RemoteRenderedModel** in the same folder as **RemoteRenderingCoordinator**. Replace the entire contents with the following code: Notice that the **RemoteRenderedModel** script implements **BaseRemoteRenderedMo } ``` -In the most basic terms, **RemoteRenderedModel** holds the data needed to load a model (in this case the SAS or *builtin://* URI) and tracks the remote model state. When it's time to load, the `LoadModel` method is called on **RemoteRenderingCoordinator** and the Entity containing the model is returned for reference and unloading. +In the most basic terms, **RemoteRenderedModel** holds the data needed to load a model (in this case the SAS or *builtin://* URI) and tracks the remote model state. When it's time to load the model, the `LoadModel` method is called on **RemoteRenderingCoordinator**, and the Entity containing the model is returned for reference and unloading. ## Load the Test Model -Let's test the new script by loading the test model again. We'll add a Game Object to contain the script and be a parent to the test model. We'll also create a virtual stage that contains the model. The stage will stay fixed relative to the real world using a [WorldAnchor](/windows/mixed-reality/develop/unity/spatial-anchors-in-unity?tabs=worldanchor). 
We use a fixed stage so that the model itself can still be moved around later on. +Let's test the new script by loading the test model again. For this test, we need a Game Object to contain the script and be a parent to the test model, and we also need a virtual stage that contains the model. The stage stays fixed relative to the real world using a [WorldAnchor](/windows/mixed-reality/develop/unity/spatial-anchors-in-unity?tabs=worldanchor). We use a fixed stage so that the model itself can still be moved around later on. 1. Create a new empty Game Object in the scene and name it **ModelStage**. 1. Add a World Anchor component to **ModelStage** Let's test the new script by loading the test model again. We'll add a Game Obje 1. Ensure **AutomaticallyLoad** is turned on. 1. Press **Play** in the Unity Editor to test the application.-1. Grant authorization by clicking the *Connect* button to allow the app to create a session and it will connect to a Session and automatically load the model. +1. Grant authorization by clicking the *Connect* button to allow the app to create a session, connect to it, and automatically load the model. -Watch the Console as the application progresses through its states. Keep in mind, some states may take some time to complete, and won't show progress. Eventually, you'll see the logs from the model loading and then the test model will be rendered in the scene. +Watch the Console as the application progresses through its states. Keep in mind, some states may take some time to complete, and there might be no progress updates for a while. Eventually, you see logs from the model loading and then shortly after the rendered test model in the scene. -Try moving and rotating the **TestModel** GameObject via the Transform in the Inspector, or in the Scene view. You'll see the model move and rotate it in the Game view. 
+Try moving and rotating the **TestModel** GameObject via the Transform in the Inspector, or in the Scene view and observe the transformations in the Game view. ![Unity Log](./media/unity-loading-log.png) ## Provision Blob Storage in Azure and custom model ingestion -Now we can try loading your own model. To do that, you'll need to configure Blob Storage and on Azure, upload and convert a model, then we'll load the model using the **RemoteRenderedModel** script. The custom model loading steps can be safely skipped if you don't have your own model to load at this time. +Now we can try loading your own model. To do that, you need to configure Blob Storage on Azure, upload and convert a model, and then load the model using the **RemoteRenderedModel** script. The custom model loading steps can be safely skipped if you don't have your own model to load at this time. -Follow the steps specified in the [Quickstart: Convert a model for rendering](../../../quickstarts/convert-model.md). Skip the **Insert new model into Quickstart Sample App** section for the purpose of this tutorial. Once you have your ingested model's *Shared Access Signature (SAS)* URI, continue to the next step below. +Follow the steps specified in the [Quickstart: Convert a model for rendering](../../../quickstarts/convert-model.md). Skip the **Insert new model into Quickstart Sample App** section for this tutorial. Once you have your ingested model's *Shared Access Signature (SAS)* URI, continue. ## Load and rendering a custom model Follow the steps specified in the [Quickstart: Convert a model for rendering](.. ![Add RemoteRenderedModel component](./media/add-remote-rendered-model-script.png) 1. Fill in the `Model Display Name` with an appropriate name for your model.-1. Fill in the `Model Path` with the model's *Shared Access Signature (SAS)* URI you created in the ingestion steps above. +1. 
Fill in the `Model Path` with the model's *Shared Access Signature (SAS)* URI you created in the [Provision Blob Storage in Azure and custom model ingestion](#provision-blob-storage-in-azure-and-custom-model-ingestion) step. 1. Position the GameObject in front of the camera, at position **x = 0, y = 0, z = 3.** 1. Ensure **AutomaticallyLoad** is turned on. 1. Press **Play** in the Unity Editor to test the application. - You will see the Console begin to populate with the current state, and eventually, model loading progress messages. Your custom model will then load into the scene. + The console shows the current session state and also the model loading progress messages, once the session is connected. -1. Remove your custom model object from the scene. The best experience for this tutorial will be using the test model. While multiple models are certainly supported in ARR, this tutorial was written to best support a single remote model at a time. +1. Remove your custom model object from the scene. The best experience for this tutorial is with the test model. While multiple models are supported in ARR, this tutorial was written to best support a single remote model at a time. ## Next steps -You can now load your own models into Azure Remote Rendering and view them in your application! Next, we'll guide you through manipulating your models. +You can now load your own models into Azure Remote Rendering and view them in your application! Next, we guide you through manipulating your models. > [!div class="nextstepaction"] > [Next: Manipulating models](../manipulate-models/manipulate-models.md) |
remote-rendering | Manipulate Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/manipulate-models/manipulate-models.md | The bounds of a model are defined by the box that contains the entire model - ju > [!NOTE] > If you see an error in Visual Studio claiming *Feature 'X' is not available in C# 6. Please use language version 7.0 or greater*, these error can be safely ignored. This is related to Unity's Solution and Project generation. - This script should be added to the same GameObject as the script that implements **BaseRemoteRenderedModel**. In this case, that means **RemoteRenderedModel**. Similar to previous scripts, this initial code will handle all the state changes, events, and data related to remote bounds. + This script should be added to the same GameObject as the script that implements **BaseRemoteRenderedModel**. In this case, that means **RemoteRenderedModel**. Similar to previous scripts, this initial code handles all the state changes, events, and data related to remote bounds. There is only one method left to implement: **QueryBounds**. **QueryBounds** fetches the bounds asynchronously, takes the result of the query and applies it to the local **BoxCollider**. The bounds of a model are defined by the box that contains the entire model - ju } ``` - We'll check the query result to see if it was successful. If yes, convert and apply the returned bounds in a format that the **BoxCollider** can accept. + We check the query result to see if it was successful. If yes, convert and apply the returned bounds in a format that the **BoxCollider** can accept. -Now, when the **RemoteBounds** script is added to the same game object as the **RemoteRenderedModel**, a **BoxCollider** will be added if needed and when the model reaches its `Loaded` state, the bounds will automatically be queried and applied to the **BoxCollider**. 
+Now, when the **RemoteBounds** script is added to the same game object as the **RemoteRenderedModel**, a **BoxCollider** is added if needed and when the model reaches its `Loaded` state, the bounds will automatically be queried and applied to the **BoxCollider**. 1. Using the **TestModel** GameObject created previously, add the **RemoteBounds** component. 1. Confirm the script is added. This tutorial is using MRTK for object interaction. Most of the MRTK specific im 1. Press Unity's Play button to play the scene and open the **Model Tools** menu inside the **AppMenu**. ![View controller](./media/model-with-view-controller.png) -The **AppMenu** has a sub menu **Model Tools** that implements a view controller for binding with the model. When the GameObject contains a **RemoteBounds** component, the view controller will add a [**BoundingBox**](https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_BoundingBox.html) component, which is an MRTK component that renders a bounding box around an object with a **BoxCollider**. A [**ObjectManipulator**](https://microsoft.github.io/MixedRealityToolkit-Unity/version/releases/2.5.1/api/Microsoft.MixedReality.Toolkit.UI.ObjectManipulator.html), which is responsible for hand interactions. +The **AppMenu** has a sub menu **Model Tools** that implements a view controller for binding with the model. When the GameObject contains a **RemoteBounds** component, the view controller will add a [**BoundingBox**](https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/README_BoundingBox.html) component, which is an MRTK component that renders a bounding box around an object with a **BoxCollider**. It also adds an [**ObjectManipulator**](/windows/mixed-reality/mrtk-unity/mrtk2/features/ux-building-blocks/object-manipulator) component, which is responsible for hand interactions. 
These scripts combined will allow us to move, rotate, and scale the remotely rendered model. 1. Move your mouse to the Game panel and click inside it to give it focus. 1. Using [MRTK's hand simulation](https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/InputSimulation/InputSimulationService.html#hand-simulation), press and hold the left Shift key. |
role-based-access-control | Built In Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md | The following table provides a brief description of each built-in role. Click th > | [Contributor](#contributor) | Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. | b24988ac-6180-42a0-ab88-20f7382dd24c | > | [Owner](#owner) | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC. | 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 | > | [Reader](#reader) | View all resources, but does not allow you to make any changes. | acdd72a7-3385-48ef-bd42-f606fba81ae7 |+> | [Role Based Access Control Administrator (Preview)](#role-based-access-control-administrator-preview) | Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy. | f58310d9-a9f6-439a-9e8d-f62e7b41a168 | > | [User Access Administrator](#user-access-administrator) | Lets you manage user access to Azure resources. | 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 | > | **Compute** | | | > | [Classic Virtual Machine Contributor](#classic-virtual-machine-contributor) | Lets you manage classic virtual machines, but not access to them, and not the virtual network or storage account they're connected to. | d73bb868-a0df-4d4d-bd69-98a00b01fccb | View all resources, but does not allow you to make any changes. [Learn more](rba "type": "Microsoft.Authorization/roleDefinitions" } ```+### Role Based Access Control Administrator (Preview) ++Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy. 
++> [!div class="mx-tableFixed"] +> | Actions | Description | +> | | | +> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/write | Create a role assignment at the specified scope. | +> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/roleAssignments/delete | Delete a role assignment at the specified scope. | +> | */read | Read resources of all types, except secrets. | +> | [Microsoft.Support](resource-provider-operations.md#microsoftsupport)/* | Create and update a support ticket | +> | **NotActions** | | +> | *none* | | +> | **DataActions** | | +> | *none* | | +> | **NotDataActions** | | +> | *none* | | ++```json +{ + "assignableScopes": [ + "/" + ], + "description": "Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.", + "id": "/providers/Microsoft.Authorization/roleDefinitions/f58310d9-a9f6-439a-9e8d-f62e7b41a168", + "name": "f58310d9-a9f6-439a-9e8d-f62e7b41a168", + "permissions": [ + { + "actions": [ + "Microsoft.Authorization/roleAssignments/write", + "Microsoft.Authorization/roleAssignments/delete", + "*/read", + "Microsoft.Support/*" + ], + "notActions": [], + "dataActions": [], + "notDataActions": [] + } + ], + "roleName": "Role Based Access Control Administrator (Preview)", + "roleType": "BuiltInRole", + "type": "Microsoft.Authorization/roleDefinitions" +} +``` ### User Access Administrator |
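To illustrate how the `actions` list in the role definition above grants permissions, here is a hedged sketch of wildcard matching against operation strings. It uses Python's `fnmatch`, whose `*` matches across `/` segments; this approximates, but is not guaranteed to exactly replicate, Azure RBAC's evaluation logic:

```python
from fnmatch import fnmatch

# Actions copied from the Role Based Access Control Administrator (Preview) definition.
ACTIONS = [
    "Microsoft.Authorization/roleAssignments/write",
    "Microsoft.Authorization/roleAssignments/delete",
    "*/read",
    "Microsoft.Support/*",
]
NOT_ACTIONS = []  # this role defines no notActions

def is_allowed(operation, actions=ACTIONS, not_actions=NOT_ACTIONS):
    """An operation is allowed if it matches some action and no notAction."""
    matched = any(fnmatch(operation, pattern) for pattern in actions)
    excluded = any(fnmatch(operation, pattern) for pattern in not_actions)
    return matched and not excluded

print(is_allowed("Microsoft.Authorization/roleAssignments/write"))   # True
print(is_allowed("Microsoft.Compute/virtualMachines/read"))          # True, via */read
print(is_allowed("Microsoft.Compute/virtualMachines/start/action"))  # False
```

The sketch shows why this role can read any resource type (`*/read`) and manage support tickets, but cannot perform management-plane writes other than role assignments.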
role-based-access-control | Elevate Access Global Admin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md | You should remove this elevated access once you have made the changes you need t Follow these steps to elevate access for a Global Administrator using the Azure portal. -1. Sign in to the [Azure portal](https://portal.azure.com) or the [Azure Active Directory admin center](https://aad.portal.azure.com) as a Global Administrator. +1. Sign in to the [Azure portal](https://portal.azure.com) as a Global Administrator. If you are using Azure AD Privileged Identity Management, [activate your Global Administrator role assignment](../active-directory/privileged-identity-management/pim-how-to-activate-role.md). |
role-based-access-control | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md | Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
role-based-access-control | Role Assignments Steps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md | When you assign a role at a parent scope, those permissions are inherited to the [!INCLUDE [Scope for Azure RBAC least privilege](../../includes/role-based-access-control/scope-least.md)] For more information, see [Understand scope](scope-overview.md). -## Step 4. Check your prerequisites +## Step 4: Check your prerequisites To assign roles, you must be signed in with a user that is assigned a role that has role assignments write permission, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) at the scope you are trying to assign the role. Similarly, to remove a role assignment, you must have the role assignments delete permission. If your user account doesn't have permission to assign a role within your subscr If you are using a service principal to assign roles, you might get the error "Insufficient privileges to complete the operation." This error is likely because Azure is attempting to look up the assignee identity in Azure Active Directory (Azure AD) and the service principal cannot read Azure AD by default. In this case, you need to grant the service principal permissions to read data in the directory. Alternatively, if you are using Azure CLI, you can create the role assignment by using the assignee object ID to skip the Azure AD lookup. For more information, see [Troubleshoot Azure RBAC](troubleshooting.md). -## Step 5. Assign role +## Step 5: Assign role Once you know the security principal, role, and scope, you can assign the role. You can assign roles using the Azure portal, Azure PowerShell, Azure CLI, Azure SDKs, or REST APIs. |
role-based-access-control | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
role-based-access-control | Tutorial Role Assignments Group Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-group-powershell.md | |
role-based-access-control | Tutorial Role Assignments User Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-user-powershell.md | |
route-server | Expressroute Vpn Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/expressroute-vpn-support.md | For example, in the following diagram: You can also replace the SDWAN appliance with Azure VPN gateway. Since Azure VPN and ExpressRoute gateways are fully managed, you only need to enable the route exchange for the two on-premises networks to talk to each other. +If you enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes dynamically over BGP. For more information, see [How to configure BGP for Azure VPN Gateway](../vpn-gateway/bgp-howto.md). If you don't enable BGP on the VPN gateway, the gateway learns *On-premises 1* routes that are defined in the local network gateway of *On-premises 1*. For more information, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). Whether you enable BGP on the VPN gateway or not, the gateway advertises the routes it learns to the Route Server if route exchange is enabled. For more information, see [Configure route exchange](quickstart-configure-route-server-portal.md#configure-route-exchange). + > [!IMPORTANT] -> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not necessary to have BGP enabled on the VPN gateway. +> Azure VPN gateway must be configured in [**active-active**](../vpn-gateway/vpn-gateway-activeactive-rm-powershell.md) mode and have the ASN set to 65515. It's not a requirement to have BGP enabled on the VPN gateway to communicate with the Route Server. -> [!IMPORTANT] +> [!NOTE] > When the same route is learned over ExpressRoute, Azure VPN or an SDWAN appliance, the ExpressRoute network will be preferred. ## Next steps |
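The quoted change requires the VPN gateway to run in active-active mode with ASN 65515 before it can exchange routes with a Route Server. A hedged sketch of creating such a gateway with Azure CLI (all resource names are placeholders; this assumes, per CLI behavior, that supplying two public IPs to `--public-ip-addresses` enables active-active mode):

```shell
# Sketch: create an active-active VPN gateway with the ASN the Route
# Server scenario requires. Two public IP addresses => active-active.
az network vnet-gateway create \
  --resource-group contoso-rg \
  --name contoso-vpngw \
  --vnet contoso-vnet \
  --gateway-type Vpn \
  --vpn-type RouteBased \
  --sku VpnGw2 \
  --asn 65515 \
  --public-ip-addresses vpngw-pip1 vpngw-pip2
```

As the quoted note says, enabling BGP on the gateway itself is optional; without it, routes come from the local network gateway definitions instead.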
route-server | Next Hop Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/next-hop-ip.md | -With the support for Next Hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and leverage load balancing to improve connectivity performance. +With the support for Next Hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and use load balancing to improve connectivity performance. :::image type="content" source="./media/next-hop-ip/route-server-next-hop.png" alt-text="Diagram of two NVAs behind a load balancer and a Route Server."::: |
route-server | Route Server Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/route-server-faq.md | No, Azure Route Server supports only 16-bit (2 bytes) ASNs. If the route has the same AS path length, Azure Route Server will program multiple copies of the route, each with a different next hop, to the virtual machines (VMs) in the virtual network. When a VM sends traffic to the destination of this route, the VM host uses Equal-Cost Multi-Path (ECMP) routing. However, if one NVA sends the route with a shorter AS path length than other NVAs, Azure Route Server will only program the route that has the next hop set to this NVA to the VMs in the virtual network. +### Does creating a Route Server affect the operation of existing virtual network gateways (VPN or ExpressRoute)? ++Yes. When you create or delete a Route Server in a virtual network that contains a virtual network gateway (ExpressRoute or VPN), expect downtime until the operation is complete. If you have an ExpressRoute circuit connected to the virtual network where you're creating or deleting the Route Server, the downtime doesn't affect the ExpressRoute circuit or its connections to other virtual networks. + ### Does Azure Route Server exchange routes by default between NVAs and the virtual network gateways (VPN or ExpressRoute)? No. By default, Azure Route Server doesn't propagate routes it receives from an NVA and a virtual network gateway to each other. The Route Server exchanges these routes after you enable **branch-to-branch** in it. You can still use Route Server to direct traffic between subnets in different vi No, Azure Route Server provides transit only between ExpressRoute and Site-to-Site (S2S) VPN gateway connections (when enabling the *branch-to-branch* setting). +### Can I create an Azure Route Server in a spoke VNet that's connected to a Virtual WAN hub? ++No. The spoke VNet can't have a Route Server if it's connected to the virtual WAN hub. 
+ ## Limitations ### How many Azure Route Servers can I create in a virtual network? No, Azure Route Server doesn't support configuring a user defined route (UDR) on No, Azure Route Server doesn't support network security group association to the ***RouteServerSubnet*** subnet. -### <a name = "limitations"></a>What are Azure Route Server limits? +### <a name = "limits"></a>What are Azure Route Server limits? Azure Route Server has the following limits (per deployment). |
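The FAQ changes above describe Route Server creation in a gateway virtual network and BGP peering with NVAs. A hedged sketch of both operations with Azure CLI (resource names, IPs, and the peer ASN are placeholders; `--hosted-subnet` must reference the dedicated *RouteServerSubnet*):

```shell
# Sketch: create a Route Server, then peer it with an NVA over BGP.
az network routeserver create \
  --resource-group contoso-rg \
  --name contoso-rs \
  --hosted-subnet "<RouteServerSubnet-resource-id>" \
  --public-ip-address contoso-rs-pip

# The NVA's BGP endpoint: its private IP and its (16-bit) ASN, per the
# FAQ note that only 2-byte ASNs are supported.
az network routeserver peering create \
  --resource-group contoso-rg \
  --routeserver contoso-rs \
  --name nva-peer \
  --peer-ip 10.0.1.4 \
  --peer-asn 65001
```

Per the new FAQ entry quoted above, expect downtime on an existing VPN or ExpressRoute gateway in the same virtual network while the create operation completes.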
sap | Configure Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md | Title: Configure control plane for automation framework -description: Configure your deployment control plane for the SAP on Azure Deployment Automation Framework. +description: Configure your deployment control plane for SAP Deployment Automation Framework. -The control plane for the [SAP on Azure Deployment Automation Framework](deployment-framework.md) consists of the following components: +The control plane for [SAP Deployment Automation Framework](deployment-framework.md) consists of the following components: + - Deployer - SAP library ## Deployer -The [deployer](deployment-framework.md#deployment-components) is the execution engine of the [SAP automation framework](deployment-framework.md). It's a pre-configured virtual machine (VM) that is used for executing Terraform and Ansible commands. When using Azure DevOps the deployer is a self-hosted agent. +The [deployer](deployment-framework.md#deployment-components) is the execution engine of [SAP Deployment Automation Framework](deployment-framework.md). It's a preconfigured virtual machine (VM) that's used for running Terraform and Ansible commands. When you use Azure DevOps, the deployer is a self-hosted agent. -The configuration of the deployer is performed in a Terraform tfvars variable file. +The configuration of the deployer is performed in a Terraform `tfvars` variable file. -## Terraform Parameters +## Terraform parameters -This table shows the Terraform parameters, these parameters need to be entered manually if not using the deployment scripts +This table shows the Terraform parameters. These parameters need to be entered manually if you aren't using the deployment scripts. 
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | | - |-> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP Library that contains the Terraform state files | Required | -+> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP library that contains the Terraform state files | Required | -### Environment Parameters +### Environment parameters This table shows the parameters that define the resource naming. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | - | - | - | - |-> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. | -> | `location` | The Azure region in which to deploy. | Required | Use lower case | -> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) | -> | 'place_delete_lock_on_resources | Place a delete lock on the key resources | Optional | -### Resource Group +> | `environment` | Identifier for the control plane (maximum of five characters). | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. | +> | `location` | Azure region in which to deploy. | Required | Use lowercase. | +> | `name_override_file` | Name override file. | Optional | See [Custom naming](naming-module.md). | +> | `place_delete_lock_on_resources` | Place a delete lock on the key resources. | Optional | ++### Resource group This table shows the parameters that define the resource group. This table shows the parameters that define the resource group. 
> | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional | > | `resourcegroup_tags` | Tags to be associated with the resource group | Optional | +### Network parameters -### Network Parameters +The automation framework supports both creating the virtual network and the subnets (green field) or using an existing virtual network and existing subnets (brown field) or a combination of green field and brown field: -The automation framework supports both creating the virtual network and the subnets (green field) or using an existing virtual network and existing subnets (brown field) or a combination of green field and brown field. + - **Green-field scenario**: The virtual network address space and the subnet address prefixes must be specified. + - **Brown-field scenario**: The Azure resource identifier for the virtual network and the subnets must be specified. The recommended CIDR of the virtual network address space is /27, which allows space for 32 IP addresses. A CIDR value of /28 only allows 16 IP addresses. If you want to include Azure Firewall, use a CIDR value of /25, because Azure Firewall requires a range of /26. -The recommended CIDR value for the management subnet is /28 that allows 16 IP addresses. -The recommended CIDR value for the firewall subnet is /26 that allows 64 IP addresses. +The recommended CIDR value for the management subnet is /28, which allows 16 IP addresses. +The recommended CIDR value for the firewall subnet is /26, which allows 64 IP addresses. This table shows the networking parameters. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | - | - | |-> | `management_network_name` | The name of the VNet into which the deployer will be deployed | Optional | For green field deployments. 
| +> | `management_network_name` | The name of the virtual network into which the deployer will be deployed | Optional | For green-field deployments | > | `management_network_logical_name` | The logical name of the network (DEV-WEEU-MGMT01-INFRASTRUCTURE) | Required | |-> | `management_network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown field deployments. | -> | `management_network_address_space` | The address range for the virtual network | Mandatory | For green field deployments. | +> | `management_network_arm_id` | The Azure resource identifier for the virtual network | Optional | For brown-field deployments | +> | `management_network_address_space` | The address range for the virtual network | Mandatory | For green-field deployments | > | | | | | > | `management_subnet_name` | The name of the subnet | Optional | |-> | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. | -> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown field deployments. | -> | `management_subnet_nsg_name` | The name of the Network Security Group name | Optional | | -> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the Network Security Group | Mandatory | Mandatory For brown field deployments. 
| +> | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments | +> | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown-field deployments | +> | `management_subnet_nsg_name` | The name of the network security group | Optional | | +> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the network security group | Mandatory | For brown-field deployments | > | `management_subnet_nsg_allowed_ips` | Range of allowed IP addresses to add to Azure Firewall | Optional | | > | | | | |-> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Firewall subnet | Mandatory | For brown field deployments. | -> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. | +> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Azure Firewall subnet | Mandatory | For brown-field deployments | +> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments | > | | | | |-> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Bastion subnet | Mandatory | For brown field deployments. | -> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments. 
| +> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Azure Bastion subnet | Mandatory | For brown-field deployments | +> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments | > | | | | |-> | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown field deployments using the web app | -> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green field deployments using the web app | +> | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown-field deployments by using the web app | +> | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments by using the web app | > [!NOTE]-> When using an existing subnet for the web app, the subnet must be empty, in the same region as the resource group being deployed, and delegated to Microsoft.Web/serverFarms -+> When you use an existing subnet for the web app, the subnet must be empty, in the same region as the resource group being deployed, and delegated to Microsoft.Web/serverFarms. -### Deployer Virtual Machine Parameters +### Deployer virtual machine parameters -This table shows the parameters related to the deployer virtual machine. +This table shows the parameters related to the deployer VM. 
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -- | - |-> | `deployer_size` | Defines the Virtual machine SKU to use, for example Standard_D4s_v3 | Optional | -> | `deployer_count` | Defines the number of Deployers | Optional | -> | `deployer_image` | Defines the Virtual machine image to use, see below | Optional | -> | `plan` | Defines the plan associated to the Virtual machine image, see below | Optional | -> | `deployer_disk_type` | Defines the disk type, for example Premium_LRS | Optional | -> | `deployer_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) true | Optional | -> | `deployer_private_ip_address` | Defines the Private IP address to use | Optional | +> | `deployer_size` | Defines the VM SKU to use, for example, Standard_D4s_v3 | Optional | +> | `deployer_count` | Defines the number of deployers | Optional | +> | `deployer_image` | Defines the VM image to use | Optional | +> | `plan` | Defines the plan associated to the VM image | Optional | +> | `deployer_disk_type` | Defines the disk type, for example, Premium_LRS | Optional | +> | `deployer_use_DHCP` | Controls if the Azure subnet-provided IP addresses should be used (dynamic) true | Optional | +> | `deployer_private_ip_address` | Defines the private IP address to use | Optional | > | `deployer_enable_public_ip` | Defines if the deployer has a public IP | Optional |-> | `auto_configure_deployer` | Defines deployer will be configured with the required software (Terraform and Ansible) | Optional | -> | `add_system_assigned_identity` | Defines deployer will be assigned a system identity | Optional | +> | `auto_configure_deployer` | Defines if the deployer is configured with the required software (Terraform and Ansible) | Optional | +> | `add_system_assigned_identity` | Defines if the deployer is assigned a system identity | Optional | +The VM image is defined by using the following structure: -The Virtual Machine image is defined using 
the following structure: ```python { "os_type" = "" The Virtual Machine image is defined using the following structure: ``` > [!NOTE]-> type can be marketplace/marketplace_with_plan/custom -> Note that using a image of type 'marketplace_with_plan' will require that the image in question has been used at least once in the subscription. This is because the first usage prompts the user to accept the License terms and the automation has no mean to approve it. ----### Authentication Parameters +> The type can be `marketplace/marketplace_with_plan/custom`. +> Using an image of type `marketplace_with_plan` requires that the image in question was used at least once in the subscription. The first usage prompts the user to accept the license terms and the automation has no means to approve it. -The table below defines the parameters used for defining the Virtual Machine authentication +### Authentication parameters +This section defines the parameters used for defining the VM authentication. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | | | |-> | `deployer_vm_authentication_type` | Defines the default authentication for the Deployer | Optional | +> | `deployer_vm_authentication_type` | Defines the default authentication for the deployer | Optional | > | `deployer_authentication_username` | Administrator account name | Optional | > | `deployer_authentication_password` | Administrator password | Optional | > | `deployer_authentication_path_to_public_key` | Path to the public key used for authentication | Optional | > | `deployer_authentication_path_to_private_key` | Path to the private key used for authentication | Optional | -### Key Vault Parameters +### Key vault parameters -The table below defines the parameters used for defining the Key Vault information +This section defines the parameters used for defining the Azure Key Vault information. 
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | | | - |-> | `user_keyvault_id` | Azure resource identifier for the user key vault | Optional | -> | `spn_keyvault_id` | Azure resource identifier for the key vault containing the deployment credentials | Optional | -> | `deployer_private_key_secret_name` | The Azure Key Vault secret name for the deployer private key | Optional | -> | `deployer_public_key_secret_name` | The Azure Key Vault secret name for the deployer public key | Optional | -> | `deployer_username_secret_name` | The Azure Key Vault secret name for the deployer username | Optional | -> | `deployer_password_secret_name` | The Azure Key Vault secret name for the deployer password | Optional | -> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment KeyVault access policies | Optional | -> | `set_secret_expiry` | Set expiry of 12 months for key vault secrets | Optional | +> | `user_keyvault_id` | Azure resource identifier for the user key vault. | Optional | +> | `spn_keyvault_id` | Azure resource identifier for the key vault that contains the deployment credentials. | Optional | +> | `deployer_private_key_secret_name` | The key vault secret name for the deployer private key. | Optional | +> | `deployer_public_key_secret_name` | The key vault secret name for the deployer public key. | Optional | +> | `deployer_username_secret_name` | The key vault secret name for the deployer username. | Optional | +> | `deployer_password_secret_name` | The key vault secret name for the deployer password. | Optional | +> | `additional_users_to_add_to_keyvault_policies` | A list of user object IDs to add to the deployment key vault access policies. | Optional | +> | `set_secret_expiry` | Set expiry of 12 months for key vault secrets. 
| Optional | -### DNS Support +### DNS support > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | -- |-> | `dns_label` | DNS name of the private DNS zone | Optional | -> | `use_custom_dns_a_registration` | Uses an external system for DNS, set to false for Azure native | Optional | -> | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional | -> | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional | -+> | `dns_label` | DNS name of the Private DNS zone. | Optional | +> | `use_custom_dns_a_registration` | Uses an external system for DNS, set to false for Azure native. | Optional | +> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the Private DNS zone. | Optional | +> | `management_dns_resourcegroup_name` | Resource group that contains the Private DNS zone. | Optional | ### Other parameters > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | - | -- | -- |-> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | | -> | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | | -> | `bastion_sku` | SKU for Azure Bastion host to be deployed (Basic/Standard) | Optional | | -> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments | -> | `use_private_endpoint` | Use private endpoints | Optional | -> | `use_service_endpoint` | Use service endpoints for subnets | Optional | -> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets | Optional | +> | `firewall_deployment` | Boolean flag that controls if an Azure firewall is to be deployed. 
| Optional | | +> | `bastion_deployment` | Boolean flag that controls if Azure Bastion host is to be deployed. | Optional | | +> | `bastion_sku` | SKU for Azure Bastion host to be deployed (Basic/Standard). | Optional | | +> | `enable_purge_control_for_keyvaults` | Boolean flag that controls if purge control is enabled on the key vault. | Optional | Use only for test deployments. | +> | `use_private_endpoint` | Use private endpoints. | Optional | +> | `use_service_endpoint` | Use service endpoints for subnets. | Optional | +> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional | ### Example parameters file for deployer (required parameters only) firewall_deployment=true bastion_deployment=true ``` +## SAP library -## SAP Library --The [SAP Library](deployment-framework.md#deployment-components) provides the persistent storage of the Terraform state files and the downloaded SAP installation media for the control plane. +The [SAP library](deployment-framework.md#deployment-components) provides the persistent storage of the Terraform state files and the downloaded SAP installation media for the control plane. -The configuration of the SAP Library is performed in a Terraform tfvars variable file. +The configuration of the SAP library is performed in a Terraform `tfvars` variable file. -### Terraform Parameters +### Terraform parameters -This table shows the Terraform parameters, these parameters need to be entered manually when not using the deployment scripts +This table shows the Terraform parameters. These parameters need to be entered manually if you aren't using the deployment scripts or Azure Pipelines. 
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | - | - | -- |-> | `deployer_tfstate_key` | The state file name for the deployer | Required | +> | `deployer_tfstate_key` | State file name for the deployer | Required | -### Environment Parameters +### Environment parameters This table shows the parameters that define the resource naming. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | - | - | - |-> | `environment` | Identifier for the control plane (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. | -> | `location` | The Azure region in which to deploy. | Required | Use lower case | -> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) | +> | `environment` | Identifier for the control plane (maximum of five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. | +> | `location` | Azure region in which to deploy | Required | Use lowercase. | +> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). | -### Resource Group +### Resource group This table shows the parameters that define the resource group. This table shows the parameters that define the resource group. > | `resource_group_arm_id` | Azure resource identifier for an existing resource group | Optional | > | `resourcegroup_tags` | Tags to be associated with the resource group | Optional | --### SAP Installation media storage account +### SAP installation media storage account > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | This table shows the parameters that define the resource group. 
> | -- | -- | - | > | `library_terraform_state_arm_id` | Azure resource identifier | Optional | -### DNS Support -+### DNS support > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | -- |-> | `dns_label` | DNS name of the private DNS zone | Optional | -> | `use_custom_dns_a_registration` | Use an existing Private DNS zone | Optional | -> | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional | -> | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional | -+> | `dns_label` | DNS name of the Private DNS zone. | Optional | +> | `use_custom_dns_a_registration` | Use an existing Private DNS zone. | Optional | +> | `management_dns_subscription_id` | Subscription ID for the subscription that contains the Private DNS zone. | Optional | +> | `management_dns_resourcegroup_name` | Resource group that contains the Private DNS zone. | Optional | ### Extra parameters - > [!div class="mx-tdCol2BreakAll "]-> | Variable | Description | Type | -> | -- | -- | -- | -> | `use_private_endpoint` | Use private endpoints | Optional | -> | `use_service_endpoint` | Use service endpoints for subnets | Optional | -> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets | Optional | +> | Variable | Description | Type | +> | | | -- | +> | `use_private_endpoint` | Use private endpoints. | Optional | +> | `use_service_endpoint` | Use service endpoints for subnets. | Optional | +> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional | +> | `subnets_to_add_to_firewall_for_keyvaults_and_storage` | Subnets that need access to key vaults and storage accounts. 
| Optional | -### Example parameters file for sap library (required parameters only) +### Example parameters file for the SAP library (required parameters only) ```terraform # The environment value is a mandatory field, it is used for partitioning the environments, for example (PROD and NP) location = "westeurope" ``` --## Next steps +## Next step > [!div class="nextstepaction"] > [Configure SAP system](configure-system.md) |
sap | Configure Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-devops.md | Title: Configure Azure DevOps Services for SAP on Azure Deployment Automation Framework -description: Configure your Azure DevOps Services for the SAP on Azure Deployment Automation Framework. + Title: Configure Azure DevOps Services for SAP Deployment Automation Framework +description: Configure your Azure DevOps Services for SAP Deployment Automation Framework. -# Use SAP on Azure Deployment Automation Framework from Azure DevOps Services +# Use SAP Deployment Automation Framework from Azure DevOps Services -Using Azure DevOps will streamline the deployment process by providing pipelines that can be executed to perform both the infrastructure deployment and the configuration and SAP installation activities. -You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application. +Azure DevOps streamlines the deployment process by providing pipelines that you can run to perform the infrastructure deployment and the configuration and SAP installation activities. ++You can use Azure Repos to store your configuration files and use Azure Pipelines to deploy and configure the infrastructure and the SAP application. ## Sign up for Azure DevOps Services -To use Azure DevOps Services, you'll need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account. +To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory. 
To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either sign in or create a new account. -## Configure Azure DevOps Services for the SAP on Azure Deployment Automation Framework +## Configure Azure DevOps Services for SAP Deployment Automation Framework -You can use the following script to do a basic installation of Azure Devops Services for the SAP on Azure Deployment Automation Framework. +You can use the following script to do a basic installation of Azure DevOps Services for SAP Deployment Automation Framework. Open PowerShell ISE and copy the following script and update the parameters to match your environment. Open PowerShell ISE and copy the following script and update the parameters to m ``` -Run the script and follow the instructions. The script will open browser windows for authentication and for performing tasks in the Azure DevOps project. +Run the script and follow the instructions. The script opens browser windows for authentication and for performing tasks in the Azure DevOps project. You can choose to either run the code directly from GitHub or you can import a copy of the code into your Azure DevOps project. --Validate that the project has been created by navigating to the Azure DevOps portal and selecting the project. Ensure that the Repo is populated and that the pipelines have been created. +To confirm that the project was created, go to the Azure DevOps portal and select the project. Ensure that the repo was populated and that the pipelines were created. > [!IMPORTANT]-> Run the following steps on your local workstation, also ensure that you have the latest Azure CLI installed by running the 'az upgrade' command. +> Run the following steps on your local workstation. Also ensure that you have the latest Azure CLI installed by running the `az upgrade` command. -### Configure Azure DevOps Services artifacts for a new Workload zone. 
+### Configure Azure DevOps Services artifacts for a new workload zone -You can use the following script to deploy the artifacts needed to support a new workload zone. This will create the Variable group and the Service Connection in Azure DevOps as well as optionally the deployment service principal. +Use the following script to deploy the artifacts that are needed to support a new workload zone. This process creates the variable group and the service connection in Azure DevOps and, optionally, the deployment service principal. Open PowerShell ISE and copy the following script and update the parameters to match your environment. Open PowerShell ISE and copy the following script and update the parameters to m ``` +### Create a sample control plane configuration -### Create a sample Control Plane configuration +You can run the `Create Sample Deployer Configuration` pipeline to create a sample configuration for the control plane. When it's running, choose the appropriate Azure region. You can also control if you want to deploy Azure Firewall and Azure Bastion. -You can run the 'Create Sample Deployer Configuration' pipeline to create a sample configuration for the Control Plane. When running choose the appropriate Azure region. You can also control if you want to deploy Azure Firewall and Azure Bastion. +## Manual configuration of Azure DevOps Services for SAP Deployment Automation Framework -## Manual configuration of Azure DevOps Services for the SAP on Azure Deployment Automation Framework +You can manually configure Azure DevOps Services for SAP Deployment Automation Framework. ### Create a new project -You can use Azure Repos to store both the code from the sap-automation GitHub repository and the environment configuration files. +You can use Azure Repos to store the code from the sap-automation GitHub repository and the environment configuration files. 
-Open (https://dev.azure.com) and create a new project by clicking on the _New Project_ button and enter the project details. The project will contain both the Azure Repos source control repository and Azure Pipelines for performing deployment activities. +Open [Azure DevOps](https://dev.azure.com) and create a new project by selecting **New Project** and entering the project details. The project contains the Azure Repos source control repository and Azure Pipelines for performing deployment activities. -> [!NOTE] -> If you are unable to see _New Project_ ensure that you have permissions to create new projects in the organization. +If you don't see **New Project**, ensure that you have permissions to create new projects in the organization. Record the URL of the project.+ ### Import the repository -Start by importing the SAP on Azure Deployment Automation Framework GitHub repository into Azure Repos. +Start by importing the SAP Deployment Automation Framework GitHub repository into Azure Repos. -Navigate to the Repositories section and choose Import a repository, import the 'https://github.com/Azure/sap-automation-bootstrap.git' repository into Azure DevOps. For more info, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true) +Go to the **Repositories** section and select **Import a repository**. Import the `https://github.com/Azure/sap-automation-bootstrap.git` repository into Azure DevOps. For more information, see [Import a repository](/azure/devops/repos/git/import-git-repository?view=azure-devops&preserve-view=true). -If you're unable to import a repository, you can create the repository manually, and then import the content from the SAP on Azure Deployment Automation Framework GitHub repository to it. +If you're unable to import a repository, you can create the repository manually. Then you can import the content from the SAP Deployment Automation Framework GitHub repository to it. 
### Create the repository for manual import -> [!NOTE] -> Only do this step if you are unable to import the repository directly. +Only do this step if you're unable to import the repository directly. ++To create the **workspaces** repository, in the **Repos** section, under **Project settings**, select **Create**. -Create the 'workspaces' repository by navigating to the 'Repositories' section in 'Project Settings' and clicking the _Create_ button. +Choose the repository type **Git** and provide a name for the repository. For example, use **SAP Configuration Repository**. -Choose the repository type 'Git' and provide a name for the repository, for example 'SAP Configuration Repository'. -### Cloning the repository +### Clone the repository -In order to provide a more comprehensive editing capability of the content, you can clone the repository to a local folder and edit the contents locally. -Clone the repository to a local folder by clicking the _Clone_ button in the Files view in the Repos section of the portal. For more info, see [Cloning a repository](/azure/devops/repos/git/clone?view=azure-devops#clone-an-azure-repos-git-repo&preserve-view=true) +To provide a more comprehensive editing capability of the content, you can clone the repository to a local folder and edit the contents locally. +To clone the repository to a local folder, in the **Repos** section of the portal, under **Files**, select **Clone**. For more information, see [Clone a repository](/azure/devops/repos/git/clone?view=azure-devops#clone-an-azure-repos-git-repo&preserve-view=true). -### Manually importing the repository content using a local clone -You can also download the content from the SAP on Azure Deployment Automation Framework repository manually and add it to your local clone of the Azure DevOps repository. 
+### Manually import the repository content by using a local clone -Navigate to 'https://github.com/Azure/SAP-automation-samples' repository and download the repository content as a ZIP file by clicking the _Code_ button and choosing _Download ZIP_. +You can also manually download the content from the SAP Deployment Automation Framework repository and add it to your local clone of the Azure DevOps repository. -Copy the content from the zip file to the root folder of your local clone. +Go to the `https://github.com/Azure/SAP-automation-samples` repository and download the repository content as a .zip file. Select **Code** and choose **Download ZIP**. -Open the local folder in Visual Studio code, you should see that there are changes that need to be synchronized by the indicator by the source control icon as is shown in the picture below. +Copy the content from the .zip file to the root folder of your local clone. +Open the local folder in Visual Studio Code. The indicator on the source control icon, shown here, shows that changes need to be synchronized. -Select the source control icon and provide a message about the change, for example: "Import from GitHub" and press Cntr-Enter to commit the changes. Next select the _Sync Changes_ button to synchronize the changes back to the repository. -### Choosing the source for the Terraform and Ansible code +Select the source control icon and provide a message about the change. For example, enter **Import from GitHub** and select Ctrl+Enter to commit the changes. Next, select **Sync Changes** to synchronize the changes back to the repository. -You can either run the SDAF code directly from GitHub or you can import it locally. -#### Running the code from a local repository +### Choose the source for the Terraform and Ansible code 
+You can either run the SAP Deployment Automation Framework code directly from GitHub or you can import it locally. -Name of code repository: 'sap-automation', source: 'https://github.com/Azure/sap-automation.git' +#### Run the code from a local repository -Name of sample and template repository: 'sap-samples', source: 'https://github.com/Azure/sap-automation-samples.git' +If you want to run the SAP Deployment Automation Framework code from the local Azure DevOps project, you need to create a separate code repository and a configuration repository in the Azure DevOps project: -#### Running the code directly from GitHub -If you want to run the code directly from GitHub you need to provide credentials for Azure DevOps to be able to pull the content from GitHub. -#### Creating the GitHub Service connection +- **Name of code repository**: `sap-automation`. Source is `https://github.com/Azure/sap-automation.git`. +- **Name of sample and template repository**: `sap-samples`. Source is `https://github.com/Azure/sap-automation-samples.git`. -To pull the code from GitHub, you need a GitHub service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true) +#### Run the code directly from GitHub -To create the service connection, go to Project settings and navigate to the Service connections setting in the Pipelines section. +If you want to run the code directly from GitHub, you need to provide credentials for Azure DevOps to be able to pull the content from GitHub. +#### Create the GitHub service connection -Choose _GitHub_ as the service connection type. Choose 'Azure Pipelines' in the OAuth Configuration drop-down. +To pull the code from GitHub, you need a GitHub service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true). -Click 'Authorize' to log on to GitHub. 
- -Enter a Service connection name, for instance 'SDAF Connection to GitHub' and ensure that the _Grant access permission to all pipelines_ checkbox is checked. Select _Save_ to save the service connection. +To create the service connection, go to **Project Settings** and under the **Pipelines** section, go to **Service connections**. +Select **GitHub** as the service connection type. Select **Azure Pipelines** in the **OAuth Configuration** dropdown. ++Select **Authorize** to sign in to GitHub. ++Enter a service connection name, for instance, **SDAF Connection to GitHub**. Ensure that the **Grant access permission to all pipelines** checkbox is selected. Select **Save** to save the service connection. ## Set up the web app -The automation framework optionally provisions a web app as a part of the control plane to assist with the SAP workload zone and system configuration files. If you would like to use the web app, you must first create an app registration for authentication purposes. Open the Azure Cloud Shell and execute the following commands: +The automation framework optionally provisions a web app as a part of the control plane to assist with the SAP workload zone and system configuration files. If you want to use the web app, you must first create an app registration for authentication purposes. Open Azure Cloud Shell and run the following commands. # [Linux](#tab/linux)-Replace MGMT with your environment as necessary. ++Replace `MGMT` with your environment, as necessary. + ```bash echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json az ad app credential reset --id $TF_VAR_app_registration_app_id --append --query rm manifest.json ```+ # [Windows](#tab/windows)-Replace MGMT with your environment as necessary. ++Replace `MGMT` with your environment, as necessary. 
+ ```powershell Add-Content -Path manifest.json -Value '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' del manifest.json Save the app registration ID and password values for later use. +## Create Azure pipelines -## Create Azure Pipelines +Azure pipelines are implemented as YAML files. They're stored in the *deploy/pipelines* folder in the repository. -Azure Pipelines are implemented as YAML files and they're stored in the 'deploy/pipelines' folder in the repository. ## Control plane deployment pipeline -Create the control plane deployment pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the control plane deployment pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | Create the control plane deployment pipeline by choosing _New Pipeline_ from the | Path | `deploy/pipelines/01-deploy-control-plane.yml` | | Name | Control plane deployment | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Control plane deployment' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Control plane deployment**. 
## SAP workload zone deployment pipeline -Create the SAP workload zone pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the SAP workload zone pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | Create the SAP workload zone pipeline by choosing _New Pipeline_ from the Pipeli | Path | `deploy/pipelines/02-sap-workload-zone.yml` | | Name | SAP workload zone deployment | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP workload zone deployment' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP workload zone deployment**. ## SAP system deployment pipeline -Create the SAP system deployment pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the SAP system deployment pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. 
Specify the pipeline with the following settings: | Setting | Value | | - | | Create the SAP system deployment pipeline by choosing _New Pipeline_ from the Pi | Path | `deploy/pipelines/03-sap-system-deployment.yml` | | Name | SAP system deployment (infrastructure) | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP system deployment (infrastructure)' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP system deployment (infrastructure)**. ## SAP software acquisition pipeline -Create the SAP software acquisition pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the SAP software acquisition pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | | Create the SAP software acquisition pipeline by choosing _New Pipeline_ from the | Path | `deploy/pipelines/04-sap-software-download.yml` | | Name | SAP software acquisition | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP software acquisition' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. 
Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP software acquisition**. ## SAP configuration and software installation pipeline -Create the SAP configuration and software installation pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the SAP configuration and software installation pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | Create the SAP configuration and software installation pipeline by choosing _New | Path | `deploy/pipelines/05-DB-and-SAP-installation.yml` | | Name | Configuration and SAP installation | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'SAP configuration and software installation' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **SAP configuration and software installation**. ## Deployment removal pipeline -Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the deployment removal pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. 
Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | Create the deployment removal pipeline by choosing _New Pipeline_ from the Pipel | Path | `deploy/pipelines/10-remover-terraform.yml` | | Name | Deployment removal | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Deployment removal**. ## Control plane removal pipeline -Create the control plane deployment removal pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the control plane deployment removal pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | Create the control plane deployment removal pipeline by choosing _New Pipeline_ | Path | `deploy/pipelines/12-remove-control-plane.yml` | | Name | Control plane removal | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Control plane removal' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. 
Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Control plane removal**. -## Deployment removal pipeline using Azure Resource Manager +## Deployment removal pipeline by using Azure Resource Manager -Create the deployment removal ARM pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the deployment removal Azure Resource Manager pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | | Branch | main | | Path | `deploy/pipelines/11-remover-arm-fallback.yml` |-| Name | Deployment removal using ARM | +| Name | Deployment removal using ARM processor | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Deployment removal using ARM processor' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Deployment removal using ARM processor**. > [!NOTE]-> Only use this pipeline as last resort, removing just the resource groups will leave remnants that may complicate re-deployments. +> Only use this pipeline as a last resort. Removing just the resource groups leaves remnants that might complicate redeployments. 
## Repository updater pipeline -Create the Repository updater pipeline by choosing _New Pipeline_ from the Pipelines section, select 'Azure Repos Git' as the source for your code. Configure your Pipeline to use an existing Azure Pipelines YAML File. Specify the pipeline with the following settings: +Create the repository updater pipeline. Under the **Pipelines** section, select **New Pipeline**. Select **Azure Repos Git** as the source for your code. Configure your pipeline to use an existing Azure Pipelines YAML file. Specify the pipeline with the following settings: | Setting | Value | | - | -- | Create the Repository updater pipeline by choosing _New Pipeline_ from the Pipel | Path | `deploy/pipelines/20-update-ado-repository.yml` | | Name | Repository updater | -Save the Pipeline, to see the Save option select the chevron next to the Run button. Navigate to the Pipelines section and select the pipeline. Rename the pipeline to 'Repository updater' by choosing 'Rename/Move' from the three-dot menu on the right. +Save the pipeline. To see **Save**, select the chevron next to **Run**. Go to the **Pipelines** section and select the pipeline. Choose **Rename/Move** from the ellipsis menu on the right and rename the pipeline as **Repository updater**. This pipeline should be used when there's an update in the sap-automation repository that you want to use. -## Import Ansible task from Visual Studio Marketplace +## Import the Ansible task from Visual Studio Marketplace -The pipelines use a custom task to run Ansible. The custom task can be installed from [Ansible](https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.vss-services-ansible). Install it to your Azure DevOps organization before running the _Configuration and SAP installation_ or _SAP software acquisition_ pipelines. +The pipelines use a custom task to run Ansible. You can install the custom task from [Ansible](https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.vss-services-ansible). 
Install it to your Azure DevOps organization before you run the **Configuration and SAP installation** or **SAP software acquisition** pipelines. -## Import Cleanup task from Visual Studio Marketplace +## Import the cleanup task from Visual Studio Marketplace -The pipelines use a custom task to perform cleanup activities post deployment. The custom task can be installed from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before running the pipelines. +The pipelines use a custom task to perform cleanup activities post deployment. You can install the custom task from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before you run the pipelines. +## Preparations for a self-hosted agent -## Preparations for self-hosted agent +1. Create an agent pool by going to **Organizational Settings**. Under the **Pipelines** section, select **Agent Pools** > **Add Pool**. Select **Self-hosted** as the pool type. Name the pool to align with the control plane environment. For example, use `MGMT-WEEU-POOL`. Ensure that **Grant access permission to all pipelines** is selected and select **Create** to create the pool. +1. Sign in with the user account you plan to use in your [Azure DevOps](https://dev.azure.com) organization. -1. Create an Agent Pool by navigating to the Organizational Settings and selecting _Agent Pools_ from the Pipelines section. Click the _Add Pool_ button and choose Self-hosted as the pool type. Name the pool to align with the control plane environment, for example `MGMT-WEEU-POOL`. Ensure _Grant access permission to all pipelines_ is selected and create the pool using the _Create_ button. +1. From your home page, open your user settings and select **Personal access tokens**. -1. 
Sign in with the user account you plan to use in your Azure DevOps organization (https://dev.azure.com). + :::image type="content" source="./media/devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram that shows the creation of a personal access token."::: -1. From your home page, open your user settings, and then select _Personal access tokens_. +1. Create a personal access token with these settings: - :::image type="content" source="./media/devops/automation-select-personal-access-tokens.jpg" alt-text="Diagram showing the creation of the Personal Access Token (PAT)."::: + - **Agent Pools**: Select **Read & manage**. + - **Build**: Select **Read & execute**. + - **Code**: Select **Read & write**. + - **Variable Groups**: Select **Read, create, & manage**. -1. Create a personal access token. Ensure that _Read & manage_ is selected for _Agent Pools_, _Read & write_ is selected for _Code_, _Read & execute_ is selected for _Build_, and _Read, create, & manage_ is selected for _Variable Groups_. Write down the created token value. + Write down the created token value. - :::image type="content" source="./media/devops/automation-new-pat.png" alt-text="Diagram showing the attributes of the Personal Access Token (PAT)."::: + :::image type="content" source="./media/devops/automation-new-pat.png" alt-text="Diagram that shows the attributes of the personal access token."::: ## Variable definitions -The deployment pipelines are configured to use a set of predefined parameter values defined using variable groups. -+The deployment pipelines are configured to use a set of predefined parameter values defined by using variable groups. ### Common variables -There's a set of common variables that are used by all the deployment pipelines. These variables are stored in a variable group called 'SDAF-General'. +Common variables are used by all the deployment pipelines. They're stored in a variable group called `SDAF-General`. 
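The agent pool and personal access token prepared in the preceding steps come together when you register a self-hosted agent with `config.sh` from the downloaded agent package. The following is a hedged sketch, not the framework's own tooling: the organization URL, pool name, and token are placeholders, and the script only prints the registration command (the real `config.sh` must be run from the agent's installation folder):

```bash
# Hypothetical sketch: show the registration command for a self-hosted agent.
# AZP_URL, AZP_POOL, and AZP_TOKEN are placeholders for your own values.
AZP_URL="https://dev.azure.com/<your-organization>"
AZP_POOL="MGMT-WEEU-POOL"           # the pool created in step 1
AZP_TOKEN="<personal access token>" # the PAT recorded earlier

registration_command() {
  # --unattended suppresses interactive prompts; --auth pat uses the token above
  echo ./config.sh --unattended --url "$AZP_URL" --auth pat \
    --token "$AZP_TOKEN" --pool "$AZP_POOL" --agent "$(hostname)"
}
registration_command
```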
-Create a new variable group 'SDAF-General' using the Library page in the Pipelines section. Add the following variables: +Create a new variable group named `SDAF-General` by using the **Library** page in the **Pipelines** section. Add the following variables: | Variable | Value | Notes | | - | | - |-| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration use 'samples/WORKSPACES' instead of WORKSPACES. | +| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration, use `samples/WORKSPACES` instead of WORKSPACES. | | Branch | main | | | S-Username | `<SAP Support user account name>` | |-| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon. | -| `tf_version` | 1.3.0 | The Terraform version to use, see [Terraform download](https://www.terraform.io/downloads) | +| S-Password | `<SAP Support user password>` | Change the variable type to secret by selecting the lock icon. | +| `tf_version` | 1.3.0 | The Terraform version to use. See [Terraform download](https://www.terraform.io/downloads). | Save the variables. -Or alternatively you can use the Azure DevOps CLI to set up the groups. +Alternatively, you can use the Azure DevOps CLI to set up the groups. ```bash s-user="<SAP Support user account name>" az pipelines variable-group create --name SDAF-General --variables ANSIBLE_HOST_ ``` -> [!NOTE] -> Remember to assign permissions for all pipelines using _Pipeline permissions_. +Remember to assign permissions for all pipelines by using **Pipeline permissions**. -### Environment specific variables +### Environment-specific variables -As each environment may have different deployment credentials you'll need to create a variable group per environment, for example 'SDAF-MGMT','SDAF-DEV', 'SDAF-QA'. +Because each environment might have different deployment credentials, you need to create a variable group per environment. For example, use `SDAF-MGMT`, `SDAF-DEV`, and `SDAF-QA`. 
-Create a new variable group 'SDAF-MGMT' for the control plane environment using the Library page in the Pipelines section. Add the following variables: +Create a new variable group named `SDAF-MGMT` for the control plane environment by using the **Library** page in the **Pipelines** section. Add the following variables: | Variable | Value | Notes | | - | | -- |-| Agent | 'Azure Pipelines' or the name of the agent pool | Note, this pool will be created in a later step. | -| CP_ARM_CLIENT_ID | 'Service principal application ID'. | | -| CP_ARM_OBJECT_ID | 'Service principal object ID'. | | -| CP_ARM_CLIENT_SECRET | 'Service principal password'. | Change variable type to secret by clicking the lock icon | -| CP_ARM_SUBSCRIPTION_ID | 'Target subscription ID'. | | -| CP_ARM_TENANT_ID | 'Tenant ID' for the service principal. | | -| AZURE_CONNECTION_NAME | Previously created connection name. | | -| sap_fqdn | SAP Fully Qualified Domain Name, for example 'sap.contoso.net'. | Only needed if Private DNS isn't used. | -| FENCING_SPN_ID | 'Service principal application ID' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. | -| FENCING_SPN_PWD | 'Service principal password' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. | -| FENCING_SPN_TENANT | 'Service principal tenant ID' for the fencing agent. | Required for highly available deployments using a service principal for fencing agent. | -| PAT | `<Personal Access Token>` | Use the Personal Token defined in the previous step | -| POOL | `<Agent Pool name>` | The Agent pool to use for this environment | +| Agent | `Azure Pipelines` or the name of the agent pool | This pool is created in a later step. 
| +| CP_ARM_CLIENT_ID | `Service principal application ID` | | +| CP_ARM_OBJECT_ID | `Service principal object ID` | | +| CP_ARM_CLIENT_SECRET | `Service principal password` | Change the variable type to secret by selecting the lock icon. | +| CP_ARM_SUBSCRIPTION_ID | `Target subscription ID` | | +| CP_ARM_TENANT_ID | `Tenant ID` for the service principal | | +| AZURE_CONNECTION_NAME | Previously created connection name | | +| sap_fqdn | SAP fully qualified domain name, for example, `sap.contoso.net` | Only needed if Private DNS isn't used. | +| FENCING_SPN_ID | `Service principal application ID` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. | +| FENCING_SPN_PWD | `Service principal password` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. | +| FENCING_SPN_TENANT | `Service principal tenant ID` for the fencing agent | Required for highly available deployments that use a service principal for the fencing agent. | +| PAT | `<Personal Access Token>` | Use the personal token defined in the previous step. | +| POOL | `<Agent Pool name>` | The agent pool to use for this environment. | | | | |-| APP_REGISTRATION_APP_ID | 'App registration application ID' | Required if deploying the web app | -| WEB_APP_CLIENT_SECRET | 'App registration password' | Required if deploying the web app | +| APP_REGISTRATION_APP_ID | `App registration application ID` | Required if deploying the web app. | +| WEB_APP_CLIENT_SECRET | `App registration password` | Required if deploying the web app. | | | | |-| SDAF_GENERAL_GROUP_ID | The group ID for the SDAF-General group | The ID can be retrieved from the URL parameter 'variableGroupId' when accessing the variable group using a browser. 
For example: 'variableGroupId=8 | -| WORKLOADZONE_PIPELINE_ID | The ID for the 'SAP workload zone deployment' pipeline | The ID can be retrieved from the URL parameter 'definitionId' from the pipeline page in Azure DevOps. For example: 'definitionId=31. | -| SYSTEM_PIPELINE_ID | The ID for the 'SAP system deployment (infrastructure)' pipeline | The ID can be retrieved from the URL parameter 'definitionId' from the pipeline page in Azure DevOps. For example: 'definitionId=32. | +| SDAF_GENERAL_GROUP_ID | The group ID for the SDAF-General group | The ID can be retrieved from the URL parameter `variableGroupId` when accessing the variable group by using a browser. For example: `variableGroupId=8`. | +| WORKLOADZONE_PIPELINE_ID | The ID for the `SAP workload zone deployment` pipeline | The ID can be retrieved from the URL parameter `definitionId` from the pipeline page in Azure DevOps. For example: `definitionId=31`. | +| SYSTEM_PIPELINE_ID | The ID for the `SAP system deployment (infrastructure)` pipeline | The ID can be retrieved from the URL parameter `definitionId` from the pipeline page in Azure DevOps. For example: `definitionId=32`. | Save the variables. -> [!NOTE] -> Remember to assign permissions for all pipelines using _Pipeline permissions_. -> -> When using the web app, ensure that the Build Service has at least Contribute permissions. -> -> You can use the clone functionality to create the next environment variable group. APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID, WORKLOADZONE_PIPELINE_ID and SYSTEM_PIPELINE_ID are only needed for the SDAF-MGMT group. +Remember to assign permissions for all pipelines by using **Pipeline permissions**. +When you use the web app, ensure that the Build Service has at least Contribute permissions. +You can use the clone functionality to create the next environment variable group. 
APP_REGISTRATION_APP_ID, WEB_APP_CLIENT_SECRET, SDAF_GENERAL_GROUP_ID, WORKLOADZONE_PIPELINE_ID and SYSTEM_PIPELINE_ID are only needed for the SDAF-MGMT group. ## Create a service connection -To remove the Azure resources, you need an Azure Resource Manager service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true) +To remove the Azure resources, you need an Azure Resource Manager service connection. For more information, see [Manage service connections](/azure/devops/pipelines/library/service-endpoints?view=azure-devops&preserve-view=true). -To create the service connection, go to Project settings and navigate to the Service connections setting in the Pipelines section. +To create the service connection, go to **Project Settings**. Under the **Pipelines** section, select **Service connections**. -Choose _Azure Resource Manager_ as the service connection type and _Service principal (manual)_ as the authentication method. Enter the target subscription, typically the control plane subscription, and provide the service principal details. Validate the credentials using the _Verify_ button. For more information on how to create a service principal, see [Creating a Service Principal](deploy-control-plane.md#prepare-the-deployment-credentials). +Select **Azure Resource Manager** as the service connection type and **Service principal (manual)** as the authentication method. Enter the target subscription, which is typically the control plane subscription. Enter the service principal details. Select **Verify** to validate the credentials. For more information on how to create a service principal, see [Create a service principal](deploy-control-plane.md#prepare-the-deployment-credentials). -Enter a Service connection name, for instance 'Connection to MGMT subscription' and ensure that the _Grant access permission to all pipelines_ checkbox is checked. 
Select _Verify and save_ to save the service connection. +Enter a **Service connection name**, for instance, use `Connection to MGMT subscription`. Ensure that the **Grant access permission to all pipelines** checkbox is selected. Select **Verify and save** to save the service connection. ## Permissions -> [!NOTE] -> Most of the pipelines will add files to the Azure Repos and therefore require pull permissions. Assign "Contribute" permissions to the 'Build Service' using the Security tab of the source code repository in the Repositories section in Project settings. +Most of the pipelines add files to Azure Repos and therefore require pull permissions. In **Project Settings**, under the **Repositories** section, select the **Security** tab of the source code repository and assign Contribute permissions to the `Build Service`. -## Deploy the Control Plane +## Deploy the control plane -Newly created pipelines might not be visible in the default view. Select on recent tab and go back to All tab to view the new pipelines. --Select the _Control plane deployment_ pipeline, provide the configuration names for the deployer and the SAP library and choose "Run" to deploy the control plane. Make sure to check "Deploy the configuration web application" if you would like to set up the configuration web app. +Newly created pipelines might not be visible in the default view. Select the **Recent** tab and go back to the **All** tab to view the new pipelines. +Select the **Control plane deployment** pipeline and enter the configuration names for the deployer and the SAP library. Select **Run** to deploy the control plane. Make sure to select the **Deploy the configuration web application** checkbox if you want to set up the configuration web app. ### Configure the Azure DevOps Services self-hosted agent manually -> [!NOTE] ->This is only needed if the Azure DevOps Services agent is not automatically configured. Please check that the agent pool is empty before proceeding.
-+Manual configuration is only needed if the Azure DevOps Services agent isn't automatically configured. Check that the agent pool is empty before you proceed. -Connect to the deployer by following these steps: +To connect to the deployer: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to the resource group containing the deployer virtual machine. +1. Go to the resource group that contains the deployer virtual machine. -1. Connect to the virtual machine using Azure Bastion. +1. Connect to the virtual machine by using Azure Bastion. -1. The default username is *azureadm* +1. The default username is **azureadm**. -1. Choose *SSH Private Key from Azure Key Vault* +1. Select **SSH Private Key from Azure Key Vault**. -1. Select the subscription containing the control plane. +1. Select the subscription that contains the control plane. 1. Select the deployer key vault. -1. From the list of secrets choose the secret ending with *-sshkey*. +1. From the list of secrets, select the secret that ends with **-sshkey**. 1. Connect to the virtual machine. -Run the following script to configure the deployer. +Run the following script to configure the deployer: ```bash mkdir -p ~/Azure_SAP_Automated_Deployment cd sap-automation/deploy/scripts ./configure_deployer.sh ``` -Reboot the deployer and reconnect and run the following script to set up the Azure DevOps agent. +Reboot the deployer, reconnect, and run the following script to set up the Azure DevOps agent: ```bash cd ~/Azure_SAP_Automated_Deployment/ cd ~/Azure_SAP_Automated_Deployment/ $DEPLOYMENT_REPO_PATH/deploy/scripts/setup_ado.sh ``` -Accept the license and when prompted for server URL, enter the URL you captured when you created the Azure DevOps Project. For authentication, choose PAT and enter the token value from the previous step. +Accept the license and, when you're prompted for the server URL, enter the URL you captured when you created the Azure DevOps project. 
For authentication, select **PAT** and enter the token value from the previous step. -When prompted enter the application pool name, you created in the previous step. Accept the default agent name and the default work folder name. -The agent will now be configured and started. +When prompted, enter the application pool name that you created in the previous step. Accept the default agent name and the default work folder name. The agent is now configured and starts. +## Deploy the control plane web application -## Deploy the Control Plane Web Application +Selecting the `deploy the web app infrastructure` parameter when you run the control plane deployment pipeline provisions the infrastructure necessary for hosting the web app. The **Deploy web app** pipeline publishes the application's software to that infrastructure. -Checking the "deploy the web app infrastructure" parameter when running the Control plane deployment pipeline will provision the infrastructure necessary for hosting the web app. The "Deploy web app" pipeline will publish the application's software to that infrastructure. +Wait for the deployment to finish. Select the **Extensions** tab and follow the instructions to finalize the configuration. Update the `reply-url` values for the app registration. -Wait for the deployment to finish. Once the deployment is complete, navigate to the Extensions tab and follow the instructions to finalize the configuration and update the 'reply-url' values for the app registration. --As a result of running the control plane pipeline, part of the web app URL needed will be stored in a variable named "WEBAPP_URL_BASE" in your environment-specific variable group. You can at any time update the URLs of the registered application web app using the following command. +As a result of running the control plane pipeline, part of the web app URL that's needed is stored in a variable named `WEBAPP_URL_BASE` in your environment-specific variable group. 
At any time, you can update the URLs of the registered application web app by using the following command. # [Linux](#tab/linux) As a result of running the control plane pipeline, part of the web app URL neede webapp_url_base=<WEBAPP_URL_BASE> az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback ```+ # [Windows](#tab/windows) ```powershell $webapp_url_base="<WEBAPP_URL_BASE>" az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https://${webapp_url_base}.azurewebsites.net --web-redirect-uris https://${webapp_url_base}.azurewebsites.net/ https://${webapp_url_base}.azurewebsites.net/.auth/login/aad/callback ``` -You will also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left hand side, click "Identity". In the "system assigned" tab, click on "Azure role assignments" > "Add role assignment". Select "subscription" as the scope, and "reader" as the role. Then click save. Without this step, the web app dropdown functionality won't work. +You also need to grant reader permissions to the app service system-assigned managed identity. Go to the app service resource. On the left side, select **Identity**. On the **System assigned** tab, select **Azure role assignments** > **Add role assignment**. Select **Subscription** as the scope and **Reader** as the role. Then select **Save**. Without this step, the web app dropdown functionality won't work. -You should now be able to visit the web app, and use it to deploy SAP workload zones and SAP system infrastructure. +You should now be able to visit the web app and use it to deploy SAP workload zones and SAP system infrastructure. 
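The Reader role assignment described above can also be granted from the command line instead of the portal. This is a hedged sketch: all resource names are placeholders, and the commands are echoed as a dry run. `az webapp identity show` returns the `principalId` of the system-assigned identity, which you then pass to `az role assignment create`.

```shell
#!/usr/bin/env bash
# Sketch: grant the app service's system-assigned managed identity the Reader
# role at subscription scope. All names and IDs are placeholders; commands are
# echoed as a dry run -- remove the leading 'echo' to execute them.
subscription_id="<Target subscription ID>"
resource_group="<Resource group name>"
app_service_name="<App Service name>"
scope="/subscriptions/${subscription_id}"

# Look up the principal ID of the system-assigned identity.
echo az webapp identity show --name "${app_service_name}" \
  --resource-group "${resource_group}" --query principalId --output tsv

# Assign the Reader role at subscription scope.
echo az role assignment create --assignee "<principalId from previous command>" \
  --role Reader --scope "${scope}"
```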
## Next step > [!div class="nextstepaction"]-> [DevOps hands on lab](devops-tutorial.md) +> [Azure DevOps hands-on lab](devops-tutorial.md) |
sap | Configure Extra Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-extra-disks.md | Title: Custom disk configurations -description: Provide custom disk configurations for your system in the SAP on Azure Deployment Automation Framework. Add extra disks to a new system, or an existing system. +description: Provide custom disk configurations for your system in SAP Deployment Automation Framework. Add extra disks to a new system or an existing system. -# Change the disk configuration for the SAP deployment automation +# Change the disk configuration for SAP Deployment Automation Framework -By default, the [SAP on Azure Deployment Automation Framework](deployment-framework.md) defines the disk configuration for the SAP systems. As needed, you can change the default configuration by providing a custom disk configuration json file. +By default, [SAP Deployment Automation Framework](deployment-framework.md) defines the disk configuration for SAP systems. As needed, you can change the default configuration by providing a custom disk configuration JSON file. > [!TIP] > When possible, it's a best practice to increase the disk size instead of adding more disks. - ### HANA databases -The table shows the default disk configuration for HANA systems. +The table shows the default disk configuration for HANA systems. -| Size | VM SKU | OS disk | Data disks | Log disks | Hana shared | User SAP | Backup | +| Size | VM SKU | OS disk | Data disks | Log disks | HANA shared | User SAP | Backup | |--|||||-|--|--| | Default | Standard_D8s_v3 | E6 (64 GB) | P20 (512 GB) | P20 (512 GB) | E20 (512 GB) | E6 (64 GB) | E20 (512 GB) | | S4DEMO | Standard_E32ds_v4 | P10 (128 GB) | P10x4 (128 GB) | P10x3 (128 GB) | | P20 (512 GB) | P20 (512 GB) | The table shows the default disk configuration for HANA systems. ### AnyDB databases -The table shows the default disk configuration for AnyDB systems. 
+The table shows the default disk configuration for AnyDB systems. | Size | VM SKU | OS disk | Data disks | Log disks | |||-||--| The table shows the default disk configuration for AnyDB systems. | 40 TB | Standard_M128s | P10(128 GB) | P50x10 (4096 GB) | P40x2 (2048 GB) | | 50 TB | Standard_M128s | P10(128 GB) | P50x13 (4096 GB) | P40x2 (2048 GB) | - ## Custom sizing file -The disk sizing for an SAP system can be defined using a custom sizing json file. The file is grouped in four sections: "db", "app", "scs", and "web" and each section contains a list of disk configuration names, for example for the database tier "M32ts", "M64s", etc. +You can define the disk sizing for an SAP system by using a custom sizing JSON file. The file is grouped in four sections: `db`, `app`, `scs`, and `web`. Each section contains a list of disk configuration names. For example, for the database tier, the names might be `M32ts` or `M64s`. -These sections contain the information for which is the default Virtual machine size and the list of disk to be deployed for each tier. +These sections contain the information for the default virtual machine size and the list of disks to be deployed for each tier. -Create a file using the structure shown below and save the file in the same folder as the parameter file for the system, for instance 'XO1_sizes.json'. Then, define the parameter `custom_disk_sizes_filename` in the parameter file. For example, `custom_disk_sizes_filename = "XO1_db_sizes.json"`. +Create a file by using the structure shown in the following code sample. Save the file in the same folder as the parameter file for the system. For instance, use `XO1_sizes.json`. Then define the parameter `custom_disk_sizes_filename` in the parameter file. For example, use `custom_disk_sizes_filename = "XO1_db_sizes.json"`. > [!TIP]-> The path to the disk configuration needs to be relative to the folder containing the tfvars file. 
-+> The path to the disk configuration needs to be relative to the folder that contains the `tfvars` file. -The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU) and a backup disk (LUN 13). The application tier servers (Application, Central Services amd Web Dispatchers) will be deployed with jus a single 'sap' data disk. --The three data disks will be striped using LVM. The log disk will be mounted as a single disk. The backup disk will be mounted as a single disk. +The following sample code is an example configuration file. It defines three data disks (LUNs 0, 1, and 2), a log disk (LUN 9, using the Ultra SKU), and a backup disk (LUN 13). The application tier servers (application, central services, and web dispatchers) are deployed with just a single `sap` data disk. +The three data disks are striped by using LVM. The log disk and the backup disk are each mounted as a single disk. ```json { The three data disks will be striped using LVM. The log disk will be mounted as } ``` -## Add extra disks to existing system +## Add extra disks to an existing system If you need to add disks to an already deployed system, you can add a new block to your JSON structure. Include the attribute `append` in this block, and set the value to `true`. For example, in the following sample code, the last block contains the attribute `"append" : true,`. The last block adds a new disk to the database tier, which is already configured in the first `"data"` block in the code. If you need to add disks to an already deployed system, you can add a new block } ``` -## Next steps +## Next step > [!div class="nextstepaction"]-> [Configure custom naming module](naming-module.md) -+> [Configure custom naming](naming-module.md) |
sap | Configure Sap Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-sap-parameters.md | Title: Configure SAP parameters file for Ansible -description: Define SAP parameters for Ansible + Title: Configure SAP parameters files for Ansible +description: Learn how to define SAP parameters for Ansible. -# Configure SAP Installation parameters +# Configure SAP installation parameters -The Ansible playbooks use a combination of default parameters and parameters defined by the Terraform deployment for the SAP installation. +The Ansible playbooks use a combination of default parameters and parameters defined by the Terraform deployment for the SAP installation. +## Default parameters -## Default Parameters --This table contains the default parameters defined by the framework. +The following tables contain the default parameters defined by the framework. ### User IDs This table contains the IDs for the SAP users and groups for the different platforms. > [!div class="mx-tdCol2BreakAll "]-> | Parameter | Description | Default Value | +> | Parameter | Description | Default value | > | - | -- | - | > | HANA | | |-> | `sapadm_uid` | The UID for the sapadm account. | 2100 | -> | `sidadm_uid` | The UID for the sidadm account. | 2003 | -> | `hdbadm_uid` | The UID for the hdbadm account. | 2200 | -> | `sapinst_gid` | The GID for the sapinst group. | 2001 | -> | `sapsys_gid` | The GID for the sapsys group. | 2000 | -> | `hdbshm_gid` | The GID for the hdbshm group. | 2002 | +> | `sapadm_uid` | The UID for the sapadm account | 2100 | +> | `sidadm_uid` | The UID for the sidadm account | 2003 | +> | `hdbadm_uid` | The UID for the hdbadm account | 2200 | +> | `sapinst_gid` | The GID for the sapinst group | 2001 | +> | `sapsys_gid` | The GID for the sapsys group | 2000 | +> | `hdbshm_gid` | The GID for the hdbshm group | 2002 | > | DB2 | | |-> | `db2sidadm_uid` | The UID for the db2sidadm account. 
| 3004 | -> | `db2sapsid_uid` | The UID for the db2sapsid account. | 3005 | -> | `db2sysadm_gid` | The UID for the db2sysadm group. | 3000 | -> | `db2sysctrl_gid` | The UID for the db2sysctrl group. | 3001 | -> | `db2sysmaint_gid` | The UID for the db2sysmaint group. | 3002 | -> | `db2sysmon_gid` | The UID for the db2sysmon group. | 2003 | +> | `db2sidadm_uid` | The UID for the db2sidadm account | 3004 | +> | `db2sapsid_uid` | The UID for the db2sapsid account | 3005 | +> | `db2sysadm_gid` | The UID for the db2sysadm group | 3000 | +> | `db2sysctrl_gid` | The UID for the db2sysctrl group | 3001 | +> | `db2sysmaint_gid` | The UID for the db2sysmaint group | 3002 | +> | `db2sysmon_gid` | The UID for the db2sysmon group | 2003 | > | ORACLE | | |-> | `orasid_uid` | The UID for the orasid account. | 3100 | -> | `oracle_uid` | The UID for the oracle account. | 3101 | -> | `observer_uid` | The UID for the observer account. | 4000 | -> | `dba_gid` | The GID for the dba group. | 3100 | -> | `oper_gid` | The GID for the oper group. | 3101 | -> | `asmoper_gid` | The GID for the asmoper group. | 3102 | -> | `asmadmin_gid` | The GID for the asmadmin group. | 3103 | -> | `asmdba_gid` | The GID for the asmdba group. | 3104 | -> | `oinstall_gid` | The GID for the oinstall group. | 3105 | -> | `backupdba_gid` | The GID for the backupdba group. | 3106 | -> | `dgdba_gid` | The GID for the dgdba group. | 3107 | -> | `kmdba_gid` | The GID for the kmdba group. | 3108 | -> | `racdba_gid` | The GID for the racdba group. 
| 3108 | -+> | `orasid_uid` | The UID for the orasid account | 3100 | +> | `oracle_uid` | The UID for the oracle account | 3101 | +> | `observer_uid` | The UID for the observer account | 4000 | +> | `dba_gid` | The GID for the dba group | 3100 | +> | `oper_gid` | The GID for the oper group | 3101 | +> | `asmoper_gid` | The GID for the asmoper group | 3102 | +> | `asmadmin_gid` | The GID for the asmadmin group | 3103 | +> | `asmdba_gid` | The GID for the asmdba group | 3104 | +> | `oinstall_gid` | The GID for the oinstall group | 3105 | +> | `backupdba_gid` | The GID for the backupdba group | 3106 | +> | `dgdba_gid` | The GID for the dgdba group | 3107 | +> | `kmdba_gid` | The GID for the kmdba group | 3108 | +> | `racdba_gid` | The GID for the racdba group | 3108 | ### Windows parameters This table contains the information pertinent to Windows deployments. > [!div class="mx-tdCol2BreakAll "]-> | Parameter | Description | Default Value | +> | Parameter | Description | Default value | > | - | -- | - | > | `mssserver_version` | SQL Server version | `mssserver2019` | - ## Parameters -This table contains the parameters stored in the sap-parameters.yaml file, most of the values are prepopulated via the Terraform deployment. +The following tables contain the parameters stored in the *sap-parameters.yaml* file. Most of the values are prepopulated via the Terraform deployment. 
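To make the parameter tables concrete, here's a minimal illustrative fragment of a *sap-parameters.yaml* file. The parameter names are taken from the tables that follow; every value is a placeholder chosen for the example, and in a real deployment most of them are prepopulated by Terraform.

```yaml
# Illustrative fragment only -- all values are placeholders, not defaults.
sap_fqdn:               sap.contoso.net
sap_sid:                X00
scs_high_availability:  false
scs_instance_number:    "00"
db_sid:                 XDB
db_instance_number:     "00"
db_high_availability:   false
platform:               HANA
NFS_provider:           AFS
kv_name:                "<key vault name>"
```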
### Infrastructure This table contains the parameters stored in the sap-parameters.yaml file, most > | - | - | - | > | `sap_fqdn` | The FQDN suffix for the virtual machines to be added to the local hosts file | Required | -### Application Tier +### Application tier > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | > | `bom_base_name` | The name of the SAP Application Bill of Materials file | Required | > | `sap_sid` | The SID of the SAP application | Required |-> | `scs_high_availability` | Defines if the Central Services is deployed highly available | Required | +> | `scs_high_availability` | Defines if the central services is deployed highly available | Required | > | `scs_instance_number` | Defines the instance number for ASCS | Optional | > | `scs_lb_ip` | IP address of ASCS instance | Optional | > | `scs_virtual_hostname` | The host name of the ASCS instance | Optional | This table contains the parameters stored in the sap-parameters.yaml file, most > | `ers_lb_ip` | IP address of ERS instance | Optional | > | `ers_virtual_hostname` | The host name of the ERS instance | Optional | > | `pas_instance_number` | Defines the instance number for PAS | Optional |-> | `web_sid` | The SID for the Web Dispatcher | Required if web dispatchers are deployed | -> | `scs_clst_lb_ip` | IP address of Windows Cluster service | Optional | +> | `web_sid` | The SID for the web dispatcher | Required if web dispatchers are deployed | +> | `scs_clst_lb_ip` | IP address of Windows cluster service | Optional | -### Database Tier +### Database tier > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | -> | `db_sid` | The SID of the SAP database | Required | -> | `db_instance_number` | Defines the instance number for the database | Required | -> | `db_high_availability` | Defines if the database is deployed highly available | Required | -> | `db_lb_ip` | IP address of the database load balancer | Optional | -> | 
`platform` | The database platform. Valid values are: ASE, DB2, HANA, ORACLE, SQLSERVER | Required | -> | `db_clst_lb_ip` | IP address of database cluster for Windows | Optional | +> | `db_sid` | The SID of the SAP database. | Required | +> | `db_instance_number` | Defines the instance number for the database. | Required | +> | `db_high_availability` | Defines if the database is deployed highly available. | Required | +> | `db_lb_ip` | IP address of the database load balancer. | Optional | +> | `platform` | The database platform. Valid values are ASE, DB2, HANA, ORACLE, and SQLSERVER. | Required | +> | `db_clst_lb_ip` | IP address of database cluster for Windows. | Optional | ### NFS > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | -> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files, 'NONE' for NFS from the SCS server or 'NFS' for an external NFS solution. | Optional | -> | `sap_mnt` | The NFS path for sap_mnt | Required | -> | `sap_trans` | The NFS path for sap_trans | Required | -> | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install | Required | +> | `NFS_provider` | Defines what NFS back end to use. The options are `AFS` for Azure Files NFS or `ANF` for Azure NetApp Files, `NONE` for NFS from the SCS server, or `NFS` for an external NFS solution. | Optional | +> | `sap_mnt` | The NFS path for sap_mnt. | Required | +> | `sap_trans` | The NFS path for sap_trans. | Required | +> | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install. | Required | ### Azure NetApp Files > [!div class="mx-tdCol2BreakAll "] This table contains the parameters stored in the sap-parameters.yaml file, most > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | -> | `domain_name` | Defines the Windows domain name, for example sap.contoso.net. 
| Required | -> | `domain` | Defines the Windows domain Netbios name, for example sap. | Optional | +> | `domain_name` | Defines the Windows domain name, for example, sap.contoso.net | Required | +> | `domain` | Defines the Windows domain NetBIOS name, for example, sap | Optional | > | SQL | | |-> | `use_sql_for_SAP` | Uses the SAP defined SQL Server media, defaults to 'true' | Optional | +> | `use_sql_for_SAP` | Uses the SAP-defined SQL Server media, defaults to `true` | Optional | > | `win_cluster_share_type` | Defines the cluster type (CSD/FS), defaults to CSD | Optional | ### Miscellaneous This table contains the parameters stored in the sap-parameters.yaml file, most > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | -> | `kv_name` | The name of the Azure key vault containing the system credentials | Required | -> | `secret_prefix` | The prefix for the name of the secrets for the SID stored in key vault | Required | -> | `upgrade_packages` | Update all installed packages on the virtual machines | Required | -> | `use_msi_for_clusters` | Use managed identities for fencing | Required | +> | `kv_name` | The name of the Azure key vault that contains the system credentials | Required | +> | `secret_prefix` | The prefix for the name of the secrets for the SID stored in the key vault | Required | +> | `upgrade_packages` | Updates all installed packages on the virtual machines | Required | +> | `use_msi_for_clusters` | Uses managed identities for fencing | Required | ### Disks -Disks define a dictionary with information about the disks of all the virtual machines in the SAP Application virtual machines. +Disks define a dictionary with information about the disks of all the SAP application virtual machines.
> [!div class="mx-tdCol2BreakAll "]-> | attribute | Description | Type | +> | Attribute | Description | Type | > | - | - | - | -> | `host` | The computer name of the virtual machine | Required | -> | `LUN` | Defines the LUN number that the disk is attached to | Required | -> | `type` | This attribute is used to group the disks, each disk of the same type will be added to the LVM on the virtual machine | Required | -+> | `host` | The computer name of the virtual machine. | Required | +> | `LUN` | Defines the LUN number that the disk is attached to. | Required | +> | `type` | This attribute is used to group the disks. Each disk of the same type is added to the LVM on the virtual machine. | Required | Example of the disks dictionary:+ ```yaml disks: disks: ### Oracle support -From the v3.4 release, it's possible to deploy SAP on Azure systems in a Shared Home configuration using an Oracle database backend. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](../workloads/dbms-guide-oracle.md). +From the v3.4 release, it's possible to deploy SAP on Azure systems in a shared home configuration by using an Oracle database back end. For more information on running SAP on Oracle in Azure, see [Azure Virtual Machines Oracle DBMS deployment for SAP workload](../workloads/dbms-guide-oracle.md). 
-In order to install the Oracle backend using the SAP on Azure Deployment Automation Framework, you need to provide the following parameters +To install the Oracle back end by using SAP Deployment Automation Framework, you need to provide the following parameters: > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | -> | `platform` | The database backend, 'ORACLE' | Required | -> | `ora_release` | The Oracle release version, for example 19 | Required | -> | `ora_release` | The Oracle release version, for example 19.0.0 | Required | > | `oracle_sbp_patch` | The Oracle SBP patch file name | Required | +> | `platform` | The database back end, `ORACLE` | Required | +> | `ora_release` | The Oracle release version, for example, 19 | Required | +> | `ora_version` | The Oracle version, for example, 19.0.0 | Required | -#### Shared Home support +#### Shared home support -To configure shared home support for Oracle, you need to add a dictionary defining the SIDs to be deployed. You can do that by adding the parameter 'MULTI_SIDS' that contains a list of the SIDs and the SID details. +To configure shared home support for Oracle, you need to add a dictionary that defines the SIDs to be deployed. You can do that by adding the parameter `MULTI_SIDS` that contains a list of the SIDs and the SID details. ```yaml MULTI_SIDS: MULTI_SIDS: - {sid: 'QE1', dbsid_uid: '3006', sidadm_uid: '2002', ascs_inst_no: '01', pas_inst_no: '01', app_inst_no: '01'} ``` -Each row must specify the following parameters. - +Each row must specify the following parameters: + > [!div class="mx-tdCol2BreakAll "] > | Parameter | Description | Type | > | - | - | - | Each row must specify the following parameters.
> | `pas_inst_no` | The PAS instance number for the instance | Required | > | `app_inst_no` | The APP instance number for the instance | Required | +## Override the default parameters -## Overriding the default parameters --You can override the default parameters by either specifying them in the sap-parameters.yaml file or by passing them as command line parameters to the Ansible playbooks. +You can override the default parameters by either specifying them in the *sap-parameters.yaml* file or by passing them as command-line parameters to the Ansible playbooks. -For example if you want to override the default value of the group ID for the sapinst group (`sapinst_gid`) parameter, you can do it by adding the following line to the sap-parameters.yaml file: +For example, if you want to override the default value of the group ID for the `sapinst` group (`sapinst_gid`) parameter, add the following line to the *sap-parameters.yaml* file: ```yaml sapinst_gid: 1000 ``` -If you want to provide them as parameters for the Ansible playbooks, you can do it by adding the following parameter to the command line: +If you want to provide them as parameters for the Ansible playbooks, add the following parameter to the command line: ```bash ansible-playbook -i hosts SID_hosts.yaml --extra-vars "sapinst_gid=1000" ..... ``` -You can also override the default parameters by specifying them in the `configuration_settings' variable in your tfvars file. For example, if you want to override 'sapinst_gid' your tfvars file should contain the following line: +You can also override the default parameters by specifying them in the `configuration_settings` variable in your `tfvars` file. 
For example, if you want to override `sapinst_gid`, your `tfvars` file should contain the following lines: ```terraform
configuration_settings = {
  sapinst_gid = "1000"
}
``` ---## Next steps > [!div class="nextstepaction"]-> [Deploy SAP System](deploy-system.md) +> [Deploy the SAP system](deploy-system.md) |
sap | Configure System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md | Title: Configure SAP system parameters for automation -description: Define the SAP system properties for the SAP on Azure Deployment Automation Framework using a parameters file. +description: Define the SAP system properties for SAP Deployment Automation Framework by using a parameters file. -Configuration for the [SAP on Azure Deployment Automation Framework](deployment-framework.md)] happens through parameters files. You provide information about your SAP system infrastructure in a tfvars file, which the automation framework uses for deployment. You can find examples of the variable file in the 'samples' repository. +Configuration for [SAP Deployment Automation Framework](deployment-framework.md) happens through parameters files. You provide information about your SAP system infrastructure in a `tfvars` file, which the automation framework uses for deployment. You can find examples of the variable file in the `samples` repository. -The automation supports both creating resources (green field deployment) or using existing resources (brownfield deployment). --For the green field scenario, the automation defines default names for resources, however some resource names may be defined in the tfvars file. -For the brownfield scenario, the Azure resource identifiers for the resources must be specified. +The automation supports creating resources (green-field deployment) or using existing resources (brown-field deployment): +- **Green-field scenario**: The automation defines default names for resources, but some resource names might be defined in the `tfvars` file. +- **Brown-field scenario**: The Azure resource identifiers for the resources must be specified. 
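As a minimal, hypothetical sketch, a system `tfvars` parameter file combines a few of the parameters described in this article (the names come from the tables that follow; the values are placeholders, not recommendations):

```terraform
# Illustrative system configuration (values are placeholders)
environment       = "DEV"          # workload zone identifier
location          = "westeurope"   # Azure region to deploy into
sid               = "X00"          # SAP application SID
database_platform = "HANA"         # database back end
```

A green-field deployment can rely on the default resource names; a brown-field deployment would additionally supply the Azure resource identifiers for the existing network and subnets.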
## Deployment topologies -The automation framework can be used to deploy the following SAP architectures: +You can use the automation framework to deploy the following SAP architectures: - Standalone - Distributed-- Distributed (Highly Available)+- Distributed (highly available) ### Standalone -In the Standalone architecture, all the SAP roles are installed on a single server. -+In the standalone architecture, all the SAP roles are installed on a single server. To configure this topology, define the database tier values and set `enable_app_tier_deployment` to false. ### Distributed -The distributed architecture has a separate database server and application tier. The application tier can further be separated by having SAP Central Services on a virtual machine and one or more application servers. -To configure this topology, define the database tier values and define `scs_server_count` = 1, `application_server_count` >= 1 +The distributed architecture has a separate database server and application tier. The application tier can further be separated by having SAP central services on a virtual machine and one or more application servers. -### High Availability +To configure this topology, define the database tier values and define `scs_server_count` = 1, `application_server_count` >= 1. -The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can both be configured using a highly available configuration using two virtual machines each with Pacemaker clusters or in case of Windows with Windows Failover clustering. +### High availability -To configure this topology, define the database tier values and set `database_high_availability` to true. Set `scs_server_count = 1` and `scs_high_availability` = true and -`application_server_count` >= 1 +The distributed (highly available) deployment is similar to the distributed architecture. 
In this deployment, the database and/or SAP central services can both be configured by using a highly available configuration that uses two virtual machines, each with Pacemaker clusters or Windows failover clustering. -## Environment parameters +To configure this topology, define the database tier values and set `database_high_availability` to true. Set `scs_server_count` = 1 and `scs_high_availability` = true and `application_server_count` >= 1. -The table below contains the parameters that define the environment settings. +## Environment parameters +This section contains the parameters that define the environment settings. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | -- | - | - |-> | `environment` | Identifier for the workload zone (max 5 chars) | Mandatory | For example, `PROD` for a production environment and `NP` for a non-production environment. | -> | `location` | The Azure region in which to deploy. | Required | | +> | `environment` | Identifier for the workload zone (maximum five characters) | Mandatory | For example, `PROD` for a production environment and `NP` for a nonproduction environment. | +> | `location` | The Azure region in which to deploy | Required | | > | `custom_prefix` | Specifies the custom prefix used in the resource naming | Optional | | > | `use_prefix` | Controls if the resource naming includes the prefix | Optional | DEV-WEEU-SAP01-X00_xxxx |-> | 'name_override_file' | Name override file | Optional | see [Custom naming](naming-module.md) | -> | 'save_naming_information | Create a sample naming json file | Optional | see [Custom naming](naming-module.md) | +> | `name_override_file` | Name override file | Optional | See [Custom naming](naming-module.md). | +> | `save_naming_information` | Creates a sample naming JSON file | Optional | See [Custom naming](naming-module.md). | ## Resource group parameters -The table below contains the parameters that define the resource group. 
+This section contains the parameters that define the resource group. > [!div class="mx-tdCol2BreakAll "] The table below contains the parameters that define the resource group. > | `resourcegroup_tags` | Tags to be associated to the resource group | Optional | -## SAP Virtual Hostname parameters +## SAP virtual hostname parameters -In the SAP on Azure Deployment Automation Framework, the SAP virtual hostname is defined by specifying the `use_secondary_ips` parameter. +In SAP Deployment Automation Framework, the SAP virtual hostname is defined by specifying the `use_secondary_ips` parameter. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | - |-> | `use_secondary_ips` | Boolean flag indicating if SAP should be installed using Virtual hostnames | Optional | +> | `use_secondary_ips` | Boolean flag that indicates if SAP should be installed by using virtual hostnames | Optional | ### Database tier parameters -The database tier defines the infrastructure for the database tier, supported database backends are: +The database tier defines the infrastructure for the database tier. Supported database back ends are: - `HANA` - `DB2` The database tier defines the infrastructure for the database tier, supported da - `ORACLE-ASM` - `ASE` - `SQLSERVER`-- `NONE` (in this case no database tier is deployed)-+- `NONE` (in this case, no database tier is deployed) > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | - | -- | -- | |-> | `database_sid` | Defines the database SID. | Required | | -> | `database_platform` | Defines the database backend. | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, `NONE` | -> | `database_high_availability` | Defines if the database tier is deployed highly available. | Optional | See [High availability configuration](configure-system.md#high-availability-configuration) | -> | `database_server_count` | Defines the number of database servers. 
| Optional | Default value is 1 | -> | `database_vm_zones` | Defines the Availability Zones for the database servers. | Optional | | -> | `db_sizing_dictionary_key` | Defines the database sizing information. | Required | See [Custom Sizing](configure-extra-disks.md) | -> | `db_disk_sizes_filename` | Defines the custom database sizing file name. | Optional | See [Custom Sizing](configure-extra-disks.md) | -> | `database_vm_use_DHCP` | Controls if Azure subnet provided IP addresses should be used. | Optional | | -> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet). | Optional | | -> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet). | Optional | | -> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet). | Optional | | -> | `database_vm_image` | Defines the Virtual machine image to use, see below. | Optional | | -> | `database_vm_authentication_type` | Defines the authentication type (key/password). | Optional | | -> | `database_use_avset` | Controls if the database servers are placed in availability sets. | Optional | default is false | -> | `database_use_ppg` | Controls if the database servers will be placed in proximity placement groups. | Optional | default is true | -> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs. | Optional | Primarily used with ANF pinning | -> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces. | Optional | default is true | --The Virtual Machine and the operating system image is defined using the following structure: +> | `database_sid` | Defines the database SID | Required | | +> | `database_platform` | Defines the database back end | Supported values are `HANA`, `DB2`, `ORACLE`, `ASE`, `SQLSERVER`, and `NONE`. 
| +> | `database_high_availability` | Defines if the database tier is deployed highly available | Optional | See [High-availability configuration](configure-system.md#high-availability-configuration). | +> | `database_server_count` | Defines the number of database servers | Optional | Default value is 1. | +> | `database_vm_zones` | Defines the availability zones for the database servers | Optional | | +> | `db_sizing_dictionary_key` | Defines the database sizing information | Required | See [Custom sizing](configure-extra-disks.md). | +> | `db_disk_sizes_filename` | Defines the custom database sizing file name | Optional | See [Custom sizing](configure-extra-disks.md). | +> | `database_vm_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used | Optional | | +> | `database_vm_db_nic_ips` | Defines the IP addresses for the database servers (database subnet) | Optional | | +> | `database_vm_db_nic_secondary_ips` | Defines the secondary IP addresses for the database servers (database subnet) | Optional | | +> | `database_vm_admin_nic_ips` | Defines the IP addresses for the database servers (admin subnet) | Optional | | +> | `database_vm_image` | Defines the virtual machine image to use | Optional | | +> | `database_vm_authentication_type` | Defines the authentication type (key/password) | Optional | | +> | `database_use_avset` | Controls if the database servers are placed in availability sets | Optional | Default is false. | +> | `database_use_ppg` | Controls if the database servers are placed in proximity placement groups | Optional | Default is true. | +> | `database_vm_avset_arm_ids` | Defines the existing availability sets Azure resource IDs | Optional | Primarily used with ANF pinning. | +> | `hana_dual_nics` | Controls if the HANA database servers will have dual network interfaces | Optional | Default is true. 
| ++The virtual machine and the operating system image are defined by using the following structure: ```python { The Virtual Machine and the operating system image is defined using the followin ### Common application tier parameters -The application tier defines the infrastructure for the application tier, which can consist of application servers, central services servers and web dispatch servers -+The application tier defines the infrastructure for the application tier, which can consist of application servers, central services servers, and web dispatch servers. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | - | | --| | > | `enable_app_tier_deployment` | Defines if the application tier is deployed | Optional | | > | `sid` | Defines the SAP application SID | Required | |-> | `app_tier_sizing_dictionary_key` | Lookup value defining the VM SKU and the disk layout for tha application tier servers | Optional | -> | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom Sizing](configure-extra-disks.md) | -> | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machine(s) | Optional | | -> | `app_tier_use_DHCP` | Controls if Azure subnet provided IP addresses should be used (dynamic) | Optional | | +> | `app_tier_sizing_dictionary_key` | Lookup value that defines the VM SKU and the disk layout for the application tier servers | Optional | +> | `app_disk_sizes_filename` | Defines the custom disk size file for the application tier servers | Optional | See [Custom sizing](configure-extra-disks.md). 
| +> | `app_tier_authentication_type` | Defines the authentication type for the application tier virtual machines | Optional | | +> | `app_tier_use_DHCP` | Controls if Azure subnet-provided IP addresses should be used (dynamic) | Optional | | > | `app_tier_dual_nics` | Defines if the application tier server will have two network interfaces | Optional | | -### SAP Central services parameters -+### SAP central services parameters > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | | -| |-> | `scs_server_count` | Defines the number of SCS servers. | Required | | -> | `scs_high_availability` | Defines if the Central Services is highly available. | Optional | See [High availability configuration](configure-system.md#high-availability-configuration) | -> | `scs_instance_number` | The instance number of SCS. | Optional | | -> | `ers_instance_number` | The instance number of ERS. | Optional | | -> | `scs_server_sku` | Defines the Virtual machine SKU to use. | Optional | | -> | `scs_server_image` | Defines the Virtual machine image to use. | Required | | -> | `scs_server_zones` | Defines the availability zones of the SCS servers. | Optional | | -> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet). | Optional | | -> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet). | Optional | | -> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet). | Optional | | -> | `scs_server_loadbalancer_ips` | List of IP addresses for the scs load balancer (app subnet). | Optional | | -> | `scs_server_use_ppg` | Controls if the SCS servers are placed in availability sets. | Optional | | -> | `scs_server_use_avset` | Controls if the SCS servers will be placed in proximity placement groups.| Optional | | -> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers. 
| Optional | | +> | `scs_server_count` | Defines the number of SCS servers | Required | | +> | `scs_high_availability` | Defines if the central services is highly available | Optional | See [High-availability configuration](configure-system.md#high-availability-configuration). | +> | `scs_instance_number` | The instance number of SCS | Optional | | +> | `ers_instance_number` | The instance number of ERS | Optional | | +> | `scs_server_sku` | Defines the virtual machine SKU to use | Optional | | +> | `scs_server_image` | Defines the virtual machine image to use | Required | | +> | `scs_server_zones` | Defines the availability zones of the SCS servers | Optional | | +> | `scs_server_app_nic_ips` | List of IP addresses for the SCS servers (app subnet) | Optional | | +> | `scs_server_app_nic_secondary_ips[]` | List of secondary IP addresses for the SCS servers (app subnet) | Optional | | +> | `scs_server_app_admin_nic_ips` | List of IP addresses for the SCS servers (admin subnet) | Optional | | +> | `scs_server_loadbalancer_ips` | List of IP addresses for the SCS load balancer (app subnet) | Optional | | +> | `scs_server_use_ppg` | Controls if the SCS servers are placed in proximity placement groups | Optional | | +> | `scs_server_use_avset` | Controls if the SCS servers are placed in availability sets | Optional | | +> | `scs_server_tags` | Defines a list of tags to be applied to the SCS servers | Optional | | ### Application server parameters - > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | - | --| |-> | `application_server_count` | Defines the number of application servers. | Required | | -> | `application_server_sku` | Defines the Virtual machine SKU to use. | Optional | | -> | `application_server_image` | Defines the Virtual machine image to use. 
| Required | | -> | `application_server_zones` | Defines the availability zones to which the application servers are deployed.| Optional | | -> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet). | Optional | | -> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet). | Optional | | -> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet). | Optional | | -> | `application_server_use_ppg` | Controls if application servers are placed in availability sets. | Optional | | -> | `application_server_use_avset` | Controls if application servers will be placed in proximity placement | Optional | | -> | `application_server_tags` | Defines a list of tags to be applied to the application servers. | Optional | | +> | `application_server_count` | Defines the number of application servers | Required | | +> | `application_server_sku` | Defines the virtual machine SKU to use | Optional | | +> | `application_server_image` | Defines the virtual machine image to use | Required | | +> | `application_server_zones` | Defines the availability zones to which the application servers are deployed | Optional | | +> | `application_server_app_nic_ips[]` | List of IP addresses for the application servers (app subnet) | Optional | | +> | `application_server_nic_secondary_ips[]` | List of secondary IP addresses for the application servers (app subnet) | Optional | | +> | `application_server_app_admin_nic_ips` | List of IP addresses for the application server (admin subnet) | Optional | | +> | `application_server_use_ppg` | Controls if application servers are placed in proximity placement groups | Optional | | +> | `application_server_use_avset` | Controls if application servers are placed in availability sets | Optional | | +> | `application_server_tags` | Defines a list of tags to be applied to the application servers | Optional | | 
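As a hedged sketch, the application server parameters above might be set in a `tfvars` file like this. The count, SKU, zone, and image values are illustrative placeholders; the image field names beyond `os_type` are assumed from standard Azure Marketplace image references (publisher, offer, SKU, version), not taken from this article:

```terraform
application_server_count = 2
application_server_sku   = "Standard_D4s_v3"   # placeholder SKU
application_server_zones = ["1", "2"]          # placeholder zones
application_server_image = {
  os_type   = "linux"
  publisher = "SUSE"          # assumed Marketplace image reference
  offer     = "sles-sap-15"   # assumed offer name
  sku       = "gen2"
  version   = "latest"
}
```
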
### Web dispatcher parameters - > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | | | | |-> | `webdispatcher_server_count` | Defines the number of web dispatcher servers. | Required | | -> | `webdispatcher_server_sku` | Defines the Virtual machine SKU to use. | Optional | | -> | `webdispatcher_server_image` | Defines the Virtual machine image to use. | Optional | | -> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed. | Optional | | -> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app/web subnet). | Optional | | -> | `webdispatcher_server_nic_secondary_ips[]` | List of secondary IP addresses for the web dispatcher server (app/web subnet). | Optional | | -> | `webdispatcher_server_app_admin_nic_ips` | List of IP addresses for the web dispatcher server (admin subnet). | Optional | | -> | `webdispatcher_server_use_ppg` | Controls if web dispatchers are placed in availability sets. | Optional | | -> | `webdispatcher_server_use_avset` | Controls if web dispatchers will be placed in proximity placement | Optional | | -> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers. 
| Optional | | +> | `webdispatcher_server_count` | Defines the number of web dispatcher servers | Required | | +> | `webdispatcher_server_sku` | Defines the virtual machine SKU to use | Optional | | +> | `webdispatcher_server_image` | Defines the virtual machine image to use | Optional | | +> | `webdispatcher_server_zones` | Defines the availability zones to which the web dispatchers are deployed | Optional | | +> | `webdispatcher_server_app_nic_ips[]` | List of IP addresses for the web dispatcher server (app/web subnet) | Optional | | +> | `webdispatcher_server_nic_secondary_ips[]` | List of secondary IP addresses for the web dispatcher server (app/web subnet) | Optional | | +> | `webdispatcher_server_app_admin_nic_ips` | List of IP addresses for the web dispatcher server (admin subnet) | Optional | | +> | `webdispatcher_server_use_ppg` | Controls if web dispatchers are placed in proximity placement groups | Optional | | +> | `webdispatcher_server_use_avset` | Controls if web dispatchers are placed in availability sets | Optional | | +> | `webdispatcher_server_tags` | Defines a list of tags to be applied to the web dispatcher servers | Optional | | ## Network parameters If the subnets aren't deployed using the workload zone deployment, they can be added in the system's tfvars file. -The automation framework can either deploy the virtual network and the subnets (green field deployment) or using an existing virtual network and existing subnets (brown field deployments). +The automation framework can either deploy the virtual network and the subnets (green-field deployment) or use an existing virtual network and existing subnets (brown-field deployments): -Ensure that the virtual network address space is large enough to host all the resources. + - **Green-field scenario**: The virtual network address space and the subnet address prefixes must be specified. 
+ - **Brown-field scenario**: The Azure resource identifier for the virtual network and the subnets must be specified. -The table below contains the networking parameters. +Ensure that the virtual network address space is large enough to host all the resources. +This section contains the networking parameters. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | -- | | - |-> | `network_logical_name` | The logical name of the network. | Required | | +> | `network_logical_name` | The logical name of the network | Required | | > | | | | |-> | `admin_subnet_name` | The name of the 'admin' subnet. | Optional | | -> | `admin_subnet_address_prefix` | The address range for the 'admin' subnet. | Mandatory | For green field deployments. | -> | `admin_subnet_arm_id` * | The Azure resource identifier for the 'admin' subnet. | Mandatory | For brown field deployments. | -> | `admin_subnet_nsg_name` | The name of the 'admin' Network Security Group name. | Optional | | -> | `admin_subnet_nsg_arm_id` * | The Azure resource identifier for the 'admin' Network Security Group | Mandatory | For brown field deployments. | +> | `admin_subnet_name` | The name of the `admin` subnet | Optional | | +> | `admin_subnet_address_prefix` | The address range for the `admin` subnet | Mandatory | For green-field deployments | +> | `admin_subnet_arm_id` * | The Azure resource identifier for the `admin` subnet | Mandatory | For brown-field deployments | +> | `admin_subnet_nsg_name` | The name of the `admin` network security group | Optional | | +> | `admin_subnet_nsg_arm_id` * | The Azure resource identifier for the `admin` network security group | Mandatory | For brown-field deployments | > | | | Optional | |-> | `db_subnet_name` | The name of the 'db' subnet. | Optional | | -> | `db_subnet_address_prefix` | The address range for the 'db' subnet. | Mandatory | For green field deployments. 
| -> | `db_subnet_arm_id` * | The Azure resource identifier for the 'db' subnet. | Mandatory | For brown field deployments. | -> | `db_subnet_nsg_name` | The name of the 'db' Network Security Group name. | Optional | | -> | `db_subnet_nsg_arm_id` * | The Azure resource identifier for the 'db' Network Security Group. | Mandatory | For brown field deployments. | +> | `db_subnet_name` | The name of the `db` subnet | Optional | | +> | `db_subnet_address_prefix` | The address range for the `db` subnet | Mandatory | For green-field deployments | +> | `db_subnet_arm_id` * | The Azure resource identifier for the `db` subnet | Mandatory | For brown-field deployments | +> | `db_subnet_nsg_name` | The name of the `db` network security group | Optional | | +> | `db_subnet_nsg_arm_id` * | The Azure resource identifier for the `db` network security group | Mandatory | For brown-field deployments | > | | | Optional | |-> | `app_subnet_name` | The name of the 'app' subnet. | Optional | | -> | `app_subnet_address_prefix` | The address range for the 'app' subnet. | Mandatory | For green field deployments. | -> | `app_subnet_arm_id` * | The Azure resource identifier for the 'app' subnet. | Mandatory | For brown field deployments. | -> | `app_subnet_nsg_name` | The name of the 'app' Network Security Group name. | Optional | | -> | `app_subnet_nsg_arm_id` * | The Azure resource identifier for the 'app' Network Security Group. | Mandatory | For brown field deployments. 
| +> | `app_subnet_name` | The name of the `app` subnet | Optional | | +> | `app_subnet_address_prefix` | The address range for the `app` subnet | Mandatory | For green-field deployments | +> | `app_subnet_arm_id` * | The Azure resource identifier for the `app` subnet | Mandatory | For brown-field deployments | +> | `app_subnet_nsg_name` | The name of the `app` network security group | Optional | | +> | `app_subnet_nsg_arm_id` * | The Azure resource identifier for the `app` network security group | Mandatory | For brown-field deployments | > | | | Optional | |-> | `web_subnet_name` | The name of the 'web' subnet. | Optional | | -> | `web_subnet_address_prefix` | The address range for the 'web' subnet. | Mandatory | For green field deployments. | -> | `web_subnet_arm_id` * | The Azure resource identifier for the 'web' subnet. | Mandatory | For brown field deployments. | -> | `web_subnet_nsg_name` | The name of the 'web' Network Security Group name. | Optional | | -> | `web_subnet_nsg_arm_id` * | The Azure resource identifier for the 'web' Network Security Group. | Mandatory | For brown field deployments. | +> | `web_subnet_name` | The name of the `web` subnet | Optional | | +> | `web_subnet_address_prefix` | The address range for the `web` subnet | Mandatory | For green-field deployments | +> | `web_subnet_arm_id` * | The Azure resource identifier for the `web` subnet | Mandatory | For brown-field deployments | +> | `web_subnet_nsg_name` | The name of the `web` network security group | Optional | | +> | `web_subnet_nsg_arm_id` * | The Azure resource identifier for the `web` network security group | Mandatory | For brown-field deployments | -\* = Required For brown field deployments. +\* = Required for brown-field deployments -## Key Vault Parameters +## Key vault parameters -If you don't want to use the workload zone key vault but another one, this can be added in the system's tfvars file. 
+If you don't want to use the workload zone key vault but another one, you can define the key vault's Azure resource identifier in the system's `tfvars` file. -The table below defines the parameters used for defining the Key Vault information. +This section defines the parameters used for defining the key vault information. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | | | -- | > | `user_keyvault_id` | Azure resource identifier for existing system credentials key vault | Optional | | > | `spn_keyvault_id` | Azure resource identifier for existing deployment credentials (SPNs) key vault | Optional | | -> | `enable_purge_control_for_keyvaults | Disables the purge protection for Azure key vaults. | Optional | Only use this for test environments | -+> | `enable_purge_control_for_keyvaults` | Disables the purge protection for Azure key vaults | Optional | Only use for test environments. | ### Anchor virtual machine parameters -The SAP on Azure Deployment Automation Framework supports having an Anchor virtual machine. The anchor virtual machine will be the first virtual machine to be deployed and is used to anchor the proximity placement group. --The table below contains the parameters related to the anchor virtual machine. +SAP Deployment Automation Framework supports having an anchor virtual machine. The anchor virtual machine is the first virtual machine to be deployed. It's used to anchor the proximity placement group. +This section contains the parameters related to the anchor virtual machine. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | | -- |-> | `deploy_anchor_vm` | Defines if the anchor Virtual Machine is used | Optional | -> | `anchor_vm_sku` | Defines the VM SKU to use. For example, Standard_D4s_v3. | Optional | -> | `anchor_vm_image` | Defines the VM image to use. See the following code sample. 
| Optional | -> | `anchor_vm_use_DHCP` | Controls whether to use dynamic IP addresses provided by Azure subnet. | Optional | -> | `anchor_vm_accelerated_networking` | Defines if the Anchor VM is configured to use accelerated networking | Optional | +> | `deploy_anchor_vm` | Defines if the anchor virtual machine is used | Optional | +> | `anchor_vm_sku` | Defines the VM SKU to use, for example, Standard_D4s_v3 | Optional | +> | `anchor_vm_image` | Defines the VM image to use (as shown in the following code sample) | Optional | +> | `anchor_vm_use_DHCP` | Controls whether to use dynamic IP addresses provided by Azure subnet | Optional | +> | `anchor_vm_accelerated_networking` | Defines if the anchor VM is configured to use accelerated networking | Optional | > | `anchor_vm_authentication_type` | Defines the authentication type for the anchor VM key and password | Optional | -The Virtual Machine and the operating system image is defined using the following structure: +The virtual machine and the operating system image are defined by using the following structure: + ```python { os_type = "linux" The Virtual Machine and the operating system image is defined using the followin ### Authentication parameters -By default the SAP System deployment uses the credentials from the SAP Workload zone. If the SAP system needs unique credentials, you can provide them using these parameters. +By default, the SAP system deployment uses the credentials from the SAP workload zone. If the SAP system needs unique credentials, you can provide them by using these parameters. 
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | By default the SAP System deployment uses the credentials from the SAP Workload > | `automation_path_to_public_key` | Path to existing public key | Optional | > | `automation_path_to_private_key` | Path to existing private key | Optional | - ## Other parameters - > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | - | -- |-> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster using managed Identities | Optional | -> | `resource_offset` | Provides and offset for resource naming. The offset number for resource naming when creating multiple resources. The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional | -> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks using customer provided keys | Optional | -> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional | -> | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows the possible values are `None`, `Windows_Client` and `Windows_Server`. | -> | `use_zonal_markers` | Specifies if zonal Virtual Machines will include a zonal identifier. 'xooscs_z1_00l###' vs 'xooscs00l###'| Default value is true. | -> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups | | -> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups| | -+> | `use_msi_for_clusters` | If defined, configures the Pacemaker cluster by using managed identities. | Optional | +> | `resource_offset` | Provides an offset for resource naming. The offset number for resource naming when creating multiple resources. 
The default value is 0, which creates a naming pattern of disk0, disk1, and so on. An offset of 1 creates a naming pattern of disk1, disk2, and so on. | Optional | +> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional | +> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations. | Optional | +> | `license_type` | Specifies the license type for the virtual machines. | Possible values are `RHEL_BYOS` and `SLES_BYOS`. For Windows, the possible values are `None`, `Windows_Client`, and `Windows_Server`. | +> | `use_zonal_markers` | Specifies if zonal virtual machines will include a zonal identifier: `xooscs_z1_00l###` versus `xooscs00l###`.| Default value is true. | +> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | | +> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | | +> | `use_simple_mount` | Specifies if simple mounts are used (applicable for SLES 15 SP# or newer). | Optional | ## NFS support > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -- | -- |-> | `NFS_provider` | Defines what NFS backend to use, the options are 'AFS' for Azure Files NFS or 'ANF' for Azure NetApp files. | -> | `sapmnt_volume_size` | Defines the size (in GB) for the 'sapmnt' volume | Optional | --### Azure files NFS Support +> | `NFS_provider` | Defines what NFS back end to use. The options are `AFS` for Azure Files NFS or `ANF` for Azure NetApp files. | +> | `sapmnt_volume_size` | Defines the size (in GB) for the `sapmnt` volume. 
| Optional | +### Azure files NFS support > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | -- | -- |-> | `azure_files_storage_account_id` | If provided the Azure resource ID of the storage account used for sapmnt | Optional | +> | `azure_files_storage_account_id` | If provided, the Azure resource ID of the storage account used for `sapmnt` | Optional | -### Azure NetApp Files Support +### Azure NetApp Files support > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | --| -- | | > | `ANF_HANA_data` | Create Azure NetApp Files volume for HANA data. | Optional | |-> | `ANF_HANA_data_use_existing_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes | +> | `ANF_HANA_data_use_existing_volume` | Use existing Azure NetApp Files volume for HANA data. | Optional | Use for pre-created volumes. | > | `ANF_HANA_data_volume_name` | Azure NetApp Files volume name for HANA data. | Optional | |-> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | default size 256 | -> | `ANF_HANA_data_volume_throughput` | Azure NetApp Files volume throughput for HANA data. | Optional | default is 128 MBs/s | +> | `ANF_HANA_data_volume_size` | Azure NetApp Files volume size in GB for HANA data. | Optional | Default size is 256. | +> | `ANF_HANA_data_volume_throughput` | Azure NetApp Files volume throughput for HANA data. | Optional | Default is 128 MBs/s. | > | | | | | > | `ANF_HANA_log` | Create Azure NetApp Files volume for HANA log. | Optional | |-> | `ANF_HANA_log_use_existing` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes | +> | `ANF_HANA_log_use_existing` | Use existing Azure NetApp Files volume for HANA log. | Optional | Use for pre-created volumes. | > | `ANF_HANA_log_volume_name` | Azure NetApp Files volume name for HANA log. 
| Optional | |-> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | default size 128 | -> | `ANF_HANA_log_volume_throughput` | Azure NetApp Files volume throughput for HANA log. | Optional | default is 128 MBs/s | +> | `ANF_HANA_log_volume_size` | Azure NetApp Files volume size in GB for HANA log. | Optional | Default size is 128. | +> | `ANF_HANA_log_volume_throughput` | Azure NetApp Files volume throughput for HANA log. | Optional | Default is 128 MBs/s. | > | | | | | > | `ANF_HANA_shared` | Create Azure NetApp Files volume for HANA shared. | Optional | |-> | `ANF_HANA_shared_use_existing` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes | +> | `ANF_HANA_shared_use_existing` | Use existing Azure NetApp Files volume for HANA shared. | Optional | Use for pre-created volumes. | > | `ANF_HANA_shared_volume_name` | Azure NetApp Files volume name for HANA shared. | Optional | |-> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | default size 128 | -> | `ANF_HANA_shared_volume_throughput` | Azure NetApp Files volume throughput for HANA shared. | Optional | default is 128 MBs/s | +> | `ANF_HANA_shared_volume_size` | Azure NetApp Files volume size in GB for HANA shared. | Optional | Default size is 128. | +> | `ANF_HANA_shared_volume_throughput` | Azure NetApp Files volume throughput for HANA shared. | Optional | Default is 128 MBs/s. | > | | | | |-> | `ANF_sapmnt` | Create Azure NetApp Files volume for sapmnt. | Optional | | -> | `ANF_sapmnt_use_existing_volume` | Use existing Azure NetApp Files volume for sapmnt. | Optional | Use for pre-created volumes | -> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for sapmnt. | Optional | | -> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for sapmnt. 
| Optional | default size 128 | -> | `ANF_sapmnt_throughput` | Azure NetApp Files volume throughput for sapmnt. | Optional | default is 128 MBs/s | +> | `ANF_sapmnt` | Create Azure NetApp Files volume for `sapmnt`. | Optional | | +> | `ANF_sapmnt_use_existing_volume` | Use existing Azure NetApp Files volume for `sapmnt`. | Optional | Use for pre-created volumes. | +> | `ANF_sapmnt_volume_name` | Azure NetApp Files volume name for `sapmnt`. | Optional | | +> | `ANF_sapmnt_volume_size` | Azure NetApp Files volume size in GB for `sapmnt`. | Optional | Default size is 128. | +> | `ANF_sapmnt_throughput` | Azure NetApp Files volume throughput for `sapmnt`. | Optional | Default is 128 MBs/s. | > | | | | |-> | `ANF_usr_sap` | Create Azure NetApp Files volume for usrsap. | Optional | | -> | `ANF_usr_sap_use_existing` | Use existing Azure NetApp Files volume for usrsap. | Optional | Use for pre-created volumes | -> | `ANF_usr_sap_volume_name` | Azure NetApp Files volume name for usrsap. | Optional | | -> | `ANF_usr_sap_volume_size` | Azure NetApp Files volume size in GB for usrsap. | Optional | default size 128 | -> | `ANF_usr_sap_throughput` | Azure NetApp Files volume throughput for usrsap. | Optional | default is 128 MBs/s | -+> | `ANF_usr_sap` | Create Azure NetApp Files volume for `usrsap`. | Optional | | +> | `ANF_usr_sap_use_existing` | Use existing Azure NetApp Files volume for `usrsap`. | Optional | Use for pre-created volumes. | +> | `ANF_usr_sap_volume_name` | Azure NetApp Files volume name for `usrsap`. | Optional | | +> | `ANF_usr_sap_volume_size` | Azure NetApp Files volume size in GB for `usrsap`. | Optional | Default size is 128. | +> | `ANF_usr_sap_throughput` | Azure NetApp Files volume throughput for `usrsap`. | Optional | Default is 128 MBs/s. | ## Oracle parameters -> [!NOTE] -> These parameters need to be updated in the sap-parameters.yaml file when deploying Oracle based systems. 
-+These parameters need to be updated in the *sap-parameters.yaml* file when you deploy Oracle-based systems. > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | - | --| -- | |-> | `ora_release` | Release of Oracle, e.g. 19 | Mandatory | | -> | `ora_version` | Version of Oracle, e.g. 19.0.0 | Mandatory | | -> | `oracle_sbp_patch` | Oracle SBP patch file name, e.g. SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials | +> | `ora_release` | Release of Oracle, for example, 19 | Mandatory | | +> | `ora_version` | Version of Oracle, for example, 19.0.0 | Mandatory | | +> | `oracle_sbp_patch` | Oracle SBP patch file name, for example, SAP19P_2202-70004508.ZIP | Mandatory | Must be part of the Bill of Materials | ## Terraform parameters -The table below contains the Terraform parameters, these parameters need to be entered manually if not using the deployment scripts. -+This section contains the Terraform parameters. These parameters need to be entered manually if you're not using the deployment scripts. 
> [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | - | - | - |-> | `tfstate_resource_id` | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files | Required * | -> | `deployer_tfstate_key` | The name of the state file for the Deployer | Required * | +> | `tfstate_resource_id` | Azure resource identifier for the storage account in the SAP library that will contain the Terraform state files | Required * | +> | `deployer_tfstate_key` | The name of the state file for the deployer | Required * | > | `landscaper_tfstate_key` | The name of the state file for the workload zone | Required * | -\* = required for manual deployments +\* = Required for manual deployments -## High availability configuration +## High-availability configuration -The high availability configuration for the database tier and the SCS tier is configured using the `database_high_availability` and `scs_high_availability` flags. For Red Hat and SUSE should use the appropriate 'HA' version of the virtual machine images (RHEL-SAP-HA, sles-sap-15-sp?). +The high-availability configuration for the database tier and the SCS tier is configured by using the `database_high_availability` and `scs_high_availability` flags. Red Hat and SUSE should use the appropriate HA version of the virtual machine images (RHEL-SAP-HA, sles-sap-15-sp?). -High availability configurations use Pacemaker with Azure fencing agents. +High-availability configurations use Pacemaker with Azure fencing agents. > [!NOTE]-> The highly available Central Services deployment requires using a shared file system for sap_mnt. This can be achieved by using Azure Files or Azure NetApp Files, using the NFS_provider attribute. The default is Azure Files. To use Azure NetApp Files, set the NFS_provider attribute to ANF. - +> The highly available central services deployment requires using a shared file system for `sap_mnt`. 
You can use Azure Files or Azure NetApp Files by using the `NFS_provider` attribute. The default is Azure Files. To use Azure NetApp Files, set the `NFS_provider` attribute to `ANF`. -### Fencing agent configuration +### Fencing agent configuration -SDAF supports using either managed identities or service principals for fencing agents. The following section describe how to configure each option. +SAP Deployment Automation Framework supports using either managed identities or service principals for fencing agents. The following section describes how to configure each option. -By defining the variable 'use_msi_for_clusters' to true the fencing agent will use managed identities. This is the recommended option. +If you set the variable `use_msi_for_clusters` to `true`, the fencing agent uses managed identities. -If you want to use a service principal for the fencing agent set that variable to false. +If you want to use a service principal for the fencing agent, set that variable to false. -The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create Fencing Agent](../../virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device) +The fencing agents should be configured to use a unique service principal with permissions to stop and start virtual machines. For more information, see [Create a fencing agent](../../virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker.md#create-an-azure-fence-agent-device). ```azurecli-interactive az ad sp create-for-rbac --role="Linux Fence Agent Role" --scopes="/subscriptions/<subscriptionID>" --name="<prefix>-Fencing-Agent" ``` -Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01` and `<subscriptionID>` with the workload zone subscription ID. +Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`. 
Replace `<subscriptionID>` with the workload zone subscription ID. > [!IMPORTANT]-> The name of the Fencing Agent Service Principal must be unique in the tenant. The script assumes that a role 'Linux Fence Agent Role' has already been created +> The name of the fencing agent service principal must be unique in the tenant. The script assumes that a role `Linux Fence Agent Role` was already created. >-> Record the values from the Fencing Agent SPN. +> Record the values from the fencing agent SPN: > - appId > - password > - tenant -The fencing agent details must be stored in the workload zone key vault using a predefined naming convention. Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`, `<workload_kv_name>` with the name of the key vault from the workload zone resource group and for the other values use the values recorded from the previous step and run the script. -+The fencing agent details must be stored in the workload zone key vault by using a predefined naming convention. Replace `<prefix>` with the name prefix of your environment, such as `DEV-WEEU-SAP01`. Replace `<workload_kv_name>` with the name of the key vault from the workload zone resource group. For the other values, use the values recorded from the previous step and run the script. ```azurecli-interactive az keyvault secret set --name "<prefix>-fencing-spn-id" --vault-name "<workload_kv_name>" --value "<appId>"; |
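The code sample above stores only the first fencing-agent secret (the `appId`). As a hedged sketch, the password and tenant values recorded earlier would be stored the same way. The secret names `<prefix>-fencing-spn-pwd` and `<prefix>-fencing-spn-tenant` below are illustrative assumptions that extend the `<prefix>-fencing-spn-*` naming convention shown above; confirm the exact names against your framework version before using them.

```azurecli-interactive
# Illustrative only: secret names assume the <prefix>-fencing-spn-* convention continues
az keyvault secret set --name "<prefix>-fencing-spn-pwd" --vault-name "<workload_kv_name>" --value "<password>";
az keyvault secret set --name "<prefix>-fencing-spn-tenant" --vault-name "<workload_kv_name>" --value "<tenant>"
```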
sap | Deploy Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-control-plane.md | Title: About Control Plane deployment for the SAP on Azure Deployment Automation Framework -description: Overview of the Control Plan deployment process within the SAP on Azure Deployment Automation Framework. + Title: Deploy the control plane for SAP Deployment Automation Framework +description: Overview of the control plane deployment process in SAP Deployment Automation Framework. -The control plane deployment for the [SAP on Azure Deployment Automation Framework](deployment-framework.md) consists of the following components: +The control plane deployment for [SAP Deployment Automation Framework](deployment-framework.md) consists of the: +- Deployer +- SAP library + ## Prepare the deployment credentials -The SAP Deployment Frameworks uses Service Principals when doing the deployments. You can create the Service Principal for the Control Plane deployment using the following steps using an account with permissions to create Service Principals: +SAP Deployment Automation Framework uses service principals for deployments. To create a service principal for the control plane deployment, use an account that has permissions to create service principals: ```azurecli az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account" az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscrip ``` > [!IMPORTANT]-> The name of the Service Principal must be unique. +> The name of the service principal must be unique. >-> Record the output values from the command. 
+> Record the output values from the command: > - appId > - password > - tenant -Optionally assign the following permissions to the Service Principal: +Optionally, assign the following permissions to the service principal: ```azurecli az role assignment create --assignee <appId> --role "User Access Administrator" --scope /subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName> ``` --## Prepare the webapp -This step is optional. If you would like a browser-based UX to help the configuration of SAP workload zones and systems, run the following commands before deploying the control plane. +## Prepare the web app +This step is optional. If you want a browser-based UX to help the configuration of SAP workload zones and systems, run the following commands before you deploy the control plane. # [Linux](#tab/linux) del manifest.json # [Azure DevOps](#tab/devops) -It's currently not possible to perform this action from Azure DevOps. +Currently, it isn't possible to perform this action from Azure DevOps. - ## Deploy the control plane- -The sample Deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder. -The sample SAP Library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder. +The sample deployer configuration file `MGMT-WEEU-DEP00-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` folder. -Running the following command creates the Deployer, the SAP Library and adds the Service Principal details to the deployment key vault. If you followed the web app setup in the previous step, this command also creates the infrastructure to host the application. 
+The sample SAP library configuration file `MGMT-WEEU-SAP_LIBRARY.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folder. ++Run the following command to create the deployer and the SAP library. The command adds the service principal details to the deployment key vault. If you followed the web app setup in the previous step, this command also creates the infrastructure to host the application. # [Linux](#tab/linux) Run the following command to deploy the control plane: ```bash -az logout -cd ~/Azure_SAP_Automated_Deployment -cp -Rp samples/Terraform/WORKSPACES config -cd config/WORKSPACES - export ARM_SUBSCRIPTION_ID="<subscriptionId>" export ARM_CLIENT_ID="<appId>" export ARM_CLIENT_SECRET="<password>" export ARM_TENANT_ID="<tenantId>" export env_code="MGMT" export region_code="WEEU"-export vnet_code="WEEU" +export vnet_code="DEP01" +export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES" +export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" ++az logout az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" -export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" -="${subscriptionId}" -export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES" -export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +cd ~/Azure_SAP_Automated_Deployment/WORKSPACES sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \ sudo ${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh --subscription "${ARM_SUBSCRIPTION_ID}" \ --spn_id "${ARM_CLIENT_ID}" \ --spn_secret "${ARM_CLIENT_SECRET}" \- --tenant_id "${ARM_TENANT_ID}" \ - --auto-approve + --tenant_id "${ARM_TENANT_ID}" ``` - # [Windows](#tab/windows) You can't 
perform a control plane deployment from Windows.+ # [Azure DevOps](#tab/devops) -Open (https://dev.azure.com) and go to your Azure DevOps project. +Open [Azure DevOps](https://dev.azure.com) and go to your Azure DevOps project. -> [!NOTE] -> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'. +Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to the folder that contains your configuration files. For this example, you can use `samples/WORKSPACES`. -The deployment uses the configuration defined in the Terraform variable files located in the 'WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE' and 'WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY' folders. +The deployment uses the configuration defined in the Terraform variable files located in the `WORKSPACES/DEPLOYER/MGMT-WEEU-DEP00-INFRASTRUCTURE` and `WORKSPACES/LIBRARY/MGMT-WEEU-SAP_LIBRARY` folders. -Run the pipeline by selecting the _Deploy control plane_ pipeline from the Pipelines section. Enter the configuration names for the deployer and the SAP library. Use 'MGMT-WEEU-DEP00-INFRASTRUCTURE' as the Deployer configuration name and 'MGMT-WEEU-SAP_LIBRARY' as the SAP Library configuration name. +Run the pipeline by selecting the `Deploy control plane` pipeline from the **Pipelines** section. Enter the configuration names for the deployer and the SAP library. Use `MGMT-WEEU-DEP00-INFRASTRUCTURE` as the deployer configuration name and `MGMT-WEEU-SAP_LIBRARY` as the SAP library configuration name. -You can track the progress in the Azure DevOps portal. Once the deployment is complete, you can see the Control Plane details in the _Extensions_ tab. +You can track the progress in the Azure DevOps portal. After the deployment is finished, you can see the control plane details on the **Extensions** tab. 
- :::image type="content" source="media/devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot of the run Azure DevOps pipeline run results."::: + :::image type="content" source="media/devops/automation-run-pipeline-control-plane.png" alt-text="Screenshot that shows the run Azure DevOps pipeline run results."::: -### Manually configure the deployer using Azure Bastion +### Manually configure the deployer by using Azure Bastion -Connect to the deployer by following these steps: +To connect to the deployer: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. Navigate to the resource group containing the deployer virtual machine. +1. Go to the resource group that contains the deployer virtual machine (VM). -1. Connect to the virtual machine using Azure Bastion. +1. Connect to the VM by using Azure Bastion. -1. The default username is *azureadm* +1. The default username is **azureadm**. -1. Choose *SSH Private Key from Azure Key Vault* +1. Select **SSH Private Key from Azure Key Vault**. -1. Select the subscription containing the control plane. +1. Select the subscription that contains the control plane. 1. Select the deployer key vault. -1. From the list of secrets choose the secret ending with *-sshkey*. +1. From the list of secrets, choose the secret that ends with **-sshkey**. -1. Connect to the virtual machine. +1. Connect to the VM. -Run the following script to configure the deployer. +Run the following script to configure the deployer: ```bash cd sap-automation/deploy/scripts ./configure_deployer.sh ``` -The script installs Terraform and Ansible and configure the deployer. +The script installs Terraform and Ansible and configures the deployer. ### Manually configure the deployer -> [!NOTE] ->You need to connect to the deployer virtual Machine from a computer that is able to reach the Azure Virtual Network +Connect to the deployer VM from a computer that can reach the Azure virtual network. 
-Connect to the deployer by following these steps: +To connect to the deployer: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select or search for **Key vaults**. -1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by the **Resource group** or **Location** if necessary. +1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by the **Resource group** or **Location**, if necessary. -1. Select **Secrets** from the **Settings** section in the left pane. +1. On the **Settings** section in the left pane, select **Secrets**. -1. Find and select the secret containing **sshkey**. It might look like this: `MGMT-[REGION]-DEP00-sshkey` +1. Find and select the secret that contains **sshkey**. It might look like `MGMT-[REGION]-DEP00-sshkey`. -1. On the secret's page, select the current version. Then, copy the **Secret value**. +1. On the secret's page, select the current version. Then copy the **Secret value**. -1. Open a plain text editor. Copy in the secret value. - -1. Save the file where you keep SSH keys. For example, `C:\\Users\\<your-username>\\.ssh`. - -1. Save the file. If you're prompted to **Save as type**, select **All files** if **SSH** isn't an option. For example, use `deployer.ssh`. +1. Open a plain text editor. Copy the secret value. -1. Connect to the deployer VM through any SSH client such as Visual Studio Code. Use the private IP address of the deployer, and the SSH key you downloaded. For instructions on how to connect to the Deployer using Visual Studio Code see [Connecting to Deployer using Visual Studio Code](tools-configuration.md#configuring-visual-studio-code). If you're using PuTTY, convert the SSH key file first using PuTTYGen. +1. Save the file where you keep SSH keys. An example is `C:\Users\<your-username>\.ssh`. -> [!NOTE] ->The default username is *azureadm* +1. Save the file. 
If you're prompted to **Save as type**, select **All files** if **SSH** isn't an option. For example, use `deployer.ssh`. ++1. Connect to the deployer VM through any SSH client, such as Visual Studio Code. Use the private IP address of the deployer and the SSH key you downloaded. For instructions on how to connect to the deployer by using Visual Studio Code, see [Connect to the deployer by using Visual Studio Code](tools-configuration.md#configuring-visual-studio-code). If you're using PuTTY, convert the SSH key file first by using PuTTYGen. -Configure the deployer using the following script: +> [!NOTE] +>The default username is **azureadm**. +Configure the deployer by using the following script: ```bash mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_ cd sap-automation/deploy/scripts ./configure_deployer.sh ``` -The script installs Terraform and Ansible and configure the deployer. --+The script installs Terraform and Ansible and configures the deployer. ## Next step > [!div class="nextstepaction"]-> [Configure SAP Workload Zone](configure-workload-zone.md) +> [Configure SAP workload zone](configure-workload-zone.md) |
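For the credential-preparation step in this article, it helps to know the shape of the JSON that `az ad sp create-for-rbac` prints. The values below are placeholders, but the field names are the ones the instructions above tell you to record; `appId`, `password`, and `tenant` map to the `ARM_CLIENT_ID`, `ARM_CLIENT_SECRET`, and `ARM_TENANT_ID` variables used by the deployment script.

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "<environment>-Deployment-Account",
  "password": "<generated-secret>",
  "tenant": "00000000-0000-0000-0000-000000000000"
}
```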
sap | Deploy System | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-system.md | Title: About SAP system deployment for the automation framework -description: Overview of the SAP system deployment process within the SAP on Azure Deployment Automation Framework. + Title: SAP system deployment for the automation framework +description: Overview of the SAP system deployment process in SAP Deployment Automation Framework. -The creation of the [SAP system](deployment-framework.md#sap-concepts) is part of the [SAP on Azure Deployment Automation Framework](deployment-framework.md) process. The SAP system creates your virtual machines (VMs), and supporting components for your [SAP application](deployment-framework.md#sap-concepts). +The creation of the [SAP system](deployment-framework.md#sap-concepts) is part of the [SAP Deployment Automation Framework](deployment-framework.md) process. The SAP system deployment creates your virtual machines (VMs) and supporting components for your [SAP application](deployment-framework.md#sap-concepts). The SAP system deploys: -- The [database tier](#database-tier), which deploys database VMs, their disks, and a Standard Azure Load Balancer. You can run [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) in this tier.-- The [SAP central services tier](#central-services-tier), which deploys a customer-defined number of VMs and an Azure Standard Load Balancer.+- The [database tier](#database-tier), which deploys database VMs, their disks, and a Standard instance of Azure Load Balancer. You can run [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) in this tier. +- The [SAP central services tier](#central-services-tier), which deploys a customer-defined number of VMs and a Standard instance of Load Balancer. 
- The [application tier](#application-tier), which deploys the VMs and their disks.-- The [web dispatcher tier](#web-dispatcher-tier)+- The [web dispatcher tier](#web-dispatcher-tier). ## Application tier The application tier deploys a customer-defined number of VMs. These VMs are size **Standard_D4s_v3** with a 30-GB operating system (OS) disk and a 512-GB data disk. -To set the application server count, define the parameter `application_server_count` for this tier in your parameter file. For example, `application_server_count= 3`. -+To set the application server count, define the parameter `application_server_count` for this tier in your parameter file. For example, use `application_server_count= 3`. ## Central services tier -The SAP central services (SCS) tier deploys a customer-defined number of VMs. These VMs are size **Standard_D4s_v3** with a 30-GB OS disk and a 512-GB data disk. This tier also deploys an [Azure Standard Load Balancer](../../load-balancer/load-balancer-overview.md). --To set the SCS server count, define the parameter `scs_server_count` for this tier in your parameter file. For example, `scs_server_count=1`. +The SAP central services (SCS) tier deploys a customer-defined number of VMs. These VMs are size **Standard_D4s_v3** with a 30-GB OS disk and a 512-GB data disk. This tier also deploys a [Standard instance of Load Balancer](../../load-balancer/load-balancer-overview.md). +To set the SCS server count, define the parameter `scs_server_count` for this tier in your parameter file. For example, use `scs_server_count=1`. ## Web dispatcher tier -The web dispatcher tier deploys a customer-defined number of VMs. This tier also deploys an [Azure Standard Load Balancer](../../load-balancer/load-balancer-overview.md). +The web dispatcher tier deploys a customer-defined number of VMs. This tier also deploys a [Standard instance of Load Balancer](../../load-balancer/load-balancer-overview.md). 
-To set the web server count, define the parameter `web_server_count` for this tier in your parameter file. For example, `web_server_count = 2`. +To set the web server count, define the parameter `web_server_count` for this tier in your parameter file. For example, use `web_server_count = 2`. ## Database tier -The database tier deploys the VMs and their disks, and also an [Azure Standard Load Balancer](../../load-balancer/load-balancer-overview.md). You can use either [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) as your database VMs. +The database tier deploys the VMs and their disks and also deploys a [Standard instance of Load Balancer](../../load-balancer/load-balancer-overview.md). You can use either [HANA databases](configure-extra-disks.md#hana-databases) or [AnyDB databases](configure-extra-disks.md#anydb-databases) as your database VMs. -You can set the size of database VMs with the parameter `size` for this tier. For example, `"size": "S4Demo"` for HANA databases or `"size": "1 TB"` for AnyDB databases. Refer to the **Size** parameter in the tables of [HANA database VM options](configure-extra-disks.md#hana-databases) and [AnyDB database VM options](configure-extra-disks.md#anydb-databases) for possible values. +You can set the size of database VMs with the parameter `size` for this tier. For example, use `"size": "S4Demo"` for HANA databases or `"size": "1 TB"` for AnyDB databases. For possible values, see the **Size** parameter in the tables of [HANA database VM options](configure-extra-disks.md#hana-databases) and [AnyDB database VM options](configure-extra-disks.md#anydb-databases). -By default, the automation framework deploys the correct disk configuration for HANA database deployments. For HANA database deployments, the framework calculates default disk configuration based on VM size. 
However, for AnyDB database deployments, the framework calculates default disk configuration based on database size. You can set a disk size as needed by creating a custom JSON file in your deployment. For an example, [see the following JSON code sample and replace values as necessary for your configuration](configure-extra-disks.md#custom-sizing-file). Then, define the parameter `db_disk_sizes_filename` in the parameter file for the database tier. For example, `db_disk_sizes_filename = "path/to/JSON/file"`. +By default, the automation framework deploys the correct disk configuration for HANA database deployments. For HANA database deployments, the framework calculates default disk configuration based on VM size. However, for AnyDB database deployments, the framework calculates default disk configuration based on database size. You can set a disk size as needed by creating a custom JSON file in your deployment. For an example, [see the following JSON code sample and replace values as necessary for your configuration](configure-extra-disks.md#custom-sizing-file). Then, define the parameter `db_disk_sizes_filename` in the parameter file for the database tier. An example is `db_disk_sizes_filename = "path/to/JSON/file"`. -You can also [add extra disks to a new system](configure-extra-disks.md#custom-sizing-file), or [add extra disks to an existing system](configure-extra-disks.md#add-extra-disks-to-existing-system). +You can also [add extra disks to a new system](configure-extra-disks.md#custom-sizing-file) or [add extra disks to an existing system](configure-extra-disks.md#add-extra-disks-to-an-existing-system). ## Core configuration webdispatcher_server_count=0 ``` -## Deploying the SAP system - -The sample SAP System configuration file `DEV-WEEU-SAP01-X01.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01` folder. 
+## Deploy the SAP system ++The sample SAP system configuration file `DEV-WEEU-SAP01-X01.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X01` folder. -Running the command below will deploy the SAP System. +Run the following command to deploy the SAP system. # [Linux](#tab/linux) -> [!TIP] -> Perform this task from the deployer. +Perform this task from the deployer. You can copy the sample configuration files to start testing the deployment automation framework. xcopy sap-automation\deploy\samples\WORKSPACES WORKSPACES ``` - ```powershell cd C:\Azure_SAP_Automated_Deployment\WORKSPACES\SYSTEM\DEV-WEEU-SAP01-X01 New-SAPSystem -Parameterfile DEV-WEEU-SAP01-X01.tfvars # [Azure DevOps](#tab/devops) -Open (https://dev.azure.com) and go to your Azure DevOps Services project. +Open [Azure DevOps](https://dev.azure.com) and go to your Azure DevOps Services project. -> [!NOTE] -> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'. +Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to the folder that contains your configuration files. For this example, you can use `samples/WORKSPACES`. -The deployment will use the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00' folder. +The deployment uses the configuration defined in the Terraform variable file located in the `samples/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00` folder. -Run the pipeline by selecting the _SAP system deployment_ pipeline from the Pipelines section. Enter 'DEV-WEEU-SAP01-X00' as the SAP System configuration name. +Run the pipeline by selecting the `SAP system deployment` pipeline from the **Pipelines** section. Enter `DEV-WEEU-SAP01-X00` as the SAP system configuration name. 
-You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the SAP System details in the _Extensions_ tab. +You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the SAP system details on the **Extensions** tab. ### Output files -The deployment will create an Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`) that are required input for the Ansible playbooks. -## Next steps +The deployment creates an Ansible hosts file (`SID_hosts.yaml`) and an Ansible parameter file (`sap-parameters.yaml`). These files are required input for the Ansible playbooks. ++## Next step > [!div class="nextstepaction"]-> [About workload zone deployment with automation framework](software.md) +> [Workload zone deployment with automation framework](software.md) |
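The Deploy System article above sets each tier's VM count through parameters such as `application_server_count`, `scs_server_count`, and `webdispatcher_server_count` in the system's Terraform variable file. A minimal sketch of such a file (the values are illustrative, not defaults; the file name follows the sample `DEV-WEEU-SAP01-X01.tfvars` convention):

```shell
# Write a minimal system parameter file with the three tier-count
# parameters described above. Values are illustrative, not defaults.
cat > DEV-WEEU-SAP01-X01.tfvars <<'EOF'
application_server_count   = 3
scs_server_count           = 1
webdispatcher_server_count = 2
EOF

# Count the tier-count parameters that were written.
grep -c '_count' DEV-WEEU-SAP01-X01.tfvars
# → 3
```

In a real deployment this file would also carry the database tier `size` and, for AnyDB, optionally `db_disk_sizes_filename`, as described in the article.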
sap | Deploy Workload Zone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deploy-workload-zone.md | Title: About workload zone deployment in automation framework -description: Overview of the SAP workload zone deployment process within the SAP on Azure Deployment Automation Framework. +description: Overview of the SAP workload zone deployment process within SAP Deployment Automation Framework. -# Workload zone deployment in SAP automation framework +# Workload zone deployment in the SAP automation framework -An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. The [SAP on Azure Deployment Automation Framework](deployment-framework.md) refers to these tiers as [workload zones](deployment-framework.md#deployment-components). +An [SAP application](deployment-framework.md#sap-concepts) typically has multiple development tiers. For example, you might have development, quality assurance, and production tiers. [SAP Deployment Automation Framework](deployment-framework.md) calls these tiers [workload zones](deployment-framework.md#deployment-components). -You can use workload zones in multiple Azure regions. Each workload zone then has its own Azure Virtual Network (Azure virtual network) +You can use workload zones in multiple Azure regions. Each workload zone then has its own instance of Azure Virtual Network. 
The following services are provided by the SAP workload zone: -- Azure Virtual Network, including subnets and network security groups.-- Azure Key Vault, for system credentials.-- Storage account for boot diagnostics-- Storage account for cloud witnesses-- Azure NetApp account and capacity pools (optional)-- Azure Files NFS Shares (optional)+- A virtual network, including subnets and network security groups +- An Azure Key Vault instance, for system credentials +- An Azure Storage account for boot diagnostics +- A Storage account for cloud witnesses +- An Azure NetApp Files account and capacity pools (optional) +- Azure Files NFS shares (optional) -The workload zones are typically deployed in spokes in a hub and spoke architecture. They may be in their own subscriptions. --Supports the Private DNS from the Control Plane or from a configurable source. +The workload zones are typically deployed in spokes in a hub-and-spoke architecture. They can be in their own subscriptions. +The private DNS is supported from the control plane or from a configurable source. ## Core configuration location="westeurope" # The network logical name is mandatory - it is used in the naming convention and should map to the workload virtual network logical name network_name="SAP01" -# network_address_space is a mandatory parameter when an existing Virtual network is not used +# network_address_space is a mandatory parameter when an existing virtual network is not used network_address_space="10.110.0.0/16" # admin_subnet_address_prefix is a mandatory parameter if the subnets are not defined in the workload or if existing subnets are not used automation_username="azureadm" ``` -## Preparing the Workload zone deployment credentials --The SAP Deployment Frameworks uses Service Principals when doing the deployment. 
You can create the Service Principal for the Workload Zone deployment using the following steps using an account with permissions to create Service Principals: +## Prepare the workload zone deployment credentials +SAP Deployment Automation Framework uses service principals when doing the deployment. To create the service principal for the workload zone deployment, use an account with permissions to create service principals. ```azurecli-interactive az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscriptionID>" --name="<environment>-Deployment-Account" az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/<subscrip ``` > [!IMPORTANT]-> The name of the Service Principal must be unique. +> The name of the service principal must be unique. >-> Record the output values from the command. +> Record the output values from the command: > - appId > - password > - tenant -Assign the correct permissions to the Service Principal: +Assign the correct permissions to the service principal. ```azurecli az role assignment create --assignee <appId> \ az role assignment create --assignee <appId> \ --role "User Access Administrator" ``` -## Deploying the SAP Workload zone +## Deploy the SAP workload zone -The sample Workload Zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder. +The sample workload zone configuration file `DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars` is located in the `~/Azure_SAP_Automated_Deployment/samples/Terraform/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder. -Running the following command deploys the SAP Workload Zone. +Run the following command to deploy the SAP workload zone. # [Linux](#tab/linux) -> [!TIP] -> Perform this task from the deployer. +Perform this task from the deployer. 
You can copy the sample configuration files to start testing the deployment automation framework. export region_code="<region_code>" export vnet_code="SAP02" export deployer_environment="MGMT" -az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" - export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/config/WORKSPACES" export SAP_AUTOMATION_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" +az login --service-principal -u "${ARM_CLIENT_ID}" -p="${ARM_CLIENT_SECRET}" --tenant "${ARM_TENANT_ID}" ++ cd "${CONFIG_REPO_PATH}/LANDSCAPE/${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE" parameterFile="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE.tfvars" $SAP_AUTOMATION_REPO_PATH/deploy/scripts/install_workloadzone.sh \ --subscription "${ARM_SUBSCRIPTION_ID}" \ --spn_id "${ARM_CLIENT_ID}" \ --spn_secret "${ARM_CLIENT_SECRET}" \- --tenant_id "${ARM_TENANT_ID}" \ - --auto-approve + --tenant_id "${ARM_TENANT_ID}" ```+ # [Windows](#tab/windows) It isn't possible to perform the deployment from Windows.- -> [!NOTE] -> Be sure to replace the sample value `<subscriptionID>` with your subscription ID. -> Replace the `<appID>`, `<password>`, `<tenant>` values with the output values of the SPN creation -> Replace `<keyvault>` with the deployer key vault name -> Replace `<storageaccount>` with the name of the storage account containing the Terraform state files -> Replace `<statefile_subscription>` with the subscription ID for the storage account containing the Terraform state files +To begin, be sure to replace: ++- The sample value `<subscriptionID>` with your subscription ID. +- The `<appID>`, `<password>`, and `<tenant>` values with the output values of the SPN creation. +- The `<keyvault>` value with the deployer key vault name. 
+- The `<storageaccount>` value with the name of the storage account that contains the Terraform state files. +- The `<statefile_subscription>` value with the subscription ID for the storage account that contains the Terraform state files. # [Azure DevOps](#tab/devops) -Open (https://dev.azure.com) and go to your Azure DevOps Services project. +Open [Azure DevOps](https://dev.azure.com) and go to your Azure DevOps Services project. -> [!NOTE] -> Ensure that the 'Deployment_Configuration_Path' variable in the 'SDAF-General' variable group is set to the folder that contains your configuration files, for this example you can use 'samples/WORKSPACES'. +Ensure that the `Deployment_Configuration_Path` variable in the `SDAF-General` variable group is set to the folder that contains your configuration files. For this example, you can use `samples/WORKSPACES`. -The deployment uses the configuration defined in the Terraform variable file located in the 'samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE' folder. +The deployment uses the configuration defined in the Terraform variable file located in the `samples/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE` folder. -Run the pipeline by selecting the _Deploy workload zone_ pipeline from the Pipelines section. Enter the workload zone configuration name and the deployer environment name. Use 'DEV-WEEU-SAP01-INFRASTRUCTURE' as the Workload zone configuration name and 'MGMT' as the Deployer Environment Name. +Run the pipeline by selecting the `Deploy workload zone` pipeline from the **Pipelines** section. Enter the workload zone configuration name and the deployer environment name. Use `DEV-WEEU-SAP01-INFRASTRUCTURE` as the workload zone configuration name and `MGMT` as the deployer environment name. -You can track the progress in the Azure DevOps Services portal. Once the deployment is complete, you can see the Workload Zone details in the _Extensions_ tab. 
+You can track the progress in the Azure DevOps Services portal. After the deployment is finished, you can see the workload zone details on the **Extensions** tab. - > [!TIP]-> If the scripts fail to run, it can sometimes help to clear the local cache files by removing `~/.sap_deployment_automation/` and `~/.terraform.d/` directories before running the scripts again. +> If the scripts fail to run, it can sometimes help to clear the local cache files by removing the `~/.sap_deployment_automation/` and `~/.terraform.d/` directories before you run the scripts again. -## Next steps +## Next step > [!div class="nextstepaction"]-> [About SAP system deployment in automation framework](configure-system.md) +> [SAP system deployment with the automation framework](configure-system.md) |
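The workload zone steps above rely on a consistent folder naming convention: configurations live under `LANDSCAPE/<ENVIRONMENT>-<REGION>-<VNET>-INFRASTRUCTURE`, as in the sample `DEV-WEEU-SAP01-INFRASTRUCTURE` path. A short shell sketch of how that path is assembled from the environment, region, and network codes (the codes here match the sample; substitute your own):

```shell
# Assemble the workload zone configuration path from its component codes,
# following the LANDSCAPE/<ENV>-<REGION>-<VNET>-INFRASTRUCTURE convention
# shown in the sample above.
env_code="DEV"; region_code="WEEU"; vnet_code="SAP01"
zone="${env_code}-${region_code}-${vnet_code}-INFRASTRUCTURE"
echo "LANDSCAPE/${zone}/${zone}.tfvars"
# → LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE/DEV-WEEU-SAP01-INFRASTRUCTURE.tfvars
```

The same codes feed the `install_workloadzone.sh` invocation shown earlier through the `env_code`, `region_code`, and `vnet_code` environment variables.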
sap | Deployment Framework | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deployment-framework.md | Title: About SAP on Azure Deployment Automation Framework -description: Overview of the framework and tooling for the SAP on Azure Deployment Automation Framework. + Title: About SAP Deployment Automation Framework +description: Overview of the framework and tooling for SAP Deployment Automation Framework. -# SAP on Azure Deployment Automation Framework +# SAP Deployment Automation Framework -The [SAP on Azure Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool for deploying, installing and maintaining SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB using [Terraform](https://www.terraform.io/), and [Ansible](https://www.ansible.com/) for the operating system and application configuration. The systems can be deployed on any of the SAP-supported operating system versions and deployed into any Azure region. +[SAP Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool that's used to deploy, install, and maintain SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB by using [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/) for the operating system and application configuration. You can deploy the systems on any of the SAP-supported operating system versions and into any Azure region. -Hashicorp [Terraform](https://www.terraform.io/) is an open-source tool for provisioning and managing cloud infrastructure. +[Terraform](https://www.terraform.io/) from Hashicorp is an open-source tool for provisioning and managing cloud infrastructure. 
-[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. Using Ansible, you can automate deployment and configuration of resources in your environment. +[Ansible](https://www.ansible.com/) is an open-source platform by Red Hat that automates cloud provisioning, configuration management, and application deployments. When you use Ansible, you can automate deployment and configuration of resources in your environment. The [automation framework](https://github.com/Azure/sap-automation) has two main components:-- Deployment infrastructure (control plane, hub component)-- SAP Infrastructure (SAP Workload, spoke component) -You'll use the control plane of the SAP on Azure Deployment Automation Framework to deploy the SAP Infrastructure and the SAP application. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas) defined infrastructure to host the SAP Applications. +- Deployment infrastructure (control plane and hub component) +- SAP infrastructure (SAP workload and spoke component) ++You use the control plane of SAP Deployment Automation Framework to deploy the SAP infrastructure and the SAP application. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas)-defined infrastructure to host the SAP applications. > [!NOTE]-> This automation framework is based on Microsoft best practices and principles for SAP on Azure. Review the [get-started guide for SAP on Azure virtual machines (Azure VMs)](get-started.md) to understand how to use certified virtual machines and storage solutions for stability, reliability, and performance. +> This automation framework is based on Microsoft best practices and principles for SAP on Azure. 
To understand how to use certified virtual machines (VMs) and storage solutions for stability, reliability, and performance, see [Get started with SAP automation framework on Azure](get-started.md). > > This automation framework also follows the [Microsoft Cloud Adoption Framework for Azure](/azure/cloud-adoption-framework/). -The automation framework can be used to deploy the following SAP architectures: --- Standalone-- Distributed-- Distributed (Highly Available)--In the Standalone architecture, all the SAP roles are installed on a single server. In the distributed architecture, you can separate the database server and the application tier. The application tier can further be separated in two by having SAP Central Services on a virtual machine and one or more application servers. +You can use the automation framework to deploy the following SAP architectures: -The Distributed (Highly Available) deployment is similar to the Distributed architecture. In this deployment, the database and/or SAP Central Services can both be configured using a highly available configuration using two virtual machines each with Pacemaker clusters. +- **Standalone**: For this architecture, all the SAP roles are installed on a single server. +- **Distributed**: With this architecture, you can separate the database server and the application tier. The application tier can further be separated in two by having SAP central services on a VM and one or more application servers. +- **Distributed (highly available)**: This architecture is similar to the distributed architecture. In this deployment, the database and/or SAP central services can both be configured by using a highly available configuration that uses two VMs, each with Pacemaker clusters. -The dependency between the control plane and the application plane is illustrated in the diagram below. In a typical deployment, a single control plane is used to manage multiple SAP deployments. 
+The dependency between the control plane and the application plane is illustrated in the following diagram. In a typical deployment, a single control plane is used to manage multiple SAP deployments. ## About the control plane -The control plane houses the deployment infrastructure from which other environments will be deployed. Once the control plane is deployed, it rarely needs to be redeployed, if ever. +The control plane houses the deployment infrastructure from which other environments are deployed. After the control plane is deployed, it rarely needs to be redeployed, if ever. ++The control plane provides the following services: -The control plane provides the following services - Deployment agents for running:- - Terraform Deployment + - Terraform deployment - Ansible configuration - Persistent storage for the Terraform state files-- Persistent storage for the Downloaded SAP Software+- Persistent storage for the downloaded SAP software - Azure Key Vault for secure storage for deployment credentials - Private DNS zone (optional)-- Configuration Web Application+- Configuration for web applications ++The control plane is typically a regional resource deployed into the hub subscription in a [hub-and-spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). -The control plane is typically a regional resource deployed in to the hub subscription in a [hub and spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). +The following diagram shows the key components of the control plane and the workload zone. -The following diagram shows the key components of the control plane and workload zone. +The application configuration is performed from the deployment agents in the control plane by using a set of predefined playbooks. These playbooks will: +- Configure base operating system settings. +- Configure SAP-specific operating system settings. +- Make the installation media available in the system. 
+- Install the SAP system components. +- Install the SAP database (SAP HANA and AnyDB). +- Configure high availability by using Pacemaker. +- Configure high availability for your SAP database. -The application configuration will be performed from the deployment agents in the Control plane using a set of pre-defined playbooks. These playbooks will: +For more information about how to configure and deploy the control plane, see [Configure the control plane](configure-control-plane.md) and [Deploy the control plane](deploy-control-plane.md). -- Configure base operating system settings-- Configure SAP-specific operating system settings-- Make the installation media available in the system-- Install the SAP system components-- Install the SAP database (SAP HANA, AnyDB)-- Configure high availability (HA) using Pacemaker-- Configure high availability (HA) for your SAP database+## Software acquisition process +The framework also provides an Ansible playbook that can be used to download the software from SAP and persist it in the storage accounts in the control plane's SAP library resource group. -For more information of how to configure and deploy the control plane, see [Configuring the control plane](configure-control-plane.md) and [Deploying the control plane](deploy-control-plane.md). +The software acquisition process uses an SAP application manifest file that contains the list of SAP software to be downloaded. The manifest file is a YAML file that contains the: ## Software acquisition process +- List of files to be downloaded. +- List of the product IDs for the SAP application components. +- Set of template files used to provide the parameters for the unattended installation. ++The SAP software download playbook processes the manifest file and the dependent manifest files and downloads the SAP software from SAP by using the specified SAP user account. The software is downloaded to the SAP library storage account and is available for the installation process. 
-The framework also provides an Ansible playbook that can be used to download the software from SAP and persist it in the storage accounts in the Control Plane's SAP Library resource group. +As part of the download process, the application manifest and the supporting templates are also persisted in the storage account. The application manifest and the dependent manifests are aggregated into a single manifest file that's used by the installation process. -The software acquisition is using an SAP Application manifest file that contains the list of SAP software to be downloaded. The manifest file is a YAML file that contains the following information: +### Deployer VMs -- List of files to be downloaded-- List of the Product IDs for the SAP application components-- A set of template files used to provide the parameters for the unattended installation+These VMs are used to run the orchestration scripts that deploy the Azure resources by using Terraform. They're also Ansible controllers and are used to execute the Ansible playbooks on all the managed nodes, that is, the VMs of an SAP deployment. -The SAP Software download playbook will process the manifest file and the dependent manifest files and download the SAP software from SAP using the specified SAP user account. The software will be downloaded to the SAP Library storage account and will be available for the installation process. As part of the download the process the application manifest and the supporting templates will also be persisted in the storage account. The application manifest and the dependent manifests will be aggregated into a single manifest file that will be used by the installation process. +## About the SAP workload -### Deployer Virtual Machines +The SAP workload contains all the Azure infrastructure resources for the SAP deployments. These resources are deployed from the control plane. 
-These virtual machines are used to run the orchestration scripts that will deploy the Azure resources using Terraform. They are also Ansible Controllers and are used to execute the Ansible playbooks on all the managed nodes, i.e the virtual machines of an SAP deployment. +The SAP workload has two main components: -## About the SAP Workload +- SAP workload zone +- SAP systems -The SAP Workload contains all the Azure infrastructure resources for the SAP Deployments. These resources are deployed from the control plane. -The SAP Workload has two main components: -- SAP Workload Zone-- SAP System(s)+## About the SAP workload zone -## About the SAP Workload Zone +The workload zone allows for partitioning of the deployments into different environments, such as development, test, and production. The workload zone provides the shared services (networking and credentials management) to the SAP systems. -The workload zone allows for partitioning of the deployments into different environments (Development, Test, Production). The Workload zone will provide the shared services (networking, credentials management) to the SAP systems. +The SAP workload zone provides the following services to the SAP systems: -The SAP Workload Zone provides the following services to the SAP Systems -- Virtual Networking infrastructure-- Azure Key Vault for system credentials (Virtual Machines and SAP)-- Shared Storage (optional)+- Virtual networking infrastructure +- Azure Key Vault for system credentials (VMs and SAP) +- Shared storage (optional) -For more information of how to configure and deploy the SAP Workload zone, see [Configuring the workload zone](configure-workload-zone.md) and [Deploying the SAP workload zone](deploy-workload-zone.md). +For more information about how to configure and deploy the SAP workload zone, see [Configure the workload zone](configure-workload-zone.md) and [Deploy the SAP workload zone](deploy-workload-zone.md). 
-## About the SAP System +## About the SAP system -The system deployment consists of the virtual machines that will be running the SAP application, including the web, app and database tiers. +The system deployment consists of the VMs that run the SAP application, including the web, app, and database tiers. -The SAP System provides the following services -- Virtual machine, storage, and supporting infrastructure to host the SAP applications.+The SAP system provides VM, storage, and support infrastructure to host the SAP applications. -For more information of how to configure and deploy the SAP System, see [Configuring the SAP System](configure-system.md) and [Deploying the SAP system](deploy-system.md). +For more information about how to configure and deploy the SAP system, see [Configure the SAP system](configure-system.md) and [Deploy the SAP system](deploy-system.md). ## Glossary The following terms are important concepts for understanding the automation fram > [!div class="mx-tdCol2BreakAll "] > | Term | Description | > | - | -- |-> | System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the **SID**. +> | System | An instance of an SAP application that contains the resources the application needs to run. Defined by a unique three-letter identifier, the *SID*. > | Landscape | A collection of systems in different environments within an SAP application. For example, SAP ERP Central Component (ECC), SAP customer relationship management (CRM), and SAP Business Warehouse (BW). |-> | Workload zone | Partitions the SAP applications to environments, such as non-production and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vault, to all systems within. 
| +> | Workload zone | Partitions the SAP applications to environments, such as nonproduction and production environments or development, quality assurance, and production environments. Provides shared resources, such as virtual networks and key vaults, to all systems within. | The following diagram shows the relationships between SAP systems, workload zones (environments), and landscapes. In this example setup, the customer has three SAP landscapes: ECC, CRM, and BW. Each landscape contains three workload zones: production, quality assurance, and development. Each workload zone contains one or more systems. ### Deployment components > [!div class="mx-tdCol2BreakAll "] > | Term | Description | Scope | > | - | -- | -- |-> | Deployer | A virtual machine that can execute Terraform and Ansible commands. | Region | +> | Deployer | A VM that can execute Terraform and Ansible commands. | Region | > | Library | Provides storage for the Terraform state files and the SAP installation media. | Region |-> | Workload zone | Contains the virtual network for the SAP systems and a key vault that holds the system credentials | Workload zone | -> | System | The deployment unit for the SAP application (SID). Contains all infrastructure assets | Workload zone | -+> | Workload zone | Contains the virtual network for the SAP systems and a key vault that holds the system credentials. | Workload zone | +> | System | The deployment unit for the SAP application (SID). Contains all infrastructure assets. 
| Workload zone | ## Next steps > [!div class="nextstepaction"]-> [Get started with the deployment automation framework](get-started.md) -> [Planning for the automation framwework](plan-deployment.md) -> [Configuring Azure DevOps for the automation framwework](configure-devops.md) -> [Configuring the control plane](configure-control-plane.md) -> [Configuring the workload zone](configure-workload-zone.md) -> [Configuring the SAP System](configure-system.md) -+> - [Get started with the deployment automation framework](get-started.md) +> - [Plan for the automation framework](plan-deployment.md) +> - [Configure Azure DevOps for the automation framework](configure-devops.md) +> - [Configure the control plane](configure-control-plane.md) +> - [Configure the workload zone](configure-workload-zone.md) +> - [Configure the SAP system](configure-system.md) |
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md | Title: Get started with the SAP on Azure deployment automation framework -description: Quickly get started with the SAP on Azure Deployment Automation Framework. Deploy an example configuration using sample parameter files. + Title: Get started with SAP Deployment Automation Framework +description: Quickly get started with SAP Deployment Automation Framework. Deploy an example configuration by using sample parameter files. -# Get started with SAP automation framework on Azure +# Get started with SAP Deployment Automation Framework -Get started quickly with the [SAP on Azure Deployment Automation Framework](deployment-framework.md). +Get started quickly with [SAP Deployment Automation Framework](deployment-framework.md). ## Prerequisites +To get started with SAP Deployment Automation Framework, you need: - An Azure subscription. If you don't have an Azure subscription, you can [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Ability to [download of the SAP software](software.md) in your Azure environment.+- The ability to [download the SAP software](software.md) in your Azure environment. - An [Azure CLI](/cli/azure/install-azure-cli) installation on your local computer. - An [Azure PowerShell](/powershell/azure/install-az-ps#update-the-azure-powershell-module) installation on your local computer.-- A Service Principal to use for the control plane deployment-- Ability to create an Azure Devops project if you want to use Azure DevOps for deployment.+- A service principal to use for the control plane deployment. +- An ability to create an Azure DevOps project if you want to use Azure DevOps for deployment. -Some of the prerequisites may already be installed in your deployment environment. Both Cloud Shell and the deployer have Terraform and the Azure CLI installed. 
+Some of the prerequisites might already be installed in your deployment environment. Both Azure Cloud Shell and the deployer have Terraform and the Azure CLI installed. -## Use SAP on Azure Deployment Automation Framework from Azure DevOps Services +## Use SAP Deployment Automation Framework from Azure DevOps Services -Using Azure DevOps streamlines the deployment process by providing pipelines that can be executed to perform both the infrastructure deployment and the configuration and SAP installation activities. -You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application. +Using Azure DevOps streamlines the deployment process. Azure DevOps provides pipelines that you can run to perform the infrastructure deployment and the configuration and SAP installation activities. ++You can use Azure Repos to store your configuration files. Use Azure Pipelines to deploy and configure the infrastructure and the SAP application. ### Sign up for Azure DevOps Services -To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory (Azure AD). To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either _sign-in_ or create a new account. +To use Azure DevOps Services, you need an Azure DevOps organization. An organization is used to connect groups of related projects. Use your work or school account to automatically connect your organization to your Azure Active Directory. To create an account, open [Azure DevOps](https://azure.microsoft.com/services/devops/) and either sign in or create a new account. ++To configure Azure DevOps for SAP Deployment Automation Framework, see [Configure Azure DevOps for SAP Deployment Automation Framework](configure-devops.md). 
-Follow the guidance here [Configure Azure DevOps for SDAF](configure-devops.md) to configure Azure DevOps for the SAP on Azure Deployment Automation Framework. +## Create the SAP Deployment Automation Framework environment without Azure DevOps -## Creating the SAP on Azure Deployment Automation Framework environment without Azure DevOps +You can run SAP Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment. -You can run the SAP on Azure Deployment Automation Framework from a virtual machine in Azure. The following steps describe how to create the environment. +> [!IMPORTANT] +> Ensure that the virtual machine is using either a system-assigned or user-assigned identity with permissions on the subscription to create resources. -Clone the repository and prepare the execution environment by using the following steps on a Linux Virtual machine in Azure: +Ensure the virtual machine has the following prerequisites installed: -Ensure the Virtual Machine has the following prerequisites installed: - git - jq - unzip- -Ensure that the virtual machine is using either a system assigned or user assigned identity with permissions on the subscription to create resources. -+ - virtualenv (if running on Ubuntu) -- Create a directory called `Azure_SAP_Automated_Deployment` for your automation framework deployment. 
+You can install the prerequisites on an Ubuntu virtual machine by using the following command: ```bash-mkdir -p ~/Azure_SAP_Automated_Deployment; cd $_ +sudo apt-get install -y git jq unzip virtualenv -git clone https://github.com/Azure/sap-automation.git sap-automation +``` -git clone https://github.com/Azure/sap-automation-samples.git samples +You can then install the deployer components by using the following commands: -git clone https://github.com/Azure/sap-automation-bootstrap.git config +```bash -cd sap-automation/deploy/scripts - +wget https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/configure_deployer.sh -O configure_deployer.sh +chmod +x ./configure_deployer.sh ./configure_deployer.sh-``` +# Source the new variables +. /etc/profile.d/deploy_server.sh -> [!TIP] -> The deployer already clones the required repositories. +``` ## Samples -The ~/Azure_SAP_Automated_Deployment/samples folder contains a set of sample configuration files to start testing the deployment automation framework. You can copy them using the following steps. -+The `~/Azure_SAP_Automated_Deployment/samples` folder contains a set of sample configuration files to start testing the deployment automation framework. You can copy them by using the following commands: ```bash cd ~/Azure_SAP_Automated_Deployment -cp -Rp samples/Terraform/WORKSPACES config +cp -Rp samples/Terraform/WORKSPACES ~/Azure_SAP_Automated_Deployment ``` - ## Next step > [!div class="nextstepaction"] |
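The prerequisite list above (git, jq, unzip, virtualenv) can be checked before running `configure_deployer.sh`. A minimal sketch, assuming a POSIX shell; the `check_tools` helper is hypothetical and not part of the framework:

```shell
# check_tools prints the names of any commands from its arguments
# that are not found on PATH (empty output means all are present).
check_tools() {
  missing=""
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
  done
  echo "$missing"
}

# Verify the framework prerequisites before running configure_deployer.sh.
check_tools git jq unzip virtualenv
```

If the function prints nothing, all prerequisites are on `PATH` and the deployer bootstrap can proceed.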
sap | Get Sap Installation Media | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/get-sap-installation-media.md | Next, set up a virtual machine (VM) where you will download the SAP components l Next, download the SAP installation media to the VM using a script. -1. Run the Ansible script **playbook_bom_download** with your own information. Enter the actual values **within** double quotes but **without** the triangular brackets. The Ansible command that you run should look like: +1. Run the Ansible script **playbook_bom_download** with your own information. With the exception of the `s_password` variable, enter the actual values **within** double quotes but **without** the triangular brackets. For the `s_password` variable, use single quotes. The Ansible command that you run should look like: ```bash export bom_base_name="<Enter bom base name>" export s_user="<s-user>"- export s_password="<password>" + export s_password='<password>' export storage_account_access_key="<storageAccountAccessKey>" export sapbits_location_base_path="<containerBasePath>" export BOM_directory="<BOM_directory_path>" |
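The single-quote requirement for `s_password` above matters because double quotes still allow shell expansion inside the value. A minimal sketch with a hypothetical password containing `$$`:

```shell
# Inside double quotes the shell expands `$$` to its own process ID,
# silently corrupting the value; single quotes keep every character literal.
dq_password="Pa$$word1"   # expanded: becomes Pa<PID>word1, not the real password
sq_password='Pa$$word1'   # stays exactly Pa$$word1
echo "double-quoted: $dq_password"
echo "single-quoted: $sq_password"
```

The same applies to passwords containing `` ` `` or `!` in interactive shells, which is why only the single-quoted form is safe for `export s_password`.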
sap | Large Instance High Availability Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/large-instance-high-availability-rhel.md | Last updated 04/19/2021 # Azure Large Instances high availability for SAP on RHEL > [!NOTE]-> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article. In this article, you learn how to configure the Pacemaker cluster in RHEL 7 to automate an SAP HANA database failover. You need to have a good understanding of Linux, SAP HANA, and Pacemaker to complete the steps in this guide. |
sap | Os Upgrade Hana Large Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/large-instances/os-upgrade-hana-large-instance.md | -> This article contains references to the terms *blacklist* and *slave*, terms that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article. >[!NOTE] >Upgrading the OS is your responsibility. Microsoft operations support can guide you in key areas of the upgrade, but consult your operating system vendor as well when planning an upgrade. |
sap | Cal S4h | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md | The online library is continuously updated with Appliances for demo, proof of co | Appliance Template | Date | Description | Creation Link | | | - | -- | - |+| [**SAP S/4HANA 2022 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/983008db-db92-4d4d-ac79-7e2afa95a2e0)| July 16 2023 |This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=983008db-db92-4d4d-ac79-7e2afa95a2e0&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | [**SAP S/4HANA 2022 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3722f683-42af-4059-90db-4e6a52dc9f54) | April 20 2023 |This appliance contains SAP S/4HANA 2022 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. 
| [Create Appliance](https://cal.sap.com/registration?sguid=3722f683-42af-4059-90db-4e6a52dc9f54&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |-| [**SAP S/4HANA 2022, Fully-Activated Appliance**]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311) | December 15 2022 |This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) +| [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/a954cc12-da16-4caa-897e-cf84bc74cf15)| April 26 2022 |This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. |[Create Appliance](https://cal.sap.com/registration?sguid=a954cc12-da16-4caa-897e-cf84bc74cf15&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP BW/4HANA 2021 SP04 Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db)| March 23 2023 | This solution offers you an insight of SAP BW/4HANA2021 SP04. 
SAP BW/4HANA is the next generation Data Warehouse optimized for SAP HANA. Besides the basic BW/4HANA options the solution offers a bunch of SAP HANA optimized BW/4HANA Content and the next step of Hybrid Scenarios with SAP Data Warehouse Cloud. | [Create Appliance](https://cal.sap.com/registration?sguid=1b0ac659-a5b4-4d3b-b1ae-f1a1cb89c6db&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/5a830213-f0cb-423e-ab5f-f7736e57f5a1)| May 10 2023 | The SAP ABAP Platform on SAP HANA gives you access to your own copy of SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. | [Create Appliance](https://cal.sap.com/registration?sguid=5a830213-f0cb-423e-ab5f-f7736e57f5a1&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |-| [**SAP Focused Run 4.0 FP01, unconfigured**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/2afd7a3e-ecf4-4a20-a975-ce05c4360e55) | June 29 2023 | SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment.
It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics.| [Create Appliance](https://cal.sap.com/registration?sguid=2afd7a3e-ecf4-4a20-a975-ce05c4360e55&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP NetWeaver 7.5 SP15 on SAP ASE**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) | January 3 2018 | SAP NetWeaver 7.5 SP15 on SAP ASE | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | |
sap | Dbms Guide Ha Ibm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ha-ibm.md | -> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration. |
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md | Title: Get started with SAP on Azure VMs | Microsoft Docs description: Learn about SAP solutions that run on virtual machines (VMs) in Microsoft Azure - -tags: azure-resource-manager -keywords: '' Previously updated : 08/03/2023 Last updated : 08/24/2023 - # Use Azure to host and run SAP workload scenarios When you use Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a scalable, compliant, and enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP applications across development and test and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP BI on Linux to Windows, and SAP HANA to SQL Server, Oracle, Db2, etc., we've got you covered. -Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure. Our partnership with SAP resulted in a variety of integration scenarios with the overall Microsoft ecosystem. Check out the **dedicated [Integration section](./integration-get-started.md)** to learn more. +Besides hosting SAP NetWeaver and S/4HANA scenarios with the different DBMS on Azure, you can host other SAP workload scenarios, like SAP BI on Azure. Our partnership with SAP resulted in various integration scenarios with the overall Microsoft ecosystem. Check out the **dedicated [Integration section](./integration-get-started.md)** to learn more. We just announced our new services of Azure Center for SAP solutions and Azure Monitor for SAP solutions 2.0 entering the public preview stage. These services give you the possibility to deploy SAP workload on Azure in a highly automated manner in an optimal architecture and configuration. 
And monitor your Azure infrastructure, OS, DBMS, and ABAP stack deployments on one single pane of glass. In the SAP workload documentation space, you can find the following areas: ## Change Log +- August 24, 2023: Support of priority-fencing-delay cluster property on two-node pacemaker cluster to address split-brain situation in RHEL is updated on [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md), [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md), [High availability of SAP HANA Scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](./high-availability-guide-rhel-nfs-azure-files.md), and [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](./high-availability-guide-rhel-netapp-files.md) documents. - August 03, 2023: Change of recommendation to use a /25 IP range for delegated subnet for ANF for SAP workload [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - August 03, 2023: Change in support of block storage and NFS on ANF storage for SAP HANA documented in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - July 25, 2023: Adding reference to SAP Note #3074643 to [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)-- July 13, 2023: Clarifying dfifferences in zonal replication between NFS on AFS and ANF in table in [Azure Storage types for SAP workload](./planning-guide-storage.md)-- July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 do not show any performance difference in [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md)+- July 21, 2023: Support of priority-fencing-delay cluster property on two-node pacemaker cluster to address split-brain situation in SLES is updated on [High availability for 
SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](./sap-hana-high-availability-netapp-files-suse.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](./high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure Files](./high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](./high-availability-guide-suse-netapp-files.md) document. +- July 13, 2023: Clarifying differences in zonal replication between NFS on AFS and ANF in table in [Azure Storage types for SAP workload](./planning-guide-storage.md) +- July 13, 2023: Statement that 512byte and 4096 sector size for Premium SSD v2 don't show any performance difference in [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md) - July 13, 2023: Replaced links in ANF section of [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) to new ANF related documentation - July 11, 2023: Add a note about Azure NetApp Files application volume group for SAP HANA in [HA for HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [HANA scale-out with standby node with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for HANA Scale-out HA on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) and [HA for HANA scale-out on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) - June 29, 2023: Update important considerations and sizing information in [HA for HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [HANA 
scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) In the SAP workload documentation space, you can find the following areas: - November 30, 2022: Added storage recommendations for Premium SSD v2 into [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-sapase.md) - November 22, 2022: Release of Disaster Recovery guidelines for SAP workload on Azure - [Disaster Recovery overview and infrastructure guidelines for SAP workload](disaster-recovery-overview-guide.md) and [Disaster Recovery recommendation for SAP workload](disaster-recovery-sap-guide.md). - November 22, 2022: Update of [SAP workloads on Azure: planning and deployment checklist](deployment-checklist.md) to add latest recommendations-- November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md) +- November 18, 2022: Add a recommendation to use Pacemaker simple mount configuration for new implementations on SLES 15 in [Azure VMs HA for SAP NW on SLES with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs HA for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs HA for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md) and [Azure VMs HA for SAP NW on SLES](high-availability-guide-suse.md) - November 15, 2022: Change in [HA for SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md), [SAP HANA scale-out with standby node on Azure VMs 
with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add recommendation to use mount option `nconnect` for workloads with higher throughput requirements - November 15, 2022: Add a recommendation for minimum required version of package resource-agents in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux Server](./high-availability-guide-rhel-ibm-db2-luw.md) - November 14, 2022: Provided more details about nconnect mount option in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-- November 14, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to update suggested timeouts for `FileSystem` Pacemaker cluster resources +- November 14, 2022: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to update suggested timeouts for `FileSystem` Pacemaker cluster resources - November 07, 2022: Added HANA hook susChkSrv for scale-up pacemaker cluster in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [High availability of SAP HANA Scale-up with ANF on SLES](sap-hana-high-availability-netapp-files-suse.md) - November 07, 2022: Added monitor operation for azure-lb resource in [High availability of SAP HANA on Azure VMs on SLES](sap-hana-high-availability.md), [SAP HANA scale-out with HSR and Pacemaker on SLES](sap-hana-high-availability-scale-out-hsr-suse.md), [Set up IBM Db2 HADR on Azure virtual 
machines (VMs)](dbms-guide-ha-ibm.md), [Azure VMs high availability for SAP NetWeaver on SLES for SAP Applications with simple mount and NFS](high-availability-guide-suse-nfs-simple-mount.md), [Azure VMs high availability for SAP NW on SLES with NFS on Azure File](high-availability-guide-suse-nfs-azure-files.md), [Azure VMs high availability for SAP NW on SLES with Azure NetApp Files](high-availability-guide-suse-netapp-files.md), [Azure VMs high availability for SAP NetWeaver on SLES](high-availability-guide-suse.md), [High availability for NFS on Azure VMs on SLES](high-availability-guide-suse-nfs.md), [Azure VMs high availability for SAP NetWeaver on SLES multi-SID guide](high-availability-guide-suse-multi-sid.md) - October 31, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) to fix script location for DRBD 9.0 In the SAP workload documentation space, you can find the following areas: - October 27, 2022: Adding Ev4 and Ev5 VM families and updated OS releases to table in [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-sapase.md) - October 20, 2022: Change in [HA for NFS on Azure VMs on SLES](./high-availability-guide-suse-nfs.md) and [HA for SAP NW on Azure VMs on SLES for SAP applications](./high-availability-guide-suse.md) to indicate that we're de-emphasizing SAP reference architectures, utilizing NFS clusters - October 18, 2022: Clarify some considerations around using Azure Availability Zones in [SAP workload configurations with Azure Availability Zones](./high-availability-zones.md)-- October 17, 2022: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to add guidance for setting up parameter `AUTOMATED_REGISTER` +- October 17, 2022: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md) and [HA for SAP HANA on Azure VMs on 
RHEL](./sap-hana-high-availability-rhel.md) to add guidance for setting up parameter `AUTOMATED_REGISTER` - September 29, 2022: Announcing HANA Large Instances being in sunset mode in [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md) and [What is SAP HANA on Azure (Large Instances)?](../../virtual-machines/workloads/sap/hana-overview-architecture.md). Adding some statements around Azure VMware and Azure Active Directory support status in [SAP workload on Azure virtual machine supported scenarios](./planning-supported-configurations.md) - September 27, 2022: Minor changes in [HA for SAP ASCS/ERS with NFS simple mount](./high-availability-guide-suse-nfs-simple-mount.md) on SLES 15 for SAP Applications to adjust mount instructions - September 14, 2022: Release of updated SAP on Oracle guide with new and updated content [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) In the SAP workload documentation space, you can find the following areas: - June 17, 2020: Change in [High availability of SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) to remove meta keyword from HANA resource creation command (RHEL 8.x) - June 09, 2021: Correct VM SKU names for M192_v2 in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - May 26, 2021: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md), [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) and [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to add configuration to prepare the OS for running HANA on ANF -- May 13, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to clarify how resource agent azure-events operates +- May 13, 2021: Change in [Setting up
Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to clarify how resource agent azure-events operates - April 30, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to include warning about incompatible change with Azure Fence Agent in a version of package python3-azure-mgmt-compute (SLES 15) - April 27, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md) to add links to important SAP notes in the prerequisites section - April 27, 2021: Added new Msv2, Mdsv2 VMs into HANA storage configuration in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - February 03, 2021: More details on I/O scheduler settings for SUSE in article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - February 01, 2021: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add a link to [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - January 23, 2021: Introduce HANA data volume partitioning, functionality to stripe I/O operations against HANA data files across different Azure disks or NFS shares without using a disk volume manager, in articles [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-- January18, 2021:
Added support of Azure net Apps Files based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)+- January 18, 2021: Added support of Azure NetApp Files-based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) - January 11, 2021: Minor changes in [HA for SAP NW on Azure VMs on RHEL for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to adjust commands to work for both RHEL8 and RHEL7, and ENSA1 and ENSA2 - January 05, 2021: Changes in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md), revising the recommended configuration to allow SAP Host Agent to manage the local port range - January 04, 2021: Add new Azure regions supported by HLI into [What is SAP HANA on Azure (Large Instances)](../large-instances/hana-overview-architecture.md) |
sap | Hana Vm Premium Ssd V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md | The caching recommendations for Azure premium disks below are assuming the I/O c **Recommendation: As a result of these observed I/O patterns by SAP HANA, the caching for the different volumes using Azure premium storage should be set like:** -- **/hana/data** - no caching or read caching-- **/hana/log** - no caching - exception for M- and Mv2-Series VMs where Azure Write Accelerator should be enabled +- **/hana/data** - None or read caching +- **/hana/log** - None. Enable Write Accelerator for M- and Mv2-Series VMs; the option in the Azure portal is "None + Write Accelerator." - **/hana/shared** - read caching - **OS disk** - don't change default caching that is set by Azure at creation time of the VM |
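As a sketch of applying the caching recommendations above with the Azure CLI (`az vm update` accepts per-LUN `--disk-caching` and `--write-accelerator` pairs; the resource group, VM name, and LUN layout below are assumptions for illustration, not values from the article):

```shell
# Assumed layout: LUN 0 = /hana/data disk, LUN 1 = /hana/log disk.
RG=my-sap-rg     # assumption: your resource group
VM=hanavm01      # assumption: your HANA VM name

# /hana/data: read caching; /hana/log: no caching
az vm update -g "$RG" -n "$VM" --disk-caching 0=ReadOnly 1=None

# M- and Mv2-Series only: enable Write Accelerator on the /hana/log disk
# (shown in the Azure portal as "None + Write Accelerator")
az vm update -g "$RG" -n "$VM" --write-accelerator 1=true
```

Note that Write Accelerator is only supported on M-family VMs with Premium SSD data disks, so the second command would fail on other VM sizes.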
sap | High Availability Guide Rhel Ibm Db2 Luw | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-ibm-db2-luw.md | -> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. This article describes how to deploy and configure the Azure virtual machines (VMs), install the cluster framework, and install the IBM Db2 LUW with HADR configuration. |
sap | High Availability Guide Rhel Netapp Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md | The following items are prefixed with either **[A]** - applicable to all nodes, If using enqueue server 1 architecture (ENSA1), define the resources as follows: - ```bash - sudo pcs property set maintenance-mode=true - - # If using NFSv3 - sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ + ```bash + sudo pcs property set maintenance-mode=true ++ # If using NFSv3 + sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \ The following items are prefixed with either **[A]** - applicable to all nodes, op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_ASCS - # If using NFSv4.1 - sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ + # If using NFSv4.1 + sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \ The following items are prefixed with either **[A]** - applicable to all nodes, op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_ASCS - sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000 + sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000 - # If using NFSv3 - sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ + # If using NFSv3 + sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop 
interval=0 timeout=600 \ --group g-QAS_AERS - # If using NFSv4.1 - sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ + # If using NFSv4.1 + sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_AERS - sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000 - sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1 - sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false + sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000 + sudo pcs constraint location rsc_sap_QAS_ASCS00 rule score=2000 runs_ers_QAS eq 1 + sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false - sudo pcs node unstandby anftstsapcl1 - sudo pcs property set maintenance-mode=false - ``` + sudo pcs node unstandby anftstsapcl1 + sudo pcs property set maintenance-mode=false + ``` SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. 
If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows: ```bash- sudo pcs property set maintenance-mode=true + sudo pcs property set maintenance-mode=true # If using NFSv3- sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ + sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 \ The following items are prefixed with either **[A]** - applicable to all nodes, op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_ASCS - # If using NFSv4.1 - sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ + # If using NFSv4.1 + sudo pcs resource create rsc_sap_QAS_ASCS00 SAPInstance \ InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 \ The following items are prefixed with either **[A]** - applicable to all nodes, op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_ASCS - sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000 + sudo pcs resource meta g-QAS_ASCS resource-stickiness=3000 - # If using NFSv3 - sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ + # If using NFSv3 + sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=60 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_AERS - # If using NFSv4.1 - sudo pcs resource create rsc_sap_QAS_ERS01 SAPInstance \ + # If using NFSv4.1 + sudo pcs resource create 
rsc_sap_QAS_ERS01 SAPInstance \ InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" \ AUTOMATIC_RECOVER=false IS_ERS=true \ op monitor interval=20 on-fail=restart timeout=105 op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-QAS_AERS - sudo pcs resource meta rsc_sap_QAS_ERS01 resource-stickiness=3000 + sudo pcs resource meta rsc_sap_QAS_ERS01 resource-stickiness=3000 - sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000 - sudo pcs constraint order start g-QAS_ASCS then start g-QAS_AERS kind=Optional symmetrical=false - sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false + sudo pcs constraint colocation add g-QAS_AERS with g-QAS_ASCS -5000 + sudo pcs constraint order start g-QAS_ASCS then start g-QAS_AERS kind=Optional symmetrical=false + sudo pcs constraint order start g-QAS_ASCS then stop g-QAS_AERS kind=Optional symmetrical=false - sudo pcs node unstandby anftstsapcl1 - sudo pcs property set maintenance-mode=false + sudo pcs node unstandby anftstsapcl1 + sudo pcs property set maintenance-mode=false ``` - If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322). + If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322). - > [!NOTE] - > The higher timeouts, suggested when using NFSv4.1 are necessary due to protocol-specific pause, related to NFSv4.1 lease renewals. - > For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). - > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup. + > [!NOTE] + > The higher timeouts suggested when using NFSv4.1 are necessary due to a protocol-specific pause related to NFSv4.1 lease renewals.
+ > For more information, see [NFS in NetApp Best practice](https://www.netapp.com/media/10720-tr-4067.pdf). + > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup. - Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running. + Make sure that the cluster status is ok and that all resources are started. It isn't important on which node the resources are running. ```bash sudo pcs status The following items are prefixed with either **[A]** - applicable to all nodes, # rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 ``` -1. **[A]** Add firewall rules for ASCS and ERS on both nodes +1. **[1]** Execute the step below to configure priority-fencing-delay (applicable only as of pacemaker-2.0.4-6.el8 or higher) - Add the firewall rules for ASCS and ERS on both nodes. + > [!NOTE] + > If you have a two-node cluster, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing the node that has the higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521). + > + > The priority-fencing-delay property is applicable for pacemaker version 2.0.4-6.el8 or higher. If you are setting up priority-fencing-delay on an existing cluster, make sure to unset the `pcmk_delay_max` option in the fencing device.
- ```bash - # Probe Port of ASCS - sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent - sudo firewall-cmd --zone=public --add-port=62000/tcp - sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent - sudo firewall-cmd --zone=public --add-port=3200/tcp - sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent - sudo firewall-cmd --zone=public --add-port=3600/tcp - sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent - sudo firewall-cmd --zone=public --add-port=3900/tcp - sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent - sudo firewall-cmd --zone=public --add-port=8100/tcp - sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent - sudo firewall-cmd --zone=public --add-port=50013/tcp - sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent - sudo firewall-cmd --zone=public --add-port=50014/tcp - sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent - sudo firewall-cmd --zone=public --add-port=50016/tcp - - # Probe Port of ERS - sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent - sudo firewall-cmd --zone=public --add-port=62101/tcp - sudo firewall-cmd --zone=public --add-port=3201/tcp --permanent - sudo firewall-cmd --zone=public --add-port=3201/tcp - sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent - sudo firewall-cmd --zone=public --add-port=3301/tcp - sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent - sudo firewall-cmd --zone=public --add-port=50113/tcp - sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent - sudo firewall-cmd --zone=public --add-port=50114/tcp - sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent - sudo firewall-cmd --zone=public --add-port=50116/tcp - ``` + ```bash + sudo pcs resource defaults update priority=1 + sudo pcs resource update rsc_sap_QAS_ASCS00 meta priority=10 ++ sudo pcs property set priority-fencing-delay=15s + ``` ++1. **[A]** Add firewall rules for ASCS and ERS on both nodes.
++ ```bash + # Probe Port of ASCS + sudo firewall-cmd --zone=public --add-port=62000/tcp --permanent + sudo firewall-cmd --zone=public --add-port=62000/tcp + sudo firewall-cmd --zone=public --add-port=3200/tcp --permanent + sudo firewall-cmd --zone=public --add-port=3200/tcp + sudo firewall-cmd --zone=public --add-port=3600/tcp --permanent + sudo firewall-cmd --zone=public --add-port=3600/tcp + sudo firewall-cmd --zone=public --add-port=3900/tcp --permanent + sudo firewall-cmd --zone=public --add-port=3900/tcp + sudo firewall-cmd --zone=public --add-port=8100/tcp --permanent + sudo firewall-cmd --zone=public --add-port=8100/tcp + sudo firewall-cmd --zone=public --add-port=50013/tcp --permanent + sudo firewall-cmd --zone=public --add-port=50013/tcp + sudo firewall-cmd --zone=public --add-port=50014/tcp --permanent + sudo firewall-cmd --zone=public --add-port=50014/tcp + sudo firewall-cmd --zone=public --add-port=50016/tcp --permanent + sudo firewall-cmd --zone=public --add-port=50016/tcp + + # Probe Port of ERS + sudo firewall-cmd --zone=public --add-port=62101/tcp --permanent + sudo firewall-cmd --zone=public --add-port=62101/tcp + sudo firewall-cmd --zone=public --add-port=3201/tcp --permanent + sudo firewall-cmd --zone=public --add-port=3201/tcp + sudo firewall-cmd --zone=public --add-port=3301/tcp --permanent + sudo firewall-cmd --zone=public --add-port=3301/tcp + sudo firewall-cmd --zone=public --add-port=50113/tcp --permanent + sudo firewall-cmd --zone=public --add-port=50113/tcp + sudo firewall-cmd --zone=public --add-port=50114/tcp --permanent + sudo firewall-cmd --zone=public --add-port=50114/tcp + sudo firewall-cmd --zone=public --add-port=50116/tcp --permanent + sudo firewall-cmd --zone=public --add-port=50116/tcp + ``` ## SAP NetWeaver application server preparation Follow these steps to install an SAP application server. ## Test the cluster setup -1. 
Manually migrate the ASCS instance -- Resource state before starting the test: -- ```bash - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` -- Run the following commands as root to migrate the ASCS instance. -- ```bash - [root@anftstsapcl1 ~]# pcs resource move rsc_sap_QAS_ASCS00 - - [root@anftstsapcl1 ~]# pcs resource clear rsc_sap_QAS_ASCS00 - - # Remove failed actions for the ERS that occurred as part of the migration - [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01 - ``` -- Resource state after the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - ``` --1. 
Simulate node crash -- Resource state before starting the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - ``` -- Run the following command as root on the node where the ASCS instance is running -- ```bash - [root@anftstsapcl2 ~]# echo b > /proc/sysrq-trigger - ``` -- The status after the node is started again should look like this. -- ```text - Online: [ anftstsapcl1 anftstsapcl2 ] - - Full list of resources: - - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - - Failed Actions: - * rsc_sap_QAS_ERS01_monitor_11000 on anftstsapcl1 'not running' (7): call=45, status=complete, exitreason='', - ``` -- Use the following command to clean the failed resources. 
-- ```bash - [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01 - ``` -- Resource state after the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` --1. Kill message server process -- Resource state before starting the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` -- Run the following commands as root to identify the process of the message server and kill it. -- ```bash - [root@anftstsapcl1 ~]# pgrep -f ms.sapQAS | xargs kill -9 - ``` -- If you only kill the message server once, it will be restarted by `sapstart`. If you kill it often enough, Pacemaker will eventually move the ASCS instance to the other node. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test. 
-- ```bash - [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00 - [root@anftstsapcl1 ~]# pcs resource cleanup rsc_sap_QAS_ERS01 - ``` -- Resource state after the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - ``` --1. Kill enqueue server process -- Resource state before starting the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - ``` -- Run the following commands as root on the node where the ASCS instance is running to kill the enqueue server. -- ```bash - #If using ENSA1 - [root@anftstsapcl2 ~]# pgrep -f en.sapQAS | xargs kill -9 - - #If using ENSA2 - [root@anftstsapcl2 ~]# pgrep -f enq.sapQAS | xargs kill -9 - ``` -- The ASCS instance should immediately fail over to the other node, in the case of ENSA1. 
The ERS instance should also fail over after the ASCS instance is started. Run the following commands as root to clean up the resource state of the ASCS and ERS instance after the test. -- ```bash - [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ASCS00 - [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01 - ``` -- Resource state after the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` --1. Kill enqueue replication server process -- Resource state before starting the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` -- Run the following command as root on the node where the ERS instance is running to kill the enqueue replication server process. 
-- ```bash - #If using ENSA1 - [root@anftstsapcl2 ~]# pgrep -f er.sapQAS | xargs kill -9 - - #If using ENSA2 - [root@anftstsapcl2 ~]# pgrep -f enqr.sapQAS | xargs kill -9 - ``` -- If you only run the command once, `sapstart` will restart the process. If you run it often enough, `sapstart` will not restart the process, and the resource will be in a stopped state. Run the following commands as root to clean up the resource state of the ERS instance after the test. -- ```bash - [root@anftstsapcl2 ~]# pcs resource cleanup rsc_sap_QAS_ERS01 - ``` -- Resource state after the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` --1. 
Kill enqueue sapstartsrv process -- Resource state before starting the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` -- Run the following commands as root on the node where the ASCS is running. -- ```bash - [root@anftstsapcl1 ~]# pgrep -fl ASCS00.*sapstartsrv - # 59545 sapstartsrv - - [root@anftstsapcl1 ~]# kill -9 59545 - ``` -- The sapstartsrv process should always be restarted by the Pacemaker resource agent as part of the monitoring. Resource state after the test: -- ```text - rsc_st_azure (stonith:fence_azure_arm): Started anftstsapcl1 - Resource Group: g-QAS_ASCS - fs_QAS_ASCS (ocf::heartbeat:Filesystem): Started anftstsapcl1 - nc_QAS_ASCS (ocf::heartbeat:azure-lb): Started anftstsapcl1 - vip_QAS_ASCS (ocf::heartbeat:IPaddr2): Started anftstsapcl1 - rsc_sap_QAS_ASCS00 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 - Resource Group: g-QAS_AERS - fs_QAS_AERS (ocf::heartbeat:Filesystem): Started anftstsapcl2 - nc_QAS_AERS (ocf::heartbeat:azure-lb): Started anftstsapcl2 - vip_QAS_AERS (ocf::heartbeat:IPaddr2): Started anftstsapcl2 - rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl2 - ``` +Thoroughly test your Pacemaker cluster. [Execute the typical failover tests](high-availability-guide-rhel.md#test-the-cluster-setup). ## Next steps |
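The removed failover tests above repeatedly kill a process pattern (message server, enqueue server, enqueue replicator) until the restart logic of `sapstart` is exhausted and Pacemaker fails the resource over. A small hedged helper for that pattern could look like the sketch below (the function name, round count, and the commented SID `QAS` example are illustrative, not from the guide):

```shell
# Repeatedly SIGKILL every process matching a pgrep -f pattern, pausing
# between rounds, so that a supervisor's restart attempts are exhausted
# during a failover test. Requires bash (uses `local`).
kill_repeatedly() {
  local pattern=$1 rounds=$2
  for _ in $(seq "$rounds"); do
    pgrep -f "$pattern" | xargs -r kill -9   # -r: do nothing when no match
    sleep 1
  done
}

# Hypothetical usage, mirroring the message-server test for SID QAS:
# kill_repeatedly 'ms.sapQAS' 5
```

Afterwards, clean up the failed resource actions with `pcs resource cleanup`, as the tests above did.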
sap | High Availability Guide Rhel Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md | The following items are prefixed with either **[A]** - applicable to all nodes, pcs resource defaults update migration-threshold=3 ``` -1. **[1]** Create a virtual IP resource and health-probe for the ASCS instance +2. **[1]** Create a virtual IP resource and health-probe for the ASCS instance ```bash sudo pcs node standby sap-cl2 The following items are prefixed with either **[A]** - applicable to all nodes, # vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started sap-cl1 ``` -1. **[1]** Install SAP NetWeaver ASCS +3. **[1]** Install SAP NetWeaver ASCS Install SAP NetWeaver ASCS as root on the first node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ASCS, for example **sapascs**, **10.90.90.10** and the instance number that you used for the probe of the load balancer, for example **00**. The following items are prefixed with either **[A]** - applicable to all nodes, sudo chgrp sapsys /usr/sap/NW1/ASCS00 ``` -1. **[1]** Create a virtual IP resource and health-probe for the ERS instance +4. **[1]** Create a virtual IP resource and health-probe for the ERS instance ```bash sudo pcs node unstandby sap-cl2 The following items are prefixed with either **[A]** - applicable to all nodes, # vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-cl2 ``` -1. **[2]** Install SAP NetWeaver ERS +5. **[2]** Install SAP NetWeaver ERS Install SAP NetWeaver ERS as root on the second node using a virtual hostname that maps to the IP address of the load balancer frontend configuration for the ERS, for example **sapers**, **10.90.90.9** and the instance number that you used for the probe of the load balancer, for example **01**. The following items are prefixed with either **[A]** - applicable to all nodes, sudo chgrp sapsys /usr/sap/NW1/ERS01 ``` -1. 
**[1]** Adapt the ASCS/SCS and ERS instance profiles +6. **[1]** Adapt the ASCS/SCS and ERS instance profiles * ASCS/SCS profile The following items are prefixed with either **[A]** - applicable to all nodes, # Autostart = 1 ``` -1. **[A]** Configure Keep Alive +7. **[A]** Configure Keep Alive The communication between the SAP NetWeaver application server and the ASCS/SCS is routed through a software load balancer. The load balancer disconnects inactive connections after a configurable timeout. To prevent this, you need to set a parameter in the SAP NetWeaver ASCS/SCS profile, if using ENSA1. Change the Linux system `keepalive` settings on all SAP servers for both ENSA1/ENSA2. Read [SAP Note 1410736][1410736] for more information. The following items are prefixed with either **[A]** - applicable to all nodes, sudo sysctl net.ipv4.tcp_keepalive_time=300 ``` -1. **[A]** Update the /usr/sap/sapservices file +8. **[A]** Update the /usr/sap/sapservices file To prevent the start of the instances by the sapinit startup script, all instances managed by Pacemaker must be commented out from /usr/sap/sapservices file. The following items are prefixed with either **[A]** - applicable to all nodes, # LD_LIBRARY_PATH=/usr/sap/NW1/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS01/exe/sapstartsrv pf=/usr/sap/NW1/ERS01/profile/NW1_ERS01_sapers -D -u nw1adm ``` -1. **[1]** Create the SAP cluster resources +9. **[1]** Create the SAP cluster resources. 
- If using enqueue server 1 architecture (ENSA1), define the resources as follows: + If using enqueue server 1 architecture (ENSA1), define the resources as follows: ```bash sudo pcs property set maintenance-mode=true The following items are prefixed with either **[A]** - applicable to all nodes, op monitor interval=20 on-fail=restart timeout=60 \ op start interval=0 timeout=600 op stop interval=0 timeout=600 \ --group g-NW1_ASCS- + sudo pcs resource meta g-NW1_ASCS resource-stickiness=3000 sudo pcs resource create rsc_sap_NW1_ERS01 SAPInstance \ The following items are prefixed with either **[A]** - applicable to all nodes, sudo pcs property set maintenance-mode=false ``` - SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. - If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows: + SAP introduced support for enqueue server 2, including replication, as of SAP NW 7.52. Starting with ABAP Platform 1809, enqueue server 2 is installed by default. See SAP note [2630416](https://launchpad.support.sap.com/#/notes/2630416) for enqueue server 2 support. 
+ If using enqueue server 2 architecture ([ENSA2](https://help.sap.com/viewer/cff8531bc1d9416d91bb6781e628d4e0/1709%20001/en-US/6d655c383abf4c129b0e5c8683e7ecd8.html)), install resource agent resource-agents-sap-4.1.1-12.el7.x86_64 or newer and define the resources as follows: ```bash sudo pcs property set maintenance-mode=true The following items are prefixed with either **[A]** - applicable to all nodes, sudo pcs property set maintenance-mode=false ``` - If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322). + If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641322](https://launchpad.support.sap.com/#/notes/2641322). - > [!NOTE] - > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup. + > [!NOTE] + > The timeouts in the above configuration are just examples and may need to be adapted to the specific SAP setup. - Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running. + Make sure that the cluster status is ok and that all resources are started. It is not important on which node the resources are running. ```bash sudo pcs status The following items are prefixed with either **[A]** - applicable to all nodes, # nc_NW1_AERS (ocf::heartbeat:azure-lb): Started sap-cl1 # vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started sap-cl1 # rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1- ``` + ``` ++10. **[1]** Execute the below step to configure priority-fencing-delay (applicable only as of pacemaker-2.0.4-6.el8 or higher) ++ > [!NOTE] + > If you have a two-node cluster, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing a node that has the higher total resource priority when a split-brain scenario occurs.
For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521). + > + > The property priority-fencing-delay is applicable to pacemaker version 2.0.4-6.el8 or higher. If you are setting up priority-fencing-delay on an existing cluster, make sure to unset the `pcmk_delay_max` option in the fencing device. ++ ```bash + sudo pcs resource defaults update priority=1 + sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10 ++ sudo pcs property set priority-fencing-delay=15s + ``` -1. **[A]** Add firewall rules for ASCS and ERS on both nodes - Add the firewall rules for ASCS and ERS on both nodes. +11. **[A]** Add firewall rules for ASCS and ERS on both nodes. ```bash # Probe Port of ASCS |
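The firewall rules added in the final step above are all derived from the two-digit SAP instance number. As a rough sketch (assuming the common SAP NetWeaver port conventions and the probe-port prefixes used in this guide's examples, 620 for ASCS and 621 for ERS — verify every port against your actual landscape before opening it):

```shell
# Hypothetical helper: derive the usual firewall ports for an SAP instance
# from its two-digit instance number NN. Port patterns (32NN dispatcher,
# 33NN gateway, 36NN message server, 5NN13/5NN14 sapstartsrv) follow the
# common SAP conventions; the probe prefix is taken from this guide's examples.
sap_instance_ports() {
  local nr="$1" probe_prefix="$2"
  echo "probe=${probe_prefix}${nr} dispatcher=32${nr} gateway=33${nr} ms=36${nr} sapctrl=5${nr}13 sapctrl_tls=5${nr}14"
}

sap_instance_ports 00 620   # ASCS with example instance number 00
sap_instance_ports 01 621   # ERS with example instance number 01
```

The derived ports would then be opened on both nodes with `firewall-cmd --add-port=<port>/tcp --permanent`, as in the step above.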
sap | High Availability Guide Rhel Pacemaker | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-pacemaker.md | Title: Setting up Pacemaker on RHEL in Azure | Microsoft Docs description: Setting up Pacemaker on Red Hat Enterprise Linux in Azure -tags: azure-resource-manager -keywords: '' - vm-windows - Previously updated : 04/10/2022 Last updated : 08/23/2023 -The article describes how to configure basic Pacemaker cluster on Red Hat Enterprise Server(RHEL). The instructions cover RHEL 7, RHEL 8 and RHEL 9. +The article describes how to configure a basic Pacemaker cluster on Red Hat Enterprise Linux (RHEL). The instructions cover RHEL 7, RHEL 8 and RHEL 9. ## Prerequisites+ Read the following SAP Notes and papers first: * SAP Note [1928533], which has: Read the following SAP Notes and papers first: > [!NOTE] > Red Hat doesn't support software-emulated watchdog. Red Hat doesn't support SBD on cloud platforms. For details see [Support Policies for RHEL High Availability Clusters - sbd and fence_sbd](https://access.redhat.com/articles/2800691).+> > The only supported fencing mechanism for Pacemaker Red Hat Enterprise Linux clusters on Azure is Azure fence agent. The following items are prefixed with either **[A]** - applicable to all nodes, **[1]** - only applicable to node 1 or **[2]** - only applicable to node 2. Differences in the commands or the configuration between RHEL 7 and RHEL 8/RHEL 9 are marked in the document. The following items are prefixed with either **[A]** - applicable to all nodes, For example, if deploying on RHEL 7, register your virtual machine and attach it to a pool that contains repositories for RHEL 7.
- <pre><code>sudo subscription-manager register + ```bash + sudo subscription-manager register # List the available pools sudo subscription-manager list --available --matches '*SAP*'- sudo subscription-manager attach --pool=<pool id> - </code></pre> + sudo subscription-manager attach --pool=<pool id> + ``` By attaching a pool to an Azure Marketplace PAYG RHEL image, you will be effectively double-billed for your RHEL usage: once for the PAYG image, and once for the RHEL entitlement in the pool you attach. To mitigate this situation, Azure now provides BYOS RHEL images. For more information, see [Red Hat Enterprise Linux bring-your-own-subscription Azure images](../../virtual-machines/workloads/redhat/byos.md). The following items are prefixed with either **[A]** - applicable to all nodes, In order to install the required packages on RHEL 7, enable the following repositories. - <pre><code>sudo subscription-manager repos --disable "*" + ```bash + sudo subscription-manager repos --disable "*" sudo subscription-manager repos --enable=rhel-7-server-rpms sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms sudo subscription-manager repos --enable=rhel-sap-for-rhel-7-server-rpms sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-eus-rpms- </code></pre> + ``` 1. **[A]** Install RHEL HA Add-On ```bash sudo yum install -y pcs pacemaker fence-agents-azure-arm nmap-ncat ```- + > [!IMPORTANT] > We recommend the following versions of Azure Fence agent (or later) for customers to benefit from a faster failover time, if a resource stop fails or the cluster nodes cannot communicate with each other anymore: - > RHEL 7.7 or higher use the latest available version of fence-agents package - > RHEL 7.6: fence-agents-4.2.1-11.el7_6.8 + > + > RHEL 7.7 or higher: use the latest available version of the fence-agents package.
+ > + > RHEL 7.6: fence-agents-4.2.1-11.el7_6.8 + > > RHEL 7.5: fence-agents-4.0.11-86.el7_5.8 - > RHEL 7.4: fence-agents-4.0.11-66.el7_4.12 + > + > RHEL 7.4: fence-agents-4.0.11-66.el7_4.12 + > > For more information, see [Azure VM running as a RHEL High Availability cluster member take a very long time to be fenced, or fencing fails / times-out before the VM shuts down](https://access.redhat.com/solutions/3408711). > [!IMPORTANT]- > We recommend the following versions of Azure Fence agent (or later) for customers wishing to use Managed Identities for Azure resources instead of service principal names for the fence agent. - > RHEL 8.4: fence-agents-4.2.1-54.el8 + > We recommend the following versions of Azure Fence agent (or later) for customers wishing to use Managed Identities for Azure resources instead of service principal names for the fence agent. + > + > RHEL 8.4: fence-agents-4.2.1-54.el8. + > > RHEL 8.2: fence-agents-4.2.1-41.el8_2.4 + > > RHEL 8.1: fence-agents-4.2.1-30.el8_1.4 + > > RHEL 7.9: fence-agents-4.2.1-41.el7_9.4. > [!IMPORTANT]- > On RHEL 9, we recommend the following package versions (or later) to avoid issues with Azure Fence agent: + > On RHEL 9, we recommend the following package versions (or later) to avoid issues with Azure Fence agent: + > > fence-agents-4.10.0-20.el9_0.7 - > fence-agents-common-4.10.0-20.el9_0.6 + > + > fence-agents-common-4.10.0-20.el9_0.6 + > > ha-cloud-support-4.10.0-20.el9_0.6.x86_64.rpm Check the version of the Azure fence agent. If necessary, update it to the minimum required version or later. - <pre><code># Check the version of the Azure Fence Agent + ```bash + # Check the version of the Azure Fence Agent sudo yum info fence-agents-azure-arm- </code></pre> + ``` > [!IMPORTANT] > If you need to update the Azure Fence agent, and if using custom role, make sure to update the custom role to include action **powerOff**. 
For details see [Create a custom role for the fence agent](#1-create-a-custom-role-for-the-fence-agent). -1. If deploying on RHEL 9, install also the resource agents for cloud deployment: - +1. If deploying on RHEL 9, install also the resource agents for cloud deployment: + ```bash sudo yum install -y resource-agents-cloud ``` The following items are prefixed with either **[A]** - applicable to all nodes, You can either use a DNS server or modify the /etc/hosts on all nodes. This example shows how to use the /etc/hosts file. Replace the IP address and the hostname in the following commands. - >[!IMPORTANT] + > [!IMPORTANT] > If using host names in the cluster configuration, it's vital to have reliable host name resolution. The cluster communication will fail, if the names are not available and that can lead to cluster failover delays.+ > > The benefit of using /etc/hosts is that your cluster becomes independent of DNS, which could be a single point of failures too. - <pre><code>sudo vi /etc/hosts - </code></pre> + ```bash + sudo vi /etc/hosts + ``` Insert the following lines to /etc/hosts. Change the IP address and hostname to match your environment - <pre><code># IP address of the first cluster node - <b>10.0.0.6 prod-cl1-0</b> + ```text + # IP address of the first cluster node + 10.0.0.6 prod-cl1-0 # IP address of the second cluster node- <b>10.0.0.7 prod-cl1-1</b> - </code></pre> + 10.0.0.7 prod-cl1-1 + ``` 1. **[A]** Change hacluster password to the same password - <pre><code>sudo passwd hacluster - </code></pre> + ```bash + sudo passwd hacluster + ``` 1. **[A]** Add firewall rules for pacemaker Add the following firewall rules to all cluster communication between the cluster nodes. - <pre><code>sudo firewall-cmd --add-service=high-availability --permanent + ```bash + sudo firewall-cmd --add-service=high-availability --permanent sudo firewall-cmd --add-service=high-availability- </code></pre> + ``` 1. 
**[A]** Enable basic cluster services Run the following commands to enable the Pacemaker service and start it. - <pre><code>sudo systemctl start pcsd.service + ```bash + sudo systemctl start pcsd.service sudo systemctl enable pcsd.service- </code></pre> + ``` 1. **[1]** Create Pacemaker cluster Run the following commands to authenticate the nodes and create the cluster. Set the token to 30000 to allow Memory preserving maintenance. For more information, see [this article for Linux][virtual-machines-linux-maintenance]. - + If building a cluster on **RHEL 7.x**, use the following commands: - <pre><code>sudo pcs cluster auth <b>prod-cl1-0</b> <b>prod-cl1-1</b> -u hacluster - sudo pcs cluster setup --name <b>nw1-azr</b> <b>prod-cl1-0</b> <b>prod-cl1-1</b> --token 30000 ++ ```bash + sudo pcs cluster auth prod-cl1-0 prod-cl1-1 -u hacluster + sudo pcs cluster setup --name nw1-azr prod-cl1-0 prod-cl1-1 --token 30000 sudo pcs cluster start --all- </code></pre> + ``` If building a cluster on **RHEL 8.x/RHEL 9.x**, use the following commands: - <pre><code>sudo pcs host auth <b>prod-cl1-0</b> <b>prod-cl1-1</b> -u hacluster - sudo pcs cluster setup <b>nw1-azr</b> <b>prod-cl1-0</b> <b>prod-cl1-1</b> totem token=30000 ++ ```bash + sudo pcs host auth prod-cl1-0 prod-cl1-1 -u hacluster + sudo pcs cluster setup nw1-azr prod-cl1-0 prod-cl1-1 totem token=30000 sudo pcs cluster start --all- </code></pre> + ``` Verify the cluster status, by executing the following command: - <pre><code> # Run the following command until the status of both nodes is online ++ ```bash + # Run the following command until the status of both nodes is online sudo pcs status+ # Cluster name: nw1-azr # WARNING: no stonith devices and stonith-enabled is not false # Stack: corosync- # Current DC: <b>prod-cl1-1</b> (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum + # Current DC: prod-cl1-1 (version 1.1.18-11.el7_5.3-2b07d5c5a9) - partition with quorum # Last updated: Fri Aug 17 09:18:24 2018- # Last 
change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on <b>prod-cl1-1</b> + # Last change: Fri Aug 17 09:17:46 2018 by hacluster via crmd on prod-cl1-1 # # 2 nodes configured # 0 resources configured #- # Online: [ <b>prod-cl1-0</b> <b>prod-cl1-1</b> ] + # Online: [ prod-cl1-0 prod-cl1-1 ] # # No resources # The following items are prefixed with either **[A]** - applicable to all nodes, # corosync: active/disabled # pacemaker: active/disabled # pcsd: active/enabled- </code></pre> + ``` ++1. **[A]** Set Expected Votes. -1. **[A]** Set Expected Votes. + ```bash + # Check the quorum votes + pcs quorum status - <pre><code># Check the quorum votes - pcs quorum status - # If the quorum votes are not set to 2, execute the next command - sudo pcs quorum expected-votes 2 - </code></pre> + # If the quorum votes are not set to 2, execute the next command + sudo pcs quorum expected-votes 2 + ``` - >[!TIP] - > If building multi-node cluster, that is cluster with more than two nodes, don't set the votes to 2. + > [!TIP] + > If building multi-node cluster, that is cluster with more than two nodes, don't set the votes to 2. 1. **[1]** Allow concurrent fence actions - <pre><code>sudo pcs property set concurrent-fencing=true - </code></pre> + ```bash + sudo pcs property set concurrent-fencing=true + ``` ## Create fencing device -The fencing device uses either a managed identity for Azure resource or service principal to authorize against Microsoft Azure. +The fencing device uses either a managed identity for Azure resource or service principal to authorize against Microsoft Azure. ++### [Managed Identity](#tab/msi) ++To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. 
User assigned managed identities should not be used with Pacemaker at this time. A fence device based on managed identity is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x. -### Using Managed Identity -To create a managed identity (MSI), [create a system-assigned](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#system-assigned-managed-identity) managed identity for each VM in the cluster. Should a system-assigned managed identity already exist, it will be used. User assigned managed identities should not be used with Pacemaker at this time. Fence device, based on managed identity is supported on RHEL 7.9 and RHEL 8.x/RHEL 9.x. +### [Service Principal](#tab/spn) -### Using Service Principal Follow these steps to create a service principal, if not using managed identity. 1. Go to the [Azure portal](https://portal.azure.com).-1. Open the Azure Active Directory blade +2. Open the Azure Active Directory blade. Go to Properties and make a note of the Directory ID. This is the **tenant ID**.-1. Click App registrations -1. Click New Registration -1. Enter a Name, select "Accounts in this organization directory only" -2. Select Application Type "Web", enter a sign-on URL (for example http:\//localhost) and click Add - The sign-on URL isn't used and can be any valid URL -1. Select Certificates and Secrets, then click New client secret -1. Enter a description for a new key, select "Never expires" and click Add -1. Make a node the Value. It is used as the **password** for the service principal -1. Select Overview. Make a note the Application ID. It's used as the username (**login ID** in the steps below) of the service principal +3. Click App registrations. +4. Click New Registration. +5. Enter a Name, select "Accounts in this organization directory only". +6. Select Application Type "Web", enter a sign-on URL (for example http:\//localhost) and click Add. + The sign-on URL isn't used and can be any valid URL. +7.
Select Certificates and Secrets, then click New client secret. +8. Enter a description for a new key, select "Never expires" and click Add. +9. Make a note of the Value. It is used as the **password** for the service principal. +10. Select Overview. Make a note of the Application ID. It's used as the username (**login ID** in the steps below) of the service principal. ++ ### **[1]** Create a custom role for the fence agent Use the following content for the input file. You need to adapt the content to y ### **[A]** Assign the custom role -#### Using Managed Identity +#### [Managed Identity](#tab/msi) Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to each managed identity of the cluster VMs. Each VM system-assigned managed identity needs the role assigned for every cluster VM's resource. For detailed steps, see [Assign a managed identity access to a resource by using the Azure portal](../../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md). Verify each VM's managed identity role assignment contains all cluster VMs. > [!IMPORTANT] > Be aware that assignment and removal of authorization with managed identities [can be delayed](../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md#limitation-of-using-managed-identities-for-authorization) until effective. -#### Using Service Principal +#### [Service Principal](#tab/spn) ++Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Don't use the Owner role anymore! For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). -Assign the custom role "Linux Fence Agent Role" that was created in the last chapter to the service principal. Don't use the Owner role anymore!
For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md). -Make sure to assign the role for both cluster nodes. +Make sure to assign the role for both cluster nodes. ++ ### **[1]** Create the fencing devices After you edited the permissions for the virtual machines, you can configure the fencing devices in the cluster. -<pre><code> +```bash sudo pcs property set stonith-timeout=900-</code></pre> +``` > [!NOTE] > Option 'pcmk_host_map' is ONLY required in the command, if the RHEL host names and the Azure VM names are NOT identical. Specify the mapping in the format **hostname:vm-name**. > Refer to the bold section in the command. For more information, see [What format should I use to specify node mappings to fencing devices in pcmk_host_map](https://access.redhat.com/solutions/2619961) - #### [Managed Identity](#tab/msi) -For RHEL **7.x**, use the following command to configure the fence device: -<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm <b>msi=true</b> resourceGroup="<b>resource group</b>" \ -subscriptionId="<b>subscription id</b>" <b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \ +For RHEL **7.x**, use the following command to configure the fence device: ++```bash +sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \ +subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \ op monitor interval=3600-</code></pre> +``` For RHEL **8.x/9.x**, use the following command to configure the fence device: -<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm <b>msi=true</b> resourceGroup="<b>resource group</b>" \ -subscriptionId="<b>subscription id</b>" 
<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \ ++```bash +# If the version of pacemaker is or greater than 2.0.4-6.el8, then run following command: +sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \ +subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ +power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \ +op monitor interval=3600 ++# If the version of pacemaker is less than 2.0.4-6.el8, then run following command: +sudo pcs stonith create rsc_st_azure fence_azure_arm msi=true resourceGroup="resource group" \ +subscriptionId="subscription id" pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \ op monitor interval=3600-</code></pre> +``` #### [Service Principal](#tab/spn) -For RHEL **7.x**, use the following command to configure the fence device: -<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm login="<b>login ID</b>" passwd="<b>password</b>" \ -resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \ -<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \ +For RHEL **7.x**, use the following command to configure the fence device: ++```bash +sudo pcs stonith create rsc_st_azure fence_azure_arm login="login ID" passwd="password" \ +resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \ +pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \ op monitor interval=3600-</code></pre> +``` For RHEL **8.x/9.x**, use the following command to 
configure the fence device: -<pre><code>sudo pcs stonith create rsc_st_azure fence_azure_arm username="<b>login ID</b>" password="<b>password</b>" \ -resourceGroup="<b>resource group</b>" tenantId="<b>tenant ID</b>" subscriptionId="<b>subscription id</b>" \ -<b>pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name"</b> \ ++```bash +# If the version of pacemaker is or greater than 2.0.4-6.el8, then run following command: +sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \ +resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \ +pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ +power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 \ +op monitor interval=3600 ++# If the version of pacemaker is less than 2.0.4-6.el8, then run following command: +sudo pcs stonith create rsc_st_azure fence_azure_arm username="login ID" password="password" \ +resourceGroup="resource group" tenantId="tenant ID" subscriptionId="subscription id" \ +pcmk_host_map="prod-cl1-0:prod-cl1-0-vm-name;prod-cl1-1:prod-cl1-1-vm-name" \ power_timeout=240 pcmk_reboot_timeout=900 pcmk_monitor_timeout=120 pcmk_monitor_retries=4 pcmk_action_limit=3 pcmk_delay_max=15 \ op monitor interval=3600-</code></pre> +``` If you're using fencing device, based on service principal configuration, read [Change from SPN to MSI for Pacemaker clusters using Azure fencing](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-high-availability-change-from-spn-to-msi-for/ba-p/3609278) and learn how to convert to managed identity configuration. > [!TIP]-> Only configure the `pcmk_delay_max` attribute in two node Pacemaker clusters. 
For more information on preventing fence races in a two node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829). - +> Only configure the `pcmk_delay_max` attribute in two node clusters, with pacemaker version less than 2.0.4-6.el8. For more information on preventing fence races in a two node Pacemaker cluster, see [Delaying fencing in a two node cluster to prevent fence races of "fence death" scenarios](https://access.redhat.com/solutions/54829). > [!IMPORTANT] > The monitoring and fencing operations are deserialized. As a result, if there is a longer running monitoring operation and simultaneous fencing event, there is no delay to the cluster failover, due to the already running monitoring operation. ### **[1]** Enable the use of a fencing device -<pre><code>sudo pcs property set stonith-enabled=true -</code></pre> +```bash +sudo pcs property set stonith-enabled=true +``` > [!TIP] >Azure Fence Agent requires outbound connectivity to public end points as documented, along with possible solutions, in [Public endpoint connectivity for VMs using standard ILB](./high-availability-guide-standard-load-balancer-outbound-connections.md). - ## Optional fencing configuration > [!TIP] > This section is only applicable, if it is desired to configure special fencing device `fence_kdump`. -If there is a need to collect diagnostic information within the VM, it may be useful to configure additional fencing device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs. 
+If there is a need to collect diagnostic information within the VM, it may be useful to configure additional fencing device, based on fence agent `fence_kdump`. The `fence_kdump` agent can detect that a node entered kdump crash recovery and can allow the crash recovery service to complete, before other fencing methods are invoked. Note that `fence_kdump` isn't a replacement for traditional fence mechanisms, like Azure Fence Agent when using Azure VMs. > [!IMPORTANT]-> Be aware that when `fence_kdump` is configured as a first level fencing device, it will introduce delays in the fencing operations and respectively delays in the application resources failover. -> -> If a crash dump is successfully detected, the fencing will be delayed until the crash recovery service completes. If the failed node is unreachable or if it doesn't respond, the fencing will be delayed by time determined by the configured number of iterations and the `fence_kdump` timeout. For more details, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971). +> Be aware that when `fence_kdump` is configured as a first level fencing device, it will introduce delays in the fencing operations and respectively delays in the application resources failover. +> +> If a crash dump is successfully detected, the fencing will be delayed until the crash recovery service completes. If the failed node is unreachable or if it doesn't respond, the fencing will be delayed by time determined by the configured number of iterations and the `fence_kdump` timeout. For more details, see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971). 
+> > The proposed fence_kdump timeout may need to be adapted to the specific environment.-> -> We recommend to configure `fence_kdump` fencing only when necessary to collect diagnostics within the VM and always in combination with traditional fence method as Azure Fence Agent. +> +> We recommend configuring `fence_kdump` fencing only when necessary to collect diagnostics within the VM, and always in combination with a traditional fence method such as Azure Fence Agent. The following Red Hat KBs contain important information about configuring `fence_kdump` fencing: The following Red Hat KBs contain important information about configuring `fence * [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 HA cluster with kexec-tools older than 2.0.14](https://access.redhat.com/solutions/2388711) * For information on how to change the default timeout, see [How do I configure kdump for use with the RHEL 6,7,8 HA Add-On](https://access.redhat.com/articles/67570) * For information on how to reduce failover delay when using `fence_kdump`, see [Can I reduce the expected delay of failover when adding fence_kdump configuration](https://access.redhat.com/solutions/5512331)- -Execute the following optional steps to add `fence_kdump` as a first level fencing configuration, in addition to the Azure Fence Agent configuration. + +Execute the following optional steps to add `fence_kdump` as a first level fencing configuration, in addition to the Azure Fence Agent configuration. +1. **[A]** Verify that kdump is active and configured. -1. **[A]** Verify that kdump is active and configured. - ``` + ```bash systemctl is-active kdump # Expected result # active ```-2. **[A]** Install the `fence_kdump` fence agent. - ``` ++2. **[A]** Install the `fence_kdump` fence agent. ++ ```bash yum install fence-agents-kdump ```-3. **[1]** Create `fence_kdump` fencing device in the cluster.
- <pre><code> - pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" <b>pcmk_host_list="prod-cl1-0 prod-cl1-1</b>" timeout=30 - </code></pre> ++3. **[1]** Create `fence_kdump` fencing device in the cluster. ++ ```bash + pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" pcmk_host_list="prod-cl1-0 prod-cl1-1" timeout=30 + ``` 4. **[1]** Configure fencing levels, so that `fence_kdump` fencing mechanism is engaged first. - <pre><code> - pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" <b>pcmk_host_list="prod-cl1-0 prod-cl1-1</b>" - pcs stonith level add 1 <b>prod-cl1-0</b> rsc_st_kdump - pcs stonith level add 1 <b>prod-cl1-1</b> rsc_st_kdump - pcs stonith level add 2 <b>prod-cl1-0</b> rsc_st_azure - pcs stonith level add 2 <b>prod-cl1-1</b> rsc_st_azure ++ ```bash + pcs stonith create rsc_st_kdump fence_kdump pcmk_reboot_action="off" pcmk_host_list="prod-cl1-0 prod-cl1-1" + pcs stonith level add 1 prod-cl1-0 rsc_st_kdump + pcs stonith level add 1 prod-cl1-1 rsc_st_kdump + pcs stonith level add 2 prod-cl1-0 rsc_st_azure + pcs stonith level add 2 prod-cl1-1 rsc_st_azure + # Check the fencing level configuration pcs stonith level # Example output- # Target: <b>prod-cl1-0</b> + # Target: prod-cl1-0 # Level 1 - rsc_st_kdump # Level 2 - rsc_st_azure- # Target: <b>prod-cl1-1</b> + # Target: prod-cl1-1 # Level 1 - rsc_st_kdump # Level 2 - rsc_st_azure- </code></pre> + ``` 5. **[A]** Allow the required ports for `fence_kdump` through the firewall- ``` ++ ```bash firewall-cmd --add-port=7410/udp firewall-cmd --add-port=7410/udp --permanent ``` -6. **[A]** Ensure that `initramfs` image file contains `fence_kdump` and `hosts` files. For details see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971). - ``` +6. **[A]** Ensure that `initramfs` image file contains `fence_kdump` and `hosts` files. 
For details see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971). ++ ```bash lsinitrd /boot/initramfs-$(uname -r)kdump.img | egrep "fence|hosts" # Example output # -rw-r--r-- 1 root root 208 Jun 7 21:42 etc/hosts Execute the following optional steps to add `fence_kdump` as a first level fenci 7. **[A]** Perform the `fence_kdump_nodes` configuration in `/etc/kdump.conf` to avoid `fence_kdump` failing with a timeout for some `kexec-tools` versions. For details see [fence_kdump times out when fence_kdump_nodes is not specified with kexec-tools version 2.0.15 or later](https://access.redhat.com/solutions/4498151) and [fence_kdump fails with "timeout after X seconds" in a RHEL 6 or 7 High Availability cluster with kexec-tools versions older than 2.0.14](https://access.redhat.com/solutions/2388711). The example configuration for a two node cluster is presented below. After making a change in `/etc/kdump.conf`, the kdump image must be regenerated. That can be achieved by restarting the `kdump` service. - <pre><code> + ```bash vi /etc/kdump.conf- # On node <b>prod-cl1-0</b> make sure the following line is added - fence_kdump_nodes <b>prod-cl1-1</b> - # On node <b>prod-cl1-1</b> make sure the following line is added - fence_kdump_nodes <b>prod-cl1-0</b> -+ # On node prod-cl1-0 make sure the following line is added + fence_kdump_nodes prod-cl1-1 + # On node prod-cl1-1 make sure the following line is added + fence_kdump_nodes prod-cl1-0 + # Restart the service on each node systemctl restart kdump- </code></pre> + ``` 8. Test the configuration by crashing a node. For details see [How do I configure fence_kdump in a Red Hat Pacemaker cluster](https://access.redhat.com/solutions/2876971). > [!IMPORTANT]- > If the cluster is already in productive use, plan the test accordingly as crashing a node will have an impact on the application. 
+ > If the cluster is already in productive use, plan the test accordingly as crashing a node will have an impact on the application. - ``` + ```bash echo c > /proc/sysrq-trigger ```+ ## Next steps -* [Azure Virtual Machines planning and implementation for SAP][planning-guide] -* [Azure Virtual Machines deployment for SAP][deployment-guide] -* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] -* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha] +* [Azure Virtual Machines planning and implementation for SAP][planning-guide]. +* [Azure Virtual Machines deployment for SAP][deployment-guide]. +* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide]. +* To learn how to establish high availability and plan for disaster recovery of SAP HANA on Azure VMs, see [High Availability of SAP HANA on Azure Virtual Machines (VMs)][sap-hana-ha]. |
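The fencing-level steps in the diff above can be sanity-checked by parsing the `pcs stonith level` output. The following is an illustrative sketch, not part of the referenced article: the helper name `check_fence_levels` is hypothetical, the canned `sample` string mirrors the example output shown in the guide, and on a live cluster you would pass `"$(pcs stonith level)"` instead.

```shell
#!/usr/bin/env bash
# Hypothetical helper: verify that kdump fencing is tried at level 1
# before the Azure fence agent at level 2.
# On a live cluster, call: check_fence_levels "$(pcs stonith level)"
check_fence_levels() {
  local out="$1"
  echo "$out" | grep -q "Level 1 - rsc_st_kdump" || return 1
  echo "$out" | grep -q "Level 2 - rsc_st_azure" || return 1
  return 0
}

# Canned sample matching the expected output shown in the guide
sample='Target: prod-cl1-0
  Level 1 - rsc_st_kdump
  Level 2 - rsc_st_azure
Target: prod-cl1-1
  Level 1 - rsc_st_kdump
  Level 2 - rsc_st_azure'

if check_fence_levels "$sample"; then
  echo "fencing levels OK"   # prints "fencing levels OK" for the sample above
else
  echo "fencing levels misconfigured"
fi
```

Such a check is useful after step 4 of the fence_kdump configuration, before crashing a node to test the setup.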
sap | High Availability Guide Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel.md | Follow these steps to install an SAP application server. rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ``` -1. Simulate node crash +2. Simulate node crash Resource state before starting the test: Follow these steps to install an SAP application server. rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1 ``` -1. Kill message server process +3. Blocking network communication ++ Resource state before starting the test: ++ ```text + rsc_st_azure (stonith:fence_azure_arm): Started nw1-cl-0 + Resource Group: g-NW1_ASCS + fs_NW1_ASCS (ocf::heartbeat:Filesystem): Started nw1-cl-0 + nc_NW1_ASCS (ocf::heartbeat:azure-lb): Started nw1-cl-0 + vip_NW1_ASCS (ocf::heartbeat:IPaddr2): Started nw1-cl-0 + rsc_sap_NW1_ASCS00 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 + Resource Group: g-NW1_AERS + fs_NW1_AERS (ocf::heartbeat:Filesystem): Started nw1-cl-1 + nc_NW1_AERS (ocf::heartbeat:azure-lb): Started nw1-cl-1 + vip_NW1_AERS (ocf::heartbeat:IPaddr2): Started nw1-cl-1 + rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1 + ``` ++ Execute a firewall rule to block the communication on one of the nodes. ++ ```bash + # Execute an iptables rule on nw1-cl-0 (10.0.0.7) to block the incoming and outgoing traffic to nw1-cl-1 (10.0.0.8) + iptables -A INPUT -s 10.0.0.8 -j DROP; iptables -A OUTPUT -d 10.0.0.8 -j DROP + ``` ++ When cluster nodes can't communicate with each other, there's a risk of a split-brain scenario. In such situations, cluster nodes try to fence each other simultaneously, resulting in a fence race. To avoid such a situation, it's recommended to set the [priority-fencing-delay](https://access.redhat.com/solutions/5110521) property in the cluster configuration (applicable only for [pacemaker-2.0.4-6.el8](https://access.redhat.com/errata/RHEA-2020:4804) or higher). 
++ By enabling priority-fencing-delay property, the cluster introduces an additional delay in the fencing action specifically on the node hosting ASCS resource, allowing the node to win the fence race. ++ Execute below command to delete the firewall rule. ++ ```bash + # If the iptables rule set on the server gets reset after a reboot, the rules will be cleared out. In case they have not been reset, please proceed to remove the iptables rule using the following command. + iptables -D INPUT -s 10.0.0.8 -j DROP; iptables -D OUTPUT -d 10.0.0.8 -j DROP + ``` ++4. Kill message server process Resource state before starting the test: Follow these steps to install an SAP application server. rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-0 ``` -1. Kill enqueue server process +5. Kill enqueue server process Resource state before starting the test: Follow these steps to install an SAP application server. rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1 ``` -1. Kill enqueue replication server process +6. Kill enqueue replication server process Resource state before starting the test: Follow these steps to install an SAP application server. rsc_sap_NW1_ERS02 (ocf::heartbeat:SAPInstance): Started nw1-cl-1 ``` -1. Kill enqueue sapstartsrv process +7. Kill enqueue sapstartsrv process Resource state before starting the test: |
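The `priority-fencing-delay` recommendation above can be applied with a few `pcs` commands. This is a sketch under stated assumptions, not a definitive configuration: it assumes the resource names from this example cluster (`rsc_sap_NW1_ASCS00`), a pacemaker version of at least 2.0.4-6.el8, and a recent `pcs` that supports `resource defaults update`; the 15-second delay is a starting point to tune, not a prescribed value.

```shell
# Give every resource a baseline priority, then raise it for the ASCS instance
# so that the node hosting ASCS wins the fence race.
sudo pcs resource defaults update priority=1
sudo pcs resource update rsc_sap_NW1_ASCS00 meta priority=10

# Delay fencing actions that target the node running the highest-priority
# resources, letting that node fence its peer first.
sudo pcs property set priority-fencing-delay=15s
```

With this in place, repeating the network-blocking test above should consistently fence the node that does not host the ASCS group.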
sap | High Availability Guide Suse Nfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs.md | This article describes how to deploy the virtual machines, configure the virtual This guide describes how to set up a highly available NFS server that is used by two SAP systems, NW1 and NW2. The names of the resources (for example virtual machines, virtual networks) in the example assume that you have used the [SAP file server template][template-file-server] with resource prefix **prod**. > [!NOTE]-> This article contains references to the terms *slave* and *master*, terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When the terms are removed from the software, we'll remove them from this article. Read the following SAP Notes and papers first |
sap | Integration Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/integration-get-started.md | Select an area for resources about how to integrate SAP and Azure in that space. | [Microsoft Teams](#microsoft-teams) | Discover collaboration scenarios boosting your daily productivity by interacting with your SAP applications directly from Microsoft Teams. | | [Microsoft Power Platform](#microsoft-power-platform) | Learn about the available [out-of-the-box SAP applications](/power-automate/sap-integration/solutions) enabling your business users to achieve more with less. | | [SAP Fiori](#sap-fiori) | Increase performance and security of your SAP Fiori applications by integrating them with Azure services. |-| [Azure Active Directory (Azure AD)](#azure-ad) | Ensure end-to-end SAP user authentication and authorization with Azure Active Directory. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. | +| [Microsoft Entra ID (formerly Azure Active Directory)](#microsoft-entra-id-formerly-azure-ad) | Ensure end-to-end SAP user authentication and authorization with Microsoft Entra ID. Single sign-on (SSO) and multi-factor authentication (MFA) are the foundation for a secure and seamless user experience. | | [Azure Integration Services](#azure-integration-services) | Connect your SAP workloads with your end users, business partners, and their systems with world-class integration services. Learn about co-development efforts that enable SAP Event Mesh to exchange cloud events with Azure Event Grid, understand how you can achieve high-availability for services like SAP Cloud Integration, automate your SAP invoice processing with Logic Apps and Azure AI services and more. 
| | [App Development in any language including ABAP and DevOps](#app-development-in-any-language-including-abap-and-devops) | Apply best-in-class developer tooling to your SAP app developments and DevOps processes. | | [Azure Data Services](#azure-data-services) | Learn how to integrate your SAP data with Data Services like Azure Synapse Analytics, Azure Data Lake Storage, Azure Data Factory, Power BI, Data Warehouse Cloud, Analytics Cloud, which connector to choose, tune performance, efficiently troubleshoot, and more. |-| [Threat Monitoring with Microsoft Sentinel for SAP](#microsoft-sentinel) | Learn how to best secure your SAP workload with Microsoft Sentinel, prevent incidents from happening and detect and respond to threats in real-time with this [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution. | +| [Threat Monitoring and Response Automation with Microsoft Security Services for SAP](#microsoft-security-for-sap) | Learn how to best secure your SAP workload with Microsoft Defender for Cloud and the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) Microsoft Sentinel solution. Prevent incidents from happening, detect and respond to threats in real-time. | | [SAP Business Technology Platform (BTP)](#sap-btp) | Discover integration scenarios like SAP Private Link to securely and efficiently connect your BTP apps to your Azure workloads. 
| ### Azure OpenAI service For more information about integration with [Azure OpenAI service](/azure/ai-ser Also see these SAP resources: -- [empower SAP RISE enterprise users with Azure OpenAI in multi-cloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/)+- [empower SAP RISE enterprise users with Azure OpenAI in multicloud environment](https://blogs.sap.com/2023/02/14/empower-sap-rise-enterprise-users-with-chatgpt-in-multi-cloud-environment/) - [Consume OpenAI services (GPT) through CAP & SAP BTP, AI Core](https://github.com/SAP-samples/azure-openai-aicore-cap-api) - [SAP SuccessFactors Helps HR Solve Skills Gap with Generative AI | SAP News](https://news.sap.com/2023/05/sap-successfactors-helps-hr-solve-skills-gap-with-generative-ai/) Also see the following SAP resources: - [Azure CDN for SAPUI5 libraries](https://blogs.sap.com/2021/03/22/sap-fiori-using-azure-cdn-for-sapui5-libraries/) - [Web Application Firewall Setup for Internet facing SAP Fiori Apps](https://blogs.sap.com/2020/12/03/sap-on-azure-application-gateway-web-application-firewall-waf-v2-setup-for-internet-facing-sap-fiori-apps/) -### Azure AD +### Microsoft Entra ID (formerly Azure AD) For more information about integration with Azure AD, see the following Azure documentation: For more information about using SAP with Azure Integration services, see the fo - [Connect to SAP from workflows in Azure Logic Apps](../../logic-apps/logic-apps-using-sap-connector.md) - [Import SAP OData metadata as an API into Azure API Management](../../api-management/sap-api.md) - [Apply SAP Principal Propagation to your Azure hosted APIs](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml)+- [Using Logic Apps (Standard) to connect with SAP BAPIs and RFC](https://www.youtube.com/watch?v=ZmOPPtIYYM4) Also see the following SAP resources: For 
more information about integrating SAP with Microsoft services natively, see - [Use community-driven OData SDKs with Azure Functions](https://github.com/Azure/azure-sdk-for-sap-odata) Also see the following SAP resources: -- [SAP BTP ABAP Environment (aka. Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)-- [SAP S/4HANA Cloud, private edition ΓÇô ABAP Environment (aka. Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/)+- [SAP BTP ABAP Environment (also known as Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/) +- [SAP S/4HANA Cloud, private edition ΓÇô ABAP Environment (also known as Embedded Steampunk) integration with Microsoft services](https://blogs.sap.com/2023/06/06/kick-start-your-sap-abap-platform-integration-journey-with-microsoft/) - [dotNET speaks OData too, how to implement Azure App Service with SAP Gateway](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/) - [Apply cloud native deployment practice blue-green to SAP BTP apps with Azure DevOps](https://blogs.sap.com/2019/12/20/go-blue-green-for-your-cloud-foundry-app-from-webide-with-azure-devops/) Also see the following SAP resources: - [Integrate SAP Data Warehouse Cloud with Power BI and Azure Synapse Analytics](https://blogs.sap.com/2022/07/27/your-sap-on-azure-part-28-integrate-sap-data-warehouse-cloud-with-powerbi-and-azure-synapse/) - [Extend SAP Integrated Business Planning forecasting algorithms with Azure Machine Learning](https://blogs.sap.com/2022/10/03/microsoft-azure-machine-learning-for-supply-chain-planning/) -### Microsoft Sentinel +### Microsoft Security for SAP ++Protect your data, apps, and infrastructure 
against rapidly evolving cyber threats with cloud security services from Microsoft. Artificial intelligence (AI) and machine learning (ML) backed capabilities are required to keep up with the pace. ++Use [Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) to secure your cloud infrastructure surrounding the SAP system, including automated responses. ++Complementing that, use the [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) solution [Microsoft Sentinel](../../sentinel/sap/sap-solution-security-content.md) to protect your SAP system from within, using signals from the SAP Audit Log among others. ++Learn more about identity-focused integration capabilities that power the analysis on Defender and Sentinel via the [Microsoft Entra ID section](#microsoft-entra-id-formerly-azure-ad). ++#### Microsoft Defender for Cloud ++The [Defender product family](../../defender-for-cloud/defender-for-cloud-introduction.md) consists of multiple products tailored to provide "cloud security posture management" (CSPM) and "cloud workload protection" (CWPP) for the various workload types. The following excerpt serves as an entry point to start securing your SAP system. 
++- Defender for Servers (SAP hosts) + - [Protect your SAP hosts with Defender](../../defender-for-cloud/defender-for-servers-introduction.md) including OS specific Endpoint protection with Microsoft Defender for Endpoint (MDE) + - [Microsoft Defender for Endpoint on Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux) + - [Microsoft Defender for Endpoint on Windows](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) + - [Enable Defender for Servers](../../defender-for-cloud/tutorial-enable-servers-plan.md#enable-the-defender-for-servers-plan) +- Defender for Storage (SAP SMB file shares on Azure) + - [Protect your SAP SMB file shares with Defender](../../defender-for-cloud/defender-for-storage-introduction.md) + - [Enable Defender for Storage](../../defender-for-cloud/tutorial-enable-storage-plan.md) +- Defender for APIs (SAP Gateway, SAP Business Technology Platform, SAP SaaS) + - [Protect your OpenAPI APIs with Defender for APIs](../../defender-for-cloud/defender-for-apis-introduction.md) + - [Enable the Defender for APIs](../../defender-for-cloud/defender-for-apis-deploy.md) ++See SAP's recommendation to use AntiVirus software for SAP hosts and systems on both Linux and Windows based platforms [here](https://wiki.scn.sap.com/wiki/display/Basis/Protecting+SAP+systems+using+antivirus+softwares). Be aware that the threat landscape has evolved from file-based attacks to file-less attacks. Therefore, the protection approach has to evolve beyond pure AntiVirus capabilities too. 
++For more information about using Microsoft Defender for Endpoint (MDE) via Microsoft Defender for Server for SAP applications regarding `Next-generation protection` (AntiVirus) and `Endpoint Detection and Response` (EDR) see the following Microsoft resources: ++- [SAP Applications and Microsoft Defender for Linux | Microsoft TechCommunity](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-applications-and-microsoft-defender-for-linux/ba-p/3675480) +- [Enable the Microsoft Defender for Endpoint integration](../../defender-for-cloud/integration-defender-for-endpoint.md#enable-the-microsoft-defender-for-endpoint-integration) +- [Common mistakes to avoid when defining exclusions](/microsoft-365/security/defender-endpoint/common-exclusion-mistakes-microsoft-defender-antivirus) ++Also see the following SAP resources: ++- [2808515 - Installing security software on SAP servers running on Linux](https://me.sap.com/notes/2808515) +- [1730997 - Unrecommended versions of antivirus software](https://me.sap.com/notes/1730997) ++> [!Note] +> It is **not recommended** to exclude files, paths or processes from EDR because it creates blind spots for Defender. If exclusions are required nevertheless, open a support case with Microsoft Support via the Defender365 Portal specifying executables and/or paths to exclude. Follow the same process for tuning of real-time scans. ++> [!Note] +> Certification for the SAP Virus Scan Interface (NW-VSI) doesn't apply to MDE, because it operates outside of the SAP system. It complements Microsoft Sentinel for SAP, which interacts with the SAP system directly. See more details and the SAP certification note for Sentinel below. ++> [!Tip] +> MDE was formerly called Microsoft Defender Advanced Threat Protection (ATP). Older articles or SAP notes still refer to that name. 
++> [!Tip] +> Microsoft Defender for Server includes Endpoint detection and response (EDR) features that are provided by Microsoft Defender for Endpoint Plan 2. ++#### Microsoft Sentinel for SAP For more information about [SAP certified](https://www.sap.com/dmc/exp/2013_09_adpd/enEN/#/solutions?id=s:33db1376-91ae-4f36-a435-aafa892a88d8) threat monitoring with Microsoft Sentinel for SAP, see the following Microsoft resources: For more information about Azure integration with SAP Business Technology Platfo - [Route Multi-Region Traffic to SAP BTP Services Intelligently with Azure Traffic Manager](https://discovery-center.cloud.sap/missiondetail/3603/) - [Distributed Resiliency of SAP CAP applications using SAP HANA Cloud with Azure Traffic Manager](https://blogs.sap.com/2022/11/12/distributed-resiliency-of-sap-cap-applications-using-sap-hana-cloud-multi-zone-replication-with-azure-traffic-manager/) - [Federate your data from Azure Data Explorer to SAP Data Warehouse Cloud](https://discovery-center.cloud.sap/missiondetail/3433/3473/)-- [Integrate globally available SAP BTP apps with Azure CosmosDB via OData](https://blogs.sap.com/2021/06/11/sap-where-can-i-get-toilet-paper-an-implementation-of-the-geodes-pattern-with-s4-btp-and-azure-cosmosdb/)+- [Integrate globally available SAP BTP apps with Azure Cosmos DB via OData](https://blogs.sap.com/2021/06/11/sap-where-can-i-get-toilet-paper-an-implementation-of-the-geodes-pattern-with-s4-btp-and-azure-cosmosdb/) - [Explore your Azure data sources with SAP Data Warehouse Cloud](https://discovery-center.cloud.sap/missiondetail/3656/3699/) - [Building Applications on SAP BTP with Microsoft Services | OpenSAP course](https://open.sap.com/courses/btpma1) |
sap | Sap Ascs Ha Multi Sid Wsfc Azure Shared Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-ascs-ha-multi-sid-wsfc-azure-shared-disk.md | Following are some of the important points to consider with respect to Azure Pre ## Supported OS versions -Both Windows Server 2016 and Windows Server 2019 are supported (use the latest data center images). +Windows Server 2016, 2019, and later are supported (use the latest data center images). -We strongly recommend using **Windows Server 2019 Datacenter**, as: +We strongly recommend using at least **Windows Server 2019 Datacenter**, as: - Windows 2019 Failover Cluster Service is Azure aware - There is added integration and awareness of Azure Host Maintenance, and an improved experience through monitoring for Azure scheduled events. - It is possible to use Distributed network name (it is the default option). Therefore, there is no need to have a dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on Azure Internal Load Balancer. |
sap | Sap Hana High Availability Netapp Files Red Hat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md | Title: High availability of SAP HANA Scale-up with ANF on RHEL | Microsoft Docs description: Establish high availability of SAP HANA with ANF on Azure virtual machines (VMs). vm-linux Last updated 07/11/2023 - # High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux-[anf-azure-doc]:/azure/azure-netapp-files/ -[anf-avail-matrix]:https://azure.microsoft.com/global-infrastructure/services/?products=netapp®ions=all --[2205917]:https://launchpad.support.sap.com/#/notes/2205917 -[1944799]:https://launchpad.support.sap.com/#/notes/1944799 -[1928533]:https://launchpad.support.sap.com/#/notes/1928533 -[2015553]:https://launchpad.support.sap.com/#/notes/2015553 -[2178632]:https://launchpad.support.sap.com/#/notes/2178632 -[2191498]:https://launchpad.support.sap.com/#/notes/2191498 -[2243692]:https://launchpad.support.sap.com/#/notes/2243692 -[1984787]:https://launchpad.support.sap.com/#/notes/1984787 -[1999351]:https://launchpad.support.sap.com/#/notes/1999351 -[1410736]:https://launchpad.support.sap.com/#/notes/1410736 -[1900823]:https://launchpad.support.sap.com/#/notes/1900823 -[2292690]:https://launchpad.support.sap.com/#/notes/2292690 -[2455582]:https://launchpad.support.sap.com/#/notes/2455582 -[2593824]:https://launchpad.support.sap.com/#/notes/2593824 -[2009879]:https://launchpad.support.sap.com/#/notes/2009879 -[3108302]:https://launchpad.support.sap.com/#/notes/3108302 --[sap-swcenter]:https://support.sap.com/en/my-support/software-downloads.html --[sap-hana-ha]:sap-hana-high-availability.md -[nfs-ha]:high-availability-guide-suse-nfs.md - This article describes how to configure SAP HANA System Replication in Scale-up deployment, when the HANA file systems are mounted via NFS, using Azure NetApp Files (ANF). 
In the example configurations and installation commands, instance number **03**, and HANA System ID **HN1** are used. SAP HANA Replication consists of one primary node and at least one secondary node. When steps in this document are marked with the following prefixes, the meaning is as follows: When steps in this document are marked with the following prefixes, the meaning - **[A]**: The step applies to all nodes - **[1]**: The step applies to node1 only - **[2]**: The step applies to node2 only- + Read the following SAP Notes and papers first: - SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533), which has:- - The list of Azure VM sizes that are supported for the deployment of SAP software. - - Important capacity information for Azure VM sizes. - - The supported SAP software, and operating system (OS) and database combinations. - - The required SAP kernel version for Windows and Linux on Microsoft Azure. + - The list of Azure VM sizes that are supported for the deployment of SAP software. + - Important capacity information for Azure VM sizes. + - The supported SAP software, and operating system (OS) and database combinations. + - The required SAP kernel version for Windows and Linux on Microsoft Azure. - SAP Note [2015553](https://launchpad.support.sap.com/#/notes/2015553) lists prerequisites for SAP-supported SAP software deployments in Azure. - SAP Note [405827](https://launchpad.support.sap.com/#/notes/405827) lists out recommended file system for HANA environment. - SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) has recommended OS settings for Red Hat Enterprise Linux. 
Read the following SAP Notes and papers first: - [Azure Virtual Machines DBMS deployment for SAP on Linux][dbms-guide] - [SAP HANA system replication in pacemaker cluster.](https://access.redhat.com/articles/3004101) - General RHEL documentation- - [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index) - - [High Availability Add-On Administration.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index) - - [High Availability Add-On Reference.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index) - - [Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems are on NFS shares](https://access.redhat.com/solutions/5156571) + - [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index) + - [High Availability Add-On Administration.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index) + - [High Availability Add-On Reference.](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index) + - [Configure SAP HANA System Replication in Scale-Up in a Pacemaker cluster when the HANA filesystems are on NFS shares](https://access.redhat.com/solutions/5156571) - Azure-specific RHEL documentation:- - [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members.](https://access.redhat.com/articles/3131341) - - [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure.](https://access.redhat.com/articles/3252491) - - [Configure SAP HANA scale-up system replication up Pacemaker cluster 
when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571) + - [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members.](https://access.redhat.com/articles/3131341) + - [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure.](https://access.redhat.com/articles/3252491) + - [Configure SAP HANA scale-up system replication up Pacemaker cluster when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571) - [NetApp SAP Applications on Microsoft Azure using Azure NetApp Files](https://www.netapp.com/us/media/tr-4746.pdf) - [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) In order to achieve SAP HANA High Availability of scale-up system on [Azure NetA ![SAP HANA HA Scale-up on ANF](./media/sap-hana-high-availability-rhel/sap-hana-scale-up-netapp-files-red-hat.png) -SAP HANA filesystems are mounted on NFS shares using Azure NetApp Files on each node. File systems /hana/data, /hana/log, and /hana/shared are unique to each node. +SAP HANA filesystems are mounted on NFS shares using Azure NetApp Files on each node. File systems /hana/data, /hana/log, and /hana/shared are unique to each node. Mounted on node1 (**hanadb1**) Mounted on node2 (**hanadb2**) - 10.32.2.4:/**hanadb2**-shared-mnt00001 on /hana/shared > [!NOTE]-> File systems /hana/shared, /hana/data and /hana/log are not shared between the two nodes. Each cluster node has its own, separate file systems. +> File systems /hana/shared, /hana/data and /hana/log are not shared between the two nodes. Each cluster node has its own, separate file systems. The SAP HANA System Replication configuration uses a dedicated virtual hostname and virtual IP addresses. On Azure, a load balancer is required to use a virtual IP address. 
The presented configuration shows a load balancer with: - Front-end IP address: 10.32.0.10 for hn1-db-- Probe Port: 62503 +- Probe Port: 62503 ## Set up the Azure NetApp File infrastructure For information about the availability of Azure NetApp Files by Azure region, se ### Important considerations -As you are creating your Azure NetApp Files volumes for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations). +As you're creating your Azure NetApp Files volumes for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations). ### Sizing of HANA database on Azure NetApp Files The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). -While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files). +While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files). -The configuration in this article is presented with simple Azure NetApp Files Volumes. +The configuration in this article is presented with simple Azure NetApp Files Volumes. 
> [!IMPORTANT]-> For production systems, where performance is a key, we recommend to evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg). +> For production systems, where performance is a key, we recommend to evaluate and consider using [Azure NetApp Files application volume group for SAP HANA](hana-vm-operations-netapp.md#deployment-through-azure-netapp-files-application-volume-group-for-sap-hana-avg). ### Deploy Azure NetApp Files resources The following instructions assume that you've already deployed your [Azure virtu 1. Create a NetApp account in your selected Azure region by following the instructions in [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md). -2. Set up an Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md). +2. Set up Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md). ++ The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* Service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files *Ultra* or *Premium* [service Level](../../azure-netapp-files/azure-netapp-files-service-levels.md). - The HANA architecture presented in this article uses a single Azure NetApp Files capacity pool at the *Ultra* Service level. For HANA workloads on Azure, we recommend using an Azure NetApp Files *Ultra* or *Premium* [service Level](../../azure-netapp-files/azure-netapp-files-service-levels.md). +3. 
3. Delegate a subnet to Azure NetApp Files, as described in the instructions in [Delegate a subnet to Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-delegate-subnet.md).

4. Deploy Azure NetApp Files volumes by following the instructions in [Create an NFS volume for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-create-volumes.md).

   As you're deploying the volumes, be sure to select the NFSv4.1 version. Deploy the volumes in the designated Azure NetApp Files subnet. The IP addresses of the Azure NetApp Files volumes are assigned automatically.

   Keep in mind that the Azure NetApp Files resources and the Azure VMs must be in the same Azure virtual network or in peered Azure virtual networks. For example, hanadb1-data-mnt00001, hanadb1-log-mnt00001, and so on, are the volume names, and nfs://10.32.2.4/hanadb1-data-mnt00001, nfs://10.32.2.4/hanadb1-log-mnt00001, and so on, are the file paths for the Azure NetApp Files volumes.
   On **hanadb1**

   - Volume hanadb1-data-mnt00001 (nfs://10.32.2.4:/hanadb1-data-mnt00001)
   - Volume hanadb1-log-mnt00001 (nfs://10.32.2.4:/hanadb1-log-mnt00001)
   - Volume hanadb1-shared-mnt00001 (nfs://10.32.2.4:/hanadb1-shared-mnt00001)

   On **hanadb2**

   - Volume hanadb2-data-mnt00001 (nfs://10.32.2.4:/hanadb2-data-mnt00001)
   - Volume hanadb2-log-mnt00001 (nfs://10.32.2.4:/hanadb2-log-mnt00001)
   - Volume hanadb2-shared-mnt00001 (nfs://10.32.2.4:/hanadb2-shared-mnt00001)

> [!NOTE]
> All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes.
> If you deployed the /hana/shared volumes as NFSv3 volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.

## Deploy Linux virtual machines via the Azure portal

This document assumes that you've already deployed a resource group, an [Azure virtual network](../../virtual-network/virtual-networks-overview.md), and a subnet.
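Before moving on to the VM deployment, note that the node-specific mount commands used later in this article follow directly from the volume naming shown above. As a sketch (10.32.2.4 and the volume names are the example values from this article), you can generate each node's NFSv4.1 mount commands from its volume list; the loop only prints the commands, so you can review them before running anything:

```bash
#!/usr/bin/env bash
# Generate (but don't run) the NFSv4.1 mount commands for one node's
# Azure NetApp Files volumes. 10.32.2.4 and the hanadb1-* volume names are
# the example values used in this article; adjust for your environment.
anf_ip="10.32.2.4"
node="hanadb1"
opts="rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys"

for kind in shared log data; do
  echo "sudo mount -o ${opts} ${anf_ip}:/${node}-${kind}-mnt00001 /hana/${kind}"
done
```

Changing `node="hanadb2"` produces the matching commands for the second node.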
Deploy virtual machines for SAP HANA. Choose a suitable RHEL image that's supported for your HANA system. You can deploy a VM in any one of the availability options: virtual machine scale set, availability zone, or availability set.

> [!IMPORTANT]
>
> Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.

During VM configuration, we won't be adding any disks, because all our mount points are on NFS shares from Azure NetApp Files. You also have the option to create a new load balancer or select an existing one in the networking section. If you're creating a new load balancer, follow these steps:

1. To set up a standard load balancer, follow these configuration steps:
   1. First, create a front-end IP pool:
      1. Open the load balancer, select **frontend IP pool**, and select **Add**.
      2. Enter the name of the new front-end IP pool (for example, **hana-frontend**).
      3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.32.0.10**).
      4. Select **OK**.
      5. After the new front-end IP pool is created, note the pool IP address.
   2. Create a single back-end pool:
      1. Open the load balancer, select **Backend pools**, and then select **Add**.
      2. Enter the name of the new back-end pool (for example, **hana-backend**).
      3. Select **NIC** for **Backend Pool Configuration**.
      4. Select **Add a virtual machine**.
      5. Select the virtual machines of the HANA cluster.
      6. Select **Add**.
      7. Select **Save**.
   3. Next, create a health probe:
      1. Open the load balancer, select **health probes**, and select **Add**.
      2. Enter the name of the new health probe (for example, **hana-hp**).
      3. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5.
      4. Select **OK**.
   4. Next, create the load-balancing rules:
      1. Open the load balancer, select **load balancing rules**, and select **Add**.
      2. Enter the name of the new load balancer rule (for example, **hana-lb**).
      3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend**, and **hana-hp**).
      4. Increase the idle timeout to 30 minutes.
      5. Select **HA Ports**.
      6. Make sure to **enable Floating IP**.
      7. Select **OK**.
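The health probe port 62503 used above isn't arbitrary: in the SAP on Azure guides, the probe port for the HANA instance is commonly derived as 625 followed by the two-digit instance number (03 in this article). This convention is an assumption inferred from the values shown here; a quick sketch of the derivation:

```bash
#!/usr/bin/env bash
# Derive the load balancer health probe port from the HANA instance number,
# assuming the 625<NR> convention (instance 03 -> 62503). Verify that the
# derived port matches the probe port configured in your environment.
instance_nr=3
probe_port=$(printf '625%02d' "$instance_nr")
echo "$probe_port"   # prints 62503
```

If you later change the HANA instance number, the probe port in the load balancer and in the cluster configuration must be changed to match.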
For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or SAP Note [2388694](https://launchpad.support.sap.com/#/notes/2388694).
> [!IMPORTANT]
> Floating IP isn't supported on a NIC secondary IP configuration in load-balancing scenarios. For details, see [Azure Load Balancer limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need an additional IP address for the VM, deploy a second NIC.

> [!NOTE]
> When VMs without public IP addresses are placed in the back-end pool of an internal (no public IP address) Standard Azure load balancer, there's no outbound internet connectivity unless additional configuration is performed to allow routing to public endpoints. For details on how to achieve outbound connectivity, see [Public endpoint connectivity for virtual machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md).

> [!IMPORTANT]
> Don't enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps causes the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details, see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md). See also SAP Note [2382421](https://launchpad.support.sap.com/#/notes/2382421).

## Mount the Azure NetApp Files volume

1. **[A]** Create mount points for the HANA database volumes.
   ```bash
   sudo mkdir -p /hana/data
   sudo mkdir -p /hana/log
   sudo mkdir -p /hana/shared
   ```

2. **[A]** Verify the NFS domain setting. Make sure that the domain is configured as the default Azure NetApp Files domain, that is, **defaultv4iddomain.com**, and that the mapping is set to **nobody**.

   ```bash
   sudo cat /etc/idmapd.conf
   ```

   Example output

   ```output
   [General]
   Domain = defaultv4iddomain.com
   Nobody-Group = nobody
   ```

   > [!IMPORTANT]
   > Make sure to set the NFS domain in /etc/idmapd.conf on the VM to match the default domain configuration on Azure NetApp Files: **defaultv4iddomain.com**. If there's a mismatch between the domain configuration on the NFS client (that is, the VM) and the NFS server (that is, the Azure NetApp Files configuration), then the permissions for files on Azure NetApp Files volumes that are mounted on the VMs display as nobody.

3. **[1]** Mount the node-specific volumes on node1 (**hanadb1**).

   ```bash
   sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-shared-mnt00001 /hana/shared
   sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-log-mnt00001 /hana/log
   sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb1-data-mnt00001 /hana/data
   ```
4. **[2]** Mount the node-specific volumes on node2 (**hanadb2**).

   ```bash
   sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-shared-mnt00001 /hana/shared
   sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-log-mnt00001 /hana/log
   sudo mount -o rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,lock,_netdev,sec=sys 10.32.2.4:/hanadb2-data-mnt00001 /hana/data
   ```

   ```bash
   sudo nfsstat -m
   ```

   Verify that the flag vers is set to 4.1. Example from hanadb1:

   ```output
   /hana/log from 10.32.2.4:/hanadb1-log-mnt00001
   Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.32.0.4,local_lock=none,addr=10.32.2.4
   ```

6. **[A]** Verify **nfs4_disable_idmapping**. It should be set to **Y**. To create the directory structure where **nfs4_disable_idmapping** is located, run the mount command. You won't be able to manually create the directory under /sys/modules, because access is reserved for the kernel and drivers.

   Check nfs4_disable_idmapping:

   ```bash
   sudo cat /sys/module/nfs/parameters/nfs4_disable_idmapping
   ```

   If you need to set nfs4_disable_idmapping:

   ```bash
   echo "Y" | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping
   ```

   Make the configuration permanent:

   ```bash
   echo "options nfs nfs4_disable_idmapping=Y" | sudo tee -a /etc/modprobe.d/nfs.conf
   ```

   For more information on how to change the **nfs4_disable_idmapping** parameter, see [https://access.redhat.com/solutions/1749883](https://access.redhat.com/solutions/1749883).
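Checking `vers=4.1` by eye across several mounts is error-prone. As an illustrative helper (not part of the official procedure), the following parses `nfsstat -m`-style output and flags any /hana mount that isn't NFSv4.1; here it runs against a captured sample instead of the live system:

```bash
#!/usr/bin/env bash
# Flag /hana NFS mounts whose protocol version is not 4.1 in `nfsstat -m`-style
# output. Runs against an embedded sample here; on a real VM, pipe the live
# output into it instead: sudo nfsstat -m | check_nfs_vers
check_nfs_vers() {
  awk '
    /^\/hana/ { mount = $1 }                       # remember the mount point line
    /Flags:/ && mount != "" {                      # inspect its Flags: line
      ok = ($0 ~ /vers=4\.1/) ? "OK" : "WRONG-VERSION"
      print mount, ok
      mount = ""
    }'
}

check_nfs_vers <<'EOF'
/hana/log from 10.32.2.4:/hanadb1-log-mnt00001
 Flags: rw,noatime,vers=4.1,rsize=262144,wsize=262144,proto=tcp,timeo=600
/hana/data from 10.32.2.4:/hanadb1-data-mnt00001
 Flags: rw,noatime,vers=3,rsize=262144,wsize=262144,proto=tcp,timeo=600
EOF
```

In the sample above, /hana/log reports OK while /hana/data is flagged, because its Flags line shows `vers=3`.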
## SAP HANA installation

1. **[A]** Maintain /etc/hosts on all hosts.

   Insert the following lines in the /etc/hosts file. Change the IP addresses and hostnames to match your environment.

   ```output
   10.32.0.4   hanadb1
   10.32.0.5   hanadb2
   ```

2. **[A]** Prepare the OS for running SAP HANA on Azure NetApp Files with NFS, as described in SAP Note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346). Create configuration file */etc/sysctl.d/91-NetApp-HANA.conf* for the NetApp configuration settings.

   ```bash
   sudo vi /etc/sysctl.d/91-NetApp-HANA.conf
   ```

   Add the following entries in the configuration file.

   ```output
   net.core.rmem_max = 16777216
   net.core.wmem_max = 16777216
   net.ipv4.tcp_sack = 1
   ```

3. **[A]** Create configuration file */etc/sysctl.d/ms-az.conf* with additional optimization settings.

   ```bash
   sudo vi /etc/sysctl.d/ms-az.conf
   ```

   Add the following entries in the configuration file.

   ```output
   net.ipv6.conf.all.disable_ipv6 = 1
   net.ipv4.tcp_max_syn_backlog = 16348
   ```
   > [!TIP]
   > Avoid setting net.ipv4.ip_local_port_range and net.ipv4.ip_local_reserved_ports explicitly in the sysctl configuration files, to allow the SAP Host Agent to manage the port ranges. For more information, see SAP Note [2382421](https://launchpad.support.sap.com/#/notes/2382421).

4. **[A]** Adjust the sunrpc settings, as recommended in SAP Note [3024346 - Linux Kernel Settings for NetApp NFS](https://launchpad.support.sap.com/#/notes/3024346).

   ```bash
   sudo vi /etc/modprobe.d/sunrpc.conf
   ```

   Insert the following line:

   ```output
   options sunrpc tcp_max_slot_table_entries=128
   ```

5. **[A]** Configure RHEL for HANA.

   Configure RHEL as described in the following SAP Notes, based on your RHEL version:

   - [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690)
   - [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782)
   - [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)
   - [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824)
   - [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607)
6. **[A]** Install SAP HANA.

   Starting with HANA 2.0 SPS 01, MDC is the default option. When you install the HANA system, SYSTEMDB and a tenant with the same SID are created together. In some cases, you don't want the default tenant. If you don't want to create the initial tenant along with the installation, you can follow SAP Note [2629711](https://launchpad.support.sap.com/#/notes/2629711).

   Run the **hdblcm** program from the HANA DVD. Enter the following values at the prompt:

   - Choose installation: Enter **1** (for install).
   - Select additional components for installation: Enter **1**.
   - Enter Installation Path [/hana/shared]: Press Enter to accept the default.
   - Enter Local Host Name [..]: Press Enter to accept the default.
   - Do you want to add additional hosts to the system? (y/n) [n]: Enter **n**.
   - Enter SAP HANA System ID: Enter **HN1**.
   - Enter Instance Number [00]: Enter **03**.
   - Select Database Mode / Enter Index [1]: Press Enter to accept the default.
   - Select System Usage / Enter Index [4]: Enter **4** (for custom).
   - Enter Location of Data Volumes [/hana/data]: Press Enter to accept the default.
   - Enter Location of Log Volumes [/hana/log]: Press Enter to accept the default.
   - Restrict maximum memory allocation? [n]: Press Enter to accept the default.
   - Enter Certificate Host Name For Host '...' [...]: Press Enter to accept the default.
   - Enter SAP Host Agent User (sapadm) Password: Enter the host agent user password.
   - Confirm SAP Host Agent User (sapadm) Password: Enter the host agent user password again to confirm.
   - Enter System Administrator (hn1adm) Password: Enter the system administrator password.
   - Confirm System Administrator (hn1adm) Password: Enter the system administrator password again to confirm.
   - Enter System Administrator Home Directory [/usr/sap/HN1/home]: Press Enter to accept the default.
   - Enter System Administrator Login Shell [/bin/sh]: Press Enter to accept the default.
   - Enter System Administrator User ID [1001]: Press Enter to accept the default.
   - Enter ID of User Group (sapsys) [79]: Press Enter to accept the default.
   - Enter Database User (SYSTEM) Password: Enter the database user password.
   - Confirm Database User (SYSTEM) Password: Enter the database user password again to confirm.
   - Restart system after machine reboot? [n]: Press Enter to accept the default.
   - Do you want to continue? (y/n): Validate the summary. Enter **y** to continue.

7. **[A]** Upgrade the SAP Host Agent.

## Configure SAP HANA system replication
Follow the steps in [Configure SAP HANA 2.0 system replication](./sap-hana-high-availability-rhel.md#configure-sap-hana-20-system-replication) to configure SAP HANA system replication.

## Cluster configuration

This section describes the necessary steps for the cluster to operate seamlessly when SAP HANA is installed on NFS shares by using Azure NetApp Files.

### Create a Pacemaker cluster

Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux](./high-av

### Implement the Python system replication hook SAPHanaSR
This is an important step to optimize the integration with the cluster and to improve detection when a cluster failover is needed. We highly recommend that you configure the SAPHanaSR Python hook. Follow the steps in [Implement the Python system replication hook SAPHanaSR](sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr).

### Configure filesystem resources
In this example, each cluster node has its own HANA NFS filesystems: /hana/shared, /hana/data, and /hana/log.

1. **[1]** Put the cluster in maintenance mode.

   ```bash
   sudo pcs property set maintenance-mode=true
   ```

   The `on-fail=fence` attribute is also added to the monitor operation. With this option, if the monitor operation fails on a node, that node is immediately fenced. Without this option, the default behavior is to stop all resources that depend on the failed resource, restart the failed resource, and then start all the resources that depend on it. Not only can this behavior take a long time when an SAPHana resource depends on the failed resource, but it also can fail altogether. The SAPHana resource can't stop successfully if the NFS server holding the HANA executables is inaccessible.

   The suggested timeout values allow the cluster resources to withstand protocol-specific pauses related to NFSv4.1 lease renewals. For more information, see [NFS in NetApp best practices](https://www.netapp.com/media/10720-tr-4067.pdf). The timeouts in the preceding configuration might need to be adapted to the specific SAP setup.
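To make the `on-fail=fence` and timeout discussion concrete, here is an illustrative sketch of what one such Filesystem resource definition can look like for hanadb1's /hana/shared mount, using the resource name `hana_shared1` and group `hanadb1_nfs` referenced elsewhere in this article. The exact attribute values are assumptions; treat the Red Hat solution referenced in this article as authoritative. The snippet only prints the command for review:

```bash
#!/usr/bin/env bash
# Illustrative sketch (not the authoritative definition) of a Pacemaker
# Filesystem resource for hanadb1's /hana/shared NFS mount. The monitor
# operation runs a read/write check (OCF_CHECK_LEVEL=20) and fences the node
# if the monitor fails (on-fail=fence). Printed, not executed.
cmd=$(cat <<'EOF'
sudo pcs resource create hana_shared1 ocf:heartbeat:Filesystem \
  device=10.32.2.4:/hanadb1-shared-mnt00001 directory=/hana/shared fstype=nfs \
  options=rw,nfsvers=4.1,hard,timeo=600,rsize=262144,wsize=262144,noatime,_netdev,sec=sys \
  op monitor interval=20s timeout=120s on-fail=fence OCF_CHECK_LEVEL=20 \
  --group hanadb1_nfs
EOF
)
echo "$cmd"
```

`OCF_CHECK_LEVEL=20` is what makes the monitor attempt an actual write, which is the behavior the read-only remount test later in this article relies on.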
   For workloads that require higher throughput, consider using the `nconnect` mount option, as described in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#nconnect-mount-option). Check whether `nconnect` is [supported by Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md#nconnect) on your Linux release.

4. **[1]** Configure location constraints.

7. **[1]** Create ordering constraints.

   Configure ordering constraints so that a node's attribute resources start only after all of the node's NFS mounts are mounted.

   ```bash
   sudo pcs constraint order hanadb1_nfs then hana_nfs1_active
   sudo pcs constraint order hanadb2_nfs then hana_nfs2_active
   ```

   > [!TIP]
   > If your configuration includes file systems outside of group `hanadb1_nfs` or `hanadb2_nfs`, include the `sequential=false` option so that there are no ordering dependencies among the file systems. All file systems must start before `hana_nfs1_active`, but they don't need to start in any order relative to each other. For more information, see [How do I configure SAP HANA system replication in scale-up in a Pacemaker cluster when the HANA file systems are on NFS shares](https://access.redhat.com/solutions/5156571).

### Configure SAP HANA cluster resources
2. **[1]** Configure constraints between the SAP HANA resources and the NFS mounts.

   Location rule constraints are set so that the SAP HANA resources can run on a node only if all of the node's NFS mounts are mounted.

   ```bash
   sudo pcs constraint location SAPHanaTopology_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true
   ```

   On RHEL 7.x:

   ```bash
   sudo pcs constraint location SAPHana_HN1_03-master rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true
   ```

   On RHEL 8.x/9.x:

   ```bash
   sudo pcs constraint location SAPHana_HN1_03-clone rule score=-INFINITY hana_nfs1_active ne true and hana_nfs2_active ne true
   ```

   Take the cluster out of maintenance mode.

   ```bash
   sudo pcs property set maintenance-mode=false
   ```

   Check the status of the cluster and all the resources.

   > [!NOTE]
   > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.

   ```bash
   sudo pcs status
   ```

   Example output

   ```output
   Online: [ hanadb1 hanadb2 ]
   ```

Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP HANA system replication.

The additional configuration required to manage HANA active/read-enabled system replication in a Red Hat high-availability cluster with a second virtual IP is described in [Configure HANA active/read-enabled system replication in a Pacemaker cluster](./sap-hana-high-availability-rhel.md#configure-hana-activeread-enabled-system-replication-in-pacemaker-cluster).
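The combined `ne true ... and ... ne true` location rules above can look inverted at first sight. Per node, the rule works because each node only ever carries its own `hana_nfsX_active` attribute, and (assuming Pacemaker's behavior that an unset node attribute satisfies `ne true`, which is what the Red Hat solution referenced in this article relies on) the -INFINITY ban therefore applies exactly when the node's own attribute isn't `true`. A small sketch of that logic:

```bash
#!/usr/bin/env bash
# Sketch of the location-rule logic: a node is banned (-INFINITY) when BOTH
# 'hana_nfs1_active ne true' and 'hana_nfs2_active ne true' hold. An unset
# attribute ("" here) is assumed to satisfy 'ne true'.
node_banned() {
  local nfs1_active="$1" nfs2_active="$2"   # "" means the attribute is unset
  [[ "$nfs1_active" != "true" && "$nfs2_active" != "true" ]]
}

# hanadb1 with its mounts up: hana_nfs1_active=true, hana_nfs2_active unset
node_banned "true" "" && echo banned || echo allowed    # prints allowed

# hanadb1 after losing its mounts: hana_nfs1_active=false, hana_nfs2_active unset
node_banned "false" "" && echo banned || echo allowed   # prints banned
```

So a healthy node is never banned by the other node's attribute, and a node that loses its NFS mounts is excluded from running the SAP HANA resources.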
-Before proceeding further, make sure you have fully configured Red Hat High Availability Cluster managing SAP HANA database as described in above segments of the documentation. -+Before proceeding further, make sure you have fully configured Red Hat High Availability Cluster managing SAP HANA database as described in above segments of the documentation. ## Test the cluster setup -This section describes how you can test your setup. +This section describes how you can test your setup. -1. Before you start a test, make sure that Pacemaker does not have any failed action (via pcs status), there are no unexpected location constraints (for example leftovers of a migration test) and that HANA system replication is sync state, for example with systemReplicationStatus: +1. Before you start a test, make sure that Pacemaker doesn't have any failed action (via pcs status), there are no unexpected location constraints (for example leftovers of a migration test) and that HANA system replication is sync state, for example with systemReplicationStatus: ```bash sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" This section describes how you can test your setup. 2. Verify the cluster configuration for a failure scenario when a node loses access to the NFS share (/hana/shared) The SAP HANA resource agents depend on binaries, stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented scenario. - It is difficult to simulate a failure, where one of the servers loses access to the NFS share. A test that can be performed is to re-mount the file system as read-only. - This approach validates that the cluster will be able to failover, if access to `/hana/shared` is lost on the active node. -+ It's difficult to simulate a failure, where one of the servers loses access to the NFS share. A test that can be performed is to remount the file system as read-only. 
+ This approach validates that the cluster will be able to fail over, if access to `/hana/shared` is lost on the active node. - **Expected Result:** On making `/hana/shared` as read-only file system, the `OCF_CHECK_LEVEL` attribute of the resource `hana_shared1` which performs read/write operation on file system will fail as it is not able to write anything on the file system and will perform HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares. + **Expected Result:** On making `/hana/shared` as read-only file system, the `OCF_CHECK_LEVEL` attribute of the resource `hana_shared1` which performs read/write operation on file system fails as it isn't able to write anything on the file system and will perform HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares. Resource state before starting the test: ```bash sudo pcs status ```+ Example output+ ```output Full list of resources: rsc_hdb_azr_agt (stonith:fence_azure_arm): Started hanadb1 This section describes how you can test your setup. ```bash sudo pcs status ```+ Example output+ ```output Full list of resources: This section describes how you can test your setup. ## Next steps -* [Azure Virtual Machines planning and implementation for SAP][planning-guide] -* [Azure Virtual Machines deployment for SAP][deployment-guide] -* [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] -* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) +- [Azure Virtual Machines planning and implementation for SAP][planning-guide] +- [Azure Virtual Machines deployment for SAP][deployment-guide] +- [Azure Virtual Machines DBMS deployment for SAP][dbms-guide] +- [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md) |
sap | Sap Hana High Availability Netapp Files Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-suse.md | Read the following SAP Notes and papers first: - [Azure Virtual Machines planning and implementation for SAP on Linux](./planning-guide.md) >[!NOTE]-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ## Overview |
sap | Sap Hana High Availability Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md | -[2205917]:https://launchpad.support.sap.com/#/notes/2205917 -[1944799]:https://launchpad.support.sap.com/#/notes/1944799 [1928533]:https://launchpad.support.sap.com/#/notes/1928533 [2015553]:https://launchpad.support.sap.com/#/notes/2015553 [2178632]:https://launchpad.support.sap.com/#/notes/2178632 [2191498]:https://launchpad.support.sap.com/#/notes/2191498 [2243692]:https://launchpad.support.sap.com/#/notes/2243692-[1984787]:https://launchpad.support.sap.com/#/notes/1984787 [1999351]:https://launchpad.support.sap.com/#/notes/1999351 [2388694]:https://launchpad.support.sap.com/#/notes/2388694-[2292690]:https://launchpad.support.sap.com/#/notes/2292690 -[2455582]:https://launchpad.support.sap.com/#/notes/2455582 [2002167]:https://launchpad.support.sap.com/#/notes/2002167 [2009879]:https://launchpad.support.sap.com/#/notes/2009879 [3108302]:https://launchpad.support.sap.com/#/notes/3108302 [sap-swcenter]:https://launchpad.support.sap.com/#/softwarecenter-[template-multisid-db]:https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fapplication-workloads%2Fsap%2Fsap-3-tier-marketplace-image-multi-sid-db-md%2Fazuredeploy.json For on-premises development, you can use either HANA System Replication or use shared storage to establish high availability for SAP HANA. On Azure virtual machines (VMs), HANA System Replication on Azure is currently the only supported high availability function. SAP HANA System Replication setup uses a dedicated virtual hostname and virtual The Azure Marketplace contains images qualified for SAP HANA with the High Availability add-on, which you can use to deploy new virtual machines using various versions of Red Hat. 
-### Deploy with a template --You can use one of the quickstart templates that are on GitHub to deploy all the required resources. The template deploys the virtual machines, the load balancer, the availability set, and so on. -To deploy the template, follow these steps: --1. Open the [database template][template-multisid-db] on the Azure portal. -1. Enter the following parameters: - * **Sap System ID**: Enter the SAP system ID of the SAP system you want to install. The ID is used as a prefix for the resources that are deployed. - * **Os Type**: Select one of the Linux distributions. For this example, select **RHEL 7**. - * **Db Type**: Select **HANA**. - * **Sap System Size**: Enter the number of SAPS that the new system is going to provide. If you're not sure how many SAPS the system requires, ask your SAP Technology Partner or System Integrator. - * **System Availability**: Select **HA**. - * **Admin Username, Admin Password or SSH key**: A new user is created that can be used to sign in to the machine. - * **Subnet ID**: If you want to deploy the VM into an existing VNet where you have a subnet defined the VM should be assigned to, name the ID of that specific subnet. The ID usually looks like **/subscriptions/\<subscription ID>/resourceGroups/\<resource group name>/providers/Microsoft.Network/virtualNetworks/\<virtual network name>/subnets/\<subnet name>**. Leave empty, if you want to create a new virtual network --### Manual deployment --1. Create a resource group. -1. Create a virtual network. -1. Choose a [suitable deployment type](./sap-high-availability-architecture-scenarios.md#comparison-of-different-deployment-types-for-sap-workload) for SAP virtual machines. Typically a virtual machine scale set with flexible orchestration. -1. Create a load balancer (internal). We recommend [standard load balancer](../../load-balancer/load-balancer-overview.md). - * Select the virtual network created in step 2. -1. Create virtual machine 1. 
- Use a properly supported version of Red Hat for SAP + High Availability, supported for your version of SAP HANA. This page will use the image [Red Hat Enterprise Linux- SAP, HA, Update Services](https://portal.azure.com/#create/redhat.rhel-sap-ha). - Select the availability set created in step 3. -1. Create virtual machine 2. - Use a properly supported version of Red Hat for SAP + High Availability, supported for your version of SAP HANA. This page will use the image [Red Hat Enterprise Linux- SAP, HA, Update Services](https://portal.azure.com/#create/redhat.rhel-sap-ha). - Select the availability set created in step 3. -1. Add data disks. +### Deploy Linux VMs manually via Azure portal ++This document assumes that you've already deployed a resource group, [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md), and subnet. ++Deploy virtual machines for SAP HANA. Choose a suitable RHEL image that is supported for the HANA system. You can deploy the VMs in any one of the availability options: scale set, availability zone, or availability set. > [!IMPORTANT]-> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC. +> +> Make sure that the OS you select is SAP certified for SAP HANA on the specific VM types that you plan to use in your deployment. You can look up SAP HANA-certified VM types and their OS releases in [SAP HANA Certified IaaS Platforms](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24;iaas;v:125;v:105;v:99;v:120). Make sure that you look at the details of the VM type to get the complete list of SAP HANA-supported OS releases for the specific VM type.
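As a hedged sketch (not part of the original article), deploying one of the HANA nodes with the Azure CLI might look like the following. The resource group, network names, VM size, and image URN are illustrative placeholders; substitute an SAP HANA-certified VM size and OS release for your scenario:

```bash
# Hypothetical example: deploy one HANA node from a RHEL for SAP with HA image.
# MyResourceGroup, MyVNet, MySubnet, the size, and the image URN are placeholders -
# verify the certified VM size / OS combination before using.
az vm create \
  --resource-group MyResourceGroup \
  --name hn1-db-0 \
  --image RedHat:RHEL-SAP-HA:8_4:latest \
  --size Standard_M64s \
  --vnet-name MyVNet --subnet MySubnet \
  --admin-username azureadmin \
  --ssh-key-values ~/.ssh/id_rsa.pub
```

Repeat for the second node (for example, **hn1-db-1**), placing both VMs in the same availability option you chose above.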
-> [!Note] -> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). +During VM configuration, you can create or select an existing load balancer in the networking section. If you're creating a new load balancer, follow the steps below - -To set up standard load balancer, follow these configuration steps: 1. First, create a front-end IP pool: 1. Open the load balancer, select **frontend IP pool**, and select **Add**.- 1. Enter the name of the new front-end IP pool (for example, **hana-frontend**). - 1. Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**). - 1. Select **OK**. - 1. After the new front-end IP pool is created, note the pool IP address. + 2. Enter the name of the new front-end IP pool (for example, **hana-frontend**). + 3. Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.13**). + 4. Select **OK**. + 5. After the new front-end IP pool is created, note the pool IP address. ++ 2. Create a single back-end pool: - 1. Create a single back-end pool: - 1. Open the load balancer, select **Backend pools**, and then select **Add**.- 1. Enter the name of the new back-end pool (for example, **hana-backend**). - 2. Select **NIC** for Backend Pool Configuration. - 1. Select **Add a virtual machine**. - 1. Select the virtual machines of the HANA cluster. - 1. Select **Add**. - 2. Select **Save**.
Select **Add a virtual machine**. + 5. Select the virtual machines of the HANA cluster. + 6. Select **Add**. + 7. Select **Save**. - 1. Next, create a health probe: + 3. Next, create a health probe: 1. Open the load balancer, select **health probes**, and select **Add**.- 1. Enter the name of the new health probe (for example, **hana-hp**). - 1. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5. - 1. Select **OK**. + 2. Enter the name of the new health probe (for example, **hana-hp**). + 3. Select **TCP** as the protocol and port 625**03**. Keep the **Interval** value set to 5. + 4. Select **OK**. ++ 4. Next, create the load-balancing rules: - 1. Next, create the load-balancing rules: - 1. Open the load balancer, select **load balancing rules**, and select **Add**.- 1. Enter the name of the new load balancer rule (for example, **hana-lb**). - 1. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). - 2. Increase idle timeout to 30 minutes - 1. Select **HA Ports**. - 1. Increase the **idle timeout** to 30 minutes. - 1. Make sure to **enable Floating IP**. - 1. Select **OK**. + 2. Enter the name of the new load balancer rule (for example, **hana-lb**). + 3. Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-frontend**, **hana-backend** and **hana-hp**). + 4. Increase idle timeout to 30 minutes + 5. Select **HA Ports**. + 6. Increase the **idle timeout** to 30 minutes. + 7. Make sure to **enable Floating IP**. + 8. Select **OK**. 
For more information about the required ports for SAP HANA, read the chapter [Connections to Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6/latest/en-US/7a9343c9f2a2436faa3cfdb5ca00c052.html) in the [SAP HANA Tenant Databases](https://help.sap.com/viewer/78209c1d3a9b41cd8624338e42a12bf6) guide or [SAP Note 2388694][2388694]. +> [!IMPORTANT] +> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC. ++> [!NOTE] +> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](./high-availability-guide-standard-load-balancer-outbound-connections.md). + > [!IMPORTANT] > Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../load-balancer/load-balancer-custom-probe-overview.md).-> See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). +> See also SAP note [2382421](https://launchpad.support.sap.com/#/notes/2382421). 
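The portal steps above also map onto the Azure CLI. The following is a minimal sketch, assuming the example names from the text (**hana-frontend**, **hana-backend**, **hana-hp**, **hana-lb**) and placeholder resource group, network, and load balancer names; the HA Ports setting corresponds to protocol `All` with frontend and backend port `0`:

```bash
# Hedged Azure CLI sketch of the load balancer configuration described above.
# MyResourceGroup, MyVNet, MySubnet, and hana-ilb are placeholders.
az network lb frontend-ip create -g MyResourceGroup --lb-name hana-ilb \
  -n hana-frontend --vnet-name MyVNet --subnet MySubnet --private-ip-address 10.0.0.13
az network lb address-pool create -g MyResourceGroup --lb-name hana-ilb -n hana-backend
az network lb probe create -g MyResourceGroup --lb-name hana-ilb \
  -n hana-hp --protocol Tcp --port 62503 --interval 5
az network lb rule create -g MyResourceGroup --lb-name hana-ilb -n hana-lb \
  --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name hana-frontend --backend-pool-name hana-backend \
  --probe-name hana-hp --floating-ip true --idle-timeout 30
```

Adding the VMs' NICs to the back-end pool is still required, per the portal steps above, before the probe can report the nodes healthy.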
## Install SAP HANA The steps in this section use the following prefixes: ```output /dev/disk/azure/scsi1/lun0 /dev/disk/azure/scsi1/lun1 /dev/disk/azure/scsi1/lun2 /dev/disk/azure/scsi1/lun3 ```- + Create physical volumes for all of the disks that you want to use: ```bash The steps in this section use the following prefixes: sudo mkfs.xfs /dev/vg_hana_log_HN1/hana_log sudo mkfs.xfs /dev/vg_hana_shared_HN1/hana_shared ```- - Do not mount the directories by issuing mount commands, rather enter the configurations into the fstab and issue a final `mount -a` to validate the syntax. Start by creating the mount directories for each volume: ++ Don't mount the directories by issuing mount commands, rather enter the configurations into the fstab and issue a final `mount -a` to validate the syntax. Start by creating the mount directories for each volume: ```bash sudo mkdir -p /hana/data sudo mkdir -p /hana/log sudo mkdir -p /hana/shared ```- + Next create `fstab` entries for the three logical volumes by inserting the following lines in the `/etc/fstab` file: /dev/mapper/vg_hana_data_HN1-hana_data /hana/data xfs defaults,nofail 0 2 The steps in this section use the following prefixes: 10.0.0.5 hn1-db-0 10.0.0.6 hn1-db-1 - 1. 
**[A]** RHEL for HANA configuration Configure RHEL as described in the following notes:- - [2447641 - Additional packages required for installing SAP HANA SPS 12 on RHEL 7.X](https://access.redhat.com/solutions/2447641) - - [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690) - - [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782) - - [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582) - - [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824) - - [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607) + * [2447641 - Additional packages required for installing SAP HANA SPS 12 on RHEL 7.X](https://access.redhat.com/solutions/2447641) + * [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690) + * [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782) + * [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582) + * [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824) + * [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607) 1. **[A]** Install the SAP HANA The steps in this section use the following prefixes: ``` 1. **[2]** Configure System Replication on the second node:- + Register the second node to start the system replication. 
Run the following command as <hanasid\>adm: ```bash This is important step to optimize the integration with the cluster and improve provider = SAPHanaSR path = /hana/shared/myHooks execution_order = 1- + [trace] ha_dr_saphanasr = info ``` This is important step to optimize the integration with the cluster and improve # 2021-04-12 21:37:04.898680 ha_dr_SAPHanaSR SOK ``` -For more details on the implementation of the SAP HANA system replication hook see [Enable the SAP HA/DR provider hook](https://access.redhat.com/articles/3004101#enable-srhook). +For more details on the implementation of the SAP HANA system replication hook, see [Enable the SAP HA/DR provider hook](https://access.redhat.com/articles/3004101#enable-srhook). ## Create SAP HANA cluster resources Create the HANA topology. Run the following commands on one of the Pacemaker cluster nodes. Throughout these instructions, be sure to substitute your instance number, HANA system ID, IP addresses, and system names, where appropriate: ```bash- sudo pcs property set maintenance-mode=true +sudo pcs property set maintenance-mode=true - sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \ - op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \ - clone clone-max=2 clone-node-max=1 interleave=true +sudo pcs resource create SAPHanaTopology_HN1_03 SAPHanaTopology SID=HN1 InstanceNumber=03 \ + op start timeout=600 op stop timeout=300 op monitor interval=10 timeout=600 \ + clone clone-max=2 clone-node-max=1 interleave=true ``` Next, create the HANA resources. > [!NOTE]-> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
If building a cluster on **RHEL 7.x**, use the following commands: sudo pcs resource defaults migration-threshold=5000 sudo pcs property set maintenance-mode=false ``` -If building a cluster on **RHEL 8.x**, use the following commands: +If building a cluster on **RHEL 8.x/9.x**, use the following commands: ```bash sudo pcs resource create SAPHana_HN1_03 SAPHana SID=HN1 InstanceNumber=03 PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \ sudo pcs resource defaults update migration-threshold=5000 sudo pcs property set maintenance-mode=false ``` +To configure priority-fencing-delay for SAP HANA (applicable only as of pacemaker-2.0.4-6.el8 or higher), the following commands need to be executed. ++> [!NOTE] +> If you have a two-node cluster, you have the option to configure the priority-fencing-delay cluster property. This property introduces an additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Pacemaker cluster properties](https://access.redhat.com/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_controlling-cluster-behavior-configuring-and-managing-high-availability-clusters). +> +> The property priority-fencing-delay is applicable for pacemaker-2.0.4-6.el8 version or higher. If you are setting up priority-fencing-delay on an existing cluster, make sure to unset the `pcmk_delay_max` option in the fencing device. ++```bash +sudo pcs property set maintenance-mode=true ++sudo pcs resource defaults update priority=1 +sudo pcs resource update SAPHana_HN1_03-clone meta priority=10 ++sudo pcs property set priority-fencing-delay=15s ++sudo pcs property set maintenance-mode=false +``` + > [!IMPORTANT]-> It's a good idea to set `AUTOMATED_REGISTER` to `false`, while you're performing failover tests, to prevent a failed primary instance to automatically register as secondary.
After testing, as a best practice, set `AUTOMATED_REGISTER` to `true`, so that after takeover, system replication can resume automatically. +> It's a good idea to set `AUTOMATED_REGISTER` to `false`, while you're performing failover tests, to prevent a failed primary instance to automatically register as secondary. After testing, as a best practice, set `AUTOMATED_REGISTER` to `true`, so that after takeover, system replication can resume automatically. Make sure that the cluster status is ok and that all of the resources are started. It's not important on which node the resources are running. > [!NOTE] > The timeouts in the above configuration are just examples and may need to be adapted to the specific HANA setup. For instance, you may need to increase the start timeout, if it takes longer to start the SAP HANA database. -Use the command `sudo pcs status` to check the state of the cluster resources just created: +Use the command `sudo pcs status` to check the state of the cluster resources created: ```output # Online: [ hn1-db-0 hn1-db-1 ] Before proceeding further, make sure you have fully configured Red Hat High Avai ### Additional setup in Azure load balancer for active/read-enabled setup -To proceed with additional steps on provisioning second virtual IP, make sure you have configured Azure Load Balancer as described in [Manual Deployment](#manual-deployment) section. +To proceed with additional steps on provisioning second virtual IP, make sure you have configured Azure Load Balancer as described in [Deploy Linux VMs manually via Azure portal](#deploy-linux-vms-manually-via-azure-portal) section. 1. For **standard** load balancer, follow below additional steps on the same load balancer that you had created in earlier section. - a. Create a second front-end IP pool: + a. Create a second front-end IP pool: - - Open the load balancer, select **frontend IP pool**, and select **Add**. 
- - Enter the name of the second front-end IP pool (for example, **hana-secondaryIP**). - - Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.14**). - - Select **OK**. - - After the new front-end IP pool is created, note the pool IP address. + * Open the load balancer, select **frontend IP pool**, and select **Add**. + * Enter the name of the second front-end IP pool (for example, **hana-secondaryIP**). + * Set the **Assignment** to **Static** and enter the IP address (for example, **10.0.0.14**). + * Select **OK**. + * After the new front-end IP pool is created, note the pool IP address. b. Next, create a health probe: - - Open the load balancer, select **health probes**, and select **Add**. - - Enter the name of the new health probe (for example, **hana-secondaryhp**). - - Select **TCP** as the protocol and port **62603**. Keep the **Interval** value set to 5, and the **Unhealthy threshold** value set to 2. - - Select **OK**. + * Open the load balancer, select **health probes**, and select **Add**. + * Enter the name of the new health probe (for example, **hana-secondaryhp**). + * Select **TCP** as the protocol and port **62603**. Keep the **Interval** value set to 5, and the **Unhealthy threshold** value set to 2. + * Select **OK**. c. Next, create the load-balancing rules: - - Open the load balancer, select **load balancing rules**, and select **Add**. - - Enter the name of the new load balancer rule (for example, **hana-secondarylb**). - - Select the front-end IP address , the back-end pool, and the health probe that you created earlier (for example, **hana-secondaryIP**, **hana-backend** and **hana-secondaryhp**). - - Select **HA Ports**. - - Make sure to **enable Floating IP**. - - Select **OK**. + * Open the load balancer, select **load balancing rules**, and select **Add**. + * Enter the name of the new load balancer rule (for example, **hana-secondarylb**). 
+ * Select the front-end IP address, the back-end pool, and the health probe that you created earlier (for example, **hana-secondaryIP**, **hana-backend** and **hana-secondaryhp**). + * Select **HA Ports**. + * Make sure to **enable Floating IP**. + * Select **OK**. ### Configure HANA active/read enabled system replication -The steps to configure HANA system replication are described in [Configure SAP HANA 2.0 System Replication](#configure-sap-hana-20-system-replication) section. If you are deploying read-enabled secondary scenario, while configuring system replication on the second node, execute following command as **hanasid**adm: +The steps to configure HANA system replication are described in [Configure SAP HANA 2.0 System Replication](#configure-sap-hana-20-system-replication) section. If you're deploying read-enabled secondary scenario, while configuring system replication on the second node, execute following command as **hanasid**adm: -``` +```bash sapcontrol -nr 03 -function StopWait 600 10 hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMode=sync --name=SITE2 --operationMode=logreplay_readaccess hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMo The second virtual IP and the appropriate colocation constraint can be configured with the following commands: -``` +```bash pcs property set maintenance-mode=true pcs resource create secvip_HN1_03 ocf:heartbeat:IPaddr2 ip="10.40.0.16" pcs constraint location g_secip_HN1_03 rule score=4000 hana_hn1_sync_state eq PR pcs property set maintenance-mode=false ```-Make sure that the cluster status is ok and that all of the resources are started. The second virtual IP will run on the secondary site along with SAPHana secondary resource. ++Make sure that the cluster status is ok and that all of the resources are started. The second virtual IP runs on the secondary site along with SAPHana secondary resource. 
```output sudo pcs status In next section, you can find the typical set of failover tests to execute. Be aware of the second virtual IP behavior, while testing a HANA cluster configured with read-enabled secondary: -1. When you migrate **SAPHana_HN1_03** cluster resource to secondary site **hn1-db-1**, the second virtual IP will continue to run on the same site **hn1-db-1**. If you have set AUTOMATED_REGISTER="true" for the resource and HANA system replication is registered automatically on **hn1-db-0**, then your second virtual IP will also move to **hn1-db-0**. +1. When you migrate **SAPHana_HN1_03** cluster resource to secondary site **hn1-db-1**, the second virtual IP continues to run on the same site **hn1-db-1**. If you have set AUTOMATED_REGISTER="true" for the resource and HANA system replication is registered automatically on **hn1-db-0**, then your second virtual IP will also move to **hn1-db-0**. -2. On testing server crash, second virtual IP resources (**secvip_HN1_03**) and Azure load balancer port resource (**secnc_HN1_03**) will run on primary server alongside the primary virtual IP resources. So, till the time secondary server is down, application that are connected to read-enabled HANA database will connect to primary HANA database. The behavior is expected as you do not want applications that are connected to read-enabled HANA database to be inaccessible till the time secondary server is unavailable. +2. On testing server crash, second virtual IP resources (**secvip_HN1_03**) and Azure load balancer port resource (**secnc_HN1_03**) will run on primary server alongside the primary virtual IP resources. So, till the time secondary server is down, application that are connected to read-enabled HANA database connects to primary HANA database. The behavior is expected as you don't want applications that are connected to read-enabled HANA database to be inaccessible till the time secondary server is unavailable. 3. 
During failover and fallback of second virtual IP address, it may happen that the existing connections on applications that use second virtual IP to connect to the HANA database may get interrupted. The setup maximizes the time that the second virtual IP resource will be assigne ## Test the cluster setup -This section describes how you can test your setup. Before you start a test, make sure that Pacemaker does not have any failed action (via pcs status), there are no unexpected location constraints (for example leftovers of a migration test) and that HANA is sync state, for example with systemReplicationStatus: +This section describes how you can test your setup. Before you start a test, make sure that Pacemaker doesn't have any failed action (via pcs status), there are no unexpected location constraints (for example leftovers of a migration test) and that HANA is sync state, for example with systemReplicationStatus: ```bash sudo su - hn1adm -c "python /usr/sap/HN1/HDB03/exe/python_support/systemReplicationStatus.py" Resource Group: g_ip_HN1_03 You can migrate the SAP HANA master node by executing the following command as root: -#### On RHEL 7.x - ```bash+# On RHEL 7.x pcs resource move SAPHana_HN1_03-master-``` --#### On RHEL 8.x --```bash +# On RHEL 8.x pcs resource move SAPHana_HN1_03-clone --master ``` The SAP HANA resource on hn1-db-0 is stopped. In this case, configure the HANA i ```bash sapcontrol -nr 03 -function StopWait 600 10-hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMod -e=sync --name=SITE1 +hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMode=sync --name=SITE1 ``` The migration creates location constraints that need to be deleted again. 
Do the following as root, or via sudo: Resource Group: g_ip_HN1_03 vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1 ``` +### Blocking network communication ++Resource state before starting the test: ++```output +Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03] + Started: [ hn1-db-0 hn1-db-1 ] +Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03] + Masters: [ hn1-db-1 ] + Slaves: [ hn1-db-0 ] +Resource Group: g_ip_HN1_03 + nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-1 + vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1 +``` ++Execute firewall rule to block the communication on one of the nodes. ++```bash +# Execute iptable rule on hn1-db-1 (10.0.0.6) to block the incoming and outgoing traffic to hn1-db-0 (10.0.0.5) +iptables -A INPUT -s 10.0.0.5 -j DROP; iptables -A OUTPUT -d 10.0.0.5 -j DROP +``` ++When cluster nodes can't communicate to each other, there's a risk of a split-brain scenario. In such situations, cluster nodes will try to simultaneously fence each other, resulting in fence race. To avoid such situation, it's recommended to set [priority-fencing-delay](#create-sap-hana-cluster-resources) property in cluster configuration (applicable only for [pacemaker-2.0.4-6.el8](https://access.redhat.com/errata/RHEA-2020:4804) or higher). ++By enabling priority-fencing-delay property, the cluster introduces an additional delay in the fencing action specifically on the node hosting HANA master resource, allowing the node to win the fence race. ++Execute below command to delete the firewall rule. ++```bash +# If the iptables rule set on the server gets reset after a reboot, the rules will be cleared out. In case they have not been reset, please proceed to remove the iptables rule using the following command. +iptables -D INPUT -s 10.0.0.5 -j DROP; iptables -D OUTPUT -d 10.0.0.5 -j DROP +``` + ### Test the Azure fencing agent > [!NOTE]-> This article contains references to the term *slave*, a term that Microsoft no longer uses. 
When the term is removed from the software, we’ll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article. Resource state before starting the test: hdbnsutil -sr_register --remoteHost=hn1-db-0 --remoteInstance=03 --replicationMo Switch back to root and clean up the failed state -#### On RHEL 7.x - ```bash+# On RHEL 7.x pcs resource cleanup SAPHana_HN1_03-master-``` --#### On RHEL 8.x -```bash +# On RHEL 8.x pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned> ``` hdbnsutil -sr_register --remoteHost=hn1-db-1 --remoteInstance=03 --replicationMo ``` Then as root --#### On RHEL 7.x ```bash+# On RHEL 7.x pcs resource cleanup SAPHana_HN1_03-master-``` --#### On RHEL 8.x --```bash +# On RHEL 8.x pcs resource cleanup SAPHana_HN1_03 node=<hostname on which the resource needs to be cleaned> ``` Resource Group: g_ip_HN1_03 vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1 ``` -### Test a manual failover --Resource state before starting the test: --```output -Clone Set: SAPHanaTopology_HN1_03-clone [SAPHanaTopology_HN1_03] - Started: [ hn1-db-0 hn1-db-1 ] -Master/Slave Set: SAPHana_HN1_03-master [SAPHana_HN1_03] - Masters: [ hn1-db-0 ] - Slaves: [ hn1-db-1 ] -Resource Group: g_ip_HN1_03 - nc_HN1_03 (ocf::heartbeat:azure-lb): Started hn1-db-0 - vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-0 -``` --You can test a manual failover by stopping the cluster on the hn1-db-0 node, as root: --```bash -pcs cluster stop -``` -- ## Next steps * [Azure Virtual Machines planning and implementation for SAP][planning-guide] |
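The fence race mentioned in this row can be sketched as a toy model — illustrative Python with hypothetical node names, not Pacemaker code. The idea: priority-fencing-delay postpones any fencing action *targeting* the node that hosts the HANA master, so the master's own fencing action fires first and it survives.

```python
# Toy model of a two-node fence race (illustration only, not Pacemaker code).
# Each node schedules a fencing action against its peer; the action aimed at
# the HANA master node is delayed by priority-fencing-delay seconds, so the
# master's own action lands first and the master survives.
def fence_race_survivor(nodes, master_node, priority_fencing_delay):
    a, b = nodes
    peer = {a: b, b: a}
    fire_time = {
        node: (priority_fencing_delay if peer[node] == master_node else 0.0)
        for node in nodes
    }
    # the node whose fencing action fires first fences its peer and survives
    return min(fire_time, key=fire_time.get)

print(fence_race_survivor(["hn1-db-0", "hn1-db-1"], "hn1-db-1", 15.0))
# -> hn1-db-1: the node hosting the master wins the race
```

With a delay of zero, both actions fire simultaneously and the outcome is a true race — which is the split-brain hazard the property is meant to avoid.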
sap | Sap Hana High Availability Scale Out Hsr Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-rhel.md | Now you're ready to create the cluster resources: 1. Create the HANA instance resource. > [!NOTE]- > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article. + > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we’ll remove it from this article. If you're building a RHEL **7.x** cluster, use the following commands: ```bash |
sap | Sap Hana High Availability Scale Out Hsr Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-scale-out-hsr-suse.md | You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va 3. Next, create the HANA instance resource. > [!NOTE]- > This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. + > This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. ```bash sudo crm configure primitive rsc_SAPHana_HN1_HDB03 ocf:suse:SAPHanaController \ You can adjust the behavior of susChkSrv with parameter action_on_lost. Valid va ## Test SAP HANA failover > [!NOTE]-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. 1. Before you start a test, check the cluster and SAP HANA system replication status. |
sap | Sap Hana High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability.md | Next, create the HANA resources: > For existing Pacemaker clusters, if your configuration was already changed to use `socat` as described in [Azure Load Balancer Detection Hardening](https://www.suse.com/support/kb/doc/?id=7024128), you don't need to immediately switch to the `azure-lb` resource agent. > [!NOTE]-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we'll remove them from this article. ```bash # Replace <placeholders> with your instance number, HANA system ID, and the front-end IP address of the Azure load balancer. |
sap | Sap Hana Scale Out Standby Netapp Files Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md | This article describes how to deploy a highly available SAP HANA system in a sca In the example configurations, installation commands, and so on, the HANA instance is **03** and the HANA system ID is **HN1**. The examples are based on HANA 2.0 SP4 and Red Hat Enterprise Linux for SAP 7.6. > [!NOTE]-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. Before you begin, refer to the following SAP notes and papers: |
sap | Sap Hana Scale Out Standby Netapp Files Suse | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md | In this example for deploying SAP HANA in scale-out configuration with standby n ## Test SAP HANA failover > [!NOTE]-> This article contains references to the terms *master* and *slave*, terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. +> This article contains references to terms that Microsoft no longer uses. When these terms are removed from the software, we’ll remove them from this article. 1. Simulate a node crash on an SAP HANA worker node. Do the following: |
sap | Sap High Availability Guide Wsfc Shared Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-high-availability-guide-wsfc-shared-disk.md | Following are some of the important points to consider for Azure Premium shared ### Supported OS versions -Both Windows Servers 2016 and 2019 are supported (use the latest data center images). +Windows Server 2016, 2019, and higher are supported (use the latest data center images). -We strongly recommend using **Windows Server 2019 Datacenter**, as: +We strongly recommend using at least **Windows Server 2019 Datacenter**, as: - Windows 2019 Failover Cluster Service is Azure aware - There is added integration and awareness of Azure Host Maintenance and improved experience by monitoring for Azure schedule events. - It is possible to use Distributed network name (it is the default option). Therefore, there is no need to have a dedicated IP address for the cluster network name. Also, there is no need to configure this IP address on Azure Internal Load Balancer. |
search | Cognitive Search Common Errors Warnings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-common-errors-warnings.md | |
search | Cognitive Search Defining Skillset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md | Title: Create a skillset -description: A skillset defines content extraction, natural language processing, and image analysis steps. A skillset is attached to an indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search. +description: A skillset defines data extraction, natural language processing, and image analysis steps. A skillset is attached to an indexer. It's used to enrich and extract information from source data for use in Azure Cognitive Search. Previously updated : 08/08/2023 Last updated : 07/14/2022 # Create a skillset in Azure Cognitive Search |
search | Cognitive Search Quickstart Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-quickstart-blob.md | You're now ready to move on to the Import data wizard. :::image type="content" source="medi.png" alt-text="Screenshot of the Import data command." border="true"::: -### Step 1 - Create a data source +### Step 1: Create a data source 1. In **Connect to your data**, choose **Azure Blob Storage**. If you get "Error detecting index schema from data source", the indexer that's p | Resource is behind an IP firewall | [Create an inbound rule for Search and for Azure portal](search-indexer-howto-access-ip-restricted.md) | | Resource requires a private endpoint connection | [Connect over a private endpoint](search-indexer-howto-access-private.md) | -### Step 2 - Add cognitive skills +### Step 2: Add cognitive skills Next, configure AI enrichment to invoke OCR, image analysis, and natural language processing. Next, configure AI enrichment to invoke OCR, image analysis, and natural languag Continue to the next page. -### Step 3 - Configure the index +### Step 3: Configure the index An index contains your searchable content and the **Import data** wizard can usually create the schema for you by sampling the data source. In this step, review the generated schema and potentially revise any settings. Below is the default schema created for the demo Blob data set. Marking a field as **Retrievable** doesn't mean that the field *must* be present Continue to the next page. -### Step 4 - Configure the indexer +### Step 4: Configure the indexer The indexer drives the indexing process. It specifies the data source name, a target index, and frequency of execution. The **Import data** wizard creates several objects, including an indexer that you can reset and run repeatedly. |
search | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
search | Query Lucene Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/query-lucene-syntax.md | You can embed Boolean operators in a query string to improve the precision of a |--|-- |--|-| | AND | `+` | `wifi AND luxury` | Specifies terms that a match must contain. In the example, the query engine looks for documents containing both `wifi` and `luxury`. The plus character (`+`) can also be used directly in front of a term to make it required. For example, `+wifi +luxury` stipulates that both terms must appear somewhere in the field of a single document.| | OR | (none) <sup>1</sup> | `wifi OR luxury` | Finds a match when either term is found. In the example, the query engine returns match on documents containing either `wifi` or `luxury` or both. Because OR is the default conjunction operator, you could also leave it out, such that `wifi luxury` is the equivalent of `wifi OR luxury`.|-| NOT | `!`, `-` | `wifi –luxury` | Returns a match on documents that exclude the term. For example, `wifi –luxury` searches for documents that have the `wifi` term but not `luxury`. </p>It's important to note that the NOT operator (`NOT`, `!`, or `-`) behaves differently in full syntax than it does in simple syntax. In full syntax, negations will always be ANDed onto the query such that `wifi -luxury` is interpreted as "wifi AND NOT luxury" regardless of if the `searchMode` parameter is set to `any` or `all`. This gives you a more intuitive behavior for negations by default. </p>A single negation such as the query `-luxury` isn't allowed in full search syntax and will always return an empty result set.| +| NOT | `!`, `-` | `wifi –luxury` | Returns a match on documents that exclude the term. For example, `wifi –luxury` searches for documents that have the `wifi` term but not `luxury`. | <sup>1</sup> The `|` character isn't supported for OR operations. 
+### <a name="bkmk_boolean_not"></a> NOT Boolean operator ++> [!Important] +> +> The NOT operator (`NOT`, `!`, or `-`) behaves differently in full syntax than it does in simple syntax. ++* In simple syntax, queries with negation always have a wildcard automatically added. For example, the query `-luxury` is automatically expanded to `-luxury *`. +* In full syntax, queries with negation cannot be combined with a wildcard. For example, the query `-luxury *` is not allowed. +* In full syntax, queries with a single negation are not allowed. For example, the query `-luxury` is not allowed. +* In full syntax, negations behave as if they are always ANDed onto the query regardless of the search mode. + * For example, the full syntax query `wifi -luxury` only fetches documents that contain the term `wifi`, and then applies the negation `-luxury` to those documents. +* If you want to use negations to search over all documents in the index, simple syntax with the any search mode is recommended. +* If you want to use negations to search over a subset of documents in the index, full syntax or the simple syntax with the all search mode are recommended. ++| Query Type | Search Mode | Example Query | Behavior | +| - | -- | - | -- | +| Simple | any | `wifi -luxury`| Returns all documents in the index. Documents with the term "wifi" or documents missing the term "luxury" are ranked higher than other documents. The query is expanded to `wifi OR -luxury OR *`. | +| Simple | all | `wifi -luxury`| Returns only documents in the index that contain the term "wifi" and don't contain the term "luxury". The query is expanded to `wifi AND -luxury AND *`. | +| Full | any | `wifi -luxury`| Returns only documents in the index that contain the term "wifi", and then documents that contain the term "luxury" are removed from the results. 
| +| Full | all | `wifi -luxury`| Returns only documents in the index that contain the term "wifi", and then documents that contain the term "luxury" are removed from the results. | + ## <a name="bkmk_fields"></a> Fielded search You can define a fielded search operation with the `fieldName:searchExpression` syntax, where the search expression can be a single word or a phrase, or a more complex expression in parentheses, optionally with Boolean operators. Some examples include the following: |
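The simple-syntax expansions listed in the table above can be sketched as a tiny helper — a toy reproduction of the documented expansion strings, not the Cognitive Search query parser:

```python
# Reproduce the simple-syntax expansion shown in the table: terms are joined
# with OR (searchMode=any) or AND (searchMode=all), and a wildcard is
# appended so that negations have something to subtract from.
def expand_simple_query(query: str, search_mode: str) -> str:
    op = " OR " if search_mode == "any" else " AND "
    return op.join(query.split() + ["*"])

print(expand_simple_query("wifi -luxury", "any"))  # wifi OR -luxury OR *
print(expand_simple_query("wifi -luxury", "all"))  # wifi AND -luxury AND *
```

The appended `*` is what makes simple-syntax `any` mode match every document, as the first table row describes; full syntax performs no such expansion.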
search | Resource Demo Sites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-demo-sites.md | The following demos are built and hosted by Microsoft. | Demo name | Description | Source code | |--| |-|+| [Chat with your data](https://entgptsearch.azurewebsites.net/) | An Azure web app that uses ChatGPT in Azure OpenAI with fictitious health plan data in a search index. | [https://github.com/Azure-Samples/azure-search-openai-demo/](https://github.com/Azure-Samples/azure-search-openai-demo/) | | [AzSearchLab](https://azuresearchlab.azurewebsites.net/) | A web front end that makes calls to a search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) | | [NYC Jobs demo](https://azjobsdemo.azurewebsites.net/) | An ASP.NET app with facets, filters, details, geo-search (map controls). | [https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs](https://github.com/Azure-Samples/search-dotnet-asp-net-mvc-jobs) | | [JFK files demo](https://jfk-demo-2019.azurewebsites.net/#/) | An ASP.NET web app built on a public data set, transformed with custom and predefined skills to extract searchable content from scanned document (JPEG) files. [Learn more...](https://www.microsoft.com/ai/ai-lab-jfk-files) | [https://github.com/Microsoft/AzureSearch_JFK_Files](https://github.com/Microsoft/AzureSearch_JFK_Files) | |
search | Search Get Started Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal.md | For this quickstart, we'll create and load the index using a built-in sample dat An indexer is a source-specific crawler that can read metadata and content from supported Azure data sources. Normally, indexers are created programmatically, but in the portal, you can create them through the **Import data wizard**. -### Step 1 - Start the Import data wizard and create a data source +### Step 1: Start the Import data wizard and create a data source 1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account. An indexer is a source-specific crawler that can read metadata and content from 1. Continue to the next page. -### Step 2 - Skip the "Enrich content" page +### Step 2: Skip the "Enrich content" page The wizard supports the creation of an [AI enrichment pipeline](cognitive-search-concept-intro.md) for incorporating the Azure AI services algorithms into indexing. We'll skip this step for now, and move directly on to **Customize target index** > [!TIP] > You can step through an AI-indexing example in a [quickstart](cognitive-search-quickstart-blob.md) or [tutorial](cognitive-search-tutorial-blob.md). -### Step 3 - Configure index +### Step 3: Configure index For the built-in hotels sample index, a default index schema is defined for you. Except for a few advanced filter examples, queries in the documentation and samples that target the hotel-samples index will run on this index definition: By default, the wizard scans the data source for unique identifiers as the basis 1. Continue to the next page. -### Step 4 - Configure indexer +### Step 4: Configure indexer Still in the **Import data** wizard, select **Indexer** > **Name**, and type a name for the indexer. |
search | Search Get Started Semantic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-semantic.md | This quickstart walks you through the query modifications that invoke semantic s + An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). -+ Azure Cognitive Search, at Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search). ++ Azure Cognitive Search, at Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md). + An API key and service endpoint: |
search | Search Get Started Vector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vector.md | Get started with vector search in Azure Cognitive Search using the **2023-07-01- + An Azure subscription. [Create one for free](https://azure.microsoft.com/free/). -+ Azure Cognitive Search, in any region and on any tier. However, if you want to also use [semantic search](semantic-search-overview.md), as shown in the last two examples, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search). ++ Azure Cognitive Search, in any region and on any tier. However, if you want to also use [semantic search](semantic-search-overview.md), as shown in the last two examples, your search service must be Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md). Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created. api-key: {{admin-api-key}} ### Semantic hybrid search -Assuming that you've [enabled semantic search](semantic-search-overview.md#enable-semantic-search) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search, plus keyword search with semantic ranking, caption, answers, and spell check. +Assuming that you've [enabled semantic search](semantic-how-to-enable-disable.md) and your index definition includes a [semantic configuration](semantic-how-to-query-request.md), you can formulate a query that includes vector search, plus keyword search with semantic ranking, caption, answers, and spell check. ```http POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version={{api-version}} |
search | Search Howto Index Cosmosdb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md | The data source definition specifies the data to index, credentials, and policie ### Supported credentials and connection strings -Indexers can connect to a collection using the following connections. For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit "ApiKind" from the connection string. +Indexers can connect to a collection using the following connections. Avoid port numbers in the endpoint URL. If you include the port number, the connection will fail. Avoid port numbers in the endpoint URL. If you include the port number, the conn | Managed identity connection string | ||-|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)" }`| -|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md) and created a role assignment that grants **Cosmos DB Account Reader Role** permissions. See [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md) for more information.| +|`{ "connectionString" : "ResourceId=/subscriptions/<your subscription ID>/resourceGroups/<your resource group name>/providers/Microsoft.DocumentDB/databaseAccounts/<your cosmos db account name>/;(ApiKind=[api-kind];)/(IdentityAuthType=[identity-auth-type])" }`| +|This connection string doesn't require an account key, but you must have previously configured a search service to [connect using a managed identity](search-howto-managed-identities-data-sources.md). 
For connections that target the [SQL API](../cosmos-db/sql-query-getting-started.md), you can omit `ApiKind` from the connection string. For more information about `ApiKind` and `IdentityAuthType`, see [Setting up an indexer connection to an Azure Cosmos DB database using a managed identity](search-howto-managed-identities-cosmos-db.md).| <a name="flatten-structures"></a> |
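As a sketch, the managed-identity connection string described in this row could be assembled like so. The helper and its argument names are illustrative; only the `ResourceId`, `Database`, `ApiKind`, and `IdentityAuthType` fields come from the docs:

```python
# Build a Cosmos DB managed-identity connection string in the documented
# shape. Illustrative helper only; no account key is included, because the
# search service authenticates with its managed identity.
def cosmos_mi_connection_string(subscription_id, resource_group, account,
                                database, api_kind=None, identity_auth_type=None):
    parts = [
        "ResourceId=/subscriptions/{}/resourceGroups/{}"
        "/providers/Microsoft.DocumentDB/databaseAccounts/{}".format(
            subscription_id, resource_group, account),
        "Database={}".format(database),
    ]
    if api_kind:  # omitted for SQL API collections
        parts.append("ApiKind={}".format(api_kind))
    if identity_auth_type:  # e.g. AccessToken when RBAC-only auth is enforced
        parts.append("IdentityAuthType={}".format(identity_auth_type))
    return ";".join(parts) + ";"

print(cosmos_mi_connection_string("sub-id", "rg", "my-cosmos", "my-db",
                                  api_kind="MongoDb"))
```

Leaving `api_kind` unset mirrors the SQL API case, where `ApiKind` is omitted entirely.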
search | Search Howto Managed Identities Cosmos Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-cosmos-db.md | You can use a system-assigned managed identity or a user-assigned managed identi * [Create a managed identity](search-howto-managed-identities-data-sources.md) for your search service. -* [Assign a role](search-howto-managed-identities-data-sources.md#assign-a-role) in Azure Cosmos DB. +* Assign the **Cosmos DB Account Reader** role to the search service managed identity. This role grants the ability to read Azure Cosmos DB account data. For more information about role assignments in Cosmos DB, see [Configure role-based access control to data](search-howto-managed-identities-data-sources.md#assign-a-role). ++* Data plane role assignment: follow [Data plane role assignment](../cosmos-db/how-to-setup-rbac.md) +to learn more. ++* Example for a read-only data plane role assignment: +```azurepowershell +$cosmosdb_acc_name = <cosmos db account name> +$resource_group = <resource group name> +$subscription = <subscription id> +$system_assigned_principal = <principal id for system assigned identity> +$readOnlyRoleDefinitionId = "00000000-0000-0000-0000-000000000001" +$scope=$(az cosmosdb show --name $cosmosdb_acc_name --resource-group $resource_group --query id --output tsv) +``` ++Role assignment for system-assigned identity: - For data reader access, you'll need the **Cosmos DB Account Reader** role and the identity used to make the request. This role works for all Azure Cosmos DB APIs supported by Cognitive Search. This is a control plane RBAC role. 
+```azurepowershell +az cosmosdb sql role assignment create --account-name $cosmosdb_acc_name --resource-group $resource_group --role-definition-id $readOnlyRoleDefinitionId --principal-id $system_assigned_principal --scope $scope +``` +* For Cosmos DB for NoSQL, you can optionally [enforce RBAC as the only authentication method](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) +for data connections by setting `disableLocalAuth` to `true` for your Cosmos DB account. - At this time, Cognitive Search obtains keys with the identity and uses those keys to connect to the Azure Cosmos DB account. This means that [enforcing RBAC as the only authentication method in Azure Cosmos DB](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) isn't supported when using Search with managed identities to connect to Azure Cosmos DB. +* *For Gremlin and MongoDB Collections*: + Indexer support is currently in preview. At this time, a preview limitation exists that requires Cognitive Search to connect using keys. You can still set up a managed identity and role assignment, but Cognitive Search will only use the role assignment to get keys for the connection. This limitation means that you can't configure an [RBAC-only approach](../cosmos-db/how-to-setup-rbac.md#disable-local-auth) if your indexers are connecting to Gremlin or MongoDB using Search with managed identities to connect to Azure Cosmos DB. * You should be familiar with [indexer concepts](search-indexer-overview.md) and [configuration](search-howto-index-cosmosdb.md). The [REST API](/rest/api/searchservice/create-data-source), Azure portal, and th When you're connecting with a system-assigned managed identity, the only change to the data source definition is the format of the "credentials" property. You'll provide the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name. 
* For SQL collections, the connection string doesn't require "ApiKind". +* For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It is not applicable for MongoDB and Gremlin collections. * For MongoDB collections, add "ApiKind=MongoDb" to the connection string and use a preview REST API. * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string and use a preview REST API. api-key: [Search service admin key] "name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];" + "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]" }, "container": { "name": "[my-cosmos-collection]", "query": null }, "dataChangeDetectionPolicy": null The 2021-04-30-preview REST API supports connections based on a user-assigned ma * First, the format of the "credentials" property is the database name and a ResourceId that has no account key or password. The ResourceId must include the subscription ID of Azure Cosmos DB, the resource group, and the Azure Cosmos DB account name. * For SQL collections, the connection string doesn't require "ApiKind". + * For SQL collections, add "IdentityAuthType=AccessToken" if RBAC is enforced as the only authentication method. It is not applicable for MongoDB and Gremlin collections. * For MongoDB collections, add "ApiKind=MongoDb" to the connection string + * For Gremlin graphs, add "ApiKind=Gremlin" to the connection string. 
api-key: [Search service admin key] "name": "[my-cosmosdb-ds]", "type": "cosmosdb", "credentials": {- "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];" + "connectionString": "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];Database=[cosmos-database];ApiKind=[SQL | Gremlin | MongoDB];IdentityAuthType=[AccessToken | AccountKey]" }, "container": { "name": "[my-cosmos-collection]", "query": null |
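The data source body in this row can also be generated programmatically — a minimal sketch that mirrors the documented JSON shape; the names and connection-string values are placeholders:

```python
import json

# Assemble the "cosmosdb" data source definition shown in this row.
# All values are placeholders; the dict mirrors the documented JSON shape.
def cosmosdb_datasource(name, connection_string, container_name, query=None):
    return {
        "name": name,
        "type": "cosmosdb",
        "credentials": {"connectionString": connection_string},
        "container": {"name": container_name, "query": query},
        "dataChangeDetectionPolicy": None,
    }

body = cosmosdb_datasource(
    "my-cosmosdb-ds",
    "ResourceId=/subscriptions/[subscription-id]/resourceGroups/[rg-name]"
    "/providers/Microsoft.DocumentDB/databaseAccounts/[cosmos-account-name];"
    "Database=[cosmos-database];ApiKind=MongoDb;IdentityAuthType=AccessToken",
    "my-cosmos-collection",
)
print(json.dumps(body, indent=2))
```

Serializing the dict with `json.dumps` produces a request body suitable for a Create Data Source call; sending it (endpoint, api-key header) is left out of this sketch.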
search | Search Manage Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-rest.md | PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegrou ## (preview) Disable semantic search -Although [semantic search isn't enabled](semantic-search-overview.md#enable-semantic-search) by default, you could lock down the feature at the service level. +Although [semantic search isn't enabled](semantic-how-to-enable-disable.md) by default, you could lock down the feature at the service level. ```rest PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview |
search | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cognitive Search description: Lists Azure Policy Regulatory Compliance controls available for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
search | Semantic Answers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md | The "semanticConfiguration" parameter is required. It's defined in a search inde + "queryLanguage" must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage). -+ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details. ++ A "semanticConfiguration" determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details. + For "answers", parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results. |
search | Semantic How To Enable Disable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-enable-disable.md | + + Title: Enable or disable semantic search ++description: Steps for turning semantic search on or off in Cognitive Search. ++++++ Last updated : 8/22/2023+++# Enable or disable semantic search ++> [!IMPORTANT] +> Semantic search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through Azure portal, preview REST APIs, and beta SDKs. This feature is billable. See [Availability and pricing](semantic-search-overview.md#availability-and-pricing). ++Semantic search is a premium feature that's billed by usage. By default, semantic search is disabled on all services. ++## Enable semantic search ++Follow these steps to enable [semantic search](semantic-search-overview.md) for your search service. ++### [**Azure portal**](#tab/enable-portal) ++1. Open the [Azure portal](https://portal.azure.com). ++1. Navigate to your search service. The service must be a billable tier. ++1. Determine whether the service region supports semantic search: ++ 1. Find your service region in the overview page in the Azure portal. ++ 1. Check the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page on the Azure web site to see if your region is listed. ++1. On the left-nav pane, select **Semantic Search (Preview)**. ++1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time. +++The free plan is capped at 1,000 queries per month. After the first 1,000 queries in the free plan, the next time you issue a semantic query you'll receive an error message letting you know you've exhausted your quota. When this happens, you need to upgrade to the standard plan to continue using semantic search. 
++### [**REST**](#tab/enable-rest) ++To enable Semantic Search using the REST API, you can use the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch). ++Management REST API calls are authenticated through Azure Active Directory. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for instructions on how to authenticate. ++* Management REST API version 2021-04-01-Preview provides the semantic search property. ++* Owner or Contributor permissions are required to enable or disable features. ++> [!NOTE] +> Create or Update supports two HTTP methods: PUT and PATCH. Both PUT and PATCH can be used to update existing services, but only PUT can be used to create a new service. If PUT is used to update an existing service, it replaces all properties in the service with their defaults if they are not specified in the request. When PATCH is used to update an existing service, it only replaces properties that are specified in the request. When using PUT to update an existing service, it's possible to accidentally introduce an unexpected scaling or configuration change. When enabling semantic search on an existing service, it's recommended to use PATCH instead of PUT. ++```http +PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview + { + "properties": { + "semanticSearch": "standard" + } + } +``` ++++## Disable semantic search using the REST API ++To reverse feature enablement, or for full protection against accidental usage and charges, you can disable semantic search using the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) on your search service. After the feature is disabled, any requests that include the semantic query type will be rejected. 
++Management REST API calls are authenticated through Azure Active Directory. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for instructions on how to authenticate. ++```http +PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview + { + "properties": { + "semanticSearch": "disabled" + } + } +``` ++To re-enable semantic search, rerun the above request, setting "semanticSearch" to either "free" (default) or "standard". ++## Next steps ++[Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content. |
search | Semantic How To Query Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md | There are two main activities to perform: If you have an existing Basic or greater service in a supported region, you can enable semantic search without having to create a new service. -+ Semantic search [enabled on your search service](semantic-search-overview.md#enable-semantic-search). ++ Semantic search [enabled on your search service](semantic-how-to-enable-disable.md). + An existing search index with rich content in a [supported query language](/rest/api/searchservice/preview-api/search-documents#queryLanguage). Semantic search works best on content that is informational or descriptive. You can add or update a semantic configuration at any time without rebuilding yo ### [**Azure portal**](#tab/portal) -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic search enabled](semantic-search-overview.md#enable-semantic-search). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic search enabled](semantic-how-to-enable-disable.md). 1. Open an index. Your next step is adding parameters to the query request. To be successful, your [Search explorer](search-explorer.md) has been updated to include options for semantic queries. To configure semantic ranking in the portal, follow the steps below: -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-search-overview.md#enable-semantic-search). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has semantic search [enabled](semantic-how-to-enable-disable.md). 1. Select **Search explorer** at the top of the overview page. The following example in this section uses the [hotels-sample-index](search-get- 1. 
Set "captions" to specify whether semantic captions are included in the result. If you're using a semantic configuration, you should set this parameter. While the ["searchFields" approach](#2buse-searchfields-for-field-prioritization) automatically included captions, "semanticConfiguration" doesn't. - Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`. + Currently, the only valid value for this parameter is "extractive". Captions can be configured to return results with or without highlights. The default is for highlights to be returned. This example returns captions without highlights: `extractive|highlight-false`. ++ For semantic captions, the fields referenced in the "semanticConfiguration" are limited to roughly 2,000-3,000 words (equivalent to about 10,000 tokens); content beyond that limit is excluded, so important caption results can be missed. If the fields used by the "semanticConfiguration" might exceed this limit and you need captions, consider using the [Text split cognitive skill](cognitive-search-skill-textsplit.md) as part of your [AI enrichment pipeline](cognitive-search-concept-intro.md) when indexing your data with [built-in pull indexers](search-indexer-overview.md). 1. Set "highlightPreTag" and "highlightPostTag" if you want to override the default highlight formatting that's applied to captions. |
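As an illustrative sketch, a semantic query that returns captions without highlights might look like the following. The service name, index name, and semantic configuration name are placeholders, and the api-version shown is one of the preview versions that accepts semantic parameters:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2021-04-30-Preview
  {
    "search": "walking distance to live music",
    "queryType": "semantic",
    "queryLanguage": "en-us",
    "semanticConfiguration": "my-semantic-config",
    "captions": "extractive|highlight-false"
  }
```

In the response, each result carries a "@search.captions" entry; omitting `|highlight-false` returns the same captions with highlight markup applied.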
search | Semantic Ranking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-ranking.md | Each document is now represented by a single long string. > [!NOTE] > In the 2020-06-30-preview, the "searchFields" parameter is used rather than the semantic configuration to determine which fields to use. We recommend upgrading to the 2021-04-30-preview API version for best results. -The string is composed of tokens, not characters or words. The maximum token count is 128 unique tokens. For estimation purposes, you can assume that 128 tokens are roughly equivalent to a string that is 128 words in length. +The string is composed of tokens, not characters or words. The maximum token count is 256 unique tokens. For estimation purposes, you can assume that 256 tokens are roughly equivalent to a string that is 256 words in length. > [!NOTE] > Tokenization is determined in part by the analyzer assignment on searchable fields. If you are using a specialized analyzer, such as nGram or EdgeNGram, you might want to exclude that field from "searchFields". For insights into how strings are tokenized, you can review the token output of an analyzer using the [Test Analyzer REST API](/rest/api/searchservice/test-analyzer). |
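A minimal sketch of a Test Analyzer request is shown below (the service and index names are placeholders, and the text and analyzer are examples). The response lists each token the analyzer emits, which you can count against the unique-token limit described above:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/analyze?api-version=2020-06-30
  {
    "text": "pet friendly hotels in New York",
    "analyzer": "standard.lucene"
  }
```

Running the same text through different analyzers (for example, `standard.lucene` versus an nGram analyzer) makes it easy to see why nGram-style fields can inflate token counts.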
search | Semantic Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md | Semantic search is a premium feature that's billed by usage. We recommend this a > [!div class="checklist"] > * [Check regional and service tier requirements](#availability-and-pricing).-> * [Enable semantic search for semantic ranking](#enable-semantic-search) on your search service. +> * [Enable semantic search for semantic ranking](semantic-how-to-enable-disable.md) on your search service. > * Create or modify queries to [return semantic captions and highlights](semantic-how-to-query-request.md). > * Add a few more query properties to also [return semantic answers](semantic-answers.md). Although semantic search isn't beneficial in every scenario, certain content can ## Availability and pricing -Semantic search and spell check are available on services that meet the criteria in the table below. To use semantic search, your first need to [enable the capabilities](#enable-semantic-search) on your search service. +Semantic search and spell check are available on services that meet the criteria in the table below. To use semantic search, you first need to [enable the capabilities](semantic-how-to-enable-disable.md) on your search service. | Feature | Tier | Region | Sign up | Pricing | |||--||| Semantic search and spell check are available on services that meet the criteria Charges for semantic search are levied when query requests include "queryType=semantic" and the search string isn't empty (for example, "search=pet friendly hotels in New York"). If your search string is empty ("search=*"), you won't be charged, even if the queryType is set to "semantic". -## Enable semantic search --By default, semantic search is disabled on all services. To enable semantic search for your search service: --1. Open the [Azure portal](https://portal.azure.com). -1. Navigate to your Standard tier search service. -1. 
Determine whether the service region supports semantic search. Search service region is noted on the overview page. Semantic search regions are noted on the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page. -1. On the left-nav pane, select **Semantic Search (Preview)**. -1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time. ---Semantic Search's free plan is capped at 1,000 queries per month. After the first 1,000 queries in the free plan, you'll receive an error message letting you know you've exhausted your quota whenever you issue a semantic query. When this happens, you need to upgrade to the standard plan to continue using semantic search. --Alternatively, you can also enable semantic search using the REST API that's described in the next section. --## Enable semantic search using the REST API --To enable Semantic Search using the REST API, you can use the [Create or Update Service API](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch). --> [!NOTE] -> Create or Update supports two HTTP methods: PUT and PATCH. Both PUT and PATCH can be used to update existing services, but only PUT can be used to create a new service. If PUT is used to update an existing service, it replaces all properties in the service with their defaults if they are not specified in the request. When PATCH is used to update an existing service, it only replaces properties that are specified in the request. When using PUT to update an existing service, it's possible to accidentally introduce an unexpected scaling or configuration change. When enabling semantic search on an existing service, it's recommended to use PATCH instead of PUT. 
--* Management REST API version 2021-04-01-Preview provides the semantic search property --* Owner or Contributor permissions are required to enable or disable features --``` -PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview - { - "properties": { - "semanticSearch": "standard" - } - } -``` --## Disable semantic search using the REST API --To reverse feature enablement, or for full protection against accidental usage and charges, you can [disable semantic search](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#searchsemanticsearch) using the Create or Update Service API on your search service. After the feature is disabled, any requests that include the semantic query type will be rejected. --```http -PATCH https://management.azure.com/subscriptions/{{subscriptionId}}/resourcegroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}?api-version=2021-04-01-Preview - { - "properties": { - "semanticSearch": "disabled" - } - } -``` --To re-enable semantic search, rerun the above request, setting "semanticSearch" to either "free" (default) or "standard". --> [!TIP] -> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post. - ## Next steps -[Enable semantic search](#enable-semantic-search) for your search service and follow the steps in [Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content. 
-+[Enable semantic search](semantic-how-to-enable-disable.md) for your search service and follow the steps in [Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic search on your content. |
search | Vector Search How To Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md | Code samples in the [cognitive-search-vector-pr](https://github.com/Azure/cognit + Azure Cognitive Search, in any region and on any tier. Most existing services support vector search. For a small subset of services created prior to January 2019, an index containing vector fields will fail on creation. In this situation, a new service must be created. -+ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-query.md). ++ A search index containing vector fields. See [Add vector fields to a search index](vector-search-how-to-create-index.md). + Use REST API version **2023-07-01-Preview**, the [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr/tree/main), or Search Explorer in the Azure portal. -+ (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-search-overview.md#enable-semantic-search). ++ (Optional) If you want to also use [semantic search (preview)](semantic-search-overview.md) and vector search together, your search service must be Basic tier or higher, with [semantic search enabled](semantic-how-to-enable-disable.md). ## Limitations |
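Given an index that meets these prerequisites, a pure vector query against the **2023-07-01-Preview** API might be sketched as follows. The service name, index name, and the `contentVector`, `title`, and `content` field names are placeholders, and the three-element vector is illustrative only; a real query vector must have the same number of dimensions as the vector field it targets:

```http
POST https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}/docs/search?api-version=2023-07-01-Preview
  {
    "vectors": [
      {
        "value": [0.0123, -0.0456, 0.0789],
        "fields": "contentVector",
        "k": 5
      }
    ],
    "select": "title, content"
  }
```

Here "k" controls how many nearest neighbors are returned; adding a "search" string alongside "vectors" turns this into a hybrid query.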
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | Last updated 08/10/2023 > [!IMPORTANT] > Vector search is in public preview under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's available through the Azure portal, preview REST API, and [beta client libraries](https://github.com/Azure/cognitive-search-vector-pr#readme). -This article is a high-level introduction to vector search in Azure Cognitive Search. It also explains integration with other Azure services and covers [terms and concepts](#vector-search-concepts) related to vector search development. +This article is a high-level introduction to vector support in Azure Cognitive Search. It also explains integration with other Azure services and covers [terminology and concepts](#vector-search-concepts) related to vector search development. We recommend this article for background, but if you'd rather get started, follow these steps: Support for vector search is in public preview and available through the [**2023 ## What's vector search in Cognitive Search? -Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://arxiv.org/abs/2005.11401). +Vector search is a new capability for indexing, storing, and retrieving vector embeddings from a search index. You can use it to power similarity search, multi-modal search, recommendations engines, or applications implementing the [Retrieval Augmented Generation (RAG) architecture](https://aka.ms/what-is-rag). The following diagram shows the indexing and query workflows for vector search. On the indexing side, prepare source documents that contain embeddings. 
Cognitive Search doesn't generate embeddings, so your solution should include calls to Azure OpenAI or other models that can transform image, audio, text, and other content into vector representations. Add a *vector field* to your index definition on Cognitive Search. Load the index with a documents payload that includes the vectors. Your index is now ready to query. |
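The vector field described above might be defined as in the following sketch against the 2023-07-01-Preview API. The index, field, and configuration names are placeholders, and the "dimensions" value of 1536 is an assumption based on a typical Azure OpenAI text-embedding-ada-002 embedding; use the dimension count of whatever model produces your vectors:

```http
PUT https://{{search-service-name}}.search.windows.net/indexes/{{index-name}}?api-version=2023-07-01-Preview
  {
    "name": "{{index-name}}",
    "fields": [
      { "name": "id", "type": "Edm.String", "key": true },
      { "name": "content", "type": "Edm.String", "searchable": true },
      {
        "name": "contentVector",
        "type": "Collection(Edm.Single)",
        "searchable": true,
        "dimensions": 1536,
        "vectorSearchConfiguration": "my-vector-config"
      }
    ],
    "vectorSearch": {
      "algorithmConfigurations": [
        { "name": "my-vector-config", "kind": "hnsw" }
      ]
    }
  }
```

Each vector field points at an algorithm configuration by name, so several vector fields in one index can share a single HNSW configuration.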
search | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md | Learn about the latest updates to Azure Cognitive Search functionality, docs, an > [!NOTE] > Looking for preview features? Previews are announced here, but we also maintain a [preview features list](search-api-preview.md) so you can find them in one place. +## August 2023 ++| Item | Type | Description | +|--||--| +| [**Enhanced semantic ranking**](semantic-ranking.md) | Feature | Upgraded models are rolling out for semantic reranking, and availability is extended to more regions. Maximum unique token counts doubled from 128 to 256. | + ## July 2023 | Item | Type | Description | |
security | Identity Management Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/identity-management-best-practices.md | The following sections list best practices for identity and access security usin In a hybrid identity scenario we recommend that you integrate your on-premises and cloud directories. Integration enables your IT team to manage accounts from one location, regardless of where an account is created. Integration also helps your users be more productive by providing a common identity for accessing both cloud and on-premises resources. -**Best practice**: Establish a single Azure AD instance. Consistency and a single authoritative source will increase clarity and reduce security risks from human errors and configuration complexity. +**Best practice**: Establish a single Azure AD instance. Consistency and a single authoritative source will increase clarity and reduce security risks from human errors and configuration complexity. **Detail**: Designate a single Azure AD directory as the authoritative source for corporate and organizational accounts. **Best practice**: Integrate your on-premises directories with Azure AD. In a hybrid identity scenario we recommend that you integrate your on-premises a > [!Note] > There are [factors that affect the performance of Azure AD Connect](../../active-directory/hybrid/plan-connect-performance-factors.md). Ensure Azure AD Connect has enough capacity to keep underperforming systems from impeding security and productivity. Large or complex organizations (organizations provisioning more than 100,000 objects) should follow the [recommendations](../../active-directory/hybrid/whatis-hybrid-identity.md) to optimize their Azure AD Connect implementation. -**Best practice**: Don't synchronize accounts to Azure AD that have high privileges in your existing Active Directory instance. 
+**Best practice**: Don't synchronize accounts to Azure AD that have high privileges in your existing Active Directory instance. **Detail**: Don't change the default [Azure AD Connect configuration](../../active-directory/hybrid/how-to-connect-sync-configure-filtering.md) that filters out these accounts. This configuration mitigates the risk of adversaries pivoting from cloud to on-premises assets (which could create a major incident). **Best practice**: Turn on password hash synchronization. Even if you decide to use federation with Active Directory Federation Services ( For more information, see [Implement password hash synchronization with Azure AD Connect sync](../../active-directory/hybrid/how-to-connect-password-hash-synchronization.md). -**Best practice**: For new application development, use Azure AD for authentication. +**Best practice**: For new application development, use Azure AD for authentication. **Detail**: Use the correct capabilities to support authentication: - Azure AD for employees To balance security and productivity, you need to think about how a resource is **Best practice**: Manage and control access to corporate resources. **Detail**: Configure common Azure AD [Conditional Access policies](../../active-directory/conditional-access/concept-conditional-access-policy-common.md) based on a group, location, and application sensitivity for SaaS apps and Azure AD–connected apps. -**Best practice**: Block legacy authentication protocols. +**Best practice**: Block legacy authentication protocols. **Detail**: Attackers exploit weaknesses in older protocols every day, particularly for password spray attacks. Configure Conditional Access to [block legacy protocols](../../active-directory/conditional-access/howto-conditional-access-policy-block-legacy.md). 
## Plan for routine security improvements Security is always evolving, and it is important to build into your cloud and id Identity Secure Score is a set of recommended security controls that Microsoft publishes that works to provide you a numerical score to objectively measure your security posture and help plan future security improvements. You can also view your score in comparison to those in other industries as well as your own trends over time. -**Best practice**: Plan routine security reviews and improvements based on best practices in your industry. +**Best practice**: Plan routine security reviews and improvements based on best practices in your industry. **Detail**: Use the Identity Secure Score feature to rank your improvements over time. ## Enable password management If you have multiple tenants or you want to enable users to [reset their own pas **Best practice**: Monitor how or if SSPR is really being used. **Detail**: Monitor the users who are registering by using the Azure AD [Password Reset Registration Activity report](../../active-directory/authentication/howto-sspr-reporting.md). The reporting feature that Azure AD provides helps you answer questions by using prebuilt reports. If you're appropriately licensed, you can also create custom queries. -**Best practice**: Extend cloud-based password policies to your on-premises infrastructure. +**Best practice**: Extend cloud-based password policies to your on-premises infrastructure. **Detail**: Enhance password policies in your organization by performing the same checks for on-premises password changes as you do for cloud-based password changes. Install [Azure AD password protection](../../active-directory/authentication/concept-password-ban-bad.md) for Windows Server Active Directory agents on-premises to extend banned password lists to your existing infrastructure. 
Users and admins who change, set, or reset passwords on-premises are required to comply with the same password policy as cloud-only users. ## Enforce multi-factor verification for users There are multiple options for requiring two-step verification. The best option Following are options and benefits for enabling two-step verification: -**Option 1**: Enable MFA for all users and login methods with Azure AD Security Defaults +**Option 1**: Enable MFA for all users and login methods with Azure AD Security Defaults **Benefit**: This option enables you to easily and quickly enforce MFA for all users in your environment with a stringent policy to: * Challenge administrative accounts and administrative logon mechanisms This method is available to all licensing tiers but is not able to be mixed with To determine where Multi-Factor Authentication needs to be enabled, see [Which version of Azure AD MFA is right for my organization?](../../active-directory/authentication/concept-mfa-howitworks.md). -**Option 3**: [Enable Multi-Factor Authentication with Conditional Access policy](../../active-directory/authentication/howto-mfa-getstarted.md). +**Option 3**: [Enable Multi-Factor Authentication with Conditional Access policy](../../active-directory/authentication/howto-mfa-getstarted.md). **Benefit**: This option allows you to prompt for two-step verification under specific conditions by using [Conditional Access](../../active-directory/conditional-access/concept-conditional-access-policy-common.md). Specific conditions can be user sign-in from different locations, untrusted devices, or applications that you consider risky. Defining specific conditions where you require two-step verification enables you to avoid constant prompting for your users, which can be an unpleasant user experience. This is the most flexible way to enable two-step verification for your users. 
Enabling a Conditional Access policy works only for Azure AD Multi-Factor Authentication in the cloud and is a premium feature of Azure AD. You can find more information on this method in [Deploy cloud-based Azure AD Multi-Factor Authentication](../../active-directory/authentication/howto-mfa-getstarted.md). Your security team needs visibility into your Azure resources in order to assess You can use [Azure RBAC](../../role-based-access-control/overview.md) to assign permissions to users, groups, and applications at a certain scope. The scope of a role assignment can be a subscription, a resource group, or a single resource. -**Best practice**: Segregate duties within your team and grant only the amount of access to users that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, allow only certain actions at a particular scope. +**Best practice**: Segregate duties within your team and grant only the amount of access to users that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, allow only certain actions at a particular scope. **Detail**: Use [Azure built-in roles](../../role-based-access-control/built-in-roles.md) in Azure to assign privileges to users. > [!Note] > Specific permissions create unneeded complexity and confusion, accumulating into a "legacy" configuration that's difficult to fix without fear of breaking something. Avoid resource-specific permissions. Instead, use management groups for enterprise-wide permissions and resource groups for permissions within subscriptions. Avoid user-specific permissions. Instead, assign access to groups in Azure AD. -**Best practice**: Grant security teams with Azure responsibilities access to see Azure resources so they can assess and remediate risk. 
+**Best practice**: Grant security teams with Azure responsibilities access to see Azure resources so they can assess and remediate risk. **Detail**: Grant security teams the Azure RBAC [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) role. You can use the root management group or the segment management group, depending on the scope of responsibilities: * **Root management group** for teams responsible for all enterprise resources * **Segment management group** for teams with limited scope (commonly because of regulatory or other organizational boundaries) -**Best practice**: Grant the appropriate permissions to security teams that have direct operational responsibilities. +**Best practice**: Grant the appropriate permissions to security teams that have direct operational responsibilities. **Detail**: Review the Azure built-in roles for the appropriate role assignment. If the built-in roles don't meet the specific needs of your organization, you can create [Azure custom roles](../../role-based-access-control/custom-roles.md). As with built-in roles, you can assign custom roles to users, groups, and service principals at subscription, resource group, and resource scopes. -**Best practices**: Grant Microsoft Defender for Cloud access to security roles that need it. Defender for Cloud allows security teams to quickly identify and remediate risks. +**Best practices**: Grant Microsoft Defender for Cloud access to security roles that need it. Defender for Cloud allows security teams to quickly identify and remediate risks. **Detail**: Add security teams with these needs to the Azure RBAC [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) role so they can view security policies, view security states, edit security policies, view alerts and recommendations, and dismiss alerts and recommendations. 
You can do this by using the root management group or the segment management group, depending on the scope of responsibilities. Organizations that don't enforce data access control by using capabilities like Azure RBAC might be giving more privileges than necessary to their users. This can lead to data compromise by allowing users to access types of data (for example, high business impact) that they shouldn't have. The following summarizes the best practices found in [Securing privileged access **Best practice**: Ensure all critical admin accounts are managed Azure AD accounts. **Detail**: Remove any consumer accounts from critical admin roles (for example, Microsoft accounts like hotmail.com, live.com, and outlook.com). -**Best practice**: Ensure all critical admin roles have a separate account for administrative tasks in order to avoid phishing and other attacks to compromise administrative privileges. +**Best practice**: Ensure all critical admin roles have a separate account for administrative tasks in order to avoid phishing and other attacks to compromise administrative privileges. **Detail**: Create a separate admin account that's assigned the privileges needed to perform the administrative tasks. Block the use of these administrative accounts for daily productivity tools like Microsoft 365 email or arbitrary web browsing. **Best practice**: Identify and categorize accounts that are in highly privileged roles. The following summarizes the best practices found in [Securing privileged access Evaluate the accounts that are assigned or eligible for the global admin role. If you don't see any cloud-only accounts by using the `*.onmicrosoft.com` domain (intended for emergency access), create them. For more information, see [Managing emergency access administrative accounts in Azure AD](../../active-directory/roles/security-emergency-access.md). -**Best practice**: Have a "break glass" process in place in case of an emergency. 
+**Best practice**: Have a "break glass" process in place in case of an emergency. **Detail**: Follow the steps in [Securing privileged access for hybrid and cloud deployments in Azure AD](../../active-directory/roles/security-planning.md). -**Best practice**: Require all critical admin accounts to be password-less (preferred), or require Multi-Factor Authentication. +**Best practice**: Require all critical admin accounts to be password-less (preferred), or require Multi-Factor Authentication. **Detail**: Use the [Microsoft Authenticator app](../../active-directory/authentication/howto-authentication-passwordless-phone.md) to sign in to any Azure AD account without using a password. Like [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-identity-verification), the Microsoft Authenticator uses key-based authentication to enable a user credential that's tied to a device and uses biometric authentication or a PIN. Require Azure AD Multi-Factor Authentication at sign-in for all individual users who are permanently assigned to one or more of the Azure AD admin roles: Global Administrator, Privileged Role Administrator, Exchange Online Administrator, and SharePoint Online Administrator. Enable [Multi-Factor Authentication for your admin accounts](../../active-directory/authentication/howto-mfa-userstates.md) and ensure that admin account users have registered. -**Best practice**: For critical admin accounts, have an admin workstation where production tasks aren't allowed (for example, browsing and email). 
This will protect your admin accounts from attack vectors that use browsing and email and significantly lower your risk of a major incident. **Detail**: Use an admin workstation. Choose a level of workstation security: - Highly secure productivity devices provide advanced security for browsing and other productivity tasks. - [Privileged Access Workstations (PAWs)](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) provide a dedicated operating system that's protected from internet attacks and threat vectors for sensitive tasks. -**Best practice**: Deprovision admin accounts when employees leave your organization. +**Best practice**: Deprovision admin accounts when employees leave your organization. **Detail**: Have a process in place that disables or deletes admin accounts when employees leave your organization. -**Best practice**: Regularly test admin accounts by using current attack techniques. +**Best practice**: Regularly test admin accounts by using current attack techniques. **Detail**: Use Microsoft 365 Attack Simulator or a third-party offering to run realistic attack scenarios in your organization. This can help you find vulnerable users before a real attack occurs. **Best practice**: Take steps to mitigate the most frequently used attack techniques. |
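The deprovisioning practice above lends itself to automation: periodically reconcile privileged-role assignments against an HR offboarding feed. A minimal sketch in Python, assuming hypothetical account names and a plain-list feed (a real implementation would query your directory rather than use in-memory lists):

```python
# Hypothetical sketch: cross-reference privileged-role accounts against an
# HR offboarding list to find admin accounts that should be deprovisioned.
# Account names and data shapes are illustrative, not a real Azure AD API.

def find_stale_admins(admin_accounts, departed_employees):
    """Return admin accounts whose owner has left the organization."""
    departed = {e.lower() for e in departed_employees}
    return sorted(a for a in admin_accounts if a.lower() in departed)

admins = ["adm-alice@contoso.com", "adm-bob@contoso.com"]
departed = ["ADM-BOB@contoso.com"]
print(find_stale_admins(admins, departed))  # ['adm-bob@contoso.com']
```

The case-insensitive comparison matters because HR systems and directories often disagree on casing of user principal names.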
security | Operational Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md | This checklist is intended to help enterprises think through various operational | [<br>Data Protection & Storage](../../storage/blobs/security-recommendations.md)|<ul><li>Use Management Plane Security to secure your Storage Account using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).</li><li>Use Data Plane Security to secure access to your data using [Shared Access Signatures (SAS)](../../storage/common/storage-sas-overview.md) and Stored Access Policies.</li><li>Use Transport-Level Encryption – HTTPS and the encryption used by [SMB (Server message block protocols) 3.0](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) for [Azure File Shares](../../storage/files/storage-dotnet-how-to-use-files.md).</li><li>Use [Client-side encryption](../../storage/common/storage-client-side-encryption.md) to secure data that you send to storage accounts when you require sole control of encryption keys. 
</li><li>Use [Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md) to automatically encrypt data in Azure Storage, and [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) to encrypt virtual machine disk files for the OS and data disks.</li><li>Use Azure [Storage Analytics](/rest/api/storageservices/storage-analytics) to monitor authorization type; like with Blob Storage, you can see if users have used a Shared Access Signature or the storage account keys.</li><li>Use [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) to access storage resources from different domains.</li></ul> | |[<br>Security Policies & Recommendations](../../defender-for-cloud/defender-for-cloud-planning-and-operations-guide.md#security-policies-and-recommendations)|<ul><li>Use [Microsoft Defender for Cloud](../../defender-for-cloud/integration-defender-for-endpoint.md) to deploy endpoint solutions.</li><li>Add a [web application firewall (WAF)](../../web-application-firewall/ag/ag-overview.md) to secure web applications.</li><li>Use [Azure Firewall](../../firewall/overview.md) to increase your security protections. </li><li>Apply security contact details for your Azure subscription. 
The [Microsoft Security Response Center](https://technet.microsoft.com/security/dn528958.aspx) (MSRC) contacts you if it discovers that your customer data has been accessed by an unlawful or unauthorized party.</li></ul> | | [<br>Identity & Access Management](identity-management-best-practices.md)|<ul><li>[Synchronize your on-premises directory with your cloud directory using Azure AD](../../active-directory/hybrid/whatis-hybrid-identity.md).</li><li>Use [single sign-on](../../active-directory/manage-apps/what-is-single-sign-on.md) to enable users to access their SaaS applications based on their organizational account in Azure AD.</li><li>Use the [Password Reset Registration Activity](../../active-directory/authentication/howto-sspr-reporting.md) report to monitor the users that are registering.</li><li>Enable [multi-factor authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) for users.</li><li>Ensure developers use secure identity capabilities for apps, like the [Microsoft Security Development Lifecycle (SDL)](https://www.microsoft.com/download/details.aspx?id=12379).</li><li>Actively monitor for suspicious activities by using Azure AD Premium anomaly reports and [Azure AD identity protection capability](../../active-directory/identity-protection/overview-identity-protection.md).</li></ul> |-|[<br>Ongoing Security Monitoring](../../defender-for-cloud/defender-for-cloud-introduction.md)|<ul><li>Use Malware Assessment Solution [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) to report on the status of antimalware protection in your infrastructure.</li><li>Use [Update Management](../../automation/update-management/overview.md) to determine the overall exposure to potential security problems, and whether or how critical these updates are for your environment.</li><li>The [Azure Active Directory portal](https://aad.portal.azure.com/) to gain visibility into the integrity and security of your organization's directory. 
| +|[<br>Ongoing Security Monitoring](../../defender-for-cloud/defender-for-cloud-introduction.md)|<ul><li>Use Malware Assessment Solution [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) to report on the status of antimalware protection in your infrastructure.</li><li>Use [Update Management](../../automation/update-management/overview.md) to determine the overall exposure to potential security problems, and whether or how critical these updates are for your environment.</li><li>The [Microsoft Entra admin center](https://entra.microsoft.com) provides visibility into the integrity and security of your organization's directory. | | [<br>Microsoft Defender for Cloud detection capabilities](../../security-center/security-center-alerts-overview.md#detect-threats)|<ul><li>Use [Cloud Security Posture Management](../../defender-for-cloud/concept-cloud-security-posture-management.md) (CSPM) for hardening guidance that helps you efficiently and effectively improve your security.</li><li>Use [alerts](../../defender-for-cloud/alerts-overview.md) to be notified when threats are identified in your cloud, hybrid, or on-premises environment. </li><li>Use [security policies, initiatives, and recommendations](../../defender-for-cloud/security-policy-concept.md) to improve your security posture.</li></ul> | ## Conclusion |
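The data-plane items in the checklist above rely on Shared Access Signatures, which are HMAC-SHA256 signatures over a service-defined string-to-sign. The sketch below shows only the signing primitive; the real SAS string-to-sign format and query parameters are defined by the Azure Storage documentation, so the inputs here are illustrative:

```python
# Illustrative signing primitive behind SAS tokens: HMAC-SHA256 over a
# string-to-sign, using a base64-encoded account key. The string-to-sign
# below is NOT the real Azure Storage format; consult the SAS reference.
import base64
import hashlib
import hmac

def sign(string_to_sign: str, account_key_b64: str) -> str:
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("ascii")

# Demo key for illustration only; real account keys come from the portal/API.
demo_key = base64.b64encode(b"demo-account-key").decode("ascii")
signature = sign("r\n2024-01-01T00:00Z\n2024-01-02T00:00Z", demo_key)
print(signature)
```

Because the signature covers the permissions and expiry embedded in the string-to-sign, tampering with either invalidates the token.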
security | Ransomware Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/ransomware-protection.md | This article lays out key Azure native capabilities and defenses for ransomware ## A growing threat -Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can cripple a business core IT infrastructure, and cause destruction that could have a debilitating impact on the physical, economic security or safety of a business. Ransomware attacks are targeted to businesses of all types. This requires that all businesses take preventive measures to ensure protection. +Ransomware attacks have become one of the biggest security challenges facing businesses today. When successful, ransomware attacks can disable a business's core IT infrastructure and cause destruction that could have a debilitating impact on the physical or economic security or safety of a business. Ransomware attacks target businesses of all types. This requires that all businesses take preventive measures to ensure protection. Recent trends on the number of attacks are quite alarming. While 2020 wasn't a good year for ransomware attacks on businesses, 2021 started on a bad trajectory. On May 7, the Colonial Pipeline (Colonial) attack shut down services: pipeline transportation of diesel, gasoline, and jet fuel was temporarily halted. Colonial shut down the critical fuel network supplying the populous eastern states. |
security | Steps Secure Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md | Attackers who get control of privileged accounts can do tremendous damage, so it All set? Let's get started on the checklist. -## Step 1 - Strengthen your credentials +## Step 1: Strengthen your credentials Although other types of attacks are emerging, including consent phishing and attacks on nonhuman identities, password-based attacks on user identities are still the most prevalent vector of identity compromise. Well-established spear phishing and password spray campaigns by adversaries continue to be successful against organizations that haven't yet implemented multi-factor authentication (MFA) or other protections against this common tactic. Passwords are never stored in clear text or encrypted with a reversible algorith Smart lockout helps lock out bad actors that try to guess your users' passwords or use brute-force methods to get in. Smart lockout can recognize sign-ins that come from valid users and treat them differently from those of attackers and other unknown sources. Attackers get locked out, while your users continue to access their accounts and be productive. Organizations that configure applications to authenticate directly to Azure AD benefit from Azure AD smart lockout. Federated deployments that use AD FS 2016 and AD FS 2019 can enable similar benefits using [AD FS Extranet Lockout and Extranet Smart Lockout](/windows-server/identity/ad-fs/operations/configure-ad-fs-extranet-smart-lockout-protection). -## Step 2 - Reduce your attack surface area +## Step 2: Reduce your attack surface area Given the pervasiveness of password compromise, minimizing the attack surface in your organization is critical. 
This means disabling the use of older, less secure protocols, limiting access entry points, moving to cloud authentication, exercising more significant control of administrative access to resources, and embracing Zero Trust security principles. Make sure users can request admin approval for new applications to reduce user f For more information, see the article [Azure Active Directory consent framework](../../active-directory/develop/consent-framework.md). -## Step 3 - Automate threat response +## Step 3: Automate threat response Azure Active Directory has many capabilities that automatically intercept attacks, to remove the latency between detection and response. You can reduce the costs and risks when you reduce the time criminals use to embed themselves into your environment. Here are the concrete steps you can take. Learn more about Microsoft Threat Protection and the importance of integrating d Monitoring and auditing your logs is important to detect suspicious behavior. The Azure portal has several ways to integrate Azure AD logs with other tools, like Microsoft Sentinel, Azure Monitor, and other SIEM tools. For more information, see the [Azure Active Directory security operations guide](../../active-directory/fundamentals/security-operations-introduction.md#data-sources). -## Step 4 - Utilize cloud intelligence +## Step 4: Utilize cloud intelligence Auditing and logging of security-related events and related alerts are essential components of an efficient protection strategy. Security logs and reports provide you with an electronic record of suspicious activities and help you detect patterns that may indicate attempted or successful external penetration of the network, and internal attacks. You can use auditing to monitor user activity, document regulatory compliance, do forensic analysis, and more. Alerts provide notifications of security events. 
Make sure you have a log retention policy in place for both your sign-in logs and audit logs for Azure AD by exporting into Azure Monitor or a SIEM tool. Microsoft Azure services and features provide you with configurable security aud Users can be tricked into navigating to a compromised web site or apps that will gain access to their profile information and user data, such as their email. A malicious actor can use the consented permissions it received to encrypt their mailbox content and demand a ransom to regain your mailbox data. [Administrators should review and audit](/office365/securitycompliance/detect-and-remediate-illicit-consent-grants) the permissions given by users. In addition to auditing the permissions given by users, you can [locate risky or unwanted OAuth applications](/cloud-app-security/investigate-risky-oauth) in premium environments. -## Step 5 - Enable end-user self-service +## Step 5: Enable end-user self-service As much as possible you'll want to balance security with productivity. Approaching your journey with the mindset that you're setting a foundation for security, you can remove friction from your organization by empowering your users while remaining vigilant and reducing your operational overheads. |
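The smart lockout behavior described in Step 1 above can be pictured as failure counters that distinguish familiar sign-in sources from unfamiliar ones. A toy model with invented thresholds and a binary familiarity signal (the actual service uses far richer signals and different limits):

```python
# Toy model of "smart" lockout: failures from unfamiliar locations lock out
# the attacker's path while the legitimate user's familiar path keeps working.
# The threshold and the familiarity flag are invented for illustration.
from collections import defaultdict

THRESHOLD = 10  # hypothetical failure limit per (account, familiarity) bucket

class SmartLockout:
    def __init__(self):
        self.failures = defaultdict(int)

    def record_failure(self, account: str, familiar_location: bool) -> None:
        self.failures[(account, familiar_location)] += 1

    def is_locked(self, account: str, familiar_location: bool) -> bool:
        return self.failures[(account, familiar_location)] >= THRESHOLD

lock = SmartLockout()
for _ in range(10):                    # password spray from an unknown network
    lock.record_failure("alice", familiar_location=False)
print(lock.is_locked("alice", False))  # True: the unfamiliar path is blocked
print(lock.is_locked("alice", True))   # False: Alice still signs in from home
```

The key design point is that the lockout state is keyed by more than the account name, so an attacker's failures don't deny service to the real user.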
security | Technical Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md | Security benefits of Azure Active Directory (Azure AD) include the ability to: - Provision secure remote access to on-premises web applications through Azure AD Application Proxy. -The [Azure Active Directory portal](https://aad.portal.azure.com/) is available as part of the Azure portal. From this dashboard, you can get an overview of the state of your organization, and easily manage the directory, users, or application access. - ![Azure Active Directory](./media/technical-capabilities/azure-security-technical-capabilities-fig2.png) The following are core Azure identity management capabilities: Not only do users not have to manage multiple sets of usernames and passwords, a Security monitoring and alerts and machine learning-based reports that identify inconsistent access patterns can help you protect your business. You can use Azure Active Directory's access and usage reports to gain visibility into the integrity and security of your organization's directory. With this information, a directory admin can better determine where possible security risks may lie so that they can adequately plan to mitigate those risks. -In the Azure portal or through the [Azure Active Directory portal](https://aad.portal.azure.com/), [reports](../../active-directory/reports-monitoring/overview-reports.md) are categorized in the following ways: +In the [Azure portal](https://portal.azure.com), [reports](../../active-directory/reports-monitoring/overview-reports.md) are categorized in the following ways: - Anomaly reports – contain sign-in events that we found to be anomalous. Our goal is to make you aware of such activity and enable you to decide whether an event is suspicious. |
sentinel | Connect Azure Functions Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-azure-functions-template.md | Make sure that you have the following permissions and credentials before using A > - Some data connectors depend on a parser based on a [Kusto Function](/azure/data-explorer/kusto/query/functions/user-defined-functions) to work as expected. See the section for your service in the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page for links to instructions to create the Kusto function and alias. -### STEP 1 - Get your source system's API credentials +### Step 1: Get your source system's API credentials Follow your source system's instructions to get its **API credentials / authorization keys / tokens**. Copy and paste them into a text file for later. You can find details on the exact credentials you'll need, and links to your product's instructions for finding or creating them, on the data connector page in the portal and in the section for your service in the [Microsoft Sentinel data connectors reference](data-connectors-reference.md) page. You may also need to configure logging or other settings on your source system. You'll find the relevant instructions together with those in the preceding paragraph.-### STEP 2 - Deploy the connector and the associated Azure Function App +### Step 2: Deploy the connector and the associated Azure Function App #### Choose a deployment option This method provides an automated deployment of your Azure Function-based connec 1. The **Custom deployment** screen will appear. - Select a **subscription**, **resource group**, and **region** in which to deploy your Function App. - - Enter your API credentials / authorization keys / tokens that you saved in [Step 1](#step-1get-your-source-systems-api-credentials) above. 
+ - Enter your API credentials / authorization keys / tokens that you saved in [Step 1](#step-1-get-your-source-systems-api-credentials) above. - Enter your Microsoft Sentinel **Workspace ID** and **Workspace Key** (primary key) that you copied and put aside. |
sentinel | Create Nrt Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md | You create NRT rules the same way you create regular [scheduled-query analytics The configuration of NRT rules is in most ways the same as that of scheduled analytics rules. - - You can refer to [**watchlists**](watchlists.md) in your query logic. + - You can refer to multiple tables and [**watchlists**](watchlists.md) in your query logic. - You can use all of the alert enrichment methods: [**entity mapping**](map-data-fields-to-entities.md), [**custom details**](surface-custom-details-in-alerts.md), and [**alert details**](customize-alert-details.md). You create NRT rules the same way you create regular [scheduled-query analytics In addition, the query itself has the following requirements: - - The query itself can refer to only one table, and cannot contain unions or joins. - - You can't run the query across workspaces. - Due to the size limits of the alerts, your query should make use of `project` statements to include only the necessary fields from your table. Otherwise, the information you want to surface could end up being truncated. |
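The `project` guidance above boils down to trimming each event to the fields the alert actually surfaces so the payload stays under the size limit. A small Python analogue of that projection step, with invented field names:

```python
# Sketch of "project only what you need": trim each event record to the
# fields the alert surfaces, analogous to a KQL `project` statement.
# Field names are illustrative, not from a real Log Analytics table schema.
NEEDED = ("TimeGenerated", "Account", "IPAddress")

def project(events, fields=NEEDED):
    """Keep only the listed fields from each event record."""
    return [{k: e[k] for k in fields if k in e} for e in events]

raw = [{"TimeGenerated": "2023-08-01T12:00:00Z", "Account": "alice",
        "IPAddress": "203.0.113.7", "RawPacketDump": "x" * 50_000}]
print(project(raw))  # the bulky RawPacketDump field is dropped
```

Dropping bulky fields before the alert is emitted is what prevents the truncation the article warns about.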
sentinel | Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/customer-managed-keys.md | To provision CMK, follow these steps:  1. Onboard the workspace to Microsoft Sentinel via the [Onboarding API](/rest/api/securityinsights/preview/sentinel-onboarding-states/create). 1. Contact the Microsoft Sentinel Product group to confirm onboarding. -### STEP 1: Create an Azure Key Vault and generate or import a key +### Step 1: Create an Azure Key Vault and generate or import a key 1. [Create Azure Key Vault resource](/azure-stack/user/azure-stack-key-vault-manage-portal), then generate or import a key to be used for data encryption. To provision CMK, follow these steps:  - Turn on [Purge protection](../key-vault/general/soft-delete-overview.md#purge-protection) to guard against forced deletion of the secret/vault even after soft delete. -### STEP 2: Enable CMK on your Log Analytics workspace +### Step 2: Enable CMK on your Log Analytics workspace Follow the instructions in [Azure Monitor customer-managed key configuration](../azure-monitor/logs/customer-managed-keys.md) in order to create a CMK workspace that is used as the Microsoft Sentinel workspace in the following steps. -### STEP 3: Register the Azure Cosmos DB Resource Provider +### Step 3: Register the Azure Cosmos DB Resource Provider Microsoft Sentinel works with Azure Cosmos DB as an additional storage resource. Make sure to register the Azure Cosmos DB Resource Provider. Follow the instructions to [Register the Azure Cosmos DB Resource Provider](../cosmos-db/how-to-setup-cmk.md#register-resource-provider) for your Azure subscription. -### STEP 4: Add an access policy to your Azure Key Vault instance +### Step 4: Add an access policy to your Azure Key Vault instance Add an access policy that allows your Azure Cosmos DB to access the Azure Key Vault instance created in [**Step 1**](#step-1-create-an-azure-key-vault-and-generate-or-import-a-key). 
Follow the instructions here to [add an access policy to your Azure Key Vault in :::image type="content" source="../cosmos-db/media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" lightbox="../cosmos-db/media/how-to-setup-customer-managed-keys/add-access-policy-principal.png" alt-text="Screenshot of the Select principal option on the Add access policy page."::: -### STEP 5: Onboard the workspace to Microsoft Sentinel via the onboarding API +### Step 5: Onboard the workspace to Microsoft Sentinel via the onboarding API Onboard the CMK enabled workspace to Microsoft Sentinel via the [onboarding API](/rest/api/securityinsights/preview/sentinel-onboarding-states/create) using the `customerManagedKey` property as `true`. For more context on the onboarding API, see [this document](https://github.com/Azure/Azure-Sentinel/raw/master/docs/Azure%20Sentinel%20management.docx) in the Microsoft Sentinel GitHub repo. PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{ } ``` -### STEP 6: Contact the Microsoft Sentinel Product group to confirm onboarding +### Step 6: Contact the Microsoft Sentinel Product group to confirm onboarding Lastly, you must confirm the onboarding status of your CMK enabled workspace by contacting the [Microsoft Sentinel Product Group](mailto:onboardrecoeng@microsoft.com). |
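Step 5 of the CMK provisioning above sets the `customerManagedKey` property to `true` in the onboarding request. A sketch of just the request body, with the nesting under `properties` treated as an assumption to confirm against the onboarding API reference:

```python
# Sketch of the onboarding request body from Step 5. The article states the
# call sets `customerManagedKey` to true; the nesting under "properties" is
# an assumption here, so verify the envelope in the onboarding API reference.
import json

body = {"properties": {"customerManagedKey": True}}
print(json.dumps(body))  # {"properties": {"customerManagedKey": true}}
```

Serializing through `json.dumps` also shows the Python `True` becoming the lowercase JSON `true` the API expects.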
sentinel | Amazon Web Services S3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/amazon-web-services-s3.md | This connector allows you to ingest AWS service logs, collected in AWS S3 bucket | Connector attribute | Description | | | | | **Log Analytics table(s)** | AWSGuardDuty<br/> AWSVPCFlow<br/> AWSCloudTrail<br/> |-| **Data collection rules support** | Not currently supported | +| **Data collection rules support** | [Supported as listed](/azure/azure-monitor/logs/tables-feature-support) | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | |
sentinel | Fortinet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/fortinet.md | Install the Microsoft Monitoring Agent on your Linux machine and configure the m > 2. You must have elevated permissions (sudo) on your machine. Run the following command to install and apply the CEF collector:-+ ``` sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py && sudo python cef_installer.py {0} {1}-+ ``` 2. Forward Fortinet logs to Syslog agent Set your Fortinet to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine's IP address. If the logs are not received, run the following connectivity validation script: >2. You must have elevated permissions (sudo) on your machine Run the following command to validate your connectivity:-+ ``` sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py && sudo python cef_troubleshoot.py {0}-+ ``` 4. Secure your machine Make sure to configure the machine's security according to your organization's security policy. |
sentinel | Deploy Dynamics 365 Finance Operations Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/dynamics-365/deploy-dynamics-365-finance-operations-solution.md | To enable data collection, you create a new role in Finance and Operations with To collect the managed identity application ID from Azure Active Directory: -1. In the [Azure Active Directory portal](https://aad.portal.azure.com/), select **Enterprise Applications**. -+1. Sign in to the [Azure portal](https://portal.azure.com). +1. Browse to **Azure Active Directory** > **Enterprise applications**. 1. Change the application type filter to **Managed Identities**.- 1. Search for and open the Function App created in the [previous step](#deploy-the-azure-resource-manager-arm-template). Copy the Application ID and save it for later use. ### Create a role for data collection in Finance and Operations |
sentinel | Monitor Analytics Rule Integrity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-analytics-rule-integrity.md | For either **Scheduled analytics rule run** or **NRT analytics rule run**, you m | An internal server error occurred while running the query. | | | The query execution timed out. | | | A table referenced in the query was not found. | Verify that the relevant data source is connected. |- | A semantic error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). | + | A semantic error occurred while running the query. | Try resetting the analytics rule by editing and saving it (without changing any settings). | | A function called by the query is named with a reserved word. | Remove or rename the function. |- | A syntax error occurred while running the query. | Try resetting the alert rule by editing and saving it (without changing any settings). | + | A syntax error occurred while running the query. | Try resetting the analytics rule by editing and saving it (without changing any settings). | | The workspace does not exist. | |- | This query was found to use too many system resources and was prevented from running. | | + | This query was found to use too many system resources and was prevented from running. | Review and tune the analytics rule. Consult our Kusto Query Language [overview](kusto-overview.md) and [best practices](/azure/data-explorer/kusto/query/best-practices?toc=%2Fazure%2Fsentinel%2FTOC.json&bc=%2Fazure%2Fsentinel%2Fbreadcrumb%2Ftoc.json) documentation. | | A function called by the query was not found. | Verify the existence in your workspace of all functions called by the query. | | The workspace used in the query was not found. | Verify that all workspaces in the query exist. |- | You don't have permissions to run this query. | Try resetting the alert rule by editing and saving it (without changing any settings). 
| + | You don't have permissions to run this query. | Try resetting the analytics rule by editing and saving it (without changing any settings). | | You don't have access permissions to one or more of the resources in the query. | | | The query referred to a storage path that was not found. | | | The query was denied access to a storage path. | | |
sentinel | Near Real Time Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md | The following limitations currently govern the use of NRT rules: (Since the NRT rule type is supposed to approximate **real-time** data ingestion, it doesn't afford you any advantage to use NRT rules on log sources with significant ingestion delay, even if it's far less than 12 hours.) -1. As this type of rule is new, its syntax is currently limited but will gradually evolve. Therefore, at this time the following restrictions are in effect: -- 1. The query defined in an NRT rule can reference **only one table**. Queries can, however, refer to multiple watchlists. -- 1. You cannot use unions or joins. +1. The syntax for this type of rule is gradually evolving. At this time the following limitations remain in effect: 1. Because this rule type is in near real time, we have reduced the built-in delay to a minimum (two minutes). The following limitations currently govern the use of NRT rules: 1. Event grouping is now configurable to a limited degree. NRT rules can produce up to 30 single-event alerts. A rule with a query that results in more than 30 events will produce alerts for the first 29, then a 30th alert that summarizes all the applicable events. + 1. Queries defined in an NRT rule can now reference **more than one table**. + ## Next steps In this document, you learned how near-real-time (NRT) analytics rules work in Microsoft Sentinel. |
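The event-grouping limitation described above (up to 30 single-event alerts, with the first 29 events becoming individual alerts and a 30th alert summarizing all applicable events when a query returns more) can be sketched as:

```python
# Sketch of the NRT alert-grouping behavior described in the article:
# at most 30 alerts; past the cap, 29 single-event alerts plus one summary.

def group_alerts(events, cap=30):
    """Return (kind, payload) tuples mirroring the documented grouping."""
    if len(events) <= cap:
        return [("single", e) for e in events]
    alerts = [("single", e) for e in events[:cap - 1]]
    alerts.append(("summary", events))  # 30th alert covers all the events
    return alerts

alerts = group_alerts(list(range(45)))
print(len(alerts))      # 30
print(alerts[-1][0])    # summary
```

Note the summary alert carries every applicable event, not just the overflow, matching the wording "a 30th alert that summarizes all the applicable events."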
sentinel | Sentinel Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md | This article lists the most common service limits you might encounter as you use [!INCLUDE [sentinel-service-limits](includes/sentinel-limits-workbooks.md)] +## Workspace manager limits ++ ## Next steps - [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md) |
sentinel | Threat Intelligence Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md | To connect to Threat Intelligence Platform (TIP) feeds, follow the instructions ### MISP Open Source Threat Intelligence Platform -- For a sample script that provides clients with MISP instances to migrate threat indicators to the Microsoft Graph Security API, see the [MISP to Microsoft Graph Security Script](https://github.com/microsoftgraph/security-api-solutions/tree/master/Samples/MISP).+- Push threat indicators from MISP to Microsoft Sentinel using the TI upload indicators API with [MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/). +- Azure Marketplace link for [MISP2Sentinel](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoftsentinelcommunity.azure-sentinel-solution-misp2sentinel?tab=Overview). - [Learn more about the MISP Project](https://www.misp-project.org/). ### Palo Alto Networks MineMeld |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | See these [important announcements](#announcements) about recent changes to feat [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] +## August 2023 ++- [Updated MISP2Sentinel solution utilizes the new upload indicators API.](#updated-misp2sentinel-solution) ++### Updated MISP2Sentinel solution +The open source threat intelligence sharing platform, MISP, has an updated solution to push indicators to Microsoft Sentinel. This notable solution utilizes the new [upload indicators API](#connect-threat-intelligence-with-the-upload-indicators-api) to take advantage of workspace granularity and align the MISP ingested TI to STIX-based properties. ++Learn more about the implementation details from the [MISP blog entry for MISP2Sentinel](https://www.misp-project.org/2023/08/26/MISP-Sentinel-UploadIndicatorsAPI.html/). + ## July 2023 - [Higher limits for entities in alerts and entity mappings in analytics rules](#higher-limits-for-entities-in-alerts-and-entity-mappings-in-analytics-rules) |
sentinel | Workspace Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/workspace-manager.md | Workspace manager groups allow you to organize workspaces together based on busi ## Publish the Group definition At this point, the content items selected haven't been published to the member workspace(s) yet. +> [!NOTE] +> The publish action will fail if the [maximum publish operations](#known-limitations) are exceeded. +> Consider splitting up member workspaces into additional groups if you approach this limit. + 1. Select the group > **Publish content**. :::image type="content" source="media/workspace-manager/publish-group.png" alt-text="Screenshot shows the group publish window."::: Common reasons for failure include: - A member workspace has been deleted. ### Known limitations+- The maximum published operations per group is 2000. *Published operations* = (*member workspaces*) * (*content items*).<br>For example, if you have 10 member workspaces in a group and you publish 20 content items in that group,<br>*published operations* = *10* * *20* = *200*. - Playbooks attributed or attached to analytics and automation rules aren't currently supported. - Workbooks stored in bring-your-own-storage aren't currently supported. - Workspace manager only manages content items published from the central workspace. It doesn't manage content created locally from member workspace(s). |
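The publish-operations arithmetic above is easy to check before publishing. A sketch reproducing the article's example (10 member workspaces × 20 content items = 200 operations, against the 2000 cap):

```python
# Workspace manager limit from the article:
# published operations = member workspaces * content items, capped at 2000.

MAX_PUBLISH_OPERATIONS = 2000

def publish_operations(member_workspaces: int, content_items: int) -> int:
    return member_workspaces * content_items

def within_limit(member_workspaces: int, content_items: int) -> bool:
    return publish_operations(member_workspaces, content_items) <= MAX_PUBLISH_OPERATIONS

print(publish_operations(10, 20))  # 200, matching the article's example
print(within_limit(10, 20))        # True
print(within_limit(50, 50))        # False: 2500 exceeds the cap
```

When the product exceeds the cap, the article's remedy applies: split the member workspaces into additional groups so each group's product stays at or under 2000.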
service-bus-messaging | Deprecate Service Bus Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/deprecate-service-bus-management.md | For more information on Service Manager and Resource Manager APIs for Azure Serv - [Azure Service Bus](/rest/api/servicebus/) - [Azure Event Hubs](/rest/api/eventhub/)-- [Azure Relay](/rest/api/relay/)+- [Azure Relay](/rest/api/relay/controlplane-preview/) ## Service Manager REST API - Resource Manager REST API | Service Manager APIs (Deprecated) | Resource Manager - Service Bus API | Resource Manager - Event Hubs API | Resource Manager - Relay API | | | -- | -- | -- | -| **Namespaces-GetNamespaceAsync** <br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/> ```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) | -| **ConnectionDetails-GetConnectionDetails**<br/>Service Bus/Event Hub/Relay GetConnectionDetals<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/ConnectionDetails``` | [listkeys](/rest/api/servicebus/stable/namespaces-authorization-rules/list-keys) | [listkeys](/rest/api/eventhub/stable/authorization-rules-event-hubs/list-keys) | [listkeys](/rest/api/relay/namespaces/listkeys) | -| **Topics-GetTopicsAsync**<br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics? 
$skip={skip}&$top={top}``` | [list](/rest/api/servicebus/stable/topics/listbynamespace) | | | -| **Queues-GetQueueAsync** <br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/queues/{queueName}``` | [get](/rest/api/servicebus/stable/queues/get) | | | -| **Relays-GetRelaysAsync**<br/>[Get Relays](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/relays? $skip={skip}&$top={top}```| | | [list](/rest/api/relay/wcfrelays/listbynamespace) | -| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay GetNamespaceAuthRule<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/authorizationrules?``` | [getauthorizationrule](/rest/api/servicebus/stable/namespaces-authorization-rules/get-authorization-rule) | [getauthorizationrule](/rest/api/eventhub/stable/authorization-rules-namespaces/get-authorization-rule) | [getauthorizationrule](/rest/api/relay/namespaces/getauthorizationrule) | -| **Namespaces-DeleteNamespaceAsync**<br/>[Service Bus Delete Namespace](/rest/api/servicebus/delete-namespace)<br/>[Event Hubs Delete Namespace](/rest/api/eventhub/delete-event-hub)<br/>[Relays Delete Namespace](/rest/api/servicebus/delete-namespace)<br/> ```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [delete](/rest/api/servicebus/stable/namespaces/delete) | [delete](/rest/api/eventhub/stable/namespaces/delete) | [delete](/rest/api/relay/namespaces/delete) | -| **MessagingSKUPlan-GetPlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [get](/rest/api/servicebus/stable/namespaces/get) | 
[get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) | -| **MessagingSKUPlan-UpdatePlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) | -| **NamespaceAuthorizationRules-UpdateNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdateauthorizationrule](/rest/api/eventhub/stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/namespaces/createorupdateauthorizationrule) | +| **Namespaces-GetNamespaceAsync** <br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/> ```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/controlplane-stable/namespaces/get) | [get](/rest/api/eventhub/controlplane-stable/namespaces/get) | [get](/rest/api/relay/controlplane-preview/namespaces/get) | +| **ConnectionDetails-GetConnectionDetails**<br/>Service Bus/Event Hub/Relay GetConnectionDetails<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/ConnectionDetails``` | [listkeys](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-keys) |
[listkeys](/rest/api/eventhub/controlplane-stable/authorization-rules-event-hubs/list-keys) | [listkeys](/rest/api/relay/controlplane-preview/namespaces/listkeys) | +| **Topics-GetTopicsAsync**<br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics? $skip={skip}&$top={top}``` | [list](/rest/api/servicebus/controlplane-stable/topics/listbynamespace) | | | +| **Queues-GetQueueAsync** <br/>Service Bus<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/queues/{queueName}``` | [get](/rest/api/servicebus/controlplane-stable/queues/get) | | | +| **Relays-GetRelaysAsync**<br/>[Get Relays](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/relays? $skip={skip}&$top={top}```| | | [list](/rest/api/relay/controlplane-preview/wcfrelays/listbynamespace) | +| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay GetNamespaceAuthRule<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/authorizationrules?``` | [getauthorizationrule](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/get-authorization-rule) | [getauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-namespaces/get-authorization-rule) | [getauthorizationrule](/rest/api/relay/controlplane-preview/namespaces/getauthorizationrule) | +| **Namespaces-DeleteNamespaceAsync**<br/>[Service Bus Delete Namespace](/rest/api/servicebus/delete-namespace)<br/>[Event Hubs Delete Namespace](/rest/api/eventhub/delete-event-hub)<br/>[Relays Delete Namespace](/rest/api/servicebus/delete-namespace)<br/> ```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | 
[delete](/rest/api/servicebus/controlplane-stable/namespaces/delete) | [delete](/rest/api/eventhub/controlplane-stable/namespaces/delete) | [delete](/rest/api/relay/controlplane-preview/namespaces/delete) | +| **MessagingSKUPlan-GetPlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [get](/rest/api/servicebus/controlplane-stable/namespaces/get) | [get](/rest/api/eventhub/controlplane-stable/namespaces/get) | [get](/rest/api/relay/controlplane-preview/namespaces/get) | +| **MessagingSKUPlan-UpdatePlanAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/MessagingPlan``` | [createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/controlplane-stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/controlplane-preview/namespaces/createorupdate) | +| **NamespaceAuthorizationRules-UpdateNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay Get Namespace<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/createorupdate) | [createorupdateauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/controlplane-preview/namespaces/createorupdateauthorizationrule) | | **NamespaceAuthorizationRules-CreateNamespaceAuthorizationRuleAsync**<br/> -Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` 
|[createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdateauthorizationrule](/rest/api/eventhub/stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/namespaces/createorupdateauthorizationrule) | -| **NamespaceProperties-GetNamespacePropertiesAsync**<br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/stable/namespaces/get) | [get](/rest/api/eventhub/stable/namespaces/get) | [get](/rest/api/relay/namespaces/get) | +Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` |[createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/createorupdate) | [createorupdateauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-event-hubs/create-or-update-authorization-rule) | [createorupdateauthorizationrule](/rest/api/relay/controlplane-preview/namespaces/createorupdateauthorizationrule) | +| **NamespaceProperties-GetNamespacePropertiesAsync**<br/>[Service Bus Get Namespace](/rest/api/servicebus/get-namespace)<br/>[Event Hubs Get Namespace](/rest/api/eventhub/get-event-hub)<br/>[Relay Get Namespace](/rest/api/servicebus/get-relays)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [get](/rest/api/servicebus/controlplane-stable/namespaces/get) | [get](/rest/api/eventhub/controlplane-stable/namespaces/get) | [get](/rest/api/relay/controlplane-preview/namespaces/get) | | **RegionCodes-GetRegionCodesAsync**<br/>Service Bus/EventHub/Relay Get Namespace<br/>```GET 
https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | | | | -| **NamespaceProperties-UpdateNamespacePropertyAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Regions/``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) | -| **EventHubsCrud-ListEventHubsAsync**<br/>[List Event Hubs](/rest/api/eventhub/list-event-hubs)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs?$skip={skip}&$top={top}``` | | [list](/rest/api/eventhub/preview/event-hubs/list-by-namespace) | | +| **NamespaceProperties-UpdateNamespacePropertyAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Regions/``` | [createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/controlplane-stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/controlplane-preview/namespaces/createorupdate) | +| **EventHubsCrud-ListEventHubsAsync**<br/>[List Event Hubs](/rest/api/eventhub/list-event-hubs)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs?$skip={skip}&$top={top}``` | | [list](/rest/api/eventhub/controlplane-preview/event-hubs/list-by-namespace) | | | **EventHubsCrud-GetEventHubAsync**<br/>[Get Event Hubs](/rest/api/eventhub/get-event-hub)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/eventhubs/{eventHubPath}``` | | [get](/rest/api/eventhub/get-event-hub) | | -| **NamespaceAuthorizationRules-DeleteNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event 
Hub/Relay<br/>```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [deleteauthorizationrule](/rest/api/servicebus/stable/namespaces-authorization-rules/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/eventhub/stable/authorization-rules-namespaces/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/relay/namespaces/deleteauthorizationrule) | -| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRulesAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules``` | [listauthorizationrules](/rest/api/servicebus/stable/namespaces-authorization-rules/list-authorization-rules) | [listauthorizationrules](/rest/api/eventhub/stable/authorization-rules-namespaces/list-authorization-rules) | [listauthorizationrules](/rest/api/relay/namespaces/listauthorizationrules) | -| **NamespaceAvailability-IsNamespaceAvailable**<br/>[Service Bus Namespace Availability](/rest/api/servicebus/check-namespace-availability)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/CheckNamespaceAvailability/?namespace=<namespaceValue>``` | [checknameavailability](/rest/api/servicebus/stable/namespaces-check-name-availability/check-name-availability) | [checknameavailability](/rest/api/eventhub/stable/check-name-availability-namespaces/check-name-availability) | [checknameavailability](/rest/api/relay/namespaces/checknameavailability) | -| **Namespaces-CreateOrUpdateNamespaceAsync**<br/>Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [createorupdate](/rest/api/servicebus/stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/namespaces/createorupdate) 
| -| **Topics-GetTopicAsync**<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics/{topicPath}``` | [get](/rest/api/servicebus/stable/topics/get) | | | +| **NamespaceAuthorizationRules-DeleteNamespaceAuthorizationRuleAsync**<br/>Service Bus/Event Hub/Relay<br/>```DELETE https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules/{rule name}``` | [deleteauthorizationrule](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/eventhub/controlplane-stable/authorization-rules-namespaces/delete-authorization-rule) | [deleteauthorizationrule](/rest/api/relay/controlplane-preview/namespaces/deleteauthorizationrule) | +| **NamespaceAuthorizationRules-GetNamespaceAuthorizationRulesAsync**<br/>Service Bus/EventHub/Relay<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/AuthorizationRules``` | [listauthorizationrules](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-authorization-rules) | [listauthorizationrules](/rest/api/eventhub/controlplane-stable/authorization-rules-namespaces/list-authorization-rules) | [listauthorizationrules](/rest/api/relay/controlplane-preview/namespaces/listauthorizationrules) | +| **NamespaceAvailability-IsNamespaceAvailable**<br/>[Service Bus Namespace Availability](/rest/api/servicebus/check-namespace-availability)<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/CheckNamespaceAvailability/?namespace=<namespaceValue>``` | [checknameavailability](/rest/api/servicebus/controlplane-stable/namespaces-check-name-availability/check-name-availability) | [checknameavailability](/rest/api/eventhub/controlplane-stable/check-name-availability-namespaces/check-name-availability) | 
[checknameavailability](/rest/api/relay/controlplane-preview/namespaces/checknameavailability) | +| **Namespaces-CreateOrUpdateNamespaceAsync**<br/>Service Bus/Event Hub/Relay<br/>```PUT https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}``` | [createorupdate](/rest/api/servicebus/controlplane-stable/namespaces/createorupdate) | [createorupdate](/rest/api/eventhub/controlplane-stable/namespaces/createorupdate) | [createorupdate](/rest/api/relay/controlplane-preview/namespaces/createorupdate) | +| **Topics-GetTopicAsync**<br/>```GET https://management.core.windows.net/{subscription ID}/services/ServiceBus/Namespaces/{namespace name}/topics/{topicPath}``` | [get](/rest/api/servicebus/controlplane-stable/topics/get) | | | ## Service Manager PowerShell - Resource Manager PowerShell | Service Manager PowerShell command (Deprecated) | New Resource Manager Commands | Newer Resource Manager Command | See the following documentation: - Latest REST API documentation - [Azure Service Bus](/rest/api/servicebus/) - [Azure Event Hubs](/rest/api/eventhub/)- - [Azure Relay](/rest/api/relay/) + - [Azure Relay](/rest/api/relay/controlplane-preview/) - Latest PowerShell documentation - [Azure Service Bus](/powershell/module/azurerm.servicebus/#service_bus) - [Azure Event Hubs](/powershell/module/azurerm.eventhub/#event_hub) |
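The mapping table above boils down to a change in request URL shape: the deprecated Service Manager calls go to `management.core.windows.net`, while the Resource Manager equivalents go through `management.azure.com` with a resource group and an `api-version` query parameter. A minimal sketch of the two URL shapes, where the subscription, resource group, and namespace values are placeholders and the `api-version` shown is an assumption (check the linked REST references for the current one):

```python
# Sketch: deprecated Service Manager URL vs. Resource Manager URL for a
# "get namespace" call, following the table above. All values are
# placeholders; the api-version is an assumption, not taken from the doc.
def service_manager_get_namespace(subscription_id: str, namespace: str) -> str:
    """Deprecated Service Manager shape (management.core.windows.net)."""
    return (f"https://management.core.windows.net/{subscription_id}"
            f"/services/ServiceBus/Namespaces/{namespace}")

def resource_manager_get_namespace(subscription_id: str, resource_group: str,
                                   namespace: str,
                                   api_version: str = "2021-11-01") -> str:
    """Resource Manager shape (management.azure.com, resource-group scoped)."""
    return (f"https://management.azure.com/subscriptions/{subscription_id}"
            f"/resourceGroups/{resource_group}"
            f"/providers/Microsoft.ServiceBus/namespaces/{namespace}"
            f"?api-version={api_version}")

old = service_manager_get_namespace("00000000-0000-0000-0000-000000000000", "contosons")
new = resource_manager_get_namespace("00000000-0000-0000-0000-000000000000",
                                     "contoso-rg", "contosons")
print(old)
print(new)
```

The Resource Manager form also changes authentication (Azure AD tokens instead of management certificates), which is part of why the old endpoints are being retired.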
service-bus-messaging | Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/explorer.md | To use the Service Bus Explorer, navigate to the Service Bus namespace on which 1. If you're looking to run operations against a queue, select **Queues** from the navigation menu. If you're looking to run operations against a topic (and its related subscriptions), select **Topics**. :::image type="content" source="./media/service-bus-explorer/queue-topics-left-navigation.png" alt-text="Screenshot of left side navigation, where entity can be selected." lightbox="./media/service-bus-explorer/queue-topics-left-navigation.png":::- 1. After selecting **Queues** or **Topics**, select the specific queue or topic.++ :::image type="content" source="./media/service-bus-explorer/select-specific-queue.png" alt-text="Screenshot of the Queues page with a specific queue selected." lightbox="./media/service-bus-explorer/select-specific-queue.png"::: 1. Select the **Service Bus Explorer** from the left navigation menu. :::image type="content" source="./media/service-bus-explorer/left-navigation-menu-selected.png" alt-text="Screenshot of queue page where Service Bus Explorer can be selected." lightbox="./media/service-bus-explorer/left-navigation-menu-selected.png"::: After peeking or receiving a message, we can resend it, which will send a copy o When working with Service Bus Explorer, it's possible to use either **Access Key** or **Azure Active Directory** authentication. 1. Select the **Settings** button.++ :::image type="content" source="./media/service-bus-explorer/select-settings.png" alt-text="Screenshot indicating the Settings button in Service Bus Explorer." lightbox="./media/service-bus-explorer/select-settings.png"::: 1. Choose the desired authentication method, and select the **Save** button.
:::image type="content" source="./media/service-bus-explorer/queue-select-authentication-type.png" alt-text="Screenshot indicating the Settings button and a page showing the different authentication types." lightbox="./media/service-bus-explorer/queue-select-authentication-type.png"::: |
service-bus-messaging | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md | Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
service-bus-messaging | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
service-bus-messaging | Service Bus Amqp Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-overview.md | Title: Overview of AMQP 1.0 in Azure Service Bus description: Learn how Azure Service Bus supports Advanced Message Queuing Protocol (AMQP), an open standard protocol. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Advanced Message Queueing Protocol (AMQP) 1.0 support in Service Bus |
service-bus-messaging | Service Bus Amqp Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-troubleshoot.md | Title: Troubleshoot AMQP errors in Azure Service Bus | Microsoft Docs description: Provides a list of AMQP errors you may receive when using Azure Service Bus, and the causes of those errors. Previously updated : 09/20/2021 Last updated : 08/16/2023 # AMQP errors in Azure Service Bus -This article provides some of the errors you receive when using AMQP with Azure Service Bus. They are all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link. +This article provides some of the errors you receive when using AMQP with Azure Service Bus. They're all standard behaviors of the service. You can avoid them by making send/receive calls on the connection/link, which automatically recreates the connection/link. ## Link is closed You see the following error when the AMQP connection and link are active but no calls (for example, send or receive) are made using the link for 10 minutes. So, the link is closed. The connection is still open. amqp:link:detach-forced:The link 'G2:7223832:user.tenant0.cud_00000000000-0000-0 ``` ## Connection is closed-You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link has not been created in 5 minutes. +You see the following error on the AMQP connection when all links in the connection have been closed because there was no activity (idle) and a new link hasn't been created in 5 minutes. ``` Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 300000 milliseconds and is closed by container 'LinkTracker'.
TrackingId:00000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T17:32:00', info=null} ``` -## Link is not created -You see this error when a new AMQP connection is created but a link is not created within 1 minute of the creation of the AMQP Connection. +## Link isn't created +You see this error when a new AMQP connection is created but a link isn't created within 1 minute of the creation of the AMQP Connection. ``` Error{condition=amqp:connection:forced, description='The connection was inactive for more than the allowed 60000 milliseconds and is closed by container 'LinkTracker'. TrackingId:0000000000000000000000000000000000000_G21, SystemTracker:gateway5, Timestamp:2019-03-06T18:41:51', info=null} |
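The three idle timeouts in the AMQP troubleshooting entries above (a link idle for 10 minutes is detached, a connection whose links are all closed for 300000 ms is closed, and a new connection with no link within 60000 ms is closed) can be summarized in a small sketch. The thresholds come from the error descriptions in the doc; the function itself is illustrative and not part of any SDK:

```python
# Illustrative summary of the Service Bus AMQP idle timeouts described above.
# Thresholds are taken from the documented error text; the classifier
# function is hypothetical.
LINK_IDLE_DETACH_SECONDS = 10 * 60       # idle link detached after 10 minutes
CONNECTION_IDLE_CLOSE_SECONDS = 5 * 60   # 300000 ms in the error text
FIRST_LINK_CREATION_SECONDS = 60         # 60000 ms in the error text

def expected_state(idle_link_s: float = 0,
                   idle_connection_s: float = 0,
                   new_connection_without_link_s: float = 0) -> str:
    """Classify what the service would do for the given idle durations."""
    if new_connection_without_link_s >= FIRST_LINK_CREATION_SECONDS:
        return "connection closed"   # no link created within 1 minute
    if idle_connection_s >= CONNECTION_IDLE_CLOSE_SECONDS:
        return "connection closed"   # all links closed, idle 5 minutes
    if idle_link_s >= LINK_IDLE_DETACH_SECONDS:
        return "link detached"       # connection stays open
    return "open"

print(expected_state())                          # open
print(expected_state(idle_link_s=601))           # link detached
print(expected_state(idle_connection_s=301))     # connection closed
```

As the article notes, periodic send/receive activity on the link avoids all three cases because the SDKs recreate the connection/link automatically.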
service-bus-messaging | Service Bus Geo Dr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-dr.md | If you try to create a pairing between a primary namespace with a private endpoi > [!NOTE] > When you try to pair the primary namespace with a private endpoint and the secondary namespace, the validation process only checks whether a private endpoint exists on the secondary namespace. It doesn't check whether the endpoint works or will work after failover. It's your responsibility to ensure that the secondary namespace with private endpoint will work as expected after failover. >-> To test that the private endpoint configurations are same, send a [Get queues](/rest/api/servicebus/stable/queues/get) request to the secondary namespace from outside the virtual network, and verify that you receive an error message from the service. +> To test that the private endpoint configurations are the same, send a [Get queues](/rest/api/servicebus/controlplane-stable/queues/get) request to the secondary namespace from outside the virtual network, and verify that you receive an error message from the service. ### Existing pairings If pairing between primary and secondary namespace already exists, private endpoint creation on the primary namespace will fail. To resolve, create a private endpoint on the secondary namespace first and then create one for the primary namespace. Azure Active Directory (Azure AD) role-based access control (RBAC) assignments t ## Next steps -- See the Geo-disaster recovery [REST API reference here](/rest/api/servicebus/stable/disasterrecoveryconfigs).+- See the Geo-disaster recovery [REST API reference here](/rest/api/servicebus/controlplane-stable/disaster-recovery-configs). - Run the Geo-disaster recovery [sample on GitHub](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/GeoDR/SBGeoDR2/SBGeoDR2).
- See the Geo-disaster recovery [sample that sends messages to an alias](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.ServiceBus.Messaging/GeoDR/TestGeoDR/ConsoleApp1). |
service-bus-messaging | Service Bus Ip Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-ip-filtering.md | From API version **2021-06-01-preview onwards**, the default value of the `defau The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet. -For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/preview/private-endpoint-connections/create-or-update). +For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/controlplane-preview/private-endpoint-connections/create-or-update). > [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings. |
service-bus-messaging | Service Bus Messaging Sql Filter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-sql-filter.md | Title: Azure Service Bus Subscription Rule SQL Filter syntax | Microsoft Docs description: This article provides details about SQL filter grammar. A SQL filter supports a subset of the SQL-92 standard. Previously updated : 05/31/2022 Last updated : 08/16/2023 # Subscription Rule SQL Filter Syntax -A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown below. +A *SQL filter* is one of the available filter types for Service Bus topic subscriptions. It's a text expression that leans on a subset of the SQL-92 standard. Filter expressions are used with the `sqlExpression` element of the 'sqlFilter' property of a Service Bus `Rule` in an [Azure Resource Manager template](service-bus-resource-manager-namespace-topic-with-rule.md), or the Azure CLI `az servicebus topic subscription rule create` command's [`--filter-sql-expression`](/cli/azure/servicebus/topic/subscription/rule#az-servicebus-topic-subscription-rule-create) argument, and several SDK functions that allow managing subscription rules. The allowed expressions are shown in this section. 
Service Bus Premium also supports the [JMS SQL message selector syntax](https://docs.oracle.com/javaee/7/api/javax/jms/Message.html) through the JMS 2.0 API. Service Bus Premium also supports the [JMS SQL message selector syntax](https:// ## Remarks -An attempt to access a non-existent system property is an error, while an attempt to access a non-existent user property isn't an error. Instead, a non-existent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation. +An attempt to access a nonexistent system property is an error, while an attempt to access a nonexistent user property isn't an error. Instead, a nonexistent user property is internally evaluated as an unknown value. An unknown value is treated specially during operator evaluation. ## property_name Consider the following Sql Filter semantics: ### Property evaluation semantics -- An attempt to evaluate a non-existent system property throws a `FilterException` exception. +- An attempt to evaluate a nonexistent system property throws a `FilterException` exception. - A property that doesn't exist is internally evaluated as **unknown**. |
service-bus-messaging | Service Bus Migrate Standard Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md | Title: Migrate Azure Service Bus namespaces - standard to premium description: Guide to allow migration of existing Azure Service Bus standard namespaces to premium Previously updated : 06/27/2022 Last updated : 08/17/2023 # Migrate existing Azure Service Bus standard namespaces to the premium tier Previously, Azure Service Bus offered namespaces only on the standard tier. Name This article describes how to migrate existing standard tier namespaces to the premium tier. >[!WARNING]-> Migration is intended for Service Bus standard namespaces to be upgraded to the premium tier. The migration tool does not support downgrading. +> Migration is intended for Service Bus standard namespaces to be upgraded to the premium tier. The migration tool doesn't support downgrading. Some of the points to note: - This migration is meant to happen in place, meaning that existing sender and receiver applications **don't require any changes to code or configuration**. The existing connection string will automatically point to the new premium namespace.-- The **premium** namespace should have **no entities** in it for the migration to succeed.+- If you're using an existing **premium** namespace, it should have **no entities** in it for the migration to succeed. - All **entities** in the standard namespace are **copied** to the premium namespace during the migration process. - Migration supports **1,000 entities per messaging unit** on the premium tier. To identify how many messaging units you need, start with the number of entities that you have on your current standard namespace.
- You can't directly migrate from **basic tier** to **premium tier**, but you can do so indirectly by migrating from basic to standard first and then from the standard to premium in the next step.-- The role-based access control (RBAC) settings are not migrated, so you will need to add them manually after the migration. +- The role-based access control (RBAC) settings aren't migrated, so you'll need to add them manually after the migration. ## Migration steps Some conditions are associated with the migration process. Familiarize yourself with the following steps to reduce the possibility of errors. These steps outline the migration process, and the step-by-step details are listed in the sections that follow. -1. Create a new premium namespace. +1. Create a new premium namespace. You complete the next three steps using the following CLI or Azure portal instructions in this article. 1. Pair the standard and premium namespaces to each other. 1. Sync (copy-over) entities from the standard to the premium namespace. 1. Commit the migration. To migrate your Service Bus standard namespace to premium by using the Azure CLI 1. Create a new Service Bus premium namespace. You can reference the [Azure Resource Manager templates](service-bus-resource-manager-namespace.md) or [use the Azure portal](service-bus-quickstart-portal.md#create-a-namespace-in-the-azure-portal). Be sure to select **premium** for the **serviceBusSku** parameter. -1. Set the following environment variables to simplify the migration commands. +1. Set the following environment variables to simplify the migration commands. You can get the Azure Resource Manager ID for your premium namespace by navigating to the namespace in the Azure portal and copying the portion of the URL that looks like the following sample: `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/contosoresourcegroup/providers/Microsoft.ServiceBus/namespaces/contosopremiumnamespace`. 
``` resourceGroup = <resource group for the standard namespace> To migrate your Service Bus standard namespace to premium by using the Azure CLI Migration by using the Azure portal has the same logical flow as migrating by using the commands. Follow these steps to migrate by using the Azure portal. -1. On the **Navigation** menu in the left pane, select **Migrate to premium**. Click the **Get Started** button to continue to the next page. +1. On the **Navigation** menu in the left pane, select **Migrate to premium**. Select the **Get Started** button to continue to the next page. :::image type="content" source="./media/service-bus-standard-premium-migration/migrate-premium-page.png" alt-text="Image showing the Migrate to premium page."::: 1. You see the following **Setup Namespaces** page. Migration by using the Azure portal has the same logical flow as migrating by us ## Caveats -Some of the features provided by Azure Service Bus Standard tier are not supported by Azure Service Bus Premium tier. These are by design since the premium tier offers dedicated resources for predictable throughput and latency. +Some of the features provided by Azure Service Bus Standard tier aren't supported by Azure Service Bus Premium tier. These limitations are by design, since the premium tier offers dedicated resources for predictable throughput and latency. -Here is a list of features not supported by Premium and their mitigation - +Here's a list of features not supported by Premium and their mitigations: ### Express entities -Express entities that don't commit any message data to storage are not supported in the **Premium** tier. Dedicated resources provided significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system. +Express entities that don't commit any message data to storage aren't supported in the **Premium** tier. 
Dedicated resources provide significant throughput improvement while ensuring that data is persisted, as is expected from any enterprise messaging system. During migration, any of your express entities in your Standard namespace will be created on the Premium namespace as a non-express entity. -If you utilize Azure Resource Manager (ARM) templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors. +If you utilize Azure Resource Manager templates, please ensure that you remove the 'enableExpress' flag from the deployment configuration so that your automated workflows execute without errors. ### RBAC settings The role-based access control (RBAC) settings on the namespace aren't migrated to the premium namespace. You'll need to add them manually after the migration. The role-based access control (RBAC) settings on the namespace aren't migrated t After the migration is committed, the connection string that pointed to the standard namespace will point to the premium namespace. -The sender and receiver applications will disconnect from the standard Namespace and reconnect to the premium namespace automatically. +The sender and receiver applications will disconnect from the standard namespace and reconnect to the premium namespace automatically. -If your are using the ARM Id for configuration rather a connection string (e.g. as a destination for an Event Grid Subscription), then you need to update the ARM Id to be that of the Premium namespace. +If you are using the Azure Resource Manager ID for configuration rather than a connection string (for example, as a destination for an Event Grid subscription), then you need to update the Azure Resource Manager ID to be that of the premium namespace. ### What do I do after the standard to premium migration is complete? 
The standard to premium migration ensures that the entity metadata such as topics, subscriptions, and filters are copied from the standard namespace to the premium namespace. The message data that was committed to the standard namespace isn't copied from the standard namespace to the premium namespace. -The standard namespace may have some messages that were sent and committed while the migration was underway. Manually drain these messages from the standard Namespace and manually send them to the premium Namespace. To manually drain the messages, use a console app or a script that drains the standard namespace entities by using the Post Migration DNS name that you specified in the migration commands. Send these messages to the premium namespace so that they can be processed by the receivers. +The standard namespace may have some messages that were sent and committed while the migration was underway. Manually drain these messages from the standard namespace and manually send them to the premium namespace. To manually drain the messages, use a console app or a script that drains the standard namespace entities by using the post-migration DNS name that you specified in the migration commands. Send these messages to the premium namespace so that they can be processed by the receivers. After the messages have been drained, delete the standard namespace. >[!IMPORTANT]-> After the messages from the standard namespace have been drained, delete the standard namespace. This is important because the connection string that initially referred to the standard namespace now refers to the premium namespace. You won't need the standard Namespace anymore. Deleting the standard namespace that you migrated helps reduce later confusion. +> After the messages from the standard namespace have been drained, delete the standard namespace. This is important because the connection string that initially referred to the standard namespace now refers to the premium namespace. 
You won't need the standard namespace anymore. Deleting the standard namespace that you migrated helps reduce later confusion. ### How much downtime do I expect? During migration, the actual message data/payload isn't copied from the standard However, if you can migrate during a planned maintenance/housekeeping window, and you don't want to manually drain and send the messages, follow these steps: 1. Stop the sender applications. The receiver applications will process the messages that are currently in the standard namespace and will drain the queue.-1. After the queues and subscriptions in the standard Namespace are empty, follow the procedure that is described earlier to execute the migration from the standard to the premium namespace. +1. After the queues and subscriptions in the standard namespace are empty, follow the procedure that is described earlier to execute the migration from the standard to the premium namespace. 1. After the migration is complete, you can restart the sender applications. 1. The senders and receivers will now automatically connect with the premium namespace. |
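The migration row above notes that premium migration supports 1,000 entities per messaging unit, so the entity count on the standard namespace gives a lower bound on the messaging units to provision. The following sketch assumes a purchasable messaging-unit ladder of 1, 2, 4, 8, and 16 — an illustrative assumption, so verify the sizes actually available to your namespace:

```python
import math

# Hypothetical sizing helper based on the guidance above: 1,000 entities
# per messaging unit. The MESSAGING_UNIT_SIZES ladder is an assumption
# for illustration, not an authoritative list.
MESSAGING_UNIT_SIZES = [1, 2, 4, 8, 16]
ENTITIES_PER_MESSAGING_UNIT = 1000

def minimum_messaging_units(entity_count):
    # Round the raw requirement up, then pick the smallest ladder size
    # that covers it.
    needed = math.ceil(entity_count / ENTITIES_PER_MESSAGING_UNIT)
    for size in MESSAGING_UNIT_SIZES:
        if size >= needed:
            return size
    raise ValueError("Entity count exceeds a single premium namespace")

print(minimum_messaging_units(750))   # 1
print(minimum_messaging_units(2500))  # 4
```

For example, a standard namespace with 2,500 entities needs at least 3 messaging units, which rounds up to the next available size of 4.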
service-bus-messaging | Service Bus Prefetch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-prefetch.md | Title: Prefetch messages from Azure Service Bus description: Improve performance by prefetching messages from Azure Service Bus queues or subscriptions. Messages are readily available for local retrieval before the application requests them. Previously updated : 12/15/2022 Last updated : 08/29/2023 ms.devlang: csharp,java,javascript,python # Prefetch Azure Service Bus messages -When you enable the **Prefetch** feature for any of the official Service Bus clients, the receiver acquires more messages than what the application initially asked for, up to the specified prefetch count. As messages are returned to the application, the client acquires further messages in the background, to fill the prefetch buffer. +When you enable the **Prefetch** feature for any of the official Service Bus clients, the receiver acquires more messages than what the application initially asked for, up to the specified prefetch count. As messages are returned to the application, the client acquires more messages in the background, to fill the prefetch buffer. ## Enable Prefetch To enable the Prefetch feature, set the prefetch count of the queue or subscription client to a number greater than zero. Setting the value to zero turns off prefetch. # [.NET](#tab/dotnet)-If you're using the latest Azure.Messaging.ServiceBus library, you can set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects. 
+Set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects. -If you're using the older .NET client library for Service Bus (Microsoft.Azure.ServiceBus), you can set the prefetch count property on the [MessageReceiver](/dotnet/api/microsoft.servicebus.messaging.messagereceiver.prefetchcount), [QueueClient](/dotnet/api/microsoft.azure.servicebus.queueclient.prefetchcount#Microsoft_Azure_ServiceBus_QueueClient_PrefetchCount) or the [SubscriptionClient](/dotnet/api/microsoft.azure.servicebus.subscriptionclient.prefetchcount). - # [Java](#tab/java)-If you're using the latest azure-messaging-servicebus library, you can set the prefetch count property on the [ServiceBusReceiver](/dotnet/api/azure.messaging.servicebus.servicebusreceiver.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusReceiver_PrefetchCount) and [ServiceBusProcessor](/dotnet/api/azure.messaging.servicebus.servicebusprocessor.prefetchcount#Azure_Messaging_ServiceBus_ServiceBusProcessor_PrefetchCount) objects. +Set the prefetch count by calling the `prefetchCount` method on the `ServiceBusReceiverClientBuilder` or `ServiceBusProcessorClientBuilder` when you build the receiver or processor client. 
-If you're using the older Java client library for Service Bus (azure-servicebus), you can set the prefetch count property on the [MessageReceiver](/java/api/com.microsoft.azure.servicebus.imessagereceiver.setprefetchcount#com_microsoft_azure_servicebus_IMessageReceiver_setPrefetchCount_int_), [QueueClient](/java/api/com.microsoft.azure.servicebus.queueclient.setprefetchcount#com_microsoft_azure_servicebus_QueueClient_setPrefetchCount_int_) or the [SubscriptionClient](/java/api/com.microsoft.azure.servicebus.subscriptionclient.setprefetchcount#com_microsoft_azure_servicebus_SubscriptionClient_setPrefetchCount_int_). - # [Python](#tab/python) You can set **prefetch_count** on the [azure.servicebus.ServiceBusReceiver](/python/api/azure-servicebus/azure.servicebus.servicebusreceiver) or [azure.servicebus.aio.ServiceBusReceiver](/python/api/azure-servicebus/azure.servicebus.aio.servicebusreceiver). While messages are available in the prefetch buffer, any subsequent receive call ## Why is Prefetch not the default option? Prefetch speeds up the message flow by having a message readily available for local retrieval before the application asks for one. This throughput gain is the result of a trade-off that the application author must make explicitly. -With the [receive-and-delete](message-transfers-locks-settlement.md#receiveanddelete) mode, all messages that are acquired into the prefetch buffer are no longer available in the queue. The messages stay only in the in-memory prefetch buffer until they're received into the application. If the application ends before the messages are received into the application, those messages are irrecoverable (lost). +When you use the [receive and delete](message-transfers-locks-settlement.md#receiveanddelete) mode, all messages that are acquired into the prefetch buffer are no longer available in the queue. The messages stay only in the in-memory prefetch buffer until they're received into the application. 
If the application ends before the messages are received into the application, those messages are irrecoverable (lost). -In the [peek-lock](message-transfers-locks-settlement.md#peeklock) receive mode, messages fetched into the prefetch buffer are acquired into the buffer in a locked state. They have the timeout clock for the lock ticking. If the prefetch buffer is large, and processing takes so long that message locks expire while staying in the prefetch buffer or even while the application is processing the message, there might be some confusing events for the application to handle. +When you use the [peek lock](message-transfers-locks-settlement.md#peeklock) receive mode, messages fetched into the prefetch buffer are acquired into the buffer in a locked state. They have the timeout clock for the lock ticking. If the prefetch buffer is large, and processing takes so long that message locks expire while staying in the prefetch buffer or even while the application is processing the message, there might be some confusing events for the application to handle. The application might acquire a message with an expired or imminently expiring lock. If so, the application might process the message, but then find that it can't complete the message because of a lock expiration. The application can check the `LockedUntilUtc` property (which is subject to clock skew between the broker and local machine clock). -The application might acquire a message with an expired or imminently expiring lock. If so, the application might process the message, but then find that it can't complete the message because of a lock expiration. The application can check the `LockedUntilUtc` property (which is subject to clock skew between the broker and local machine clock). If the message lock has expired, the application must ignore the message, and shouldn't make any API call on the message. 
If the message isn't expired but expiration is imminent, the lock can be renewed and extended by another default lock period. +If the message lock has expired, the application must ignore the message, and shouldn't make any API call on the message. If the message isn't expired but expiration is imminent, the lock can be renewed and extended by another default lock period. If the lock silently expires in the prefetch buffer, the message is treated as abandoned and is again made available for retrieval from the queue. It might cause the message to be fetched into the prefetch buffer and placed at the end. If the prefetch buffer can't usually be worked through during the message expiration, messages are repeatedly prefetched but never effectively delivered in a usable (validly locked) state, and are eventually moved to the dead-letter queue once the maximum delivery count is exceeded. -If the lock silently expires in the prefetch buffer, the message is treated as abandoned and is again made available for retrieval from the queue. It might cause the message to be fetched into the prefetch buffer and placed at the end. If the prefetch buffer can't usually be worked through during the message expiration, messages are repeatedly prefetched but never effectively delivered in a usable (validly locked) state, and are eventually moved to the dead-letter queue once the maximum delivery count is exceeded. +If an application explicitly abandons a message, the message may again be available for retrieval from the queue. When the prefetch is enabled, the message is fetched into the prefetch buffer again and placed at the end. As the messages from the prefetch buffer are drained in the first-in first-out (FIFO) order, the application may receive messages out of order. For example, the application may receive a message with ID 2 and then a message with ID 1 (that was abandoned earlier) from the buffer. 
-If you need a high degree of reliability for message processing, and processing takes significant work and time, we recommend that you use the Prefetch feature conservatively, or not at all. --If you need high throughput and message processing is commonly cheap, prefetch yields significant throughput benefits. +If you need a high degree of reliability for message processing, and processing takes significant work and time, we recommend that you use the Prefetch feature conservatively, or not at all. If you need high throughput and message processing is commonly cheap, prefetch yields significant throughput benefits. The maximum prefetch count and the lock duration configured on the queue or subscription need to be balanced such that the lock timeout at least exceeds the cumulative expected message processing time for the maximum size of the prefetch buffer, plus one message. At the same time, the lock timeout shouldn't be so long that messages can exceed their maximum time to live when they're accidentally dropped, and so requiring their lock to expire before being redelivered. Try the samples in the language of your choice to explore Azure Service Bus feat - [Azure Service Bus client library samples for JavaScript](/samples/azure/azure-sdk-for-js/service-bus-javascript/) - [Azure Service Bus client library samples for TypeScript](/samples/azure/azure-sdk-for-js/service-bus-typescript/) -Find samples for the older .NET and Java client libraries below: +Samples for the older .NET and Java client libraries: - [Azure Service Bus client library samples for .NET (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/DotNet/Microsoft.Azure.ServiceBus/) - See the **Prefetch** sample. - [Azure Service Bus client library samples for Java (legacy)](https://github.com/Azure/azure-service-bus/tree/master/samples/Java/azure-servicebus) - See the **Prefetch** sample. |
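The balancing rule the prefetch article describes — the lock duration should at least cover the expected processing time of a full prefetch buffer plus one in-flight message — reduces to simple arithmetic. This sketch is illustrative only; the function names and the sample numbers are assumptions:

```python
# Rough sketch of the prefetch/lock-duration balancing rule described in
# the article: lock duration must cover processing (prefetch_count + 1)
# messages, each taking roughly avg_processing_seconds.
def minimum_lock_duration_seconds(prefetch_count, avg_processing_seconds):
    return (prefetch_count + 1) * avg_processing_seconds

def is_prefetch_safe(prefetch_count, avg_processing_seconds,
                     lock_duration_seconds):
    # True when the configured lock outlives the worst-case wait of the
    # last message in a full prefetch buffer.
    return lock_duration_seconds >= minimum_lock_duration_seconds(
        prefetch_count, avg_processing_seconds)

# With 20 prefetched messages at ~2 s each, a 30 s lock is too short:
print(is_prefetch_safe(20, 2.0, 30))   # False
# ...but a 60 s lock covers the (20 + 1) * 2 s = 42 s worst case:
print(is_prefetch_safe(20, 2.0, 60))   # True
```

If the check fails, either shorten the prefetch buffer or lengthen the lock duration; pushing the lock duration up too far has its own cost, as the article notes for accidentally dropped messages.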
service-bus-messaging | Service Bus Service Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md | From API version **2021-06-01-preview onwards**, the default value of the `defau The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet. -For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/preview/private-endpoint-connections/create-or-update). +For more information about these properties, see [Create or Update Network Rule Set](/rest/api/servicebus/controlplane-preview/namespaces-network-rule-set/create-or-update-network-rule-set) and [Create or Update Private Endpoint Connections](/rest/api/servicebus/controlplane-preview/private-endpoint-connections/create-or-update). > [!NOTE] > None of the above settings bypass validation of claims via SAS or Azure AD authentication. The authentication check always runs after the service validates the network checks that are configured by `defaultAction`, `publicNetworkAccess`, `privateEndpointConnections` settings. |
service-connector | How To Integrate Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md | This page shows the supported authentication types and client types of Azure Blo Supported authentication and clients for App Service, Container Apps and Azure Spring Apps: -### [Azure App Service](#tab/app-service) | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| Supported authentication and clients for App Service, Container Apps and Azure S | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -### [Azure Container Apps](#tab/container-apps) --| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | -|--|--|--|--|--| -| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | -| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes 
icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | --### [Azure Spring Apps](#tab/spring-apps) --| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | -|--|--|--|--|--| -| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | -| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | Supported authentication and clients for App Service, Container Apps and Azure S Use the connection details below to connect compute services to Blob Storage. For each example below, replace the placeholder texts `<account name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID and storage account name. 
-### Azure App Service and Azure Container Apps --#### Secret / connection string +### Secret / connection string +#### .NET, Java, Node.JS, Python | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` | -#### system-assigned managed identity +#### Java - Spring Boot ++| Application properties | Description | Example value | +|--|--|| +| azure.storage.account-name | Your Blob Storage account name | `<storage-account-name>` | +| azure.storage.account-key | Your Blob Storage account key | `<account-key>` | +| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | +++### System-assigned managed identity | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | -#### User-assigned managed identity +### User-assigned managed identity | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_RESOURCEENDPOINT | Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | | AZURE_STORAGEBLOB_CLIENTID | Your client ID | `<client-ID>` | -#### Service principal +### Service principal | Default environment variable name | Description | Example value | ||--|| Use the connection details below to connect compute services to Blob Storage. 
Fo | AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `<tenant-ID>` | -### Azure Spring Apps --#### secret / connection string --| Application properties | Description | Example value | -|--|--|| -| azure.storage.account-name | Your Blob storage-account-name | `<storage-account-name>` | -| azure.storage.account-key | Your Blob Storage account key | `<account-key>` | -| azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` | ## Next steps |
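The `AZURE_STORAGEBLOB_CONNECTIONSTRING` value shown in the tables above is a semicolon-delimited list of `key=value` pairs. A minimal, illustrative parser follows; the account name and key in the sample are fabricated placeholders, and real code would pass the raw string to the SDK rather than parse it by hand:

```python
import os

# Split a storage connection string of the form shown in the table
# (DefaultEndpointsProtocol=...;AccountName=...;AccountKey=...;...)
# into a dict. maxsplit=1 keeps '=' padding inside base64 account keys.
def parse_connection_string(value):
    pairs = (part.split("=", 1) for part in value.split(";") if part)
    return {key: val for key, val in pairs}

# Fabricated sample value standing in for the connector-provided variable.
os.environ["AZURE_STORAGEBLOB_CONNECTIONSTRING"] = (
    "DefaultEndpointsProtocol=https;AccountName=mystorageacct;"
    "AccountKey=bXktZmFrZS1rZXk=;EndpointSuffix=core.windows.net")

settings = parse_connection_string(
    os.environ["AZURE_STORAGEBLOB_CONNECTIONSTRING"])
print(settings["AccountName"])               # mystorageacct
print(settings["DefaultEndpointsProtocol"])  # https
```

The same shape applies to the queue and table connection strings in the following rows; only the variable name and endpoint suffix differ.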
service-connector | How To Integrate Storage Queue | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md | This page shows the supported authentication types and client types of Azure Que Supported authentication and clients for App Service, Container Apps and Azure Spring Apps: -### [Azure App Service](#tab/app-service) --| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | -|--|--|--|--|--| -| .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | | -| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | --### [Azure Container Apps](#tab/container-apps) - | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | | +|Java - Spring Boot | | | ![yes icon](./media/green-check.png) | | | Node.js | ![yes icon](./media/green-check.png) | 
![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -### [Azure Spring Apps](#tab/spring-apps) --| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | -|--|--|--|--|--| -| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | | -| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | -| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | - ## Default environment variable names or application properties Supported authentication and clients for App Service, Container Apps and Azure S Use the connection details below to connect compute services to Queue Storage. For each example below, replace the placeholder texts `<account name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID and storage account name. 
-### .NET, Java, Node.JS, Python +### Secret/ connection string -#### Secret/ connection string +#### .NET, Java, Node.JS, Python | Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGEQUEUE_CONNECTIONSTRING | Queue storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` | -#### System-assigned managed identity +#### Java - Spring Boot ++| Application properties | Description | Example value | +|-|-|--| +| spring.cloud.azure.storage.account | Queue storage account name | `<storage-account-name>` | +| spring.cloud.azure.storage.access-key | Queue storage account key | `<account-key>` | ++### System-assigned managed identity | Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` | -#### User-assigned managed identity ++### User-assigned managed identity | Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGEQUEUE_RESOURCEENDPOINT | Queue storage endpoint | `https://<storage-account-name>.queue.core.windows.net/` | | AZURE_STORAGEQUEUE_CLIENTID | Your client ID | `<client-ID>` | -#### Service principal +### Service principal | Default environment variable name | Description | Example value | |-||-| Use the connection details below to connect compute services to Queue Storage. F | AZURE_STORAGEQUEUE_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEQUEUE_TENANTID | Your tenant ID | `<tenant-ID>` | -### Azure Spring Apps --#### Java - Spring Boot secret / connection string --| Application properties | Description | Example value | -|-|-|--| -| spring.cloud.azure.storage.account | Queue storage account name | `<storage-account-name>` | -| spring.cloud.azure.storage.access-key | Queue storage account key | `<account-key>` | ## Next steps |
service-connector | How To Integrate Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md | Supported authentication and clients for App Service, Container Apps and Azure S | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |-|-|--|--|-|-| .NET | | | ![yes icon](./media/green-check.png) | | -| Java | | | ![yes icon](./media/green-check.png) | | -| Node.js | | | ![yes icon](./media/green-check.png) | | -| Python | | | ![yes icon](./media/green-check.png) | | +| .NET |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)| +| Java |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)| +| Node.js |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)| +| Python |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)| ## Default environment variable names or application properties Use the connection details below to connect compute services to Azure Table Storage. For each example below, replace the placeholder texts `<account-name>` and `<account-key>` with your own account name and account key. 
-### .NET, Java, Node.JS and Python secret / connection string +### Secret / connection string | Default environment variable name | Description | Example value | |-||-| | AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` | +### System-assigned managed identity ++| Default environment variable name | Description | Example value | +|-||-| +| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` | +++### User-assigned managed identity ++| Default environment variable name | Description | Example value | +|-||-| +| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` | +| AZURE_STORAGETABLE_CLIENTID | Your client ID | `<client-ID>` | ++### Service principal ++| Default environment variable name | Description | Example value | +|-||-| +| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` | +| AZURE_STORAGETABLE_CLIENTID | Your client ID | `<client-ID>` | +| AZURE_STORAGETABLE_CLIENTSECRET | Your client secret | `<client-secret>` | +| AZURE_STORAGETABLE_TENANTID | Your tenant ID | `<tenant-ID>` | ++ ## Next steps Follow the tutorials listed below to learn more about Service Connector. |
service-fabric | How To Managed Cluster Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-networking.md | The following steps describe enable public IP on your node. ```json {- "name": "Secondary Node Type", + "name": "<secondary_node_type_name>", "apiVersion": "2023-02-01-preview", "properties": { "isPrimary" : false, - "vmImageResourceId": "/subscriptions/<SubscriptionID>/resourceGroups/<myRG>/providers/Microsoft.Compute/images/<MyCustomImage>", + "vmImageResourceId": "/subscriptions/<your_subscription_id>/resourceGroups/<your_resource_group>/providers/Microsoft.Compute/images/<your_custom_image>", "vmSize": "Standard_D2", "vmInstanceCount": 5, "dataDiskSizeGB": 100, The following steps describe enable public IP on your node. "ipAddress": "<ip_address_0>", "ipConfiguration": { "id": "<configuration_id_0>",- "resourceGroup": "<your_resource_group" + "resourceGroup": "<your_resource_group>" }, "ipTags": [], "name": "<name>", "provisioningState": "Succeeded", "publicIPAddressVersion": "IPv4", "publicIPAllocationMethod": "Static",- "resourceGroup": "<your_resource_group", + "resourceGroup": "<your_resource_group>", "resourceGuid": "resource_guid_0", "sku": { "name": "Standard" The following steps describe enable public IP on your node. "ipAddress": "<ip_address_1>", "ipConfiguration": { "id": "<configuration_id_1>",- "resourceGroup": "<your_resource_group" + "resourceGroup": "<your_resource_group>" }, "ipTags": [], "name": "<name>", The following steps describe enable public IP on your node. "ipAddress": "<ip_address_2>", "ipConfiguration": { "id": "<configuration_id_2>",- "resourceGroup": "<your_resource_group" + "resourceGroup": "<your_resource_group>" }, "ipTags": [], "name": "<name>", |
service-fabric | Managed Cluster Deny Assignment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/managed-cluster-deny-assignment.md | + + Title: Deny assignment policy for Service Fabric managed clusters +description: An overview of the deny assignment policy for Service Fabric managed clusters. +++++ Last updated : 08/18/2023+++# Deny assignment policy for Service Fabric managed clusters ++Deny assignment policies for Service Fabric managed clusters enable customers to protect the resources of their clusters. Deny assignments attach a set of deny actions to a user, group, or service principal at a particular scope to deny access. Limiting access to certain actions can help prevent users from inadvertently damaging their clusters when they delete, deallocate, restart, or reimage their clusters' scale set directly in the infrastructure resource group, which can cause the resources of the cluster to become unsynchronized with the data in the managed cluster. ++All actions that are related to managed clusters should be done through the managed cluster resource APIs instead of directly against the infrastructure resource group. Using the resource APIs ensures the resources of the cluster are synchronized with the data in the managed cluster. ++This feature ensures that the correct, supported APIs are used when performing delete operations to avoid any errors. ++You can learn more about deny assignments in the [Azure role-based access control (RBAC) documentation](../role-based-access-control/deny-assignments.md). ++## Best practices ++The following are some best practices to minimize the risk of desyncing your cluster's resources: +* Instead of deleting virtual machine scale sets directly from the managed resource group, use NodeType level APIs to delete the NodeType or virtual machine scale set. Options include the Node blade on the Azure portal and [Azure PowerShell](/powershell/module/az.servicefabric/remove-azservicefabricmanagednodetype). 
+* Use the correct APIs to restart or reimage your scale sets: + * [Virtual machine scale set restarts](/powershell/module/az.servicefabric/restart-azservicefabricmanagednodetype) + * [Virtual machine scale set reimage](/powershell/module/az.servicefabric/set-azservicefabricmanagednodetype) ++## Next steps ++* Learn more about [granting permission to access resources on managed clusters](how-to-managed-cluster-grant-access-other-resources.md) |
service-fabric | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md | |
service-fabric | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md | |
service-fabric | Service Fabric Cluster Creation Create Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-create-template.md | |
service-fabric | Service Fabric Keyvault References | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-keyvault-references.md | string secret = Environment.GetEnvironmentVariable("MySecret"); ## Use Managed KeyVaultReferences in your application -First, you must enable secret monitoring by upgrading your cluster definition: +First, you must enable secret monitoring by upgrading your cluster definition to add the `EnableSecretMonitoring` setting, in addition to the [other required CSS configurations](service-fabric-application-secret-store.md): ```json "fabricSettings": [ First, you must enable secret monitoring by upgrading your cluster definition: { "name": "EnableSecretMonitoring", "value": "true"+ }, + { + "name": "DeployedState", + "value": "enabled" + }, + { + "name" : "EncryptionCertificateThumbprint", + "value": "<thumbprint>" + }, + { + "name": "MinReplicaSetSize", + "value": "<size>" + }, + { + "name": "TargetReplicaSetSize", + "value": "<size>" } ] } |
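The diff above adds the Central Secret Service settings (`EnableSecretMonitoring`, `DeployedState`, `EncryptionCertificateThumbprint`, `MinReplicaSetSize`, `TargetReplicaSetSize`) to the cluster definition. A small sketch, assuming the standard Service Fabric `fabricSettings` shape of named sections with `parameters` lists and assuming the section is named `CentralSecretService`, that checks which of those parameters a definition is still missing — the helper name is made up for illustration:

```python
import json

REQUIRED = {
    "EnableSecretMonitoring",
    "DeployedState",
    "EncryptionCertificateThumbprint",
    "MinReplicaSetSize",
    "TargetReplicaSetSize",
}

def missing_css_settings(fabric_settings):
    """Return the required CSS parameter names absent from the
    CentralSecretService section of a fabricSettings list."""
    for section in fabric_settings:
        if section.get("name") == "CentralSecretService":
            present = {p["name"] for p in section.get("parameters", [])}
            return sorted(REQUIRED - present)
    return sorted(REQUIRED)  # section missing entirely

doc = json.loads("""
[
  {
    "name": "CentralSecretService",
    "parameters": [
      { "name": "EnableSecretMonitoring", "value": "true" },
      { "name": "DeployedState", "value": "enabled" }
    ]
  }
]
""")
print(missing_css_settings(doc))
```

A preflight check like this can catch an incomplete cluster definition before you submit the upgrade.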
site-recovery | Avs Tutorial Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-replication.md | |
site-recovery | Azure To Azure How To Enable Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-replication.md | Prerequisites should be in place, and you should have created a Recovery Service ## Enable replication -Use the following procedure to replicate Azure VMs to another Azure region. As an example, primary Azure region is Eastasia, and the secondary is Southeast Asia. +Use the following procedure to replicate Azure VMs to another Azure region. As an example, primary Azure region is East Asia, and the secondary is Southeast Asia. 1. In the vault > **Site Recovery** page, under **Azure virtual machines**, select **Enable replication**. 1. In the **Enable replication** page, under **Source**, do the following: Use the following procedure to replicate Azure VMs to another Azure region. As a :::image type="fields needed to configure replication" source="./media/azure-to-azure-how-to-enable-replication/source.png" alt-text="Screenshot that highlights the fields needed to configure replication."::: 1. Select **Next**.-1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to ten VMs. Then select **Next**. +1. In **Virtual machines**, select each VM that you want to replicate. You can only select machines for which replication can be enabled. You can select up to 10 VMs. Then select **Next**. :::image type="Virtual machine selection" source="./media/azure-to-azure-how-to-enable-replication/virtual-machine-selection.png" alt-text="Screenshot that highlights where you select virtual machines."::: 1. In **Replication settings**, you can configure the following settings: 1. Under **Location and Resource group**,- - **Target location**: Select the location where your source virtual machine data must be replicated. 
Depending on the location of selected machines, Site Recovery will provide you the list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location. + - **Target location**: Select the location where your source virtual machine data must be replicated. Depending on the location of selected machines, Site Recovery provides you with the list of suitable target regions. We recommend that you keep the target location the same as the Recovery Services vault location. - **Target subscription**: Select the target subscription used for disaster recovery. By default, the target subscription will be same as the source subscription. - **Target resource group**: Select the resource group to which all your replicated virtual machines belong. - By default, Site Recovery creates a new resource group in the target region with an *asr* suffix in the name. Use the following procedure to replicate Azure VMs to another Azure region. As a - **Replica-managed disk**: Site Recovery creates new replica-managed disks in the target region to mirror the source VM's managed disks with the same storage type (Standard or premium) as the source VM's managed disk. - **Cache storage**: Site Recovery needs extra storage account called cache storage in the source region. All the changes happening on the source VMs are tracked and sent to cache storage account before replicating them to the target location. This storage account should be Standard. >[!Note]- >Azure Site Recovery supports High churn (Public Preview) where you can choose to use **High Churn** for the VM. With this, you can use a *Premium Block Blob* type of storage account. By default, **Normal Churn** is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). 
- - :::image type="Cache storage" source="./media/azure-to-azure-how-to-enable-replication/cache-storage.png" alt-text="Screenshot of customize target settings."::: + >Azure Site Recovery has a *High Churn* option that you can choose to protect VMs with high data change rate. With this, you can use a *Premium Block Blob* type of storage account. By default, the **Normal Churn** option is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). + >:::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churns.png" alt-text="Screenshot of churn."::: 1. **Availability options**: Select appropriate availability option for your VM in the target region. If an availability set that was created by Site Recovery already exists, it's reused. Select **View/edit availability options** to view or edit the availability options. >[!NOTE] Use the following procedure to replicate Azure VMs to another Azure region. As a :::image type="Availability option" source="./media/azure-to-azure-how-to-enable-replication/availability-option.png" alt-text="Screenshot of availability option."::: 1. **Capacity reservation**: Capacity Reservation lets you purchase capacity in the recovery region, and then failover to that capacity. You can either create a new Capacity Reservation Group or use an existing one. For more information, see [how capacity reservation works](../virtual-machines/capacity-reservation-overview.md).- Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM will be created in the assigned Capacity Reservation Group. + Select **View or Edit Capacity Reservation group assignment** to modify the capacity reservation settings. On triggering Failover, the new VM is created in the assigned Capacity Reservation Group. 
:::image type="Capacity reservation" source="./media/azure-to-azure-how-to-enable-replication/capacity-reservation.png" alt-text="Screenshot of capacity reservation."::: -1. Select **Next**. + 1. Select **Next**. + 1. In **Manage**, do the following: 1. Under **Replication policy**, - **Replication policy**: Select the replication policy. Defines the settings for recovery point retention history and app-consistent snapshot frequency. By default, Site Recovery creates a new replication policy with default settings of 24 hours for recovery point retention. |
site-recovery | Azure To Azure How To Reprotect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md | |
site-recovery | Azure To Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-powershell.md | $WusToEusPCMapping = Get-AzRecoveryServicesAsrProtectionContainerMapping -Protec ## Create cache storage account and target storage account -A cache storage account is a standard storage account in the same Azure region as the virtual machine being replicated. The cache storage account is used to hold replication changes temporarily, before the changes are moved to the recovery Azure region. High churn support (Public Preview) is now available in Azure Site Recovery using which you can create a Premium Block Blob type of storage accounts that can be used as cache storage account to get high churn limits. You can choose to, but it's not necessary, to specify different cache storage accounts for the different disks of a virtual machine. If you use different cache storage accounts, ensure they are of the same type (Standard or Premium Block Blobs). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). +A cache storage account is a standard storage account in the same Azure region as the virtual machine being replicated. The cache storage account is used to hold replication changes temporarily, before the changes are moved to the recovery Azure region. High churn support is also available in Azure Site Recovery to get higher churn limits. To use this feature, create a Premium Block Blob storage account and then use it as the cache storage account. You can choose, but aren't required, to specify different cache storage accounts for the different disks of a virtual machine. If you use different cache storage accounts, ensure they are of the same type (Standard or Premium Block Blobs). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). 
```azurepowershell #Create Cache storage account for replication logs in the primary region |
site-recovery | Azure To Azure Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-quickstart.md | Title: Set up Azure VM disaster recovery to a secondary region with Azure Site Recovery description: Quickly set up disaster recovery to another Azure region for an Azure VM, using the Azure Site Recovery service. Previously updated : 05/02/2022 Last updated : 07/14/2023 +Azure Site Recovery has an option of *High Churn*, enabling you to configure disaster recovery for Azure VMs having data churn up to 100 MB/s. This helps you to enable disaster recovery for more IO intensive workloads. [Learn more](../site-recovery/concepts-azure-to-azure-high-churn-support.md). + This quickstart describes how to set up disaster recovery for an Azure VM by replicating it to a secondary Azure region. In general, default settings are used to enable replication. [Learn more](azure-to-azure-tutorial-enable-replication.md). ## Prerequisites The following steps enable VM replication to a secondary location. 1. In **Operations**, select **Disaster recovery**. 1. From **Basics** > **Target region**, select the target region. 1. To view the replication settings, select **Review + Start replication**. If you need to change any defaults, select **Advanced settings**.+ >[!Note] + >Azure Site Recovery has a *High Churn* option that you can choose to protect VMs with high data change rate. With this, you can use a *Premium Block Blob* type of storage account. By default, the **Normal Churn** option is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). + >:::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/churn-for-vms.png" alt-text="Screenshot of Churn for VM."::: 1. To start the job that enables VM replication, select **Start replication**. 
:::image type="content" source="media/azure-to-azure-quickstart/enable-replication1.png" alt-text="Enable replication."::: |
site-recovery | Azure To Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md | This table summarizes support for the cache storage account used by Site Recover **Setting** | **Support** | **Details** | | General purpose V2 storage accounts (Hot and Cool tier) | Supported | Usage of GPv2 is recommended because GPv1 doesn't support ZRS (Zonal Redundant Storage). -Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support (in Public Preview). For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). +Premium storage | Supported | Use Premium Block Blob storage accounts to get High Churn support. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). Region | Same region as virtual machine | Cache storage account should be in the same region as the virtual machine being protected. Subscription | Can be different from source virtual machines | Cache storage account need not be in the same subscription as the source virtual machine(s). Azure Storage firewalls for virtual networks | Supported | If you're using firewall enabled cache storage account or target storage account, ensure you ['Allow trusted Microsoft services'](../storage/common/storage-network-security.md#exceptions).<br></br>Also, ensure that you allow access to at least one subnet of source Vnet.<br></br>Note: Don't restrict virtual network access to your storage accounts used for Site Recovery. You should allow access from 'All networks'. Windows Server 2008 R2 with SP1/SP2 | Supported.<br/><br/> From version [9.30](h Windows 10 (x64) | Supported. Windows 8.1 (x64) | Supported. 
Windows 8 (x64) | Supported.-Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery) of the Mobility service extension for Azure VMs, you need to install a Windows [servicing stack update (SSU)](https://support.microsoft.com/help/4490628) and [SHA-2 update](https://support.microsoft.com/help/4474419) on machines running Windows 7 with SP1. SHA-1 isn't supported from September 2019, and if SHA-2 code signing isn't enabled the agent extension won't install/upgrade as expected.. Learn more about [SHA-2 upgrade and requirements](https://aka.ms/SHA-2KB). +Windows 7 (x64) with SP1 onwards | From version [9.30](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery) of the Mobility service extension for Azure VMs, you need to install a Windows [servicing stack update (SSU)](https://support.microsoft.com/help/4490628) and [SHA-2 update](https://support.microsoft.com/help/4474419) on machines running Windows 7 with SP1. SHA-1 isn't supported from September 2019, and if SHA-2 code signing isn't enabled the agent extension won't install/upgrade as expected. Learn more about [SHA-2 upgrade and requirements](https://aka.ms/SHA-2KB). GRS | Supported | RA-GRS | Supported | ZRS | Supported | Cool and Hot Storage | Not supported | Virtual machine disks aren't supported on cool and hot storage-Azure Storage firewalls for virtual networks | Supported | If restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions). +Azure Storage firewalls for virtual networks | Supported | If you want to restrict virtual network access to storage accounts, enable [Allow trusted Microsoft services](../storage/common/storage-network-security.md#exceptions). 
General purpose V2 storage accounts (Both Hot and Cool tier) | Supported | Transaction costs increase substantially compared to General purpose V1 storage accounts Generation 2 (UEFI boot) | Supported NVMe disks | Not supported |
site-recovery | Azure To Azure Tutorial Enable Replication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-enable-replication.md | Title: Tutorial to set up Azure VM disaster recovery with Azure Site Recovery description: In this tutorial, set up disaster recovery for Azure VMs to another Azure region, using the Site Recovery service. Previously updated : 01/04/2023 Last updated : 07/14/2023 #Customer intent: As an Azure admin, I want to set up disaster recovery for my Azure VMs, so that they're available in a secondary region if the primary region becomes unavailable. Site Recovery retrieves the VMs associated with the selected subscription/resour ### Review replication settings 1. In **Replication settings**, review the settings. Site Recovery creates default settings/policy for the target region. For the purposes of this tutorial, we use the default settings.+ >[!Note] + >Azure Site Recovery has a *High Churn* option that you can choose to protect VMs with high data change rate. With this, you can use a *Premium Block Blob* type of storage account. By default, the **Normal Churn** option is selected. For more information, see [Azure VM Disaster Recovery - High Churn Support](./concepts-azure-to-azure-high-churn-support.md). You can select the **High Churn** option from **Storage** > **View/edit storage configuration** > **Churn for the VM**. + >:::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churns.png" alt-text="Screenshot of churn."::: 2. Select **Next**. Site Recovery retrieves the VMs associated with the selected subscription/resour - **Replication group**: Create replication group to replicate VMs together to generate Multi-VM consistent recovery points. Note that enabling multi-VM consistency can impact workload performance and should only be used if machines are running the same workload and you need consistency across multiple machines. 1. 
Under **Extension settings**, - Select **Update settings** and **Automation account**.-- :::image type="manage" source="./media/azure-to-azure-tutorial-enable-replication/manage.png" alt-text="Screenshot showing manage tab."::: + :::image type="manage" source="./media/azure-to-azure-tutorial-enable-replication/manage.png" alt-text="Screenshot showing manage tab."::: 1. Select **Next**. The VMs you enable appear on the vault > **Replicated items** page. ## Next steps -In this tutorial, you enabled disaster recovery for an Azure VM. Now, run a drill to check that failover works as expected. --> [!div class="nextstepaction"] -> [Run a disaster recovery drill](azure-to-azure-tutorial-dr-drill.md) +In this tutorial, you enabled disaster recovery for an Azure VM. Now, [run a disaster recovery drill](azure-to-azure-tutorial-dr-drill.md) to check that failover works as expected. |
site-recovery | Concepts Azure To Azure High Churn Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-azure-to-azure-high-churn-support.md | Title: Azure VM Disaster Recovery - High Churn support (Public Preview) -description: Describes how to protect your Azure VMs having high churning workloads + Title: Azure VM Disaster Recovery - High Churn support +description: Describes how to protect your Azure VMs having high churning workloads. Previously updated : 12/07/2022 Last updated : 07/14/2023 -# Azure VM Disaster Recovery - High Churn Support (Public Preview) +# Azure VM Disaster Recovery - High Churn Support -Azure Site Recovery supports churn (data change rate) up to 100 MB/s per VM. You will be able to protect your Azure VMs having high churning workloads (like databases) using Azure Site Recovery which earlier could not be protected efficiently because Azure Site Recovery has churn limits up to 54 MB/s per VM. You may be able to achieve better RPO performance for your high churning workloads. +Azure Site Recovery supports churn (data change rate) up to 100 MB/s per VM. You will be able to protect your Azure VMs having high churning workloads (like databases) using the *High Churn* option in Azure Site Recovery which supports churn up to 100 MB/s per VM. You may be able to achieve better RPO performance for your high churning workloads. With the default Normal Churn option, you can [support churn only up to 54 MB/s per VM](./azure-to-azure-support-matrix.md#limits-and-data-change-rates). ## Limitations Azure Site Recovery supports churn (data change rate) up to 100 MB/s per VM. You - Recommend VM SKUs with RAM of min 32GB. - Source disks must be Managed Disks. ->[!Warning] -> This Public Preview feature has been expanded in [all public regions](../site-recovery/azure-to-azure-support-matrix.md#region-support) where Azure Site Recovery is supported. 
However, this feature is not available in any Government cloud regions. When using *High Churn* with any other regions outside the supported regions, replication and/or reprotection may fail. +> [!NOTE] +> This feature is available in all [public regions](./azure-to-azure-support-matrix.md#region-support) where Azure Site Recovery is supported and premium block blobs are available. However, this feature is not yet available in any Government cloud regions. +> When using High Churn with any other regions outside the supported regions, replication and/or reprotection may fail. ## Data change limits - These limits are based on our tests and don't cover all possible application I/O combinations. - Actual results may vary based on your app I/O mix. -- There are two limits to consider, per disk data churn and per virtual machine data churn. +- There are two limits to consider: + - per disk data churn + - per virtual machine data churn. - Limit per virtual machine data churn - 100 MB/s. The following table summarizes Site Recovery limits: The following table summarizes Site Recovery limits: |Standard or P10 or P15|24 KB|6 MB/s| |Standard or P10 or P15|32 KB and above|10 MB/s| |P20|8 KB|10 MB/s|-|P20|16 KB|20 MB/s| +|P20 |16 KB|20 MB/s| |P20|24 KB and above|30 MB/s| |P30 and above|8 KB|20 MB/s| |P30 and above|16 KB|35 MB/s| The following table summarizes Site Recovery limits: 2. Under **Replication Settings** > **Storage**, select **View/edit storage configuration**. The **Customize target settings** page opens. - :::image type="Replication settings" source="media/concepts-azure-to-azure-high-churn-support/replication-settings-storage.png" alt-text="Screenshot of Replication settings storage."::: -+ :::image type="Replication settings" source="media/concepts-azure-to-azure-high-churn-support/replication-settings-storages.png" alt-text="Screenshot of Replication settings storage." 
lightbox="media/concepts-azure-to-azure-high-churn-support/replication-settings-storages.png"::: 3. Under **Churn for the VM**, there are two options: - - **Normal Churn** (default option) - You can get up to 54 MB/s per VM. Select Normal Churn to use *Standard* storage accounts only for Cache Storage. Hence, Cache storage dropdown will list only *Standard* storage accounts. + - **Normal Churn** (this is the default option) - You can get up to 54 MB/s per VM. Select Normal Churn to use *Standard* storage accounts only for Cache Storage. Hence, Cache storage dropdown will list only *Standard* storage accounts. - **High Churn** - You can get up to 100 MB/s per VM. Select High Churn to use *Premium Block Blob* storage accounts only for Cache Storage. Hence, Cache storage dropdown will list only *Premium Block blob* storage accounts. - :::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churn.png" alt-text="Screenshot of churn."::: + :::image type="Churn" source="media/concepts-azure-to-azure-high-churn-support/churns.png" alt-text="Screenshot of churn."::: + -4. Select **High Churn (Public Preview)**. +4. Select **High Churn** from the dropdown option. - :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/high-churn.png" alt-text="Screenshot of high-churn."::: + :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/high-churn-new.png" alt-text="Screenshot of high-churn."::: If you select multiple source VMs to configure Site Recovery and want to enable High Churn for all these VMs, select **High Churn** at the top level. - :::image type="Churn top level" source="media/concepts-azure-to-azure-high-churn-support/churn-top-level.png" alt-text="Screenshot of churn top level."::: - 5. After you select High Churn for the VM, you will see Premium Block Blob options only available for cache storage account. Select cache storage account and then select **Confirm Selection**. 
- :::image type="Cache storage" source="media/concepts-azure-to-azure-high-churn-support/cache-storage.png" alt-text="Screenshot of Cache storage."::: + :::image type="Cache storage" source="media/concepts-azure-to-azure-high-churn-support/cache-storages.png" alt-text="Screenshot of Cache storage."::: 6. Configure other settings and enable the replication. The following table summarizes Site Recovery limits: :::image type="Storage" source="media/concepts-azure-to-azure-high-churn-support/storage-show-details.png" alt-text="Screenshot of Storage show details."::: -6. Under **Storage settings** > **Churn for the VM**, select **High Churn (Public Preview)**. You will be able to use Premium Block Blob type of storage accounts only for cache storage. +6. Under **Storage settings** > **Churn for the VM**, select **High Churn**. You will be able to use Premium Block Blob type of storage accounts only for cache storage. - :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/churn-for-vm.png" alt-text="Screenshot of Churn for VM."::: + :::image type="High churn" source="media/concepts-azure-to-azure-high-churn-support/churn-for-vms.png" alt-text="Screenshot of Churn for VM."::: 6. Select **Next: Review + Start replication**. |
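The per-disk and per-VM churn limits described in this article can be sanity-checked with simple arithmetic. A minimal sketch, summing hypothetical per-disk churn measurements against the documented 100 MB/s per-VM High Churn limit (the limit is from the article; the per-disk values are invented for illustration):

```shell
# Illustrative arithmetic only, not an official sizing tool.
# The 100 MB/s per-VM figure is the documented High Churn limit;
# the per-disk churn values below are hypothetical measurements.
PER_VM_LIMIT_MBPS=100
disk_churn_mbps="35 30 20"   # hypothetical per-disk churn, in MB/s

total=0
for c in $disk_churn_mbps; do
  total=$((total + c))
done

echo "Total VM churn: ${total} MB/s (per-VM limit: ${PER_VM_LIMIT_MBPS} MB/s)"
```

Remember that the per-disk limits in the table above must also hold individually; staying under the per-VM total alone is not sufficient.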
site-recovery | Hybrid How To Enable Replication Private Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hybrid-how-to-enable-replication-private-endpoints.md | When using the private link with modernized experience for VMware VMs, public ac | `*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. | |`*.microsoftonline.com `<br>`*.microsoftonline-p.com `| Create Azure Active Directory applications for the appliance to communicate with Azure Site Recovery. | | `management.azure.com` | Used for Azure Resource Manager deployments and operations. |+ | `*.siterecovery.windowsazure.com` | Used to connect to Site Recovery services. | Ensure the following URLs are allowed and reachable from the Azure Site Recovery replication appliance for continuous connectivity, when enabling replication to a government cloud: Ensure the following URLs are allowed and reachable from the Azure Site Recovery | `*.portal.azure.us` | `*.portal.azure.cn` | Navigate to the Azure portal. | | `management.usgovcloudapi.net` | `management.chinacloudapi.cn` | Create Azure Active Directory applications for the appliance to communicate with the Azure Site Recovery service. | - ## Create and use private endpoints for site recovery The following sections describe the steps you need to take to create and use private endpoints for site recovery in your virtual networks. When the private endpoint is created, five fully qualified domain names (FQDNs) The five domain names are formatted in this pattern: -`{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.siterecovery.windowsazure.com` +`{Vault-ID}-asr-pod01-{type}-.{target-geo-code}.privatelink.siterecovery.windowsazure.com` ### Approve private endpoints for site recovery |
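The private-link FQDN pattern above can be expanded mechanically when you need to verify DNS records. A minimal sketch, assuming hypothetical values for the vault ID, endpoint type, and target geo code:

```shell
# Hypothetical values -- substitute your own vault ID, endpoint type,
# and target geo code. The expansion follows the documented pattern
# literally, including the "-." between the type and the geo code.
VAULT_ID="aabbccdd"
TYPE="rcm1"
GEO="eus"

FQDN="${VAULT_ID}-asr-pod01-${TYPE}-.${GEO}.privatelink.siterecovery.windowsazure.com"
echo "$FQDN"
```

You could then check each of the five generated FQDNs against your private DNS zone to confirm they resolve to private IP addresses.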
site-recovery | Hyper V Azure Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-tutorial.md | Title: Set up Hyper-V disaster recovery by using Azure Site Recovery -description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery. +description: Learn how to set up disaster recovery of on-premises Hyper-V VMs (without SCVMM) to Azure by using Site Recovery and MARS. Last updated 05/04/2023 It's important to prepare the infrastructure before you set up disaster recovery ### Source settings -To set up the source environment, you create a Hyper-V site. You add to the site the Hyper-V hosts that contain VMs you want to replicate. Then, you download and install the Azure Site Recovery provider and the Azure Recovery Services agent on each host, and register the Hyper-V site in the vault. +To set up the source environment, you create a Hyper-V site. You add to the site the Hyper-V hosts that contain VMs you want to replicate. Then, you download and install the Azure Site Recovery provider and the Microsoft Azure Recovery Services (MARS) agent for Azure Site Recovery on each host, and register the Hyper-V site in the vault. 1. On **Prepare infrastructure**, on the **Source settings** tab, complete these steps: 1. For **Are you Using System Center VMM to manage Hyper-V hosts?**, select **No**. Site Recovery checks for compatible Azure storage accounts and networks in your #### Install the provider -Install the downloaded setup file (*AzureSiteRecoveryProvider.exe*) on each Hyper-V host that you want to add to the Hyper-V site. Setup installs the Site Recovery provider and the Recovery Services agent on each Hyper-V host. +Install the downloaded setup file (*AzureSiteRecoveryProvider.exe*) on each Hyper-V host that you want to add to the Hyper-V site. 
Setup installs the Site Recovery provider and the Recovery Services agent (MARS for Azure Site Recovery) on each Hyper-V host. 1. Run the setup file. 1. In the Azure Site Recovery provider setup wizard, for **Microsoft Update**, opt in to use Microsoft Update to check for provider updates. |
site-recovery | Service Updates How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/service-updates-how-to.md | Title: Updates and component upgrades in Azure Site Recovery -description: Provides an overview of Azure Site Recovery service updates, and component upgrades. +description: Provides an overview of Azure Site Recovery service updates, MARS agent and component upgrades. We recommend always upgrading to the latest component versions: Review the latest update rollup (version N) in [this article](site-recovery-whats-new.md). Remember that Site Recovery provides support for N-4 versions. - ## Component expiry Site Recovery notifies you of expired components (or nearing expiry) by email (if you subscribed to email notifications), or on the vault dashboard in the portal. The example in the table shows how this works. ## Between an on-premises VMM site and Azure+ 1. Download the update for the Microsoft Azure Site Recovery Provider. 2. Install the Provider on the VMM server. If VMM is deployed in a cluster, install the Provider on all cluster nodes.-3. Install the latest Microsoft Azure Recovery Services agent on all Hyper-V hosts or cluster nodes. -+3. Install the latest Microsoft Azure Recovery Services agent (MARS for Azure Site Recovery) on all Hyper-V hosts or cluster nodes. ## Between two on-premises VMM sites+ 1. Download the latest update for the Microsoft Azure Site Recovery Provider. 2. Install the latest Provider on the VMM server managing the secondary recovery site. If VMM is deployed in a cluster, install the Provider on all cluster nodes. 3. After the recovery site is updated, install the Provider on the VMM server that's managing the primary site. ## Next steps -Follow our [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page to track new updates and releases. 
+Follow our [Azure Updates](https://azure.microsoft.com/updates/?product=site-recovery) page to track new updates and releases. |
site-recovery | Site Recovery Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-overview.md | Title: About Azure Site Recovery description: Provides an overview of the Azure Site Recovery service, and summarizes disaster recovery and migration deployment scenarios. Previously updated : 12/14/2022 Last updated : 07/24/2023 Azure Recovery Services contributes to your BCDR strategy: - **Site Recovery service**: Site Recovery helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery [replicates](azure-to-azure-quickstart.md) workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to a secondary location, and access apps from there. After the primary location is running again, you can fail back to it. - **Backup service**: The [Azure Backup](../backup/index.yml) service keeps your data safe and recoverable. +Azure Site Recovery has an option of *High Churn*, enabling you to configure disaster recovery for Azure VMs having data churn up to 100 MB/s. This helps you to enable disaster recovery for more IO intensive workloads. [Learn more](../site-recovery/concepts-azure-to-azure-high-churn-support.md). + Site Recovery can manage replication for: - Azure VMs replicating between Azure regions Site Recovery can manage replication for: **VMware VM replication** | You can replicate VMware VMs to Azure using the improved Azure Site Recovery replication appliance that offers better security and resilience than the configuration server. For more information, see [Disaster recovery of VMware VMs](vmware-azure-about-disaster-recovery.md). **On-premises VM replication** | You can replicate on-premises VMs and physical servers to Azure, or to a secondary on-premises datacenter. Replication to Azure eliminates the cost and complexity of maintaining a secondary datacenter. 
**Workload replication** | Replicate any workload running on supported Azure VMs, on-premises Hyper-V and VMware VMs, and Windows/Linux physical servers.-**Data resilience** | Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that provides. When failover occurs, Azure VMs are created based on the replicated data. This also applies to Public MEC to Azure region Azure Site Recovery scenario. In case of Azure Public MEC to Public MEC Azure Site Recovery scenario (the ASR functionality for Public MEC is in preview state), data is stored in the Public MEC. +**Data resilience** | Site Recovery orchestrates replication without intercepting application data. When you replicate to Azure, data is stored in Azure storage, with the resilience that provides. When failover occurs, Azure VMs are created based on the replicated data. This also applies to Public MEC to Azure region Azure Site Recovery scenario. In case of Azure Public MEC to Public MEC Azure Site Recovery scenario (the Azure Site Recovery functionality for Public MEC is in preview state), data is stored in the Public MEC. **RTO and RPO targets** | Keep recovery time objectives (RTO) and recovery point objectives (RPO) within organizational limits. Site Recovery provides continuous replication for Azure VMs and VMware VMs, and replication frequency as low as 30 seconds for Hyper-V. You can reduce RTO further by integrating with [Azure Traffic Manager](https://azure.microsoft.com/blog/reduce-rto-by-using-azure-traffic-manager-with-azure-site-recovery/). **Keep apps consistent over failover** | You can replicate using recovery points with application-consistent snapshots. These snapshots capture disk data, all data in memory, and all transactions in process. **Testing without disruption** | You can easily run disaster recovery drills, without affecting ongoing replication. |
site-recovery | Site Recovery Runbook Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md | |
site-recovery | Vmware Physical Large Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-large-deployment.md | Process server capacity is affected by data churn rates, and not by the number o **CPU** | **Memory** | **Cache disk** | **Churn rate** | | | -12 vCPUs<br> 2 sockets*6 cores @ 2.5 GHz | 24 GB | 1 GB | Up to 2 TB a day +12 vCPUs<br> 2 sockets*6 cores @ 2.5 GHz | 24 GB | 1 TB | Up to 2 TB a day Set up the process server as follows: To run a large-scale failover, we recommend the following: ## Next steps > [!div class="nextstepaction"]-> [Monitor Site Recovery](site-recovery-monitor-and-troubleshoot.md) +> [Monitor Site Recovery](site-recovery-monitor-and-troubleshoot.md) |
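The "up to 2 TB a day" churn figure in the sizing table translates to a modest average throughput. A back-of-envelope conversion, using integer arithmetic for illustration only:

```shell
# Convert the documented "2 TB a day" churn rate to an average MB/s rate.
# This is a rough average; real churn is bursty, so peak rates can be
# much higher than this figure.
tb_per_day=2
mb_total=$((tb_per_day * 1024 * 1024))   # 2 TB expressed in MB
secs_per_day=86400
rate=$((mb_total / secs_per_day))

echo "${tb_per_day} TB/day is roughly ${rate} MB/s on average"
```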
spring-apps | How To Configure Palo Alto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-palo-alto.md | -For example, the [Azure Spring Apps reference architecture](./reference-architecture.md) includes an Azure Firewall to secure your applications. However, if your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article. +If your current deployments include a Palo Alto firewall, you can omit the Azure Firewall from the Azure Spring Apps deployment and use Palo Alto instead, as described in this article. -You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md). +You should keep configuration information, such as rules and address wildcards, in CSV files in a Git repository. This article shows you how to use automation to apply these files to Palo Alto. To understand the configuration to be applied to Palo Alto, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md). -> [!Note] +> [!NOTE] > In describing the use of REST APIs, this article uses the PowerShell variable syntax to indicate names and values that are left to your discretion. Be sure to use the same values in all the steps. > > After you've configured the TLS/SSL certificate in Palo Alto, remove the `-SkipCertificateCheck` argument from all Palo Alto REST API calls in the examples below. |
spring-apps | How To Create User Defined Route Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md | The following example shows how to add rules to your firewall. For more informat az network firewall network-rule create \ --resource-group $RG \ --firewall-name $FWNAME \- --collection-name 'asafwnr' -n 'apiudp' \ - --protocols 'UDP' \ - --source-addresses '*' \ - --destination-addresses "AzureCloud" \ - --destination-ports 1194 \ - --action allow \ - --priority 100 -az network firewall network-rule create \ - --resource-group $RG \ - --firewall-name $FWNAME \ - --collection-name 'asafwnr' -n 'springcloudtcp' \ + --collection-name 'asafwnr' \ + --name 'springcloudtcp' \ --protocols 'TCP' \ --source-addresses '*' \ --destination-addresses "AzureCloud" \ --destination-ports 443 445-az network firewall network-rule create \ - --resource-group $RG \ - --firewall-name $FWNAME \ - --collection-name 'asafwnr' \ - --name 'time' \ - --protocols 'UDP' \ - --source-addresses '*' \ - --destination-fqdns 'ntp.ubuntu.com' \ - --destination-ports 123 # Add firewall application rules. az network firewall application-rule create \ --collection-name 'aksfwar'\ --name 'fqdn' \ --source-addresses '*' \- --protocols 'http=80' 'https=443' \ + --protocols 'https=443' \ --fqdn-tags "AzureKubernetesService" \ --action allow --priority 100 ``` |
spring-apps | How To Enterprise Configure Apm Integration And Ca Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-integration-and-ca-certificates.md | You can create an APM configuration and bind to app builds and deployments, as e You can manage APM integration by configuring properties or secrets in the APM configuration using the Azure portal or the Azure CLI. > [!NOTE]-> When configuring properties or secrets for APM, use key names without a prefix. For example, don't use a `DT_` prefix for a Dynatrace binding or `APPLICATIONINSIGHTS_` for Application Insights. Tanzu APM buildpacks transform the key name to the original environment variable name with a prefix. +> When configuring properties or secrets via APM configurations, use key names without the APM name as prefix. For example, don't use a `DT_` prefix for Dynatrace or `APPLICATIONINSIGHTS_` for Application Insights. Tanzu APM buildpacks transform the key name to the original environment variable name with a prefix. +> +> If you intend to override or configure some properties or secrets, such as the app name or app level, set the corresponding environment variables, using their original names with the APM name as prefix, when you deploy the app. ##### [Azure portal](#tab/azure-portal) Use the following steps to show, add, edit, or delete an APM configuration: 1. Open the [Azure portal](https://portal.azure.com). 1. In the navigation pane, select **APM**.-1. To create an APM configuration, select **Add**. If you want to enable the APM configuration globally, select **Enable globally**. All the subsequent builds and deployments will use the APM configuration automatically. +1. To create an APM configuration, select **Add**. If you want to enable the APM configuration globally, select **Enable globally**. All the subsequent builds and deployments use the APM configuration automatically. 
:::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/add-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Add button highlighted." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/add-apm.png"::: Use the following steps to show, add, edit, or delete an APM configuration: :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Edit APM option selected." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/show-apm.png"::: -1. To delete an APM configuration, select the ellipsis (**...**) button for the configuration, then select **Delete**. If the APM configuration is used by any build or deployment, you won't be able to delete it. +1. To delete an APM configuration, select the ellipsis (**...**) button for the configuration and then select **Delete**. If the APM configuration is used by any build or deployment, you aren't able to delete it. :::image type="content" source="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/delete-apm.png" alt-text="Screenshot of the Azure portal showing the APM configuration page with the Delete button highlighted." lightbox="media/how-to-enterprise-configure-apm-integration-and-ca-certificates/delete-apm.png"::: |
spring-apps | How To Enterprise Deploy Polyglot Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md | The following table lists the features supported in Azure Spring Apps: | Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` | | Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` | +There are some limitations for Java Native Image. For more information, see the [Java Native Image limitations](#java-native-image-limitations) section. + ### Deploy PHP applications The buildpack for deploying PHP applications is [tanzu-buildpacks/php](https://network.tanzu.vmware.com/products/tbs-dependencies/#/releases/1335849/artifact_references). |
spring-apps | How To Start Stop Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-start-stop-service.md | -**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise +**This article applies to:** ❌ Standard consumption and dedicated (Preview) ✔️ Basic/Standard ✔️ Enterprise This article shows you how to start or stop your Azure Spring Apps service instance. -> [!NOTE] -> You can stop and start your Azure Spring Apps service instance to help you save costs, but you shouldn't stop and start a running instance for service recovery. - Your applications running in Azure Spring Apps may not need to run continuously. For example, an application may not need to run continuously if you have a service instance that's used only during business hours. There may be times when Azure Spring Apps is idle and running only the system components. -You can reduce the active footprint of Azure Spring Apps by reducing the running instances and ensuring costs for compute resources are reduced. +You can reduce the active footprint of Azure Spring Apps by reducing the running instances, which reduces costs for compute resources. For more information, see [Start, stop, and delete an application in Azure Spring Apps](./how-to-start-stop-delete.md) and [Scale an application in Azure Spring Apps](./how-to-scale-manual.md). -To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components will be stopped. However, all your objects and network settings will be saved so you can restart your service instance and pick up right where you left off. +To reduce your costs further, you can completely stop your Azure Spring Apps service instance. All user apps and system components are stopped. However, all your objects and network settings are saved so you can restart your service instance and pick up right where you left off. 
-> [!NOTE] -> The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days. If your cluster is stopped for more than 90 days, you can't recover the cluster state. +## Limitations ++The ability to stop and start your Azure Spring Apps service instance has the following limitations: -You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app. +- You can stop and start your Azure Spring Apps service instance to help you save costs. However, you shouldn't stop and start a running instance for service recovery - for example, to recover from an invalid virtual network configuration. +- The state of a stopped Azure Spring Apps service instance is preserved for up to 90 days. If your cluster is stopped for more than 90 days, you can't recover the cluster state. +- You can only start, view, or delete a stopped Azure Spring Apps service instance. You must start your service instance before performing any update operation, such as creating or scaling an app. +- If an Azure Spring Apps service instance has been stopped or started successfully, you have to wait for at least 30 minutes to start or stop the instance again. However, if your last operation failed, you can try again to start or stop without having to wait. +- For virtual network instances, the start operation may fail due to invalid virtual network configurations. For more information, see [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md). ## Prerequisites In the Azure portal, use the following steps to stop a running Azure Spring Apps :::image type="content" source="media/how-to-start-stop-service/spring-cloud-stop-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Stop button and Status value highlighted."::: -1. 
After the instance stops, the status will show **Succeeded (Stopped)**. +1. After the instance stops, the status shows **Succeeded (Stopped)**. ## Start a stopped instance In the Azure portal, use the following steps to start a stopped Azure Spring App :::image type="content" source="media/how-to-start-stop-service/spring-cloud-start-service.png" alt-text="Screenshot of Azure portal showing the Azure Spring Apps Overview page with the Start button and Status value highlighted."::: -1. After the instance starts, the status will show **Succeeded (Running)**. +1. After the instance starts, the status shows **Succeeded (Running)**. ## [Azure CLI](#tab/azure-cli) |
spring-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md | Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
spring-apps | Quickstart Configure Single Sign On Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-configure-single-sign-on-enterprise.md | To complete the single sign-on experience, use the following steps to deploy the --name identity-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name identity-service \- --routes-file azure/routes/identity-service.json + --routes-file azure-spring-apps-enterprise/resources/json/routes/identity-service.json ``` ## Configure single sign-on for Spring Cloud Gateway |
spring-apps | Quickstart Deploy Apps Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-apps-enterprise.md | Use the following steps to deploy and build applications. For these steps, make --resource-group <resource-group-name> \ --name quickstart-builder \ --service <Azure-Spring-Apps-service-instance-name> \- --builder-file azure/builder.json + --builder-file azure-spring-apps-enterprise/resources/json/tbs/builder.json ``` 1. Use the following command to build and deploy the payment service: Use the following steps to configure Spring Cloud Gateway and configure routes t --name cart-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name cart-service \- --routes-file azure/routes/cart-service.json + --routes-file azure-spring-apps-enterprise/resources/json/routes/cart-service.json ``` 1. Use the following command to create routes for the order service: Use the following steps to configure Spring Cloud Gateway and configure routes t --name order-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name order-service \- --routes-file azure/routes/order-service.json + --routes-file azure-spring-apps-enterprise/resources/json/routes/order-service.json ``` 1. Use the following command to create routes for the catalog service: Use the following steps to configure Spring Cloud Gateway and configure routes t --name catalog-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name catalog-service \- --routes-file azure/routes/catalog-service.json + --routes-file azure-spring-apps-enterprise/resources/json/routes/catalog-service.json ``` 1. 
Use the following command to create routes for the frontend: Use the following steps to configure Spring Cloud Gateway and configure routes t --name frontend-routes \ --service <Azure-Spring-Apps-service-instance-name> \ --app-name frontend \- --routes-file azure/routes/frontend.json + --routes-file azure-spring-apps-enterprise/resources/json/routes/frontend.json ``` 1. Use the following commands to retrieve the URL for Spring Cloud Gateway: |
spring-apps | Quickstart Deploy Event Driven App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-event-driven-app.md | The sample project is an event-driven application that subscribes to a [Service :::image type="content" source="media/quickstart-deploy-event-driven-app/diagram.png" alt-text="Diagram showing the Azure Spring Apps event-driven app architecture." lightbox="media/quickstart-deploy-event-driven-app/diagram.png" border="false"::: [!INCLUDE [quickstart-tool-introduction](includes/quickstart-deploy-event-driven-app/quickstart-tool-introduction.md)] The sample project is an event-driven application that subscribes to a [Service ## 1. Prerequisites --- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- [Git](https://git-scm.com/downloads).-- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.-- ### [Azure portal](#tab/Azure-portal) Use the following steps to confirm that the event-driven app works correctly. Yo 1. Confirm that there's a new message sent to the `upper-case` queue. For more information, see the [Peek a message](../service-bus-messaging/explorer.md#peek-a-message) section of [Use Service Bus Explorer to run data operations on Service Bus](../service-bus-messaging/explorer.md). 3. Use the following command to check the app's log to investigate any deployment issue: Use the following steps to confirm that the event-driven app works correctly. Yo ::: zone-end 3. From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs. |
spring-apps | Quickstart Deploy Infrastructure Vnet Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-azure-cli.md | The Enterprise deployment plan includes the following Tanzu components: ## Review the Azure CLI deployment script -The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md). +The deployment script used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture). ### [Standard plan](#tab/azure-spring-apps-standard) In this quickstart, you deployed an Azure Spring Apps instance into an existing * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI). * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). +* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). |
spring-apps | Quickstart Deploy Infrastructure Vnet Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-bicep.md | In this quickstart, you deployed an Azure Spring Apps instance into an existing * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI). * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). +* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). |
spring-apps | Quickstart Deploy Infrastructure Vnet Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet-terraform.md | For more customization including custom domain support, see the [Azure Spring Ap ## Review the Terraform plan -The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](reference-architecture.md). +The configuration file used in this quickstart is from the [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture). ### [Standard plan](#tab/azure-spring-apps-standard) In this quickstart, you deployed an Azure Spring Apps instance into an existing * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI) * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). +* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). |
spring-apps | Quickstart Deploy Infrastructure Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-infrastructure-vnet.md | The Enterprise deployment plan includes the following Tanzu components: ## Review the template -The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](reference-architecture.md). +The templates used in this quickstart are from the [Azure Spring Apps Reference Architecture](/previous-versions/azure/spring-apps/reference-architecture). ### [Standard plan](#tab/azure-spring-apps-standard) In this quickstart, you deployed an Azure Spring Apps instance into an existing * [Simple Hello World](./quickstart.md?pivots=programming-language-java&tabs=Azure-CLI) * Use [custom domains](how-to-custom-domain.md) with Azure Spring Apps. * Expose applications in Azure Spring Apps to the internet using Azure Application Gateway. For more information, see [Expose applications with end-to-end TLS in a virtual network](expose-apps-gateway-end-to-end-tls.md).-* View the secure end-to-end [Azure Spring Apps reference architecture](reference-architecture.md), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). +* View the secure end-to-end [Azure Spring Apps reference architecture](/previous-versions/azure/spring-apps/reference-architecture), which is based on the [Microsoft Azure Well-Architected Framework](/azure/architecture/framework/). * Learn more about [Azure Resource Manager](../azure-resource-manager/management/overview.md). |
spring-apps | Quickstart Deploy Java Native Image App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-java-native-image-app.md | + + Title: Quickstart - Deploy your first Java Native Image application to Azure Spring Apps +description: Describes how to deploy a Java Native Image application to Azure Spring Apps. +++ Last updated : 08/29/2023+++++# Quickstart: Deploy your first Java Native Image application to Azure Spring Apps ++> [!NOTE] +> The first 50 vCPU hours and 100 GB hours of memory are free each month. For more information, see [Price Reduction - Azure Spring Apps does more, costs less!](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/price-reduction-azure-spring-apps-does-more-costs-less/ba-p/3614058) on the [Apps on Azure Blog](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/bg-p/AppsonAzureBlog). ++> [!NOTE] +> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. ++**This article applies to:** ❌ Basic/Standard ✔️ Enterprise ++This quickstart shows how to deploy a Spring Boot application to Azure Spring Apps as a Native Image. ++[Native Image](https://www.graalvm.org/latest/reference-manual/native-image/) capability enables you to compile Java applications to standalone executables, known as Native Images. These executables can provide significant benefits, including faster startup times and lower runtime memory overhead compared to a traditional JVM (Java Virtual Machine). ++The sample project is the Spring Petclinic application. The following screenshot shows the application: +++## 1. Prerequisites ++- An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. +- [Git](https://git-scm.com/downloads). 
+- [Java Development Kit (JDK)](/java/azure/jdk/), version 17. +- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring` +- If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [View Azure Spring Apps Enterprise tier offering in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). +++## 5. Validate Native Image App ++Now you can access the deployed Native Image app to see whether it works. Use the following steps to validate: ++1. After the deployment has completed, you can run the following command to get the app URL: ++ ```azurecli + az spring app show \ + --service ${AZURE_SPRING_APPS_NAME} \ + --name ${NATIVE_APP_NAME} \ + --output table + ``` ++ You can access the app with the URL shown in the output as `Public Url`. The page should appear as you saw it on localhost. ++1. Use the following command to check the app's log to investigate any deployment issue: ++ ```azurecli + az spring app logs \ + --service ${AZURE_SPRING_APPS_NAME} \ + --name ${NATIVE_APP_NAME} + ``` ++## 6. Compare performance for JAR and Native Image ++The following sections describe how to compare the performance between JAR and Native Image deployment. ++### Server startup time ++Use the following command to check the app's log for the line `Started PetClinicApplication in XXX seconds` to get the server startup time for a JAR app: ++```azurecli +az spring app logs \ + --service ${AZURE_SPRING_APPS_NAME} \ + --name ${JAR_APP_NAME} +``` ++The server startup time is around 25 s for a JAR app.
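The startup time reported in that `Started PetClinicApplication in XXX seconds` log line can be pulled out mechanically when scripting the comparison. A minimal sketch (the log line below is a sample with hypothetical values, not real command output):

```shell
# Sample Spring Boot startup log line (hypothetical values)
log_line="Started PetClinicApplication in 24.837 seconds (process running for 25.412)"

# Extract the reported startup time in seconds from the log line
startup=$(printf '%s\n' "$log_line" \
  | sed -n 's/.*Started PetClinicApplication in \([0-9.]*\) seconds.*/\1/p')
echo "$startup"
```

In practice the `az spring app logs` output would be piped into the same `sed` filter.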
++Use the following command to check the app's log to get the server startup time for a Native Image app: ++```azurecli +az spring app logs \ + --service ${AZURE_SPRING_APPS_NAME} \ + --name ${NATIVE_APP_NAME} +``` ++The server startup time is less than 0.5 s for a Native Image app. ++### Memory usage ++Use the following command to scale down the memory size to 512 Mi for a Native Image app: ++```azurecli +az spring app scale \ + --service ${AZURE_SPRING_APPS_NAME} \ + --name ${NATIVE_APP_NAME} \ + --memory 512Mi +``` ++The command output should show that the Native Image app started successfully. ++Use the following command to scale down the memory size to 512 Mi for the JAR app: ++```azurecli +az spring app scale \ + --service ${AZURE_SPRING_APPS_NAME} \ + --name ${JAR_APP_NAME} \ + --memory 512Mi +``` ++The command output should show that the JAR app failed to start due to insufficient memory. The output message should be similar to the following example: `Terminating due to java.lang.OutOfMemoryError: Java heap space`. ++The following figure shows the optimized memory usage for the Native Image deployment for a constant workload of 400 requests per second into the Petclinic application. The memory usage is about 1/5th of the memory consumed by its equivalent JAR deployment. +++Native Images offer quicker startup times and reduced runtime memory overhead when compared to the conventional Java Virtual Machine (JVM). ++## 7. Clean up resources ++If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group: ++```azurecli +az group delete --name ${RESOURCE_GROUP} +``` ++## 8. 
Next steps ++> [!div class="nextstepaction"] +> [How to deploy Java Native Image apps in the Azure Spring Apps Enterprise plan](./how-to-enterprise-deploy-polyglot-apps.md#deploy-java-native-image-applications-preview) ++> [!div class="nextstepaction"] +> [Structured application log for Azure Spring Apps](./structured-app-log.md) ++> [!div class="nextstepaction"] +> [Map an existing custom domain to Azure Spring Apps](./tutorial-custom-domain.md) ++> [!div class="nextstepaction"] +> [Set up Azure Spring Apps CI/CD with GitHub Actions](./how-to-github-actions.md) ++> [!div class="nextstepaction"] +> [Set up Azure Spring Apps CI/CD with Azure DevOps](./how-to-cicd.md) ++> [!div class="nextstepaction"] +> [Use managed identities for applications in Azure Spring Apps](./how-to-use-managed-identities.md) ++> [!div class="nextstepaction"] +> [Create a service connection in Azure Spring Apps with the Azure CLI](../service-connector/quickstart-cli-spring-cloud-connection.md) ++> [!div class="nextstepaction"] +> [Run the polyglot ACME fitness store apps on Azure Spring Apps](./quickstart-sample-app-acme-fitness-store-introduction.md) ++For more information, see the following articles: ++- [Azure Spring Apps Samples](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples). +- [Spring on Azure](/azure/developer/java/spring/) +- [Spring Cloud Azure](/azure/developer/java/spring-framework/) |
spring-apps | Quickstart Deploy Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-web-app.md | The following diagram shows the architecture of the system: :::image type="content" source="media/quickstart-deploy-web-app/diagram.png" alt-text="Diagram that shows the architecture of a Spring web application." border="false"::: This article provides the following options for deploying to Azure Spring Apps: This article provides the following options for deploying to Azure Spring Apps: ## 1. Prerequisites ### [Azure portal](#tab/Azure-portal) This article provides the following options for deploying to Azure Spring Apps: - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure Developer CLI](https://aka.ms/azd-install), version 1.0.+- [Azure Developer CLI](https://aka.ms/azd-install), version 1.0.2 or higher. ::: zone-end - An Azure subscription. If you don't have a subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. - [Git](https://git-scm.com/downloads). - [Java Development Kit (JDK)](/java/azure/jdk/), version 17. - [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher. Use the following command to install the Azure Spring Apps extension: `az extension add --name spring`--- - If you're deploying an Azure Spring Apps Enterprise plan instance for the first time in the target subscription, see the [Requirements](./how-to-enterprise-marketplace-offer.md#requirements) section of [Enterprise plan in Azure Marketplace](./how-to-enterprise-marketplace-offer.md). ::: zone-end This article provides the following options for deploying to Azure Spring Apps: Now you can access the deployed app to see whether it works. Use the following steps to validate: --1. 
After the deployment has completed, use the following command to access the app with the URL retrieved: -- ```azurecli - az spring app show \ - --service ${AZURE_SPRING_APPS_NAME} \ - --name ${APP_NAME} \ - --query properties.url \ - --output tsv - ``` -- The page should appear as you saw in localhost. --1. Use the following command to check the app's log to investigate any deployment issue: -- ```azurecli - az spring app logs \ - --service ${AZURE_SPRING_APPS_NAME} \ - --name ${APP_NAME} - ``` -- ::: zone pivot="sc-enterprise" 1. After the deployment has completed, you can access the app with this URL: `https://${AZURE_SPRING_APPS_NAME}-${APP_NAME}.azuremicroservices.io/`. The page should appear as you saw in localhost. Now you can access the deployed app to see whether it works. Use the following s ::: zone-end 1. Access the application with the output application URL. The page should appear as you saw in localhost. Now you can access the deployed app to see whether it works. Use the following s ## 6. Clean up resources [!INCLUDE [clean-up-resources-portal-or-azd](includes/quickstart-deploy-web-app/clean-up-resources.md)] ::: zone-end If you plan to continue working with subsequent quickstarts and tutorials, you might want to leave these resources in place. When you no longer need the resources, delete them by deleting the resource group. Use the following command to delete the resource group: |
spring-apps | Quickstart Integrate Azure Database And Redis Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-integrate-azure-database-and-redis-enterprise.md | The following instructions describe how to provision an Azure Cache for Redis an [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)] -You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure/templates/azuredeploy.json). +You can find the template used in this quickstart in the [fitness store sample GitHub repository](https://github.com/Azure-Samples/acme-fitness-store/blob/Azure/azure-spring-apps-enterprise/resources/json/deploy/azuredeploy.json). To deploy this template, follow these steps: 1. Select the following image to sign in to Azure and open a template. The template creates an Azure Cache for Redis and an Azure Database for PostgreSQL Flexible Server. - :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure%2Ftemplates%2Fazuredeploy.json"::: + :::image type="content" source="../media/template-deployments/deploy-to-azure.svg" alt-text="Button to deploy the ARM template to Azure." border="false" link="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Facme-fitness-store%2FAzure%2Fazure-spring-apps-enterprise%2Fresources%2Fjson%2Fdeploy%2Fazuredeploy.json"::: 1. Enter values for the following fields: |
spring-apps | Quickstart Set Request Rate Limits Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-set-request-rate-limits-enterprise.md | az spring gateway route-config update \ --service <Azure-Spring-Apps-service-instance-name> \ --name catalog-routes \ --app-name catalog-service \- --routes-file azure/routes/catalog-service_rate-limit.json + --routes-file azure-spring-apps-enterprise/resources/json/routes/catalog-service_rate-limit.json ``` Use the following commands to retrieve the URL for the `/products` route in Spring Cloud Gateway: |
spring-apps | Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart.md | This article explains how to deploy a small application to run on Azure Spring A The application code used in this tutorial is a simple app. When you've completed this example, the application is accessible online, and you can manage it through the Azure portal. [!INCLUDE [quickstart-tool-introduction](includes/quickstart/quickstart-tool-introduction.md)] The application code used in this tutorial is a simple app. When you've complete ## 1. Prerequisites --- An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]-- [Git](https://git-scm.com/downloads).-- [Java Development Kit (JDK)](/java/azure/jdk/), version 17.-- [Azure CLI](/cli/azure/install-azure-cli) version 2.45.0 or higher.-- ### [Azure portal](#tab/Azure-portal) The application code used in this tutorial is a simple app. When you've complete After deployment, you can access the app at `https://<your-Azure-Spring-Apps-instance-name>-demo.azuremicroservices.io`. When you open the app, you get the response `Hello World`. --Use the following command to check the app's log to investigate any deployment issue: --```azurecli -az spring app logs \ - --service ${SERVICE_NAME} \ - --name ${APP_NAME} -``` -- From the navigation pane of the Azure Spring Apps instance overview page, select **Logs** to check the app's logs. |
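The URL pattern described above (`https://<your-Azure-Spring-Apps-instance-name>-demo.azuremicroservices.io`) can be composed from the instance and app names, which is handy when scripting validation. A minimal sketch with hypothetical names:

```shell
# Hypothetical names; the URL pattern follows the quickstart above
SERVICE_NAME="my-spring-apps"
APP_NAME="demo"

# Compose the public endpoint from the instance and app names
APP_URL="https://${SERVICE_NAME}-${APP_NAME}.azuremicroservices.io"
echo "$APP_URL"
```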
spring-apps | Reference Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/reference-architecture.md | - Previously updated : 05/31/2022-- Title: Azure Spring Apps reference architecture--- -description: This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. ---# Azure Spring Apps reference architecture --> [!NOTE] -> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. --**This article applies to:** ✔️ Standard ✔️ Enterprise --This reference architecture is a foundation using a typical enterprise hub and spoke design for the use of Azure Spring Apps. In the design, Azure Spring Apps is deployed in a single spoke that's dependent on shared services hosted in the hub. The architecture is built with components to achieve the tenets in the [Microsoft Azure Well-Architected Framework][16]. --There are two flavors of Azure Spring Apps: Standard plan and Enterprise plan. --The Azure Spring Apps Standard plan is composed of the Spring Cloud Config Server, the Spring Cloud Service Registry, and the kpack build service. --The Azure Spring Apps Enterprise plan is composed of the VMware Tanzu® Build Service™, Application Configuration Service for VMware Tanzu®, VMware Tanzu® Service Registry, Spring Cloud Gateway for VMware Tanzu®, and API portal for VMware Tanzu®. --For an implementation of this architecture, see the [Azure Spring Apps Reference Architecture][10] on GitHub. --Deployment options for this architecture include Azure Resource Manager (ARM), Terraform, Azure CLI, and Bicep. The artifacts in this repository provide a foundation that you can customize for your environment. 
You can group resources such as Azure Firewall or Application Gateway into different resource groups or subscriptions. This grouping helps keep different functions separate, such as IT infrastructure, security, business application teams, and so on. --## Planning the address space --Azure Spring Apps requires two dedicated subnets: --* Service runtime -* Spring Boot applications --Each of these subnets requires a dedicated Azure Spring Apps cluster. Multiple clusters can't share the same subnets. The minimum size of each subnet is /28. The number of application instances that Azure Spring Apps can support varies based on the size of the subnet. You can find the detailed virtual network requirements in the [Virtual network requirements][11] section of [Deploy Azure Spring Apps in a virtual network][17]. --> [!WARNING] -> The selected subnet size can't overlap with the existing virtual network address space, and shouldn't overlap with any peered or on-premises subnet address ranges. --## Use cases --Typical uses for this architecture include: --* Private applications: Internal applications deployed in hybrid cloud environments -* Public applications: Externally facing applications --These use cases are similar except for their security and network traffic rules. This architecture is designed to support the nuances of each. --## Private applications --The following list describes the infrastructure requirements for private applications. These requirements are typical in highly regulated environments. --* A subnet must only have one instance of Azure Spring Apps. -* Adherence to at least one Security Benchmark should be enforced. -* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS. -* Azure service dependencies should communicate through Service Endpoints or Private Link. -* Data at rest should be encrypted. -* Data in transit should be encrypted. 
-* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps. -* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall). -* If [Azure Spring Apps Config Server][8] is used to load config properties from a repository, the repository must be private. -* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault. -* Name resolution of hosts on-premises and in the Cloud should be bidirectional. -* No direct egress to the public Internet except for control plane traffic. -* Resource Groups managed by the Azure Spring Apps deployment must not be modified. -* Subnets managed by the Azure Spring Apps deployment must not be modified. --The following list shows the components that make up the design: --* On-premises network - * Domain Name Service (DNS) - * Gateway -* Hub subscription - * Application Gateway Subnet - * Azure Firewall Subnet - * Shared Services Subnet -* Connected subscription - * Azure Bastion Subnet - * Virtual Network Peer --The following list describes the Azure services in this reference architecture: --* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources. --* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises. --* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps. --* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure. 
--* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications. --The following diagrams represent a well-architected hub and spoke design that addresses the above requirements: --### [Standard plan](#tab/azure-spring-standard) ---### [Enterprise plan](#tab/azure-spring-enterprise) -----## Public applications --The following list describes the infrastructure requirements for public applications. These requirements are typical in highly regulated environments. --* A subnet must only have one instance of Azure Spring Apps. -* Adherence to at least one Security Benchmark should be enforced. -* Application host Domain Name Service (DNS) records should be stored in Azure Private DNS. -* Azure DDoS Protection should be enabled. -* Azure service dependencies should communicate through Service Endpoints or Private Link. -* Data at rest should be encrypted. -* Data in transit should be encrypted. -* DevOps deployment pipelines can be used (for example, Azure DevOps) and require network connectivity to Azure Spring Apps. -* Egress traffic should travel through a central Network Virtual Appliance (NVA) (for example, Azure Firewall). -* Ingress traffic should be managed by at least Application Gateway or Azure Front Door. -* Internet routable addresses should be stored in Azure Public DNS. -* Microsoft's Zero Trust security approach requires secrets, certificates, and credentials to be stored in a secure vault. The recommended service is Azure Key Vault. -* Name resolution of hosts on-premises and in the Cloud should be bidirectional. -* No direct egress to the public Internet except for control plane traffic. -* Resource Groups managed by the Azure Spring Apps deployment must not be modified. -* Subnets managed by the Azure Spring Apps deployment must not be modified. 
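The planning guidance earlier in this article notes that each of the two dedicated subnets must be at least /28. Because Azure reserves five addresses in every subnet, the usable capacity of a /28 is small, which is worth checking before carving up the hub and spoke address space. A quick arithmetic sketch:

```shell
# A /28 prefix leaves 32-28 = 4 host bits, i.e. 16 addresses.
prefix=28
total=$(( 1 << (32 - prefix) ))

# Azure reserves 5 addresses in every subnet
# (network, broadcast, default gateway, and two for Azure DNS).
usable=$(( total - 5 ))

echo "total=$total usable=$usable"   # total=16 usable=11
```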
--The following list shows the components that make up the design: --* On-premises network - * Domain Name Service (DNS) - * Gateway -* Hub subscription - * Application Gateway Subnet - * Azure Firewall Subnet - * Shared Services Subnet -* Connected subscription - * Azure Bastion Subnet - * Virtual Network Peer --The following list describes the Azure services in this reference architecture: --* [Azure Application Firewall][7]: a feature of Azure Application Gateway that provides centralized protection of applications from common exploits and vulnerabilities. --* [Azure Application Gateway][6]: a load balancer responsible for application traffic with Transport Layer Security (TLS) offload operating at layer 7. --* [Azure Key Vault][2]: a hardware-backed credential management service that has tight integration with Microsoft identity services and compute resources. --* [Azure Monitor][3]: an all-encompassing suite of monitoring services for applications that deploy both in Azure and on-premises. --* [Azure Pipelines][5]: a fully featured Continuous Integration / Continuous Development (CI/CD) service that can automatically deploy updated Spring Boot apps to Azure Spring Apps. --* [Microsoft Defender for Cloud][4]: a unified security management and threat protection system for workloads across on-premises, multiple clouds, and Azure. --* [Azure Spring Apps][1]: a managed service that's designed and optimized specifically for Java-based Spring Boot applications and .NET-based [Steeltoe][9] applications. --The following diagrams represent a well-architected hub and spoke design that addresses the above requirements. Only the hub-virtual-network communicates with the internet: --### [Standard plan](#tab/azure-spring-standard) ---### [Enterprise plan](#tab/azure-spring-enterprise) -----## Azure Spring Apps on-premises connectivity --Applications in Azure Spring Apps can communicate to various Azure, on-premises, and external resources. 
By using the hub and spoke design, applications can route traffic externally or to the on-premises network using Express Route or Site-to-Site Virtual Private Network (VPN). --## Azure Well-Architected Framework considerations --The [Azure Well-Architected Framework][16] is a set of guiding tenets to follow in establishing a strong infrastructure foundation. The framework contains the following categories: cost optimization, operational excellence, performance efficiency, reliability, and security. --### Cost optimization --Because of the nature of distributed system design, infrastructure sprawl is a reality. This reality results in unexpected and uncontrollable costs. Azure Spring Apps is built using components that scale so that it can meet demand and optimize cost. The core of this architecture is the Azure Kubernetes Service (AKS). The service is designed to reduce the complexity and operational overhead of managing Kubernetes, which includes efficiencies in the operational cost of the cluster. --You can deploy different applications and application types to a single instance of Azure Spring Apps. The service supports autoscaling of applications triggered by metrics or schedules that can improve utilization and cost efficiency. --You can also use Application Insights and Azure Monitor to lower operational cost. With the visibility provided by the comprehensive logging solution, you can implement automation to scale the components of the system in real time. You can also analyze log data to reveal inefficiencies in the application code that you can address to improve the overall cost and performance of the system. --### Operational excellence --Azure Spring Apps addresses multiple aspects of operational excellence. 
You can combine these aspects to ensure that the service runs efficiently in production environments, as described in the following list: --* You can use Azure Pipelines to ensure that deployments are reliable and consistent while helping you avoid human error. -* You can use Azure Monitor and Application Insights to store log and telemetry data. - You can assess collected log and metric data to ensure the health and performance of your applications. Application Performance Monitoring (APM) is fully integrated into the service through a Java agent. This agent provides visibility into all the deployed applications and dependencies without requiring extra code. For more information, see the blog post [Effortlessly monitor applications and dependencies in Azure Spring Apps][15]. -* You can use Microsoft Defender for Cloud to ensure that applications maintain security by providing a platform to analyze and assess the data provided. -* The service supports various deployment patterns. For more information, see [Set up a staging environment in Azure Spring Apps][14]. --### Reliability --Azure Spring Apps is built on AKS. While AKS provides a level of resiliency through clustering, this reference architecture goes even further by incorporating services and architectural considerations to increase availability of the application if there's component failure. --By building on top of a well-defined hub and spoke design, the foundation of this architecture ensures that you can deploy it to multiple regions. For the private application use case, the architecture uses Azure Private DNS to ensure continued availability during a geographic failure. For the public application use case, Azure Front Door and Azure Application Gateway ensure availability. --### Security --The security of this architecture is addressed by its adherence to industry-defined controls and benchmarks. 
In this context, "control" means a concise and well-defined best practice, such as "Employ the least privilege principle when implementing information system access. IAM-05" The controls in this architecture are from the [Cloud Control Matrix][19] (CCM) by the [Cloud Security Alliance][18] (CSA) and the [Microsoft Azure Foundations Benchmark][20] (MAFB) by the [Center for Internet Security][21] (CIS). In the applied controls, the focus is on the primary security design principles of governance, networking, and application security. It is your responsibility to handle the design principles of Identity, Access Management, and Storage as they relate to your target infrastructure. --#### Governance --The primary aspect of governance that this architecture addresses is segregation through the isolation of network resources. In the CCM, DCS-08 recommends ingress and egress control for the datacenter. To satisfy the control, the architecture uses a hub and spoke design using Network Security Groups (NSGs) to filter east-west traffic between resources. The architecture also filters traffic between central services in the hub and resources in the spoke. The architecture uses an instance of Azure Firewall to manage traffic between the internet and the resources within the architecture. --The following list shows the control that addresses datacenter security in this reference: --| CSA CCM Control ID | CSA CCM Control Domain | -|:-|:--| -| DCS-08 | Datacenter Security Unauthorized Persons Entry | --#### Network --The network design supporting this architecture is derived from the traditional hub and spoke model. This decision ensures that network isolation is a foundational construct. CCM control IVS-06 recommends that traffic between networks and virtual machines are restricted and monitored between trusted and untrusted environments. 
This architecture adopts the control by implementing NSGs for east-west traffic (within the "data center"), and the Azure Firewall for north-south traffic (outside of the "data center"). CCM control IPY-04 recommends that the infrastructure use secure network protocols for the exchange of data between services. The Azure services supporting this architecture all use standard secure protocols such as TLS for HTTP and SQL. --The following list shows the CCM controls that address network security in this reference: --| CSA CCM Control ID | CSA CCM Control Domain | -| :-- | :-| -| IPY-04 | Network Protocols | -| IVS-06 | Network Security | --The network implementation is further secured by defining controls from the MAFB. The controls ensure that traffic into the environment is restricted from the public Internet. --The following list shows the CIS controls that address network security in this reference: --| CIS Control ID | CIS Control Description | -|:|:| -| 6.2 | Ensure that SSH access is restricted from the internet. | -| 6.3 | Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP). | -| 6.5 | Ensure that Network Watcher is 'Enabled'. | -| 6.6 | Ensure that ingress using UDP is restricted from the internet. | --Azure Spring Apps requires management traffic to egress from Azure when deployed in a secured environment. You must allow the network and application rules listed in [Customer responsibilities for running Azure Spring Apps in a virtual network](./vnet-customer-responsibilities.md). --#### Application security --This design principle covers the fundamental components of identity, data protection, key management, and application configuration. By design, an application deployed in Azure Spring Apps runs with the least privilege required to function. The set of authorization controls is directly related to data protection when using the service. Key management strengthens this layered application security approach. 
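As one concrete illustration of that layered approach, key and secret expiration and vault recoverability can be scripted with the Azure CLI. This is a sketch under assumptions: the vault name `contoso-kv` and object names are hypothetical, `date -u` is GNU `date` syntax, and the `az` commands only run when the CLI is installed and signed in.

```shell
#!/bin/bash
# Sketch only: hypothetical vault and object names. Enforces key and secret
# expiration (so rotation happens) and keeps the vault contents recoverable.
vault="contoso-kv"
expires=$(date -u -d "+90 days" '+%Y-%m-%dT%H:%M:%SZ')  # 90-day rotation window

if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  # Keys and secrets get explicit expiration dates.
  az keyvault key create --vault-name "$vault" --name app-cmk --expires "$expires"
  az keyvault secret set --vault-name "$vault" --name db-password \
    --value "<placeholder>" --expires "$expires"
  # Purge protection keeps the vault recoverable.
  az keyvault update --name "$vault" --enable-purge-protection true
else
  echo "Azure CLI not signed in; keys/secrets would expire at $expires"
fi
```

The `--expires` value uses the UTC format Key Vault expects; adjust the 90-day window to match your rotation policy.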
--The following list shows the CCM controls that address key management in this reference: --| CSA CCM Control ID | CSA CCM Control Domain | -|:-|:--| -| EKM-01 | Encryption and Key Management Entitlement | -| EKM-02 | Encryption and Key Management Key Generation | -| EKM-03 | Encryption and Key Management Sensitive Data Protection | -| EKM-04 | Encryption and Key Management Storage and Access | --From the CCM, EKM-02 and EKM-03 recommend policies and procedures to manage keys and to use encryption protocols to protect sensitive data. EKM-01 recommends that all cryptographic keys have identifiable owners so that they can be managed. EKM-04 recommends the use of standard algorithms. --The following list shows the CIS controls that address key management in this reference: --| CIS Control ID | CIS Control Description | -|:|:-| -| 8.1 | Ensure that the expiration date is set on all keys. | -| 8.2 | Ensure that the expiration date is set on all secrets. | -| 8.4 | Ensure the key vault is recoverable. | --The CIS controls 8.1 and 8.2 recommend that expiration dates are set for credentials to ensure that rotation is enforced. CIS control 8.4 ensures that the contents of the key vault can be restored to maintain business continuity. --The aspects of application security set a foundation for the use of this reference architecture to support a Spring workload in Azure. --## Next steps --Explore this reference architecture through the ARM, Terraform, and Azure CLI deployments available in the [Azure Spring Apps Reference Architecture][10] repository. 
--<!-- Reference links in article --> -[1]: ./index.yml -[2]: ../key-vault/index.yml -[3]: ../azure-monitor/index.yml -[4]: ../security-center/index.yml -[5]: /azure/devops/pipelines/ -[6]: ../application-gateway/index.yml -[7]: ../web-application-firewall/index.yml -[8]: ./how-to-config-server.md -[9]: https://steeltoe.io/ -[10]: https://github.com/Azure/azure-spring-apps-landing-zone-accelerator/tree/reference-architecture -[11]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements -[12]: ./vnet-customer-responsibilities.md#azure-spring-apps-network-requirements -[13]: ./vnet-customer-responsibilities.md#azure-spring-apps-fqdn-requirements--application-rules -[14]: ./how-to-staging-environment.md -[15]: https://devblogs.microsoft.com/java/monitor-applications-and-dependencies-in-azure-spring-cloud/ -[16]: /azure/architecture/framework/ -[17]: ./how-to-deploy-in-azure-virtual-network.md#virtual-network-requirements -[18]: https://cloudsecurityalliance.org/ -[19]: https://cloudsecurityalliance.org/research/working-groups/cloud-controls-matrix -[20]: /azure/security/benchmarks/v2-cis-benchmark -[21]: https://www.cisecurity.org/ |
spring-apps | Secure Communications End To End | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/secure-communications-end-to-end.md | Azure Spring Apps is jointly built, operated, and supported by Microsoft and VMw - [Deploy Spring microservices to Azure](/training/modules/azure-spring-cloud-workshop/) - [Azure Key Vault Certificates Spring Cloud Azure Starter (GitHub.com)](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/spring/spring-cloud-azure-starter-keyvault-certificates/pom.xml)-- [Azure Spring Apps reference architecture](reference-architecture.md)+- [Azure Spring Apps architecture design](/azure/architecture/web-apps/spring-apps?toc=/azure/spring-apps/toc.json&bc=/azure/spring-apps/breadcrumb/toc.json) - Migrate your [Spring Boot](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-apps), [Spring Cloud](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-apps), and [Tomcat](/azure/developer/java/migration/migrate-tomcat-to-azure-spring-apps) applications to Azure Spring Apps |
spring-apps | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
spring-apps | Vnet Customer Responsibilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/vnet-customer-responsibilities.md | The following list shows the resource requirements for Azure Spring Apps service | Destination Endpoint | Port | Use | Note | |-||-|--| | \*:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureCloud:443 | TCP:443 | Azure Spring Apps Service Management. | Information about the service instance's "requiredTraffics" can be found in the resource payload, under the "networkProfile" section. |-| \*:123 *or* ntp.ubuntu.com:123 | UDP:123 | NTP time synchronization on Linux nodes. | | | \*.azurecr.io:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - AzureContainerRegistry:443 | TCP:443 | Azure Container Registry. | Can be replaced by enabling *Azure Container Registry* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.core.windows.net:443 and \*.core.windows.net:445 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - Storage:443 and Storage:445 | TCP:443, TCP:445 | Azure Files | Can be replaced by enabling *Azure Storage* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | | \*.servicebus.windows.net:443 *or* [ServiceTag](../virtual-network/service-tags-overview.md#available-service-tags) - EventHub:443 | TCP:443 | Azure Event Hubs. | Can be replaced by enabling *Azure Event Hubs* [service endpoint in virtual network](../virtual-network/virtual-network-service-endpoints-overview.md). | Azure Firewall provides the FQDN tag **AzureKubernetesService** to simplify the |--|--|| | <i>*.azmk8s.io</i> | HTTPS:443 | Underlying Kubernetes Cluster management. | | <i>mcr.microsoft.com</i> | HTTPS:443 | Microsoft Container Registry (MCR). 
|-| <i>*.cdn.mscr.io</i> | HTTPS:443 | MCR storage backed by the Azure CDN. | | <i>*.data.mcr.microsoft.com</i> | HTTPS:443 | MCR storage backed by the Azure CDN. | | <i>management.azure.com</i> | HTTPS:443 | Underlying Kubernetes Cluster management. |-| <i>*login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. | -| <i>*login.microsoft.com</i> | HTTPS:443 | Azure Active Directory authentication. | +| <i>login.microsoftonline.com</i> | HTTPS:443 | Azure Active Directory authentication. | | <i>packages.microsoft.com</i> | HTTPS:443 | Microsoft packages repository. | | <i>acs-mirror.azureedge.net</i> | HTTPS:443 | Repository required to install required binaries like kubenet and Azure CNI. |-| *mscrl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. | -| *crl.microsoft.com*<sup>1</sup> | HTTPS:80 | Required Microsoft Certificate Chain Paths. | -| *crl3.digicert.com*<sup>1</sup> | HTTPS:80 | Third-Party TLS/SSL Certificate Chain Paths. | --<sup>1</sup> Please note that these FQDNs aren't included in the FQDN tag. ## Azure Spring Apps optional FQDN for third-party application performance management |
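When locking egress down with Azure Firewall rather than the **AzureKubernetesService** FQDN tag, individual entries from the tables above can be expressed as application rules. A hedged sketch (the resource group, firewall name, and address space are hypothetical, and the `az` call only runs if the Azure CLI with the azure-firewall extension is available and signed in):

```shell
#!/bin/bash
# Hypothetical names; illustrates allowing the MCR FQDNs from the table above.
rg="spring-apps-rg"
fw="hub-firewall"
fqdns=("mcr.microsoft.com" "*.data.mcr.microsoft.com")

if command -v az >/dev/null 2>&1 && az account show >/dev/null 2>&1; then
  az network firewall application-rule create \
    --resource-group "$rg" --firewall-name "$fw" \
    --collection-name "spring-apps-egress" --name "mcr" \
    --priority 200 --action Allow \
    --protocols Https=443 \
    --source-addresses "10.0.0.0/16" \
    --target-fqdns "${fqdns[@]}"
else
  echo "Azure CLI not signed in; would allow: ${fqdns[*]}"
fi
```

Each row of the FQDN table would become one such rule (or one `--target-fqdns` entry) in the collection.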
static-web-apps | Application Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/application-settings.md | In a terminal or command line, execute the following command to delete a setting ## Next steps > [!div class="nextstepaction"]-> [Define configuration for Azure Static Web Apps in the _staticwebapp.config.json_ file](configuration.md) +> [Define configuration for Azure Static Web Apps in the _staticwebapp.config.json_ file](configuration.md) ## Related articles |
static-web-apps | Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md | Define each IPv4 address block in Classless Inter-Domain Routing (CIDR) notation When one or more IP address blocks are specified, requests originating from IP addresses that don't match a value in `allowedIpRanges` are denied access. -In addition to IP address blocks, you can also specify [service tags](../virtual-network/service-tags-overview.md) in the `allowedIpRanges` array to restrict traffic to certain Azure services. +In addition to IP address blocks, you can also specify [service tags](../virtual-network/service-tags-overview.md#discover-service-tags-by-using-downloadable-json-files) in the `allowedIpRanges` array to restrict traffic to certain Azure services. ```json "networking": { |
static-web-apps | Deploy Nextjs Static Export | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-static-export.md | By default, the application is treated as a hybrid rendered Next.js application, ### [Azure Pipelines](#tab/azure-pipelines) ```yaml- - task: AzureStaticWebAppLatest@0 + - task: AzureStaticWebApp@0 inputs: azure_static_web_apps_api_token: $(AZURE_STATIC_WEB_APPS_TOKEN) ###### Repository/Build Configurations - These values can be configured to match your app requirements. ###### |
static-web-apps | Publish Vuepress | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/publish-vuepress.md | -This article demonstrates how to create and deploy a [VuePress](https://vuepress.vuejs.org/) web application to [Azure Azure Static Web Apps](overview.md). The final result is a new Azure Static Web Apps application with the associated GitHub Actions that give you control over how the app is built and published. +This article demonstrates how to create and deploy a [VuePress](https://vuepress.vuejs.org/) web application to [Azure Static Web Apps](overview.md). The final result is a new Azure Static Web Apps application with the associated GitHub Actions that give you control over how the app is built and published. In this tutorial, you learn how to: |
storage-mover | Agent Register | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md | The agent displays detailed progress. Once the registration is complete, you're To accomplish seamless authentication with Azure and authorization to various Azure resources, the agent is registered with the following Azure - Azure Storage Mover (Microsoft.StorageMover)-- Azure ARC (Microsoft.HybridCompute)+- Azure Arc (Microsoft.HybridCompute) ### Azure Storage Mover service Registration to the Azure Storage mover service is visible and manageable throug You can reference this Azure Resource Manager (ARM) resource when you want to assign migration jobs to the specific agent VM it symbolizes. -### Azure ARC service +### Azure Arc service -The agent is also registered with the [Azure ARC service](../azure-arc/overview.md). ARC is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent. +The agent is also registered with the [Azure Arc service](../azure-arc/overview.md). Arc is used to assign and maintain an [Azure AD managed identity](../active-directory/managed-identities-azure-resources/overview.md) for this registered agent. Azure Storage Mover uses a system-assigned managed identity. A managed identity is a service principal of a special type that can only be used with Azure resources. When the managed identity is deleted, the corresponding service principal is also automatically removed. The process of deletion is automatically initiated when you unregister the agent. However, there are other ways to remove this identity. Doing so incapacitates the registered agent and requires the agent to be unregistered. Only the registration process can get an agent to obtain and maintain its Azure identity properly. > [!NOTE]-> During public preview, there is a side effect of the registration with the Azure ARC service. 
A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource. +> During public preview, there is a side effect of the registration with the Azure Arc service. A separate resource of the type *Server-Azure Arc* is also deployed in the same resource group as your storage mover resource. You won't be able to manage the agent through this resource. It may appear that you're able to manage aspects of the storage mover agent through the *Server-Azure Arc* resource, but in most cases you can't. It's best to exclusively manage the agent through the *Registered agents* pane in your storage mover resource or through the local administrative shell. > [!WARNING]-> Do not delete the Azure ARC server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to. +> Do not delete the Azure Arc server resource that is created for a registered agent in the same resource group as the storage mover resource. The only safe time to delete this resource is when you previously unregistered the agent this resource corresponds to. ### Authorization For a migration job, access to the target endpoint is perhaps the most important These assignments are made in the admin's sign-in context in the Azure portal. Therefore, the admin must be a member of the role-based access control (RBAC) control plane role "Owner" for the target container. This assignment is made just-in-time when you start a migration job. It is at this point that you've selected an agent to execute a migration job. As part of this start action, the agent is given permissions to the data plane of the target container. 
The agent isn't authorized to perform any management plane actions, such as deleting the target container or configuring any features on it. > [!WARNING]-> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure ARC services. +> Access is granted to a specific agent just-in-time for running a migration job. However, the agent's authorization to access the target is not automatically removed. You must either manually remove the agent's managed identity from a specific target or unregister the agent to destroy the service principal. This action removes all target storage authorization as well as the ability of the agent to communicate with the Storage Mover and Azure Arc services. ## Next steps |
storage-mover | Endpoint Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/endpoint-manage.md | REVIEW Engineering: not reviewed EDIT PASS: started Initial doc score: 93-Current doc score: 100 (3269 words and 0 issues) +Current doc score: 100 (3365 words and 0 issues) !######################################################## --> While the term *endpoint* is often used in networking, it's used in the context of the Storage Mover service to describe a storage location with a high level of detail. -A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition. Only certain types of endpoints may be used as a source or a target, respectively. +A storage mover endpoint is a resource that contains the path to either a source or destination location and other relevant information. Endpoints are used in the creation of a job definition to define the source and target locations for a particular copy operation. Only certain types of endpoints may be used as a source or a target, respectively. For example, data contained within an NFS (Network File System) file share endpoint can only be copied to a blob storage container. Similarly, copy operations with an SMB-based (Server Message Block) file share target can only be migrated to an Azure file share. This article guides you through the creation and management of Azure Storage Mover endpoints. To follow these examples, you need a top-level storage mover resource. If you haven't yet created one, follow the steps within the [Create a Storage Mover resource](storage-mover-create.md) article before continuing. 
After you complete the steps within this article, you'll be able to create and m Within the Azure Storage Mover resource hierarchy, a migration project is used to organize migration jobs into logical tasks or components. A migration project in turn contains at least one job definition, which describes both the source and target locations for your migration project. The [Understanding the Storage Mover resource hierarchy](resource-hierarchy.md) article contains more detailed information about the relationships between a Storage Mover, its endpoints, and its projects. -Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB (Server Message Block) shares, and Azure Storage blob container endpoints each require fundamentally different information. +Because a migration requires both a well-defined source and target, endpoints are parented to the top-level storage mover resource. This placement allows you to reuse endpoints across any number of job definitions. While there's only a single endpoint resource, the properties of each endpoint may vary based on its type. For example, NFS (Network File System) shares, SMB shares, and Azure Storage blob container endpoints each require fundamentally different information. [!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)] Agent access to both your Key Vault and target storage resources is controlled t There are many use cases that require preserving metadata values such as file and folder timestamps, ACLs, and file attributes. Storage Mover supports the same level of file fidelity as the underlying Azure file share. 
Azure Files in turn [supports a subset](/rest/api/storageservices/set-file-properties) of the [NTFS file properties](/windows/win32/fileio/file-attribute-constants). The following table represents common metadata that is migrated: -|Metadata property |Outcome | -|--|--| +|Metadata property |Outcome | +|--|| |Directory structure |The original directory structure of the source is preserved on the target share. |-|Access permissions |Permissions on the source file or directory are preserved on the target share. | -|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. | +|Access permissions |Permissions on the source file or directory are preserved on the target share. | +|Symbolic links |Symbolic links on the source are preserved and mapped on the target share. | |Create timestamp |The original create timestamp of the source file is preserved on the target share. | |Change timestamp |The original change timestamp of the source file is preserved on the target share. | |Modified timestamp |The original modified timestamp of the source file is preserved on the target share. | Follow the steps in this section to view endpoints accessible to your Storage Mo 1. On the **Storage endpoints** page, the default **Storage endpoints** view displays the names of any provisioned source endpoints and a summary of their associated properties. To view provisioned destination endpoint, select **Target endpoints**. You can filter the results further by selecting the **Protocol** or **Host** filters and the relevant option. - :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing the endpoint details and the location of the target endpoint filters." 
lightbox="media/endpoint-manage/endpoint-filter-lrg.png"::: + :::image type="content" source="media/endpoint-manage/endpoint-filter.png" alt-text="Screenshot of the Storage Endpoints page within the Azure portal showing endpoint details and the target endpoint filters location." lightbox="media/endpoint-manage/endpoint-filter-lrg.png"::: - At this time, the Azure Portal doesn't provide the ability to to directly modify provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure Portal should be deleted and recreated. + At this time, the Azure portal doesn't support the direct modification of provisioned endpoints. An endpoint's description, however, can be modified using Azure PowerShell by following [this example](endpoint-manage.md?tabs=powershell#view-and-edit-an-endpoints-properties). Endpoint resources that require updating within the Azure portal should be deleted and recreated. ### [PowerShell](#tab/powershell) |
storage | Anonymous Read Access Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md | Title: Configure anonymous public read access for containers and blobs description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container public access setting to make containers and blobs available for anonymous access.-+ Last updated 11/09/2022-+ ms.devlang: powershell, azurecli |
storage | Anonymous Read Access Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-overview.md | Title: Overview of remediating anonymous public read access for blob data description: Learn how to remediate anonymous public read access to blob data for both Azure Resource Manager and classic storage accounts.-+ Last updated 11/09/2022-+ |
storage | Anonymous Read Access Prevent Classic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent-classic.md | Title: Remediate anonymous public read access to blob data (classic deployments) description: Learn how to prevent anonymous requests against a classic storage account by disabling anonymous public access to containers.-+ Last updated 11/09/2022-+ ms.devlang: powershell, azurecli |
storage | Anonymous Read Access Prevent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md | Title: Remediate anonymous public read access to blob data (Azure Resource Manager deployments) description: Learn how to analyze anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container.-+ Last updated 05/23/2023-+ ms.devlang: powershell, azurecli |
storage | Assign Azure Role Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md | Title: Assign an Azure role for access to blob data description: Learn how to assign permissions for blob data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD.-+ Last updated 04/19/2022-+ ms.devlang: powershell, azurecli |
storage | Authorize Access Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md | Title: Authorize access to blobs using Active Directory description: Authorize access to Azure blobs using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account.-+ Last updated 03/17/2023-+ |
storage | Authorize Data Operations Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-cli.md | Title: Authorize access to blob data with Azure CLI description: Specify how to authorize data operations against blob data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token.-+ Last updated 07/12/2021-+ ms.devlang: azurecli |
storage | Authorize Data Operations Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md | Title: Authorize access to blob data in the Azure portal description: When you access blob data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key.-+ Last updated 12/10/2021-+ |
storage | Authorize Data Operations Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md | Title: Run PowerShell commands with Azure AD credentials to access blob data description: PowerShell supports signing in with Azure AD credentials to run commands on blob data in Azure Storage. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal.-+ Last updated 05/12/2022-+ ms.devlang: powershell |
storage | Blob Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md | You can use the `az storage blob upload-batch` command to recursively upload mul In the following example, the first operation uses the `az storage blob upload` command to upload a single, named file. The source file and destination storage container are specified with the `--file` and `--container-name` parameters. -The second operation demonstrates the use of the `az storage blob upload-batch` command to upload multiple files. The `--if-unmodified-since` parameter ensures that only files modified with the last seven days will be uploaded. The value supplied by this parameter must be provided in UTC format. +The second operation demonstrates the use of the `az storage blob upload-batch` command to upload multiple files. The `--if-modified-since` parameter ensures that only files modified within the last seven days will be uploaded. The value supplied by this parameter must be provided in UTC format. ```azurecli-interactive #!/bin/bash storageAccount="<storage-account>" containerName="demo-container"-lastModified=`date -d "10 days ago" '+%Y-%m-%dT%H:%MZ'` +lastModified=`date -d "7 days ago" '+%Y-%m-%dT%H:%MZ'` path="C:\\temp\\" filename="demo-file.txt" az storage blob upload-batch \ --pattern *.png \ --account-name $storageAccount \ --auth-mode login \- --if-unmodified-since $lastModified + --if-modified-since $lastModified ``` |
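The timestamp format matters here: `--if-modified-since` expects a UTC value. The `date` expression from the script above can be sanity-checked on its own; this sketch adds `-u` so GNU `date` emits UTC regardless of the local time zone.

```shell
#!/bin/bash
# Same pattern as the upload-batch example above, with -u to force UTC output.
lastModified=$(date -u -d "7 days ago" '+%Y-%m-%dT%H:%MZ')

# Expect something like 2023-08-25T14:30Z.
if [[ "$lastModified" =~ ^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}Z$ ]]; then
  echo "valid cutoff: $lastModified"
else
  echo "unexpected format: $lastModified" >&2
fi
```

Note that `date -d "7 days ago"` is GNU `date` syntax; on BSD/macOS the equivalent would use `-v-7d` instead.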
storage | Blob V11 Samples Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-dotnet.md | |
storage | Blob V11 Samples Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v11-samples-javascript.md | |
storage | Blob V2 Samples Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-v2-samples-python.md | |
storage | Client Side Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md | description: The Blob Storage client library supports client-side encryption and -+ Last updated 12/12/2022 |
storage | Data Lake Storage Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md | To set file and directory level permissions, see any of the following articles: |REST API |[Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update)| > [!IMPORTANT]-> If the security principal is a *service* principal, it's important to use the object ID of the service principal and not the object ID of the related app registration. To get the object ID of the service principal open the Azure CLI, and then use this command: `az ad sp show --id <Your App ID> --query objectId`. make sure to replace the `<Your App ID>` placeholder with the App ID of your app registration. +> If the security principal is a *service* principal, it's important to use the object ID of the service principal and not the object ID of the related app registration. To get the object ID of the service principal open the Azure CLI, and then use this command: `az ad sp show --id <Your App ID> --query objectId`. Make sure to replace the `<Your App ID>` placeholder with the App ID of your app registration. The service principal is treated as a named user. You'll add this ID to the ACL as you would any named user. Named users are described later in this article. ## Types of ACLs |
storage | Encryption Customer Provided Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-customer-provided-keys.md | Title: Provide an encryption key on a request to Blob storage description: Clients making requests against Azure Blob storage can provide an encryption key on a per-request basis. Including the encryption key on the request provides granular control over encryption settings for Blob storage operations. -+ Last updated 05/09/2022 -+ |
storage | Encryption Scope Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-manage.md | Title: Create and manage encryption scopes description: Learn how to create an encryption scope to isolate blob data at the container or blob level. -+ Last updated 05/10/2023 -+ ms.devlang: powershell, azurecli |
storage | Encryption Scope Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/encryption-scope-overview.md | Title: Encryption scopes for Blob storage description: Encryption scopes provide the ability to manage encryption at the level of the container or an individual blob. You can use encryption scopes to create secure boundaries between data that resides in the same storage account but belongs to different customers. -+ Last updated 06/01/2023 -+ Keep in mind that customer-managed keys are protected by soft delete and purge p ## Billing for encryption scopes -When you enable an encryption scope, you are billed for a minimum of one month (30 days). After the first month, charges for an encryption scope are prorated on an hourly basis. +When you enable an encryption scope, you are billed for a minimum of 30 days. After 30 days, charges for an encryption scope are prorated on an hourly basis. -If you disable the encryption scope within the first month, then you are billed for that full month, but not for subsequent months. If you disable the encryption scope after the first month, then you are charged for the first month, plus the number of hours that the encryption scope was in effect after the first month. +After enabling the encryption scope, if you disable it within 30 days, you are still billed for 30 days. If you disable the encryption scope after 30 days, you are charged for those 30 days plus the number of hours the encryption scope was in effect after 30 days. Disable any encryption scopes that are not needed to avoid unnecessary charges. To learn about pricing for encryption scopes, see [Blob Storage pricing](https:/ - [Create and manage encryption scopes](encryption-scope-manage.md) - [Customer-managed keys for Azure Storage encryption](../common/customer-managed-keys-overview.md) - [What is Azure Key Vault?](../../key-vault/general/overview.md)+ |
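The reworded billing rule in the diff above (minimum 30 days, then hourly proration) can be sketched as a small function; the function name and hour-based accounting are illustrative, not an Azure API.

```python
# Minimal sketch of the encryption-scope billing rule described above:
# a scope is billed for a minimum of 30 days; if it stays enabled longer,
# charges accrue per hour for the time beyond the first 30 days.
HOURS_PER_30_DAYS = 30 * 24  # 720 hours

def billable_hours(hours_enabled: float) -> float:
    """Hours billed for a scope that was enabled for `hours_enabled` hours
    before being disabled. Disabling early still incurs the 30-day minimum."""
    return max(hours_enabled, HOURS_PER_30_DAYS)

# Disabled after 10 days: still billed for the full 30 days.
print(billable_hours(10 * 24))  # 720.0-equivalent: 720
# Disabled after 45 days: billed 30 days plus 15 days of hourly charges.
print(billable_hours(45 * 24))  # 1080
```

Multiplying the result by a per-hour rate from [Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) gives the estimated charge for the scope.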
storage | Lifecycle Management Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md | The run conditions are based on age. Current versions use the last modified time The platform runs the lifecycle policy once a day. Once you configure or edit a policy, it can take up to 24 hours for changes to go into effect. Once the policy is in effect, it could take up to 24 hours for some actions to run. Therefore, the policy actions may take up to 48 hours to complete. -If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes. +If you disable a policy, then no new policy runs will be scheduled, but if a run is already in progress, that run will continue until it completes and you're billed for any actions that are required to complete the run. See [Regional availability and pricing](#regional-availability-and-pricing). ### Lifecycle policy completed event When last access time tracking is enabled, the blob property called `LastAccessT If last access time tracking is enabled, lifecycle management uses `LastAccessTime` to determine whether the run condition **daysAfterLastAccessTimeGreaterThan** is met. Lifecycle management uses the date the lifecycle policy was enabled instead of `LastAccessTime` in the following cases: - The value of the `LastAccessTime` property of the blob is a null value.+ > [!NOTE] > The `LastAccessTime` property of the blob is null if a blob hasn't been accessed since last access time tracking was enabled. To minimize the effect on read access latency, only the first read of the last 2 In the following example, blobs are moved to cool storage if they haven't been accessed for 30 days. The `enableAutoTierToHotFromCool` property is a Boolean value that indicates whether a blob should automatically be tiered from cool back to hot if it's accessed again after being tiered to cool. 
+> [!TIP] +> If a blob is moved to the cool tier, and then is automatically moved back before 30 days has elapsed, an early deletion fee is charged. Before you set the `enableAutoTierToHotFromCool` property, make sure to analyze the access patterns of your data so you can reduce unexpected charges. + ```json { "enabled": true, |
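A complete lifecycle rule of the kind the truncated snippet above begins might look like the following; the rule name and blob-type filter are illustrative values, and the shape follows the documented lifecycle management policy schema (`definition` > `actions` > `baseBlob`).

```json
{
  "enabled": true,
  "name": "sample-move-to-cool-after-30-days",
  "type": "Lifecycle",
  "definition": {
    "actions": {
      "baseBlob": {
        "enableAutoTierToHotFromCool": true,
        "tierToCool": {
          "daysAfterLastAccessTimeGreaterThan": 30
        }
      }
    },
    "filters": {
      "blobTypes": [ "blockBlob" ]
    }
  }
}
```

Here `daysAfterLastAccessTimeGreaterThan` requires last access time tracking to be enabled on the account, and `enableAutoTierToHotFromCool: true` is exactly the setting the tip above warns can trigger early deletion fees if blobs are re-accessed within 30 days.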
storage | Network File System Protocol Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md | This article describes limitations and known issues of Network File System (NFS) - GRS, GZRS, and RA-GRS redundancy options aren't supported when you create an NFS 3.0 storage account. +- Access control lists (ACLs) can't be used to authorize an NFS 3.0 request. In fact, if the ACL of a blob or directory contains an entry for a named user or group, that file becomes inaccessible on the client for non-root users. You'll have to remove these entries to restore access to non-root users on the client. For information about how to remove an ACL entry for named users and groups, see [How to set ACLs](data-lake-storage-access-control.md#how-to-set-acls). + ## NFS 3.0 features The following NFS 3.0 features aren't yet supported. |
storage | Network File System Protocol Support How To | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md | Your storage account must be contained within a virtual network. A virtual netwo ## Step 2: Configure network security -Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs), are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them. +Currently, the only way to secure the data in your storage account is by using a virtual network and other network security settings. See [Network security recommendations for Blob storage](security-recommendations.md#networking). -To secure the data in your account, see these recommendations: [Network security recommendations for Blob storage](security-recommendations.md#networking). +Any other tools used to secure data, including account key authorization, Azure Active Directory (Azure AD) security, and access control lists (ACLs) can't be used to authorize an NFS 3.0 request. In fact, if you add an entry for a named user or group to the ACL of a blob or directory, that file becomes inaccessible on the client for non-root users. You would have to remove that entry to restore access to non-root users on the client. > [!IMPORTANT] > The NFS 3.0 protocol uses ports 111 and 2048. If you're connecting from an on-premises network, make sure that your client allows outgoing communication through those ports. If you have granted access to specific VNets, make sure that any network security groups associated with those VNets don't contain security rules that block incoming communication through those ports. |
storage | Network File System Protocol Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support.md | For step-by-step guidance, see [Mount Blob storage by using the Network File Sys ## Network security -Traffic must originate from a VNet. A VNet enables clients to securely connect to your storage account. The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs) are not yet supported in accounts that have the NFS 3.0 protocol support enabled on them. +Traffic must originate from a VNet. A VNet enables clients to securely connect to your storage account. The only way to secure the data in your account is by using a VNet and other network security settings. Any other tool used to secure data including account key authorization, Azure Active Directory (AD) security, and access control lists (ACLs) can't be used to authorize an NFS 3.0 request. To learn more, see [Network security recommendations for Blob storage](security-recommendations.md#networking). |
storage | Object Replication Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md | Object replication isn't supported for blobs in the source account that are encr Customer-managed failover isn't supported for either the source or the destination account in an object replication policy. -Object replication is not supported for blobs that are uploaded to the Data Lake Storage endpoint (`dfs.core.windows.net`) by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs. +Object replication is not supported for blobs that are uploaded by using [Data Lake Storage Gen2](/rest/api/storageservices/data-lake-storage-gen2) APIs. ## How object replication works |
storage | Quickstart Blobs Javascript Browser | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/quickstart-blobs-javascript-browser.md | This code calls the [ContainerClient.deleteBlob](/javascript/api/@azure/storage- http://localhost:1234 ``` -## Step 1 - Create a container +## Step 1: Create a container 1. In the web app, select **Create container**. The status indicates that a container was created. 2. In the Azure portal, verify your container was created. Select your storage account. Under **Blob service**, select **Containers**. Verify that the new container appears. (You may need to select **Refresh**.) -## Step 2 - Upload a blob to the container +## Step 2: Upload a blob to the container 1. On your local computer, create and save a test file, such as *test.txt*. 2. In the web app, select **Select and upload files**. 3. Browse to your test file, and then select **Open**. The status indicates that the file was uploaded, and the file list was retrieved. 4. In the Azure portal, select the name of the new container that you created earlier. Verify that the test file appears. -## Step 3 - Delete the blob +## Step 3: Delete the blob 1. In the web app, under **Files**, select the test file. 2. Select **Delete selected files**. The status indicates that the file was deleted and that the container contains no files. 3. In the Azure portal, select **Refresh**. Verify that you see **No blobs found**. -## Step 4 - Delete the container +## Step 4: Delete the container 1. In the web app, select **Delete container**. The status indicates that the container was deleted. 2. In the Azure portal, select the **\<account-name\> | Containers** link at the top-left of the portal pane. |
storage | Sas Service Create Dotnet Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet-container.md | |
storage | Sas Service Create Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-dotnet.md | |
storage | Sas Service Create Java Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java-container.md | |
storage | Sas Service Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-java.md | |
storage | Sas Service Create Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-javascript.md | |
storage | Sas Service Create Python Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python-container.md | |
storage | Sas Service Create Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create-python.md | |
storage | Scalability Targets Premium Block Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-block-blobs.md | Title: Scalability targets for premium block blob storage accounts description: Learn about premium-performance block blob storage accounts. Block blob storage accounts are optimized for applications that use smaller, kilobyte-range objects.-+ Last updated 12/18/2019-+ |
storage | Scalability Targets Premium Page Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-page-blobs.md | Title: Scalability targets for premium page blob storage accounts description: A premium performance page blob storage account is optimized for read/write operations. This type of storage account backs an unmanaged disk for an Azure virtual machine.-+ Last updated 09/24/2021-+ |
storage | Scalability Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets.md | Title: Scalability and performance targets for Blob storage description: Learn about scalability and performance targets for Blob storage.-+ Last updated 01/11/2023-+ |
storage | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md | Title: Security recommendations for Blob storage description: Learn about security recommendations for Blob storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model.-+ Last updated 04/06/2023-+ |
storage | Simulate Primary Region Failure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/simulate-primary-region-failure.md | |
storage | Snapshots Manage Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md | description: Learn how to use the .NET client library to create a read-only snap -+ Last updated 08/27/2020 ms.devlang: csharp |
storage | Storage Auth Abac Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-powershell.md | |
storage | Storage Blob Account Delegation Sas Create Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-account-delegation-sas-create-javascript.md | description: Create and use account SAS tokens in a JavaScript application that -+ Last updated 11/30/2022 |
storage | Storage Blob Append | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-append.md | |
storage | Storage Blob Block Blob Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md | Title: Premium block blob storage accounts description: Achieve lower and consistent latencies for Azure Storage workloads that require fast and consistent response times.-+ -+ Last updated 10/14/2021 |
storage | Storage Blob Client Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-client-management.md | description: Learn how to create and manage clients that interact with data reso -+ Last updated 02/08/2023 |
storage | Storage Blob Container Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-java.md | description: Learn how to create a blob container in your Azure Storage account -+ Last updated 08/02/2023 |
storage | Storage Blob Container Create Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-javascript.md | |
storage | Storage Blob Container Create Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-python.md | description: Learn how to create a blob container in your Azure Storage account -+ Last updated 08/02/2023 |
storage | Storage Blob Container Create Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create-typescript.md | |
storage | Storage Blob Container Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-create.md | description: Learn how to create a blob container in your Azure Storage account -+ Last updated 07/25/2022 |
storage | Storage Blob Container Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-java.md | description: Learn how to delete and restore a blob container in your Azure Stor -+ Last updated 08/02/2023 |
storage | Storage Blob Container Delete Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-javascript.md | |
storage | Storage Blob Container Delete Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-python.md | description: Learn how to delete and restore a blob container in your Azure Stor -+ Last updated 08/02/2023 |
storage | Storage Blob Container Delete Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete-typescript.md | |
storage | Storage Blob Container Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-delete.md | |
storage | Storage Blob Container Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-java.md | |
storage | Storage Blob Container Lease Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-javascript.md | |
storage | Storage Blob Container Lease Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-python.md | |
storage | Storage Blob Container Lease Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease-typescript.md | |
storage | Storage Blob Container Lease | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-lease.md | |
storage | Storage Blob Container Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md | description: Learn how to set and retrieve system properties and store custom me -+ Last updated 08/02/2023 |
storage | Storage Blob Container Properties Metadata Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-javascript.md | |
storage | Storage Blob Container Properties Metadata Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-python.md | description: Learn how to set and retrieve system properties and store custom me -+ Last updated 08/02/2023 |
storage | Storage Blob Container Properties Metadata Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-typescript.md | |
storage | Storage Blob Container Properties Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata.md | |
storage | Storage Blob Container User Delegation Sas Create Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-dotnet.md | description: Learn how to create a user delegation SAS for a container with Azur -+ Last updated 06/22/2023 |
storage | Storage Blob Container User Delegation Sas Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-java.md | description: Learn how to create a user delegation SAS for a container with Azur -+ Last updated 06/12/2023 |
storage | Storage Blob Container User Delegation Sas Create Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-user-delegation-sas-create-python.md | description: Learn how to create a user delegation SAS for a container with Azur -+ Last updated 06/09/2023 |
storage | Storage Blob Containers List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-java.md | description: Learn how to list blob containers in your Azure Storage account usi -+ Last updated 08/02/2023 |
storage | Storage Blob Containers List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-javascript.md | |
storage | Storage Blob Containers List Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-python.md | description: Learn how to list blob containers in your Azure Storage account usi -+ Last updated 08/02/2023 |
storage | Storage Blob Containers List Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list-typescript.md | |
storage | Storage Blob Containers List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-containers-list.md | |
storage | Storage Blob Copy Async Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-dotnet.md | |
storage | Storage Blob Copy Async Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-java.md | |
storage | Storage Blob Copy Async Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-javascript.md | |
storage | Storage Blob Copy Async Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-python.md | |
storage | Storage Blob Copy Async Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-typescript.md | |
storage | Storage Blob Copy Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-java.md | |
storage | Storage Blob Copy Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-javascript.md | |
storage | Storage Blob Copy Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-python.md | |
storage | Storage Blob Copy Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-typescript.md | |
storage | Storage Blob Copy Url Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-dotnet.md | |
storage | Storage Blob Copy Url Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-java.md | |
storage | Storage Blob Copy Url Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-javascript.md | |
storage | Storage Blob Copy Url Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-python.md | |
storage | Storage Blob Copy Url Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-typescript.md | |
storage | Storage Blob Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md | |
storage | Storage Blob Create User Delegation Sas Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-create-user-delegation-sas-javascript.md | |
storage | Storage Blob Customer Provided Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-customer-provided-key.md | Title: Specify a customer-provided key on a request to Blob storage with .NET description: Learn how to specify a customer-provided key on a request to Blob storage using .NET. -+ Last updated 05/09/2022-+ ms.devlang: csharp |
storage | Storage Blob Delete Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-java.md | |
storage | Storage Blob Delete Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-javascript.md | |
storage | Storage Blob Delete Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-python.md | |
storage | Storage Blob Delete Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete-typescript.md | |
storage | Storage Blob Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-delete.md | |
storage | Storage Blob Dotnet Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-dotnet-get-started.md | |
storage | Storage Blob Download Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-java.md | |
storage | Storage Blob Download Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-javascript.md | |
storage | Storage Blob Download Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-python.md | |
storage | Storage Blob Download Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download-typescript.md | |
storage | Storage Blob Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-download.md | |
storage | Storage Blob Encryption Status | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md | Title: Check the encryption status of a blob description: Learn how to use Azure portal, PowerShell, or Azure CLI to check whether a given blob is encrypted. -+ Last updated 02/09/2023-+ |
storage | Storage Blob Get Url Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-javascript.md | |
storage | Storage Blob Get Url Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-get-url-typescript.md | |
storage | Storage Blob Java Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-java-get-started.md | |
storage | Storage Blob Javascript Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-javascript-get-started.md | |
storage | Storage Blob Lease Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-java.md | |
storage | Storage Blob Lease Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-javascript.md | |
storage | Storage Blob Lease Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-python.md | |
storage | Storage Blob Lease Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease-typescript.md | |
storage | Storage Blob Lease | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-lease.md | |
storage | Storage Blob Object Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-object-model.md | |
storage | Storage Blob Pageblob Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-pageblob-overview.md | Title: Overview of Azure page blobs description: An overview of Azure page blobs and their advantages, including use cases with sample scripts. -+ Last updated 05/11/2023-+ ms.devlang: csharp |
storage | Storage Blob Properties Metadata Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md | |
storage | Storage Blob Properties Metadata Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-javascript.md | |
storage | Storage Blob Properties Metadata Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-python.md | |
storage | Storage Blob Properties Metadata Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-typescript.md | |
storage | Storage Blob Properties Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata.md | |
storage | Storage Blob Python Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-python-get-started.md | |
storage | Storage Blob Query Endpoint Srp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-query-endpoint-srp.md | |
storage | Storage Blob Reserved Capacity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md | Title: Optimize costs for Blob storage with reserved capacity description: Learn about purchasing Azure Storage reserved capacity to save costs on block blob and Azure Data Lake Storage Gen2 resources. -+ Last updated 05/17/2021-+ # Optimize costs for Blob storage with reserved capacity |
storage | Storage Blob Tags Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-java.md | |
storage | Storage Blob Tags Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-javascript.md | |
storage | Storage Blob Tags Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-python.md | |
storage | Storage Blob Tags Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags-typescript.md | |
storage | Storage Blob Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-tags.md | |
storage | Storage Blob Typescript Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-typescript-get-started.md | |
storage | Storage Blob Upload Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-java.md | |
storage | Storage Blob Upload Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-javascript.md | |
storage | Storage Blob Upload Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-python.md | |
storage | Storage Blob Upload Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload-typescript.md | |
storage | Storage Blob Upload | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-upload.md | |
storage | Storage Blob Use Access Tier Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md | |
storage | Storage Blob Use Access Tier Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-java.md | |
storage | Storage Blob Use Access Tier Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-javascript.md | |
storage | Storage Blob Use Access Tier Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-python.md | |
storage | Storage Blob Use Access Tier Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-typescript.md | |
storage | Storage Blob User Delegation Sas Create Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md | Title: Use Azure CLI to create a user delegation SAS for a container or blob description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using Azure CLI. -+ Last updated 12/18/2019-+ |
storage | Storage Blob User Delegation Sas Create Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-dotnet.md | description: Learn how to create a user delegation SAS for a blob with Azure Act -+ Last updated 06/22/2023 |
storage | Storage Blob User Delegation Sas Create Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-java.md | description: Learn how to create a user delegation SAS for a blob with Azure Act -+ Last updated 06/12/2023 |
storage | Storage Blob User Delegation Sas Create Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md | Title: Use PowerShell to create a user delegation SAS for a container or blob description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using PowerShell. -+ Last updated 12/18/2019-+ |
storage | Storage Blob User Delegation Sas Create Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-python.md | description: Learn how to create a user delegation SAS for a blob with Azure Act -+ Last updated 06/06/2023 |
storage | Storage Blobs Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md | Title: Introduction to Blob (object) Storage description: Use Azure Blob Storage to store massive amounts of unstructured object data, such as text or binary data. Azure Blob Storage is highly scalable and available. -+ Last updated 03/28/2023-+ |
storage | Storage Blobs Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md | Title: Latency in Blob storage description: Understand and measure latency for Blob storage operations, and learn how to design your Blob storage applications for low latency. -+ Last updated 09/05/2019-+ # Latency in Blob storage |
storage | Storage Blobs List Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-java.md | description: Learn how to list blobs in your storage account using the Azure Sto -+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: java To list the blobs in a storage account, call one of these methods: - [listBlobs](/java/api/com.azure.storage.blob.BlobContainerClient) - [listBlobsByHierarchy](/java/api/com.azure.storage.blob.BlobContainerClient) +### Manage how many results are returned ++By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for Java](/azure/developer/java/sdk/pagination). ++### Filter results with a prefix ++To filter the list of blobs, pass a string as the `prefix` parameter to [ListBlobsOptions.setPrefix(String prefix)](/java/api/com.azure.storage.blob.models.listblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. + ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character. To organize blobs into virtual directories, use a delimiter character in the blo If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. 
You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically. -If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs. - ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory. Page 3 Name: folderA/folderB/file3.txt, Is deleted? false ``` +> [!NOTE] +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-java.md#list-directory-contents). + ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy. |
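The row above describes filtering a Java blob listing by prefix via `ListBlobsOptions.setPrefix`. As a minimal, SDK-free sketch of the service-side behavior (the function name is illustrative and not part of any Azure SDK), the filter is a plain starts-with match on blob names:

```python
# Hypothetical sketch: Azure Storage returns only blobs whose names start
# with the given prefix, exactly like this in-memory string match.
def filter_by_prefix(blob_names, prefix):
    """Return the blob names that start with `prefix`."""
    return [name for name in blob_names if name.startswith(prefix)]

blobs = [
    "folderA/file1.txt",
    "folderA/folderB/file3.txt",
    "folderC/file2.txt",
    "readme.md",
]

print(filter_by_prefix(blobs, "folderA/"))
# ['folderA/file1.txt', 'folderA/folderB/file3.txt']
```

Note that the prefix matches against the full blob name, so a prefix ending in the delimiter (`folderA/`) effectively scopes the listing to one virtual directory and everything beneath it.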
storage | Storage Blobs List Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-javascript.md | Related functionality can be found in the following methods: ### Manage how many results are returned -By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. +By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results). ### Filter results with a prefix -To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. +To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. ```javascript const listOptions = { To organize blobs into virtual directories, use a delimiter character in the blo If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. 
You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically. -If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs. - ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory. Flat listing: 5: folder2/sub1/c Flat listing: 6: folder2/sub1/d ``` +> [!NOTE] +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents). + ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy. |
storage | Storage Blobs List Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-python.md | description: Learn how to list blobs in your storage account using the Azure Sto -+ Previously updated : 08/02/2023 Last updated : 08/16/2023 ms.devlang: python To list the blobs in a container using a hierarchical listing, call the followin - [ContainerClient.walk_blobs](/python/api/azure-storage-blob/azure.storage.blob.containerclient#azure-storage-blob-containerclient-walk-blobs) (along with the name, you can optionally include metadata, tags, and other information associated with each blob) +### Filter results with a prefix ++To filter the list of blobs, specify a string for the `name_starts_with` keyword argument. The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. + ### Flat listing versus hierarchical listing Blobs in Azure Storage are organized in a flat paradigm, rather than a hierarchical paradigm (like a classic file system). However, you can organize blobs into *virtual directories* in order to mimic a folder structure. A virtual directory forms part of the name of the blob and is indicated by the delimiter character. To organize blobs into virtual directories, use a delimiter character in the blo If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically. -If you've enabled the hierarchical namespace feature on your account, directories aren't virtual. Instead, they're concrete, independent objects. Therefore, directories appear in the list as zero-length blobs. 
- ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs aren't organized by virtual directory. Name: folderA/file2.txt Name: folderA/folderB/file3.txt ``` -You can also specify options to filter list results or show additional information. The following example lists blobs with a specified prefix, and also lists blob tags: +You can also specify options to filter list results or show additional information. The following example lists blobs and blob tags: :::code language="python" source="~/azure-storage-snippets/blobs/howto/python/blob-devguide-py/blob-devguide-blobs.py" id="Snippet_list_blobs_flat_options"::: Name: folderA/file2.txt, Tags: None Name: folderA/folderB/file3.txt, Tags: {'tag1': 'value1', 'tag2': 'value2'} ``` +> [!NOTE] +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-python.md#list-directory-contents). + ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy. |
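The flat-versus-hierarchical distinction described in the rows above can be sketched without any SDK. This is an illustrative simulation (not Azure SDK code) of how a hierarchical listing collapses names sharing a delimiter into virtual directories, while a flat listing returns every blob name:

```python
def list_flat(blob_names):
    # Flat listing: every blob name is returned, regardless of delimiters.
    return sorted(blob_names)

def list_hierarchical(blob_names, prefix="", delimiter="/"):
    # Hierarchical listing: return only the items at the first level under
    # `prefix`. A virtual directory is reported once, with a trailing delimiter.
    level = set()
    for name in blob_names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if delimiter in rest:
            level.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            level.add(name)
    return sorted(level)

blobs = ["a.txt", "folderA/file1.txt", "folderA/folderB/file3.txt"]
print(list_hierarchical(blobs))              # ['a.txt', 'folderA/']
print(list_hierarchical(blobs, "folderA/"))  # ['folderA/file1.txt', 'folderA/folderB/']
```

Traversing the whole tree then means calling the hierarchical listing recursively on each virtual directory it returns, which is what the articles mean by traversing the hierarchy like a classic file system.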
storage | Storage Blobs List Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list-typescript.md | Related functionality can be found in the following methods: ### Manage how many results are returned -By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. +By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for JavaScript](/azure/developer/javascript/core/use-azure-sdk#asynchronous-paging-of-results). ### Filter results with a prefix -To filter the list of blobs, specify a string for the `prefix` property in the [list options](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. +To filter the list of blobs, specify a string for the `prefix` property in [ContainerListBlobsOptions](/javascript/api/@azure/storage-blob/containerlistblobsoptions). The prefix string can include one or more characters. Azure Storage then returns only the blobs whose names start with that prefix. ```typescript const listOptions: ContainerListBlobsOptions = { To organize blobs into virtual directories, use a delimiter character in the blo If you name your blobs using a delimiter, then you can choose to list blobs hierarchically. For a hierarchical listing operation, Azure Storage returns any virtual directories and blobs beneath the parent object. 
You can call the listing operation recursively to traverse the hierarchy, similar to how you would traverse a classic file system programmatically. -If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs. - ## Use a flat listing By default, a listing operation returns blobs in a flat listing. In a flat listing, blobs are not organized by virtual directory. Flat listing: 5: folder2/sub1/c Flat listing: 6: folder2/sub1/d ``` +> [!NOTE] +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-javascript.md#list-directory-contents). + ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy. |
storage | Storage Blobs List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-list.md | To list the blobs in a storage account, call one of these methods: ### Manage how many results are returned -By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. +By default, a listing operation returns up to 5000 results at a time, but you can specify the number of results that you want each listing operation to return. The examples presented in this article show you how to return results in pages. To learn more about pagination concepts, see [Pagination with the Azure SDK for .NET](/dotnet/azure/sdk/pagination). ### Filter results with a prefix By default, a listing operation returns blobs in a flat listing. In a flat listi The following example lists the blobs in the specified container using a flat listing, with an optional segment size specified, and writes the blob name to a console window. -If you've enabled the hierarchical namespace feature on your account, directories are not virtual. Instead, they are concrete, independent objects. Therefore, directories appear in the list as zero-length blobs. - :::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/CRUD.cs" id="Snippet_ListBlobsFlatListing"::: The sample output is similar to: Blob name: FolderA/FolderB/FolderC/blob2.txt Blob name: FolderA/FolderB/FolderC/blob3.txt ``` +> [!NOTE] +> The sample output shown assumes that you have a storage account with a flat namespace. If you've enabled the hierarchical namespace feature for your storage account, directories are not virtual. Instead, they are concrete, independent objects. 
As a result, directories appear in the list as zero-length blobs.</br></br>For an alternative listing option when working with a hierarchical namespace, see [List directory contents (Azure Data Lake Storage Gen2)](data-lake-storage-directory-file-acl-dotnet.md#list-directory-contents). + ## Use a hierarchical listing When you call a listing operation hierarchically, Azure Storage returns the virtual directories and blobs at the first level of the hierarchy. |
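The paging behavior described in the listing rows above (up to 5000 results per listing operation, returned in pages) amounts to slicing the result set into segments. A minimal, SDK-free Python sketch of that segmenting, with the page size an assumption for illustration:

```python
def pages(items, page_size=5000):
    # Yield results one segment at a time, the way a listing API returns
    # up to `page_size` results per call plus a continuation marker.
    for start in range(0, len(items), page_size):
        yield items[start:start + page_size]

names = [f"blob-{i:04d}.txt" for i in range(12)]
for page_number, page in enumerate(pages(names, page_size=5), start=1):
    print(f"Page {page_number}: {len(page)} blobs")
# Page 1: 5 blobs
# Page 2: 5 blobs
# Page 3: 2 blobs
```

In the real SDKs the continuation is handled by a pager object rather than a list slice, but the shape of the loop your code writes is the same.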
storage | Storage Blobs Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-overview.md | Title: About Blob (object) storage description: Azure Blob storage stores massive amounts of unstructured object data, such as text or binary data. Blob storage also supports Azure Data Lake Storage Gen2 for big data analytics. -+ Last updated 11/04/2019-+ # What is Azure Blob storage? |
storage | Storage Blobs Tune Upload Download Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download-python.md | description: Learn how to tune your uploads and downloads for better performance -+ Last updated 07/07/2023 ms.devlang: python |
storage | Storage Blobs Tune Upload Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-tune-upload-download.md | description: Learn how to tune your uploads and downloads for better performance -+ Last updated 12/09/2022 ms.devlang: csharp |
storage | Storage Create Geo Redundant Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-create-geo-redundant-storage.md | description: Use read-access geo-zone-redundant (RA-GZRS) storage to make your a -+ Last updated 09/02/2022 |
storage | Storage Encrypt Decrypt Blobs Key Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md | Title: Encrypt and decrypt blobs using Azure Key Vault description: Learn how to encrypt and decrypt a blob using client-side encryption with Azure Key Vault. -+ Last updated 11/2/2022 |
storage | Storage Performance Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-performance-checklist.md | Title: Performance and scalability checklist for Blob storage description: A checklist of proven practices for use with Blob storage in developing high-performance applications. -+ Last updated 06/01/2023-+ ms.devlang: csharp |
storage | Storage Quickstart Blobs Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md | description: In this quickstart, you will learn how to use the Azure Blob Storag Last updated 11/09/2022-+ ms.devlang: csharp |
storage | Storage Quickstart Blobs Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-go.md | go get github.com/Azure/azure-sdk-for-go/sdk/azidentity ## Authenticate to Azure and authorize access to blob data +Application requests to Azure Blob Storage must be authorized. Using `DefaultAzureCredential` and the Azure Identity client library is the recommended approach for implementing passwordless connections to Azure services in your code, including Blob Storage. -`DefaultAzureCredential` is a class provided by the Azure Identity client library for Go. `DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. +You can also authorize requests to Azure Blob Storage by using the account access key. However, this approach should be used with caution. Developers must be diligent to never expose the access key in an insecure location. Anyone who has the access key is able to authorize requests against the storage account, and effectively has access to all the data. `DefaultAzureCredential` offers improved management and security benefits over the account key to allow passwordless authentication. Both options are demonstrated in the following example. ++`DefaultAzureCredential` is a credential chain implementation provided by the Azure Identity client library for Go. `DefaultAzureCredential` supports multiple authentication methods and determines which method to use at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code. 
To learn more about the order and locations in which `DefaultAzureCredential` looks for credentials, see [Azure Identity library overview](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential). |
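The credential-chain idea behind `DefaultAzureCredential` described above (try each authentication method in order, use the first that succeeds) can be sketched independently of any SDK. This is a hypothetical illustration in Python; the credential sources and return values below are stand-ins, not real Azure Identity APIs:

```python
# Hypothetical sketch of a credential chain: try each source in order and
# return the first token that a source can produce.
class CredentialUnavailable(Exception):
    pass

def chain(*sources):
    def get_token():
        errors = []
        for source in sources:
            try:
                return source()
            except CredentialUnavailable as exc:
                errors.append(str(exc))
        raise RuntimeError("no credential available: " + "; ".join(errors))
    return get_token

def environment_credential():
    # Stand-in for reading AZURE_CLIENT_ID/SECRET from the environment.
    raise CredentialUnavailable("no environment variables set")

def cli_credential():
    # Stand-in for fetching a token from a locally signed-in CLI session.
    return "token-from-cli"

get_token = chain(environment_credential, cli_credential)
print(get_token())
# token-from-cli
```

This is why the same application code works locally (where a developer-tool credential succeeds) and in production (where a managed identity or environment credential succeeds) without environment-specific branches.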
storage | Storage Quickstart Blobs Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md | |
storage | Storage Quickstart Blobs Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs.md | description: In this quickstart, you learn how to use the Azure Blob Storage for Last updated 10/28/2022-+ ms.devlang: javascript |
storage | Storage Quickstart Blobs Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md | |
storage | Storage Retry Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md | |
storage | Versions Manage Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versions-manage-dotnet.md | Title: Create and list blob versions in .NET description: Learn how to use the .NET client library to create a previous version of a blob.-+ -+ Last updated 02/14/2023 |
storage | Account Encryption Key Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/account-encryption-key-create.md | Title: Create an account that supports customer-managed keys for tables and queu description: Learn how to create a storage account that supports configuring customer-managed keys for tables and queues. Use the Azure CLI or an Azure Resource Manager template to create a storage account that relies on the account encryption key for Azure Storage encryption. You can then configure customer-managed keys for the account. -+ Last updated 06/09/2021-+ |
storage | Authorization Resource Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorization-resource-provider.md | Title: Use the Azure Storage resource provider to access management resources description: The Azure Storage resource provider is a service that provides access to management resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and delete resources such as storage accounts, private endpoints, and account access keys. -+ Last updated 12/12/2019-+ |
storage | Authorize Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md | Title: Authorize operations for data access description: Learn about the different ways to authorize access to data in Azure Storage. Azure Storage supports authorization with Azure Active Directory, Shared Key authorization, or shared access signatures (SAS), and also supports anonymous access to blobs. -+ Last updated 05/31/2023-+ |
storage | Classic Account Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migrate.md | Title: How to migrate your classic storage accounts to Azure Resource Manager description: Learn how to migrate your classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024. -+ Last updated 05/02/2023-+ For more information about errors that may occur when deleting disk artifacts an # [PowerShell](#tab/azure-powershell) -To learn how to locate and delete disk artifacts in classic storage accounts with PowerShell, see [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account). +To learn how to locate and delete disk artifacts in classic storage accounts with PowerShell, see [Migrate to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-5b-migrate-a-storage-account). |
storage | Classic Account Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-overview.md | Title: We're retiring classic storage accounts on August 31, 2024 description: Overview of migration of classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024. -+ Last updated 07/26/2023-+ |
storage | Classic Account Migration Process | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/classic-account-migration-process.md | Title: Understand storage account migration from classic to Azure Resource Manag description: Learn about the process of migrating classic storage accounts to the Azure Resource Manager deployment model. All classic accounts must be migrated by August 31, 2024. -+ Last updated 04/28/2023-+ The Validation step analyzes the state of resources in the classic deployment mo The Validation step doesn't check for VM disks that may be associated with the storage account. You must check your storage accounts manually to determine whether they contain VM disks. For more information, see the following articles: - [Migrate classic storage accounts to Azure Resource Manager](classic-account-migrate.md)-- [Migrate VMs to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-52-migrate-a-storage-account)+- [Migrate VMs to Resource Manager with PowerShell](../../virtual-machines/migration-classic-resource-manager-ps.md#step-5b-migrate-a-storage-account) - [Migrate VMs to Resource Manager using Azure CLI](../../virtual-machines/migration-classic-resource-manager-cli.md#step-5-migrate-a-storage-account) Keep in mind that it's not possible to check for every constraint that the Azure Resource Manager stack might impose on the storage account during migration. Some constraints are only checked when the resources undergo transformation in the next step of migration (the Prepare step). |
storage | Customer Managed Keys Configure Cross Tenant Existing Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-existing-account.md | Title: Configure cross-tenant customer-managed keys for an existing storage acco description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account resides. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider. -+ Last updated 10/31/2022-+ |
storage | Customer Managed Keys Configure Cross Tenant New Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-cross-tenant-new-account.md | Title: Configure cross-tenant customer-managed keys for a new storage account description: Learn how to configure Azure Storage encryption with customer-managed keys in an Azure key vault that resides in a different tenant than the tenant where the storage account will be created. Customer-managed keys allow a service provider to encrypt the customer's data using an encryption key that is managed by the service provider's customer and that isn't accessible to the service provider. -+ Last updated 10/31/2022-+ |
storage | Customer Managed Keys Configure Existing Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md | Title: Configure customer-managed keys in the same tenant for an existing storag description: Learn how to configure Azure Storage encryption with customer-managed keys for an existing storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault. -+ Last updated 06/07/2023-+ |
storage | Customer Managed Keys Configure Key Vault Hsm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault-hsm.md | Title: Configure encryption with customer-managed keys stored in Azure Key Vault description: Learn how to configure Azure Storage encryption with customer-managed keys stored in Azure Key Vault Managed HSM by using Azure CLI. -+ Last updated 05/05/2022-+ |
storage | Customer Managed Keys Configure New Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md | Title: Configure customer-managed keys in the same tenant for a new storage acco description: Learn how to configure Azure Storage encryption with customer-managed keys for a new storage account by using the Azure portal, PowerShell, or Azure CLI. Customer-managed keys are stored in an Azure key vault. -+ Last updated 03/23/2023-+ |
storage | Customer Managed Keys Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md | Title: Customer-managed keys for account encryption description: You can use your own encryption key to protect the data in your storage account. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Customer-managed keys offer greater flexibility to manage access controls. -+ Last updated 05/11/2023 -+ |
storage | Infrastructure Encryption Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md | Title: Enable infrastructure encryption for double encryption of data description: Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level. When infrastructure encryption is enabled, data in a storage account or encryption scope is encrypted twice with two different encryption algorithms and two different keys. -+ Last updated 10/19/2022 -+ |
storage | Lock Account Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/lock-account-resource.md | Title: Apply an Azure Resource Manager lock to a storage account description: Learn how to apply an Azure Resource Manager lock to a storage account. -+ Last updated 03/09/2021-+ |
storage | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md | Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 --++ |
storage | Resource Graph Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md | Title: Azure Resource Graph sample queries description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties.-+ Last updated 07/07/2022 -+ |
storage | Sas Expiration Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md | Title: Configure an expiration policy for shared access signatures (SAS) description: Configure a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks. -+ Last updated 12/12/2022-+ |
storage | Scalability Targets Resource Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-resource-provider.md | Title: Scalability targets for the Azure Storage resource provider description: Scalability and performance targets for operations against the Azure Storage resource provider. The resource provider implements Azure Resource Manager for Azure Storage. -+ Last updated 12/18/2019-+ |
storage | Scalability Targets Standard Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md | Title: Scalability and performance targets for standard storage accounts description: Learn about scalability and performance targets for standard storage accounts. -+ Last updated 05/25/2022-+ |
storage | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 --++ |
storage | Shared Key Authorization Prevent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md | Title: Prevent authorization with Shared Key description: To require clients to use Azure AD to authorize requests, you can disallow requests to the storage account that are authorized with Shared Key. -+ Last updated 06/06/2023-+ ms.devlang: azurecli |
storage | Storage Account Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md | Title: Create a storage account description: Learn to create a storage account to store blobs, files, queues, and tables. An Azure storage account provides a unique namespace in Microsoft Azure for reading and writing your data. -+ Previously updated : 05/02/2023- Last updated : 08/18/2023+ -+ # Create a storage account A storage account is an Azure Resource Manager resource. Resource Manager is the Every Resource Manager resource, including an Azure storage account, must belong to an Azure resource group. A resource group is a logical container for grouping your Azure services. When you create a storage account, you have the option to either create a new resource group, or use an existing resource group. This how-to shows how to create a new resource group. +### Storage account type parameters ++When you create a storage account using PowerShell, the Azure CLI, Bicep, or Azure Templates, the storage account type is specified by the `kind` parameter (for example, `StorageV2`). The performance tier and redundancy configuration are specified together by the `sku` or `SkuName` parameter (for example, `Standard_GRS`). The following table shows which values to use for the `kind` parameter and the `sku` or `SkuName` parameter to create a particular type of storage account with the desired redundancy configuration. 
++| Type of storage account | Supported redundancy configurations | Supported values for the kind parameter | Supported values for the sku or SkuName parameter | Supports hierarchical namespace | +|--|--|--|--|--| +| Standard general-purpose v2 | LRS / GRS / RA-GRS / ZRS / GZRS / RA-GZRS | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS/ Standard_ZRS / Standard_GZRS / Standard_RAGZRS | Yes | +| Premium block blobs | LRS / ZRS | BlockBlobStorage | Premium_LRS / Premium_ZRS | Yes | +| Premium file shares | LRS / ZRS | FileStorage | Premium_LRS / Premium_ZRS | No | +| Premium page blobs | LRS | StorageV2 | Premium_LRS | No | +| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | +| Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | + # [Portal](#tab/azure-portal) To create an Azure storage account with the Azure portal, follow these steps: To enable a hierarchical namespace for the storage account to use [Azure Data La The following table shows which values to use for the `SkuName` and `Kind` parameters to create a particular type of storage account with the desired redundancy configuration. 
-| Type of storage account | Supported redundancy configurations | Supported values for the Kind parameter | Supported values for the SkuName parameter | Supports hierarchical namespace | -|--|--|--|--|--| -| Standard general-purpose v2 | LRS / GRS / RA-GRS / ZRS / GZRS / RA-GZRS | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS/ Standard_ZRS / Standard_GZRS / Standard_RAGZRS | Yes | -| Premium block blobs | LRS / ZRS | BlockBlobStorage | Premium_LRS / Premium_ZRS | Yes | -| Premium file shares | LRS / ZRS | FileStorage | Premium_LRS / Premium_ZRS | No | -| Premium page blobs | LRS | StorageV2 | Premium_LRS | No | -| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | -| Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | - # [Azure CLI](#tab/azure-cli) To create a general-purpose v2 storage account with Azure CLI, first create a new resource group by calling the [az group create](/cli/azure/group#az-group-create) command. az storage account show \ To enable a hierarchical namespace for the storage account to use [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/), set the `enable-hierarchical-namespace` parameter to `true` on the call to the **az storage account create** command. Creating a hierarchical namespace requires Azure CLI version 2.0.79 or later. -The following table shows which values to use for the `sku` and `kind` parameters to create a particular type of storage account with the desired redundancy configuration. 
--| Type of storage account | Supported redundancy configurations | Supported values for the kind parameter | Supported values for the sku parameter | Supports hierarchical namespace | -|--|--|--|--|--| -| Standard general-purpose v2 | LRS / GRS / RA-GRS / ZRS / GZRS / RA-GZRS | StorageV2 | Standard_LRS / Standard_GRS / Standard_RAGRS/ Standard_ZRS / Standard_GZRS / Standard_RAGZRS | Yes | -| Premium block blobs | LRS / ZRS | BlockBlobStorage | Premium_LRS / Premium_ZRS | Yes | -| Premium file shares | LRS / ZRS | FileStorage | Premium_LRS / Premium_ZRS | No | -| Premium page blobs | LRS | StorageV2 | Premium_LRS | No | -| Legacy standard general-purpose v1 | LRS / GRS / RA-GRS | Storage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | -| Legacy blob storage | LRS / GRS / RA-GRS | BlobStorage | Standard_LRS / Standard_GRS / Standard_RAGRS | No | - # [Bicep](#tab/bicep) You can use either Azure PowerShell or Azure CLI to deploy a Bicep file to create a storage account. The Bicep file used in this how-to article is from [Azure Resource Manager quickstart templates](https://azure.microsoft.com/resources/templates/storage-account-create/). Bicep currently doesn't support deploying a remote file. Download and save [the Bicep file](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/main.bicep) to your local computer, and then run the scripts. az storage account delete --name storageAccountName --resource-group resourceGro Alternately, you can delete the resource group, which deletes the storage account and any other resources in that resource group. For more information about deleting a resource group, see [Delete resource group and resources](../../azure-resource-manager/management/delete-resource-group.md). +## Create a general purpose v1 storage account +++General purpose v1 (GPv1) storage accounts can no longer be created from the Azure portal. 
If you need to create a GPv1 storage account, follow the steps in section [Create a storage account](#create-a-storage-account-1) for PowerShell, the Azure CLI, Bicep, or Azure Templates. For the `kind` parameter, specify `Storage`, and choose a `sku` or `SkuName` from the [table of supported values](#storage-account-type-parameters). + ## Next steps - [Storage account overview](storage-account-overview.md) Alternately, you can delete the resource group, which deletes the storage accoun - [Move a storage account to another region](storage-account-move.md) - [Recover a deleted storage account](storage-account-recover.md) - [Migrate a classic storage account](classic-account-migrate.md)- - |
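The `kind` and `sku` values from the table of storage account type parameters can be exercised directly with the Azure CLI. The sketch below creates a standard general-purpose v2 account with geo-redundant storage; the resource group name, account name, and location are illustrative placeholders, not values from this article, so substitute your own.

```shell
# Create a resource group, then a Standard general-purpose v2 account
# with geo-redundant storage (kind=StorageV2, sku=Standard_GRS).
# All names and the location are placeholders; substitute your own.
az group create \
    --name storage-quickstart-rg \
    --location eastus

az storage account create \
    --name mystorageacct54321 \
    --resource-group storage-quickstart-rg \
    --location eastus \
    --kind StorageV2 \
    --sku Standard_GRS
```

To create a different account type, swap in another `kind`/`sku` pair from the table, for example `BlockBlobStorage` with `Premium_ZRS` for premium block blobs.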
storage | Storage Account Get Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md | Title: Get storage account configuration information description: Use the Azure portal, PowerShell, or Azure CLI to retrieve storage account configuration properties, including the Azure Resource Manager resource ID, account location, account type, or replication SKU. -+ -+ Last updated 12/12/2022 |
storage | Storage Account Keys Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md | Title: Manage account access keys description: Learn how to view, manage, and rotate your storage account access keys. -+ Last updated 03/22/2023-+ |
storage | Storage Account Move | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md | Title: Move an Azure Storage account to another region description: Shows you how to move an Azure Storage account to another region. -+ Last updated 06/15/2022-+ |
storage | Storage Account Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md | Title: Storage account overview description: Learn about the different types of storage accounts in Azure Storage. Review account naming, performance tiers, access tiers, redundancy, encryption, endpoints, and more. -+ Last updated 06/28/2022-+ # Storage account overview |
storage | Storage Account Recover | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md | Title: Recover a deleted storage account description: Learn how to recover a deleted storage account within the Azure portal. -+ Last updated 01/25/2023-+ |
storage | Storage Account Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md | Title: Upgrade to a general-purpose v2 storage account description: Upgrade to general-purpose v2 storage accounts using the Azure portal, PowerShell, or the Azure CLI. Specify an access tier for blob data. -+ Previously updated : 04/29/2021-- Last updated : 08/17/2023++ # Upgrade to a general-purpose v2 storage account General-purpose v2 storage accounts support the latest Azure Storage features and incorporate all of the functionality of general-purpose v1 and Blob storage accounts. General-purpose v2 accounts are recommended for most storage scenarios. General-purpose v2 accounts deliver the lowest per-gigabyte capacity prices for Azure Storage, as well as industry-competitive transaction prices. General-purpose v2 accounts support default account access tiers of hot or cool and blob level tiering between hot, cool, or archive. -Upgrading to a general-purpose v2 storage account from your general-purpose v1 or Blob storage accounts is straightforward. You can upgrade using the Azure portal, PowerShell, or Azure CLI. There is no downtime or risk of data loss associated with upgrading to a general-purpose v2 storage account. The account upgrade happens via a simple Azure Resource Manager operation that changes the account type. +Upgrading to a general-purpose v2 storage account from your general-purpose v1 or Blob storage accounts is straightforward. You can upgrade using the Azure portal, PowerShell, or Azure CLI. There's no downtime or risk of data loss associated with upgrading to a general-purpose v2 storage account. The account upgrade happens via a simple Azure Resource Manager operation that changes the account type. > [!IMPORTANT] > Upgrading a general-purpose v1 or Blob storage account to general-purpose v2 is permanent and cannot be undone. 
-> [!NOTE] -> Although Microsoft recommends general-purpose v2 accounts for most scenarios, Microsoft will continue to support general-purpose v1 accounts for new and existing customers. You can create general-purpose v1 storage accounts in new regions whenever Azure Storage is available in those regions. Microsoft does not currently have a plan to deprecate support for general-purpose v1 accounts and will provide at least one year's advance notice before deprecating any Azure Storage feature. Microsoft will continue to provide security updates for general-purpose v1 accounts, but no new feature development is expected for this account type. -> -> For new Azure regions that have come online after October 1, 2020, pricing for general-purpose v1 accounts has changed and is equivalent to pricing for general-purpose v2 accounts in those regions. Pricing for general-purpose v1 accounts in Azure regions that existed prior to October 1, 2020 has not changed. For pricing details for general-purpose v1 accounts in a specific region, see the Azure Storage pricing page. Choose your region, and then next to **Pricing offers**, select **Other**. ## Upgrade an account az storage account update -g <resource-group> -n <storage-account> --set kind=St ## Specify an access tier for blob data -General-purpose v2 accounts support all Azure storage services and data objects, but access tiers are available only to block blobs within Blob storage. When you upgrade to a general-purpose v2 storage account, you can specify a default account access tier of hot or cool, which indicates the default tier your blob data will be uploaded as if the individual blob access tier parameter is not specified. +General-purpose v2 accounts support all Azure storage services and data objects, but access tiers are available only to block blobs within Blob storage. 
When you upgrade to a general-purpose v2 storage account, you can specify a default account access tier of hot or cool, which indicates the default tier your blob data will be uploaded as if the individual blob access tier parameter isn't specified. Blob access tiers enable you to choose the most cost-effective storage based on your anticipated usage patterns. Block blobs can be stored in a hot, cool, or archive tiers. For more information on access tiers, see [Azure Blob storage: Hot, Cool, and Archive storage tiers](../blobs/access-tiers-overview.md). -By default, a new storage account is created in the hot access tier, and a general-purpose v1 storage account can be upgraded to either the hot or cool account tier. If an account access tier is not specified on upgrade, it will be upgraded to hot by default. If you are exploring which access tier to use for your upgrade, consider your current data usage scenario. There are two typical user scenarios for migrating to a general-purpose v2 account: +By default, a new storage account is created in the hot access tier, and a general-purpose v1 storage account can be upgraded to either the hot or cool account tier. If an account access tier isn't specified on upgrade, it will be upgraded to hot by default. If you're exploring which access tier to use for your upgrade, consider your current data usage scenario. There are two typical user scenarios for migrating to a general-purpose v2 account: - You have an existing general-purpose v1 storage account and want to evaluate an upgrade to a general-purpose v2 storage account, with the right storage access tier for blob data. - You have decided to use a general-purpose v2 storage account or already have one and want to evaluate whether you should use the hot or cool storage access tier for blob data. In both cases, the first priority is to estimate the cost of storing, accessing, ## Pricing and billing -Upgrading a v1 storage account to a general-purpose v2 account is free. 
You may specify the desired account tier during the upgrade process. If an account tier is not specified on upgrade, the default account tier of the upgraded account will be `Hot`. However, changing the storage access tier after the upgrade may result in changes to your bill so it is recommended to specify the new account tier during upgrade. +Upgrading a v1 storage account to a general-purpose v2 account is free. You may specify the desired account tier during the upgrade process. If an account tier isn't specified on upgrade, the default account tier of the upgraded account will be `Hot`. However, changing the storage access tier after the upgrade may result in changes to your bill so it's recommended to specify the new account tier during upgrade. All storage accounts use a pricing model for blob storage based on the tier of each blob. When using a storage account, the following billing considerations apply: - **Storage costs**: In addition to the amount of data stored, the cost of storing data varies depending on the storage access tier. The per-gigabyte cost decreases as the tier gets cooler. -- **Data access costs**: Data access charges increase as the tier gets cooler. For data in the cool and archive storage access tier, you are charged a per-gigabyte data access charge for reads.+- **Data access costs**: Data access charges increase as the tier gets cooler. For data in the cool and archive storage access tier, you're charged a per-gigabyte data access charge for reads. -- **Transaction costs**: There is a per-transaction charge for all tiers that increases as the tier gets cooler.+- **Transaction costs**: There's a per-transaction charge for all tiers that increases as the tier gets cooler. - **Geo-Replication data transfer costs**: This charge only applies to accounts with geo-replication configured, including GRS and RA-GRS. Geo-replication data transfer incurs a per-gigabyte charge. 
With this enabled, capacity data is recorded daily for a storage account's Blob To monitor data access patterns for Blob storage, you need to enable the hourly transaction metrics from the API. With hourly transaction metrics enabled, per API transactions are aggregated every hour, and recorded as a table entry that is written to the *$MetricsHourPrimaryTransactionsBlob* table within the same storage account. The *$MetricsHourSecondaryTransactionsBlob* table records the transactions to the secondary endpoint when using RA-GRS storage accounts. > [!NOTE]-> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process is not applicable. The capacity data does not differentiate block blobs from other types, and does not give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill. +> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process isn't applicable. The capacity data doesn't differentiate block blobs from other types, and doesn't give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill. To get a good approximation of your data consumption and access pattern, we recommend you choose a retention period for the metrics that is representative of your regular usage and extrapolate. One option is to retain the metrics data for seven days and collect the data every week, for analysis at the end of the month. Another option is to retain the metrics data for the last 30 days and collect and analyze the data at the end of the 30-day period. 
This total capacity consumed by both user data and analytics logs (if enabled) c The sum of *'TotalBillableRequests'*, across all entries for an API in the transaction metrics table indicates the total number of transactions for that particular API. *For example*, the total number of *'GetBlob'* transactions in a given period can be calculated by the sum of total billable requests for all entries with the row key *'user;GetBlob'*. -In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they are priced differently. +In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they're priced differently. - Write transactions such as *'PutBlob'*, *'PutBlock'*, *'PutBlockList'*, *'AppendBlock'*, *'ListBlobs'*, *'ListContainers'*, *'CreateContainer'*, *'SnapshotBlob'*, and *'CopyBlob'*. - Delete transactions such as *'DeleteBlob'* and *'DeleteContainer'*. In order to estimate transaction costs for GPv1 storage accounts, you need to ag #### Data access and geo-replication data transfer costs -While storage analytics does not provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. Similarly the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes. +While storage analytics doesn't provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. 
Similarly, the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes. To estimate the data access costs for Blob storage accounts, you need to break down the transactions into two groups. |
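As a rough illustration of the estimation approach described above, the following sketch multiplies aggregated transaction counts (the sums of *'TotalBillableRequests'* per transaction group) by per-10,000-transaction prices. Both the counts and the prices here are hypothetical placeholders, not real Azure rates; substitute figures from your own metrics tables and from the Azure Storage pricing page.

```shell
# Estimate transaction costs from aggregated metrics counts.
# Counts and per-10,000-transaction prices below are hypothetical
# placeholders; use your own metrics data and current pricing.
write_txns=500000        # sum of TotalBillableRequests for write APIs
other_txns=2000000       # sum for read and other APIs
awk -v w="$write_txns" -v o="$other_txns" 'BEGIN {
    write_cost = (w / 10000) * 0.10    # placeholder write price
    other_cost = (o / 10000) * 0.01    # placeholder read/other price
    printf "estimated transaction cost: $%.2f\n", write_cost + other_cost
}'
# → estimated transaction cost: $7.00
```

The same pattern extends to data access costs: sum *'TotalEgress'* per group and multiply by the applicable per-gigabyte rate.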
storage | Storage Configure Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md | Title: Configure a connection string description: Configure a connection string for an Azure storage account. A connection string contains the information needed to authorize access to a storage account from your application at runtime using Shared Key authorization. -+ Last updated 01/24/2023-+ |
storage | Storage Encryption Key Model Get | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-encryption-key-model-get.md | Title: Determine which encryption key model is in use for the storage account description: Use Azure portal, PowerShell, or Azure CLI to check how encryption keys are being managed for the storage account. Keys may be managed by Microsoft (the default), or by the customer. Customer-managed keys must be stored in Azure Key Vault. -+ Last updated 03/13/2020-+ |
storage | Storage Explorers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorers.md | Title: Microsoft client tools for working with Azure Storage description: A list of client tools provided by Microsoft that enable you to view and interact with your Azure Storage data. -+ Last updated 09/27/2019-+ |
storage | Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md | Title: Introduction to Azure Storage - Cloud storage on Azure description: The Azure Storage platform is Microsoft's cloud storage solution. Azure Storage provides highly available, secure, durable, massively scalable, and redundant storage for data objects in the cloud. Learn about the services available in Azure Storage and how you can use them in your applications, services, or enterprise solutions. -+ Last updated 01/10/2023-+ |
storage | Storage Network Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md | Title: Configure Azure Storage firewalls and virtual networks -description: Configure layered network security for your storage account by using Azure Storage firewalls and Azure Virtual Network. +description: Configure layered network security for your storage account by using the Azure Storage firewall. Previously updated : 08/01/2023 Last updated : 08/15/2023 -+ # Configure Azure Storage firewalls and virtual networks -Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks or resources that you use. +Azure Storage provides a layered security model. This model enables you to control the level of access to your storage accounts that your applications and enterprise environments require. In this article, you will learn how to configure the Azure Storage firewall to protect the data in your storage account at the network layer. -When you configure network rules, only applications that request data over the specified set of networks or through the specified set of Azure resources can access a storage account. You can limit access to your storage account to requests that come from specified IP addresses, IP ranges, subnets in an Azure virtual network, or resource instances of some Azure services. +> [!IMPORTANT] +> Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. [Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules. 
+> +> Some operations, such as blob container operations, can be performed through both the control plane and the data plane. So if you attempt to perform an operation such as listing containers from the Azure portal, the operation will succeed unless it is blocked by another mechanism. Attempts to access blob data from an application such as Azure Storage Explorer are controlled by the firewall restrictions. +> +> For a list of data plane operations, see the [Azure Storage REST API Reference](/rest/api/storageservices/). +> For a list of control plane operations, see the [Azure Storage Resource Provider REST API Reference](/rest/api/storagerp/). -Storage accounts have a public endpoint that's accessible through the internet. You can also create [private endpoints for your storage account](storage-private-endpoints.md). Creating private endpoints assigns a private IP address from your virtual network to the storage account. It helps secure traffic between your virtual network and the storage account over a private link. +## Configure network access to Azure Storage -The Azure Storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when you're using private endpoints. Your firewall configuration also enables trusted Azure platform services to access the storage account. +You can control access to the data in your storage account over network endpoints, or through trusted services or resources in any combination including: -An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. 
When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized. The firewall rules remain in effect and will block anonymous traffic. +- [Allow access from selected virtual network subnets using private endpoints](storage-private-endpoints.md). +- [Allow access from selected virtual network subnets using service endpoints](#grant-access-from-a-virtual-network). +- [Allow access from specific public IP addresses or ranges](#grant-access-from-an-internet-ip-range). +- [Allow access from selected Azure resource instances](#grant-access-from-azure-resource-instances). +- [Allow access from trusted Azure services](#grant-access-to-trusted-azure-services) (using [Manage exceptions](#manage-exceptions)). +- [Configure exceptions for logging and metrics services](#manage-exceptions). -Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service that operates within an Azure virtual network or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services. +### About virtual network endpoints -You can grant access to Azure services that operate from within a virtual network by allowing traffic from the subnet that hosts the service instance. You can also enable a limited number of scenarios through the exceptions mechanism that this article describes. To access data from the storage account through the Azure portal, you need to be on a machine within the trusted boundary (either IP or virtual network) that you set up. 
+There are two types of virtual network endpoints for storage accounts: +- [Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) +- [Private endpoints](storage-private-endpoints.md) -## Scenarios +Virtual network service endpoints are public and accessible via the internet. The Azure Storage firewall provides the ability to control access to your storage account over such public endpoints. When you enable public network access to your storage account, all incoming requests for data are blocked by default. Only applications that request data from allowed sources that you configure in your storage account firewall settings will be able to access your data. Sources can include the source IP address or virtual network subnet of a client, or an Azure service or resource instance through which clients or services access your data. Requests that are blocked include those from other Azure services, from the Azure portal, and from logging and metrics services, unless you explicitly allow access in your firewall configuration. -To secure your storage account, you should first configure a rule to deny access to traffic from all networks (including internet traffic) on the public endpoint, by default. Then, you should configure rules that grant access to traffic from specific virtual networks. You can also configure rules to grant access to traffic from selected public internet IP address ranges, enabling connections from specific internet or on-premises clients. This configuration helps you build a secure network boundary for your applications. +A private endpoint uses a private IP address from your virtual network to access a storage account over the Microsoft backbone network. With a private endpoint, traffic between your virtual network and the storage account is secured over a private link. Storage firewall rules only apply to the public endpoints of a storage account, not private endpoints. 
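As a sketch of the private-endpoint option (all resource names are placeholders, and flags may differ slightly across Azure CLI versions), a private endpoint targeting the blob service of a storage account can be created as follows:

```azurecli
# Look up the storage account's resource ID, then create a private endpoint
# in an existing subnet that targets the blob sub-resource of the account.
# All resource names are placeholders.
storageId=$(az storage account show \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --query id --output tsv)

az network private-endpoint create \
    --resource-group myResourceGroup \
    --name myPrivateEndpoint \
    --vnet-name myVNet \
    --subnet mySubnet \
    --private-connection-resource-id $storageId \
    --group-id blob \
    --connection-name myConnection
```

Traffic over a private endpoint isn't subject to the storage firewall rules, which apply only to the public endpoint of the account.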
The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, you can use the firewall to block all access through the public endpoint. -You can combine firewall rules that allow access from specific virtual networks and from public IP address ranges on the same storage account. You can apply storage firewall rules to existing storage accounts or when you create new storage accounts. +To help you decide when to use each type of endpoint in your environment, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). -Storage firewall rules apply to the public endpoint of a storage account. You don't need any firewall access rules to allow traffic for private endpoints of a storage account. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. +### How to approach network security for your storage account -> [!IMPORTANT] -> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior. -> -> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior. +To secure your storage account and build a secure network boundary for your applications: ++1. 
Start by disabling all public network access for the storage account under the **Public network access** setting in the storage account firewall. +1. Where possible, configure private links to your storage account from private endpoints on virtual network subnets where the clients reside that require access to your data. +1. If client applications require access over the public endpoints, change the **Public network access** setting to **Enabled from selected virtual networks and IP addresses**. Then, as needed: -Network rules are enforced on all network protocols for Azure Storage, including REST and SMB. To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must configure explicit network rules. + 1. Specify the virtual network subnets from which you want to allow access. + 1. Specify the public IP address ranges of clients from which you want to allow access, such as those on on-premises networks. + 1. Allow access from selected Azure resource instances. + 1. Add exceptions to allow access from trusted services required for operations such as backing up data. + 1. Add exceptions for logging and metrics. After you apply network rules, they're enforced for all requests. SAS tokens that grant access to a specific IP address serve to limit the access of the token holder, but they don't grant new access beyond configured network rules. -Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O. Network rules help protect REST access to page blobs. +## Restrictions and considerations ++Before implementing network security for your storage accounts, review the important restrictions and considerations discussed in this section. ++> [!div class="checklist"] +> +> - Azure Storage firewall rules only apply to [data plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#data-plane) operations. 
[Control plane](../../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are not subject to the restrictions specified in firewall rules. +> - Review the [Restrictions for IP network rules](#restrictions-for-ip-network-rules). +> - To access data by using tools such as the Azure portal, Azure Storage Explorer, and AzCopy, you must be on a machine within the trusted boundary that you establish when configuring network security rules. +> - Network rules are enforced on all network protocols for Azure Storage, including REST and SMB. +> - Network rules don't affect virtual machine (VM) disk traffic, including mount and unmount operations and disk I/O, but they do help protect REST access to page blobs. +> - You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by [creating an exception](#manage-exceptions). Firewall exceptions aren't applicable to managed disks, because Azure already manages them. +> - Classic storage accounts don't support firewalls and virtual networks. +> - If you delete a subnet that's included in a virtual network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account. +> - When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior. Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior. +> - By design, access to a storage account from trusted services takes the highest precedence over other network access restrictions. 
If you set **Public network access** to **Disabled** after previously setting it to **Enabled from selected virtual networks and IP addresses**, any [resource instances](#grant-access-from-azure-resource-instances) and [exceptions](#manage-exceptions) that you previously configured, including [Allow Azure services on the trusted services list to access this storage account](#grant-access-to-trusted-azure-services), will remain in effect. As a result, those resources and services might still have access to the storage account. ++### Authorization -Classic storage accounts don't support firewalls and virtual networks. +Clients granted access via network rules must continue to meet the authorization requirements of the storage account to access the data. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a shared access signature (SAS) token. -You can use unmanaged disks in storage accounts with network rules applied to back up and restore VMs by creating an exception. The [Manage exceptions](#manage-exceptions) section of this article documents this process. Firewall exceptions aren't applicable with managed disks, because Azure already manages them. +When you configure a blob container for anonymous public access, requests to read data in that container don't need to be authorized, but the firewall rules remain in effect and will block anonymous traffic. ## Change the default network access rule By default, storage accounts accept connections from clients on any network. You You must set the default rule to **deny**, or network rules have no effect. However, changing this setting can affect your application's ability to connect to Azure Storage. Be sure to grant access to any allowed networks or set up access through a private endpoint before you change this setting. + ### [Portal](#tab/azure-portal) 1. Go to the storage account that you want to secure. 
You can enable a [service endpoint](../../virtual-network/virtual-network-servic Each storage account supports up to 200 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range). > [!IMPORTANT]-> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account. +> When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior. +> +> Additionally, it's recommended that you honor the time-to-live (TTL) of the DNS record and avoid overriding it. Overriding the DNS TTL may result in unexpected behavior. ### Required permissions Cross-region service endpoints for Azure Storage became generally available in A Configuring service endpoints between virtual networks and service instances in a [paired region](../../best-practices-availability-paired-regions.md) can be an important part of your disaster recovery plan. Service endpoints allow continuity during a regional failover and access to read-only geo-redundant storage (RA-GRS) instances. Network rules that grant access from a virtual network to a storage account also grant access to any RA-GRS instance. -When you're planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts. 
+When you're planning for disaster recovery during a regional outage, create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your geo-redundant storage accounts. Local and cross-region service endpoints can't coexist on the same subnet. To replace existing service endpoints with cross-region ones, delete the existing `Microsoft.Storage` endpoints and re-create them as cross-region endpoints (`Microsoft.Storage.Global`). If you want to enable access to your storage account from a virtual network or s 6. Select **Save** to apply your changes. +> [!IMPORTANT] +> If you delete a subnet that's included in a network rule, it will be removed from the network rules for the storage account. If you create a new subnet by the same name, it won't have access to the storage account. To allow access, you must explicitly authorize the new subnet in the network rules for the storage account. + #### [PowerShell](#tab/azure-powershell) 1. Install [Azure PowerShell](/powershell/azure/install-azure-powershell) and [sign in](/powershell/azure/authenticate-azureps). If you want to enable access to your storage account from a virtual network or s You can create IP network rules to allow access from specific public internet IP address ranges. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic. +### Restrictions for IP network rules + The following restrictions apply to IP address ranges: - IP network rules are allowed only for *public internet* IP addresses. 
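The deny-by-default pattern with selected allowances described above can be sketched with the Azure CLI (the resource names and IP range are placeholders):

```azurecli
# Block public network access by default, then allow a specific public IP
# range and a virtual network subnet. All names and ranges are placeholders;
# the subnet must already have the Microsoft.Storage service endpoint enabled.
az storage account update \
    --resource-group myResourceGroup \
    --name mystorageaccount \
    --default-action Deny

az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --ip-address 203.0.113.0/24

az storage account network-rule add \
    --resource-group myResourceGroup \
    --account-name mystorageaccount \
    --vnet-name myVNet \
    --subnet mySubnet
```

Applying the deny rule before the allow rules briefly blocks all public traffic, so in production you'd typically add the allow rules first and change the default action last.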
To learn more about working with storage analytics, see [Use Azure Storage analy ## Next steps Learn more about [Azure network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).- Dig deeper into [Azure Storage security](../blobs/security-recommendations.md). |
storage | Storage Powershell Independent Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md | Title: Use PowerShell to manage data in Azure independent clouds description: Managing Storage in the China Cloud, Government Cloud, and German Cloud Using Azure PowerShell. -+ Last updated 12/04/2019-+ |
storage | Storage Sas Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md | Title: Grant limited access to data with shared access signatures (SAS) description: Learn about using shared access signatures (SAS) to delegate access to Azure Storage resources, including blobs, queues, tables, and files. -+ Last updated 06/07/2023-+ |
storage | Storage Service Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md | Title: Azure Storage encryption for data at rest description: Azure Storage protects your data by automatically encrypting it before persisting it to the cloud. You can rely on Microsoft-managed keys for the encryption of the data in your storage account, or you can manage encryption with your own keys. -+ Last updated 02/09/2023 -+ |
storage | Container Storage Aks Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md | description: Learn how to install Azure Container Storage Preview on an Azure Ku Previously updated : 08/03/2023 Last updated : 08/18/2023 +- Optional: We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp). + ## Install Azure Container Storage Follow these instructions to install Azure Container Storage on your AKS cluster using an installation script. Follow these instructions to install Azure Container Storage on your AKS cluster | -g | --resource-group | The resource group name.| | -c  | --cluster-name | The name of the cluster where Azure Container Storage is to be installed.| | -n  | --nodepool-name | The name of the nodepool. Defaults to the first nodepool in the cluster.|- | -r  | --release-train | The release train for the installation. Defaults to prod.| + | -r  | --release-train | The release train for the installation. Defaults to stable.| For example: |
storage | Container Storage Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md | description: An overview of Azure Container Storage Preview, a service built nat Previously updated : 08/02/2023 Last updated : 08/14/2023 -To sign up for Azure Container Storage Preview, complete the [onboarding survey](https://aka.ms/AzureContainerStoragePreviewSignUp). To get started using Azure Container Storage, see [Install Azure Container Storage for use with AKS](container-storage-aks-quickstart.md) or watch the video. +To get started using Azure Container Storage, see [Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md) or watch the video. ++We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp). :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube.com/embed/I_2nCQ1FKTU" title="Get started with Azure Container Storage" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> + > [!VIDEO https://www.youtube.com/embed/I_2nCQ1FKTU] :::column-end::: :::column::: This video provides an introduction to Azure Container Storage, an end-to-end storage management and orchestration service for stateful applications. See how simple it is to create and manage volumes for production-scale stateful container applications. Learn how to optimize the performance of stateful workloads on Azure Kubernetes Service (AKS) to effectively scale across storage services while providing a cost-effective container-native experience. |
storage | Install Container Storage Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md | description: Learn how to install Azure Container Storage Preview for use with A Previously updated : 08/02/2023 Last updated : 08/14/2023 +- Optional: We'd like input on how you plan to use Azure Container Storage. Please complete this [short survey](https://aka.ms/AzureContainerStoragePreviewSignUp). + > [!NOTE] > Instead of following the steps in this article, you can install Azure Container Storage Preview using a provided installation script. See [Quickstart: Use Azure Container Storage Preview with Azure Kubernetes Service](container-storage-aks-quickstart.md). |
storage | Use Container Storage With Elastic San | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md | description: Configure Azure Container Storage Preview for use with Azure Elasti Previously updated : 07/03/2023 Last updated : 08/14/2023 -Azure Container Storage Preview is only available in the following Azure regions: --- East US-- West Europe-- West US 2-- West US 3+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central. ## Create a storage pool |
storage | Use Container Storage With Local Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md | description: Configure Azure Container Storage Preview for use with Ephemeral Di Previously updated : 07/03/2023 Last updated : 08/14/2023 -Azure Container Storage Preview is only available in the following Azure regions: --- East US-- West Europe-- West US 2-- West US 3+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central. ## Create a storage pool |
storage | Use Container Storage With Managed Disks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md | description: Configure Azure Container Storage Preview for use with Azure manage Previously updated : 07/03/2023 Last updated : 08/14/2023 -Azure Container Storage Preview is only available in the following Azure regions: --- East US-- West Europe-- West US 2-- West US 3+Azure Container Storage Preview is only available in the following Azure regions: East US, East US 2, West US 2, West US 3, South Central US, Southeast Asia, Australia East, West Europe, North Europe, UK South, Sweden Central, and France Central. ## Create a storage pool |
storage | Elastic San Connect Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md | description: Learn how to connect to an Azure Elastic SAN Preview volume an Azur Previously updated : 04/28/2023 Last updated : 07/11/2023 The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h ## Prerequisites -- Have an [Azure Elastic SAN](elastic-san-create.md) with volumes - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver+- [Deploy an Elastic SAN Preview](elastic-san-create.md) +- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint) +- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules) ## Limitations After deployment, check the pods status to verify that the driver installed. ```bash kubectl -n kube-system get pod -o wide -l app=csi-iscsi-node ```-### Configure Elastic SAN Volume Group --To connect an Elastic SAN volume to an AKS cluster, you need to configure Elastic SAN Volume Group to allow access from AKS node pool subnets, follow [Configure Elastic SAN networking Preview](elastic-san-networking.md) ### Get volume information |
storage | Elastic San Connect Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md | description: Learn how to connect to an Azure Elastic SAN Preview volume from a Previously updated : 04/24/2023 Last updated : 07/11/2023 In this article, you'll add the Storage service endpoint to an Azure virtual net ## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) +- [Deploy an Elastic SAN Preview](elastic-san-create.md) +- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint) +- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules) ## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)] -## Networking configuration --To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets. --### Enable Storage service endpoint --In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. 
An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. --> [!NOTE] -> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. --# [Portal](#tab/azure-portal) --1. Navigate to your virtual network and select **Service Endpoints**. -1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**. -1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**. ---# [PowerShell](#tab/azure-powershell) --```powershell -$resourceGroupName = "yourResourceGroup" -$vnetName = "yourVirtualNetwork" -$subnetName = "yourSubnet" --$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName --$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName --$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork -``` --# [Azure CLI](#tab/azure-cli) --```azurecli -az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global" -``` ---### Configure volume group networking --Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks. --By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. 
For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints). --# [Portal](#tab/azure-portal) --1. Navigate to your SAN and select **Volume groups**. -1. Select a volume group and select **Create**. -1. Add an existing virtual network and subnet and select **Save**. --# [PowerShell](#tab/azure-powershell) --```azurepowershell -$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow --Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule --``` -# [Azure CLI](#tab/azure-cli) --```azurecli -# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones. -virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)' --az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}" -``` -- ## Connect to a volume You can either create single sessions or multiple-sessions to every Elastic SAN volume based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows. |
storage | Elastic San Connect Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md | description: Learn how to connect to an Azure Elastic SAN Preview volume from a Previously updated : 04/24/2023 Last updated : 07/11/2023 In this article, you'll add the Storage service endpoint to an Azure virtual net ## Prerequisites -- Complete [Deploy an Elastic SAN Preview](elastic-san-create.md)-- An Azure Virtual Network, which you'll need to establish a connection from compute clients in Azure to your Elastic SAN volumes.+- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) +- [Deploy an Elastic SAN Preview](elastic-san-create.md) +- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint) +- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules) ## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)] -## Configure networking --To connect to a SAN volume, you need to enable the storage service endpoint on your Azure virtual network subnet, and then connect your volume groups to your Azure virtual network subnets. --### Enable Storage service endpoint --In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. 
An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. --> [!NOTE] -> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. --# [Portal](#tab/azure-portal) --1. Navigate to your virtual network and select **Service Endpoints**. -1. Select **+ Add** and for **Service** select **Microsoft.Storage.Global**. -1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**. ---# [PowerShell](#tab/azure-powershell) --```powershell -$resourceGroupName = "yourResourceGroup" -$vnetName = "yourVirtualNetwork" -$subnetName = "yourSubnet" --$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName --$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName --$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork -``` --# [Azure CLI](#tab/azure-cli) --```azurecli -az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global" -``` ---### Configure volume group networking --Now that you've enabled the service endpoint, configure the network security settings on your volume groups. You can grant network access to a volume group from one or more Azure virtual networks. --By default, no network access is allowed to any volumes in a volume group. Adding a virtual network to your volume group lets you establish iSCSI connections from clients in the same virtual network and subnet to the volumes in the volume group. 
For details on accessing your volumes from another region, see [Azure Storage cross-region service endpoints](elastic-san-networking.md#azure-storage-cross-region-service-endpoints). --# [Portal](#tab/azure-portal) --1. Navigate to your SAN and select **Volume groups**. -1. Select a volume group and select **Create**. -1. Add an existing virtual network and subnet and select **Save**. --# [PowerShell](#tab/azure-powershell) --```azurepowershell -$rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow --Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule --``` -# [Azure CLI](#tab/azure-cli) --```azurecli -# First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones. -virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)' --az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}" -``` -- ## Connect to a volume You can either create single sessions or multiple-sessions to every Elastic SAN volume based on your application's multi-threaded capabilities and performance requirements. To achieve higher IOPS and throughput to a volume and reach its maximum limits, use multiple sessions and adjust the queue depth and IO size as needed, if your workload allows. |
storage | Elastic San Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md | description: Learn how to deploy an Azure Elastic SAN (preview) with the Azure p Previously updated : 08/14/2023 Last updated : 08/21/2023 This article explains how to deploy and configure an elastic storage area networ - If you're using Azure CLI, install the [latest version](/cli/azure/install-azure-cli). - Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN. ++## Preview registration +Register your subscription with the Microsoft.ElasticSan resource provider and register the preview feature using the following commands: ++# [Portal](#tab/azure-portal) +If you are using the portal, follow the steps in either the Azure PowerShell or the Azure CLI tab to register your subscription for the preview. ++# [PowerShell](#tab/azure-powershell) ++```azurepowershell +Register-AzResourceProvider -ProviderNamespace Microsoft.ElasticSan +Register-AzProviderFeature -FeatureName ElasticSanPreviewAccess -ProviderNamespace Microsoft.ElasticSan +``` ++It may take a few minutes for registration to complete. To confirm that you've registered, use the following commands: ++```azurepowershell +Get-AzResourceProvider -ProviderNamespace Microsoft.ElasticSan +Get-AzProviderFeature -FeatureName "ElasticSanPreviewAccess" -ProviderNamespace "Microsoft.ElasticSan" +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli +az provider register --namespace Microsoft.ElasticSan +az feature register --name ElasticSanPreviewAccess --namespace Microsoft.ElasticSan +``` ++It may take a few minutes for registration to complete. 
To confirm you've registered, use the following commands: ++```azurecli +az provider show --namespace Microsoft.ElasticSan +az feature show --name ElasticSanPreviewAccess --namespace Microsoft.ElasticSan +``` ++> [!IMPORTANT] +> If you are using PowerShell or CLI, you don't need to run this command. Skip this step and proceed to deploy an Elastic SAN. + ## Limitations [!INCLUDE [elastic-san-regions](../../../includes/elastic-san-regions.md)] This article explains how to deploy and configure an elastic storage area networ # [PowerShell](#tab/azure-powershell) -The following command creates an Elastic SAN that uses locally redundant storage. To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`. +Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all of the examples in this article: ++| Placeholder | Description | +|-|-| +| `<ResourceGroupName>` | The name of the resource group where the resources will be deployed. | +| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* | +| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. | +| `<VolumeName>` | The name of the Elastic SAN Volume to be created. | +| `<Location>` | The region where the new resources will be created. 
| +| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* | ++The following command creates an Elastic SAN that uses **locally-redundant** storage. ```azurepowershell-## Variables -$rgName = "yourResourceGroupName" -## Select the same availability zone as where you plan to host your workload -$zone = 1 -## Select the same region as your Azure virtual network -$region = "yourRegion" -$sanName = "desiredSANName" -$volGroupName = "desiredVolumeGroupName" -$volName = "desiredVolumeName" --## Create the SAN, itself -New-AzElasticSAN -ResourceGroupName $rgName -Name $sanName -AvailabilityZone $zone -Location $region -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS +# Define some variables. +$RgName = "<ResourceGroupName>" +$EsanName = "<ElasticSanName>" +$EsanVgName = "<ElasticSanVolumeGroupName>" +$VolumeName = "<VolumeName>" +$Location = "<Location>" +$Zone = <Zone> ++# Create the SAN. +New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -AvailabilityZone $Zone -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_LRS ```++The following command creates an Elastic SAN that uses **zone-redundant** storage. ++```azurepowershell +# Define some variables. +$RgName = "<ResourceGroupName>" +$EsanName = "<ElasticSanName>" +$EsanVgName = "<ElasticSanVolumeGroupName>" +$VolumeName = "<VolumeName>" +$Location = "<Location>" ++# Create the SAN +New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -Location $Location -BaseSizeTib 100 -ExtendedCapacitySizeTiB 20 -SkuName Premium_ZRS +``` + # [Azure CLI](#tab/azure-cli) -The following command creates an Elastic SAN that uses locally redundant storage. 
To create one that uses zone-redundant storage, replace `Premium_LRS` with `Premium_ZRS`. +Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all of the examples in this article: ++| Placeholder | Description | +|-|-| +| `<ResourceGroupName>` | The name of the resource group where the resources will be deployed. | +| `<ElasticSanName>` | The name of the Elastic SAN to be created.<br>*The Elastic SAN name must be between 3 and 24 characters long. The name may only contain lowercase letters, numbers, hyphens and underscores, and must begin and end with a letter or a number. Each hyphen and underscore must be preceded and followed by an alphanumeric character.* | +| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. | +| `<VolumeName>` | The name of the Elastic SAN Volume to be created. | +| `<Location>` | The region where the new resources will be created. | +| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* | ++The following command creates an Elastic SAN that uses **locally-redundant** storage. ```azurecli-## Variables -sanName="yourSANNameHere" -resourceGroupName="yourResourceGroupNameHere" -sanLocation="desiredRegion" -volumeGroupName="desiredVolumeGroupName" +# Define some variables. 
+RgName="<ResourceGroupName>" +EsanName="<ElasticSanName>" +EsanVgName="<ElasticSanVolumeGroupName>" +VolumeName="<VolumeName>" +Location="<Location>" +Zone=<Zone> ++az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}" --availability-zones $Zone +``` -az elastic-san create -n $sanName -g $resourceGroupName -l $sanLocation --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_LRS,tier:Premium}" +The following command creates an Elastic SAN that uses **zone-redundant** storage. ++```azurecli +# Define some variables. +RgName="<ResourceGroupName>" +EsanName="<ElasticSanName>" +EsanVgName="<ElasticSanVolumeGroupName>" +VolumeName="<VolumeName>" +Location="<Location>" ++az elastic-san create -n $EsanName -g $RgName -l $Location --base-size-tib 100 --extended-capacity-size-tib 20 --sku "{name:Premium_ZRS,tier:Premium}" ```+ ## Create volume groups Now that you've configured the basic settings and provisioned your storage, you # [PowerShell](#tab/azure-powershell) +The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san). ```azurepowershell-## Create the volume group, this script only creates one. -New-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSANName $sanName -Name $volGroupName +# Create the volume group, this script only creates one. +New-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSANName $EsanName -Name $EsanVgName ``` # [Azure CLI](#tab/azure-cli) +The following sample command creates an Elastic SAN volume group in the Elastic SAN you created previously. Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san). 
+ ```azurecli-az elastic-san volume-group create --elastic-san-name $sanName -g $resourceGroupName -n $volumeGroupName +az elastic-san volume-group create --elastic-san-name $EsanName -g $RgName -n $EsanVgName ``` Volumes are usable partitions of the SAN's total capacity, you must allocate a p # [PowerShell](#tab/azure-powershell) -In this article, we provide you the command to create a single volume. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md). +The following sample command creates a single volume in the Elastic SAN volume group you created previously. To create a batch of volumes, see [Create multiple Elastic SAN volumes](elastic-san-batch-create-sample.md). Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san). > [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created. -Replace `volumeName` with the name you'd like the volume to use, then run the following script: +Use the same variables, then run the following script: ```azurepowershell-## Create the volume, this command only creates one. -New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -Name $volName -sizeGiB 2000 +# Create the volume, this command only creates one. +New-AzElasticSanVolume -ResourceGroupName $RgName -ElasticSanName $EsanName -VolumeGroupName $EsanVgName -Name $VolumeName -sizeGiB 2000 ``` # [Azure CLI](#tab/azure-cli) New-AzElasticSanVolume -ResourceGroupName $rgName -ElasticSanName $sanName -Volu > [!IMPORTANT] > The volume name is part of your volume's iSCSI Qualified Name, and can't be changed once created. -Replace `$volumeName` with the name you'd like the volume to use, then run the following script: +The following sample command creates an Elastic SAN volume in the Elastic SAN volume group you created previously. 
Use the same variables and values you defined when you [created the Elastic SAN](#create-the-san). ```azurecli-az elastic-san volume create --elastic-san-name $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --size-gib 2000 +az elastic-san volume create --elastic-san-name $EsanName -g $RgName -v $EsanVgName -n $VolumeName --size-gib 2000 ``` |
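The placeholder tables above spell out the Elastic SAN naming rules: 3 to 24 characters, lowercase letters, numbers, hyphens and underscores only, alphanumeric at both ends, and every hyphen or underscore surrounded by alphanumeric characters. A small hypothetical pre-flight check for those rules (not part of any Azure SDK, just a sketch you could run before calling the create commands) might look like:

```python
import re

# Sketch: validate a candidate Elastic SAN name against the rules quoted in
# the placeholder table above. The regex requires an alphanumeric start and
# end, and forbids consecutive or terminal hyphens/underscores.
NAME_RE = re.compile(r"^[a-z0-9](?:[-_]?[a-z0-9])*$")

def is_valid_esan_name(name: str) -> bool:
    """Return True if name satisfies the documented Elastic SAN naming rules."""
    return 3 <= len(name) <= 24 and bool(NAME_RE.fullmatch(name))
```

For example, `my-san-01` passes, while `My-SAN`, `-abc`, and `a--b` all fail.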
storage | Elastic San Delete | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md | -To delete an elastic storage area network (SAN), you first need to disconnect every volume in your Elastic SAN Preview from any connected hosts. +Your Elastic storage area network (SAN) resources can be deleted at different resource levels. This article covers the overall deletion process, starting from disconnecting iSCSI connections to volumes, deleting the volumes themselves, deleting a volume group, and deleting an elastic SAN itself. Before you delete your elastic SAN, make sure it's not being used in any running workloads. ## Disconnect volumes from clients iscsiadm --mode node --target **yourStorageTargetIQN** --portal **yourStorageTar ## Delete a SAN -When your SAN has no active connections to any clients, you may delete it using the Azure portal or Azure PowerShell module. +When your SAN has no active connections to any clients, you can delete it using the Azure portal, Azure PowerShell module, or Azure CLI. If you delete a SAN or a volume group, the corresponding child resources will be deleted along with it. The delete commands for each of the resource levels are below. -First, delete each volume. ++To delete volumes, run the following commands. # [PowerShell](#tab/azure-powershell) az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupNa ``` -Then, delete each volume group. +To delete volume groups, run the following commands. # [PowerShell](#tab/azure-powershell) az elastic-san volume-group delete -e $sanName -g $resourceGroupName -n $volumeG ``` -Finally, delete the Elastic SAN itself. +To delete the Elastic SAN itself, run the following commands. # [PowerShell](#tab/azure-powershell) |
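The teardown order the delete article walks through is strictly child-first: disconnect clients, delete volumes, then volume groups, then the SAN itself. A minimal sketch of that ordering (hypothetical dictionary shape and operation names, not an SDK API) can make the dependency explicit:

```python
# Sketch: emit delete operations for a SAN hierarchy in the child-first order
# the article describes (volumes -> volume groups -> elastic SAN).
def teardown_plan(san: dict) -> list:
    """Return (resource_type, name) delete operations, children first."""
    plan = []
    for vg in san["volume_groups"]:
        for vol in vg["volumes"]:
            plan.append(("volume", vol))
        plan.append(("volume-group", vg["name"]))
    plan.append(("elastic-san", san["name"]))
    return plan
```

Each tuple would correspond to one `az elastic-san ... delete` call at the matching resource level.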
storage | Elastic San Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md | description: An overview of Azure Elastic SAN Preview, a service that enables yo Previously updated : 05/02/2023 Last updated : 08/15/2023 The status of items in this table may change over time. | Encryption at rest| ✔️ | | Encryption in transit| ⛔ | | [LRS or ZRS redundancy types](elastic-san-planning.md#redundancy)| ✔️ |-| Private endpoints | ⛔ | +| Private endpoints | ✔️ | | Grant network access to specific Azure virtual networks| ✔️ | | Soft delete | ⛔ | | Snapshots | ⛔ | |
storage | Elastic San Networking Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md | + + Title: Azure Elastic SAN networking Preview concepts +description: An overview of Azure Elastic SAN Preview networking options, including storage service endpoints, private endpoints, and iSCSI. +++ Last updated : 08/16/2023+++++# Elastic SAN Preview networking ++Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md). ++You can configure Elastic SAN volume groups to only allow access over specific endpoints on specific virtual network subnets. The allowed subnets may belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. ++Depending on your configuration, applications on peered virtual networks or on-premises networks can also access volumes in the group. On-premises networks must be connected to the virtual network by a VPN or ExpressRoute. For more details about virtual network configurations, see [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md). 
++There are two types of virtual network endpoints you can configure to allow access to an Elastic SAN volume group: ++- [Storage service endpoints](#storage-service-endpoints) +- [Private endpoints](#private-endpoints) ++To decide which option is best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). Generally, you should use private endpoints instead of service endpoints since Private Link offers better capabilities. For more information, see [Azure Private Link](../../private-link/private-endpoint-overview.md). ++After configuring endpoints, you can configure network rules to further control access to your Elastic SAN volume group. Once the endpoints and network rules have been configured, clients can connect to volumes in the group to process their workloads. ++## Storage service endpoints ++[Azure Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them. ++[Cross-region service endpoints for Azure Storage](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints) work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from a subnet to a storage account uses a private IP address as a source IP. 
++> [!TIP] +> The original local service endpoints, identified as **Microsoft.Storage**, are still supported for backward compatibility, but you should create cross-region endpoints, identified as **Microsoft.Storage.Global**, for new deployments. +> +> Cross-region service endpoints and local ones can't coexist on the same subnet. To use cross-region service endpoints, you might have to delete existing **Microsoft.Storage** endpoints and recreate them as **Microsoft.Storage.Global**. ++## Private endpoints ++> [!IMPORTANT] +> For Elastic SANs using [locally-redundant storage (LRS)](elastic-san-planning.md#redundancy) as their redundancy option, private endpoints are supported in all regions that Elastic SAN is available. Private endpoints aren't currently supported for elastic SANs using [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) as their redundancy option. ++Azure [Private Link](../../private-link/private-link-overview.md) enables you to access an Elastic SAN volume group securely over a [private endpoint](../../private-link/private-endpoint-overview.md) from a virtual network subnet. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating the risk of exposing your service to the public internet. An Elastic SAN private endpoint uses a set of IP addresses from the subnet address space for each volume group. The maximum number used per endpoint is 20. ++Private endpoints have several advantages over service endpoints. For a complete comparison of private endpoints to service endpoints, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). ++Traffic between the virtual network and the Elastic SAN is routed over an optimal path on the Azure backbone network. 
Unlike service endpoints, you don't need to configure network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. ++For details on how to configure private endpoints, see [Enable private endpoint](elastic-san-networking.md#configure-a-private-endpoint). ++## Virtual network rules ++To further secure access to your Elastic SAN volumes, you can create virtual network rules for volume groups configured with service endpoints to allow access from specific subnets. You don't need network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. ++Each volume group supports up to 200 virtual network rules. If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group. ++Clients granted access via these network rules must also be granted the appropriate permissions to the Elastic SAN volume group. ++To learn how to define network rules, see [Managing virtual network rules](elastic-san-networking.md#configure-virtual-network-rules). ++## Client connections ++After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more details on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections). ++> [!NOTE] +> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection retries for 90 seconds before terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart. 
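The note above says a lost VM-to-volume connection is retried for 90 seconds before the session terminates. The retry window can be pictured with a hypothetical sketch; the retry interval, helper names, and injectable clock below are assumptions for illustration, not the service's actual reconnect code:

```python
import time

# Sketch: retry a connection attempt until a fixed 90-second window elapses,
# matching the behavior described in the note above. The clock and sleep
# functions are injectable so the logic can be exercised without waiting.
RETRY_WINDOW_S = 90

def reconnect(connect_once, interval_s=5, now=time.monotonic, sleep=time.sleep):
    """Retry connect_once until it succeeds or the 90-second window expires."""
    deadline = now() + RETRY_WINDOW_S
    while now() < deadline:
        if connect_once():
            return True
        sleep(interval_s)
    return False  # give up after the retry window
```

Once `reconnect` returns `False`, the session is gone and must be re-established explicitly; the VM itself keeps running.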
++## Next steps ++[Configure Elastic SAN networking Preview](elastic-san-networking.md) |
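To make the virtual network rule lifecycle in the concepts article above concrete (the 200-rule cap per volume group, rules being removed when their subnet is deleted, and a recreated subnet needing explicit re-authorization), here is a hypothetical model; it is an illustration of the documented behavior, not the actual service implementation:

```python
# Sketch: model a volume group's virtual network rules as described above.
class VolumeGroupAcls:
    MAX_RULES = 200  # documented limit per volume group

    def __init__(self):
        self.rules = set()

    def authorize(self, subnet_id: str) -> None:
        """Add a rule granting a subnet access, enforcing the 200-rule cap."""
        if len(self.rules) >= self.MAX_RULES:
            raise ValueError("volume group already has 200 virtual network rules")
        self.rules.add(subnet_id)

    def on_subnet_deleted(self, subnet_id: str) -> None:
        """Deleting a subnet removes its rule from the volume group."""
        self.rules.discard(subnet_id)

    def allows(self, subnet_id: str) -> bool:
        return subnet_id in self.rules
```

The design point to notice: after `on_subnet_deleted`, a subnet recreated with the same name gets no access until `authorize` is called again, which is exactly the re-authorization requirement the article states.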
storage | Elastic San Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md | Title: Azure Elastic SAN networking Preview -description: An overview of Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols. + Title: How to configure Azure Elastic SAN Preview networking +description: How to configure networking for Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols. Previously updated : 05/04/2023 Last updated : 08/25/2023 -+ -# Configure Elastic SAN networking Preview +# Configure networking for an Elastic SAN Preview -Azure Elastic storage area network (SAN) allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments demand, based on the type and subset of networks or resources used. When network rules are configured, only applications requesting data over the specified set of networks or through the specified set of Azure resources that can access an Elastic SAN Preview. Access to your SAN's volumes are limited to resources in subnets in the same Azure Virtual Network that your SAN's volume group is configured with. +Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. -Volume groups are configured to allow access only from specific subnets. The allowed subnets may belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Azure Active Directory tenant. +This article describes how to configure your Elastic SAN to allow access from your Azure virtual network infrastructure. 
-You must enable a [Service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the SAN that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the Elastic SAN to access the data. +To configure network access to your Elastic SAN: -Each volume group supports up to 200 virtual network rules. +> [!div class="checklist"] +> - [Configure a virtual network endpoint](#configure-a-virtual-network-endpoint). +> - [Configure client connections](#configure-client-connections). ++## Configure a virtual network endpoint ++You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets may belong to virtual networks in the same subscription, or those in a different subscription, including a subscription belonging to a different Azure Active Directory tenant. ++You can allow access to your Elastic SAN volume group from two types of Azure virtual network endpoints: ++- [Private endpoints](../../private-link/private-endpoint-overview.md) +- [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) ++A private endpoint uses one or more private IP addresses from your virtual network subnet to access an Elastic SAN volume group over the Microsoft backbone network. With a private endpoint, traffic between your virtual network and the volume group are secured over a private link. ++Virtual network service endpoints are public and accessible via the internet. 
You can [Configure virtual network rules](#configure-virtual-network-rules) to control access to your volume group when using storage service endpoints. ++Network rules only apply to the public endpoints of a volume group, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, do not enable service endpoints for the volume group. ++To decide which type of endpoint works best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints). ++Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. ++The process for enabling each type of endpoint follows: ++- [Configure a private endpoint](#configure-a-private-endpoint) +- [Configure an Azure Storage service endpoint](#configure-an-azure-storage-service-endpoint) ++### Configure a private endpoint > [!IMPORTANT]-> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group. +> - For Elastic SANs using [locally-redundant storage (LRS)](elastic-san-planning.md#redundancy) as their redundancy option, private endpoints are supported in all regions that Elastic SAN is available. Private endpoints aren't currently supported for elastic SANs using [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) as their redundancy option. 
+> +> - Before you can create a private endpoint connection to a volume group, it must contain at least one volume. -## Enable Storage service endpoint +There are two steps involved in configuring a private endpoint connection: -In your virtual network, enable the Storage service endpoint on your subnet. This ensures traffic is routed optimally to your Elastic SAN. To enable service point for Azure Storage, you must have the appropriate permissions for the virtual network. This operation can be performed by a user that has been given permission to the Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role. An Elastic SAN and the virtual networks granted access may be in different subscriptions, including subscriptions that are a part of a different Azure AD tenant. +> [!div class="checklist"] +> - Creating the endpoint and the associated connection. +> - Approving the connection. -> [!NOTE] -> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. +You can also use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to refine access control over private endpoints. ++To create a private endpoint for an Elastic SAN volume group, you must have the [Elastic SAN Volume Group Owner](../../role-based-access-control/built-in-roles.md#elastic-san-volume-group-owner) role. To approve a new private endpoint connection, you must have permission to the [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftelasticsan) `Microsoft.ElasticSan/elasticSans/PrivateEndpointConnectionsApproval/action`. 
Permission for this operation is included in the [Elastic SAN Network Admin](../../role-based-access-control/built-in-roles.md#elastic-san-owner) role, but it can also be granted via a custom Azure role. ++If you create the endpoint from a user account that has all of the necessary roles and permissions required for creation and approval, the process can be completed in one step. If not, it will require two separate steps by two different users. ++The Elastic SAN and the virtual network may be in different resource groups, regions and subscriptions, including subscriptions that belong to different Azure AD tenants. In these examples, we are creating the private endpoint in the same resource group as the virtual network. # [Portal](#tab/azure-portal)-1. Navigate to your virtual network and select **Service Endpoints**. -1. Select **+ Add** and for **Service** select **Microsoft.Storage**. -1. Select any policies you like, and the subnet you deploy your Elastic SAN into and select **Add**. +Currently, you can only configure a private endpoint using PowerShell or the Azure CLI. # [PowerShell](#tab/azure-powershell) -```powershell -$resourceGroupName = "yourResourceGroup" -$vnetName = "yourVirtualNetwork" -$subnetName = "yourSubnet" +Deploying a private endpoint for an Elastic SAN Volume group using PowerShell involves these steps: ++1. Get the subnet from which applications will connect. +1. Get the Elastic SAN Volume Group. +1. Create a private link service connection using the volume group as input. +1. Create the private endpoint using the subnet and the private link service connection as input. +1. **Optional** *(if you are using the two-step process (creation, then approval))*: The Elastic SAN Network Admin approves the connection. -$virtualNetwork = Get-AzVirtualNetwork -ResourceGroupName $resourceGroupName -Name $vnetName +Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. 
Replace all placeholder text with your own values: -$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $virtualNetwork -Name $subnetName +| Placeholder | Description | +|-|-| +| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. | +| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. | +| `<VnetName>` | The name of the virtual network that includes the subnet. | +| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. | +| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. | +| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. | +| `<PrivateEndpointName>` | The name of the new private endpoint. | +| `<Location>` | The region where the new private endpoint will be created. | +| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. | -$virtualNetwork | Set-AzVirtualNetworkSubnetConfig -Name $subnetName -AddressPrefix $subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork +```powershell +# Set the resource group name. +$RgName = "<ResourceGroupName>" ++# Get the virtual network and subnet, which is input to creating the private endpoint. +$VnetName = "<VnetName>" +$SubnetName = "<SubnetName>" ++$Vnet = Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RgName +$Subnet = $Vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $SubnetName} ++# Get the Elastic SAN, which is input to creating the private endpoint service connection. +$EsanName = "<ElasticSanName>" +$EsanVgName = "<ElasticSanVolumeGroupName>" ++$Esan = Get-AzElasticSan -Name $EsanName -ResourceGroupName $RgName ++# Create the private link service connection, which is input to creating the private endpoint. 
+$PLSvcConnectionName = "<PrivateLinkSvcConnectionName>" +$EsanPlSvcConn = New-AzPrivateLinkServiceConnection -Name $PLSvcConnectionName -PrivateLinkServiceId $Esan.Id -GroupId $EsanVgName ++# Create the private endpoint. +$EndpointName = '<PrivateEndpointName>' +$Location = '<Location>' +$PeArguments = @{ + Name = $EndpointName + ResourceGroupName = $RgName + Location = $Location + Subnet = $Subnet + PrivateLinkServiceConnection = $EsanPlSvcConn +} +New-AzPrivateEndpoint @PeArguments # -ByManualRequest # (Uncomment the `-ByManualRequest` parameter if you are using the two-step process). +``` ++Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample: ++```powershell +# Get the private endpoint and associated connection. +$PrivateEndpoint = Get-AzPrivateEndpoint -Name $EndpointName -ResourceGroupName $RgName +$PeConnArguments = @{ + ServiceName = $EsanName + ResourceGroupName = $RgName + PrivateLinkResourceType = "Microsoft.ElasticSan/elasticSans" +} +$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments | +Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)} ++# Approve the private link service connection. +$ApprovalDesc="<ApprovalDesc>" +Approve-AzPrivateEndpointConnection @PeConnArguments -Name $EndpointConnection.Name -Description $ApprovalDesc ++# Get the private endpoint connection anew and verify the connection status. +$EndpointConnection = Get-AzPrivateEndpointConnection @PeConnArguments | +Where-Object {($_.PrivateEndpoint.Id -eq $PrivateEndpoint.Id)} +$EndpointConnection.PrivateLinkServiceConnectionState ``` # [Azure CLI](#tab/azure-cli) +Deploying a private endpoint for an Elastic SAN Volume group using the Azure CLI involves three steps: ++1. Get the private connection resource ID of the Elastic SAN. +1. Create the private endpoint using inputs: + 1. Private connection resource ID + 1. Volume group name + 1. 
Resource group name + 1. Subnet name + 1. Vnet name +1. **(Optional** *if you are using the two-step process (creation, then approval))*: The Elastic SAN Network Admin approves the connection. ++Use this sample code to create a private endpoint for your Elastic SAN volume group with the Azure CLI. Uncomment the `--manual-request` parameter if you are using the two-step process. Replace all placeholder text with your own values: ++| Placeholder | Description | +|-|-| +| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. | +| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. | +| `<VnetName>` | The name of the virtual network that includes the subnet. | +| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. | +| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. | +| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. | +| `<PrivateEndpointName>` | The name of the new private endpoint. | +| `<Location>` | The region where the new private endpoint will be created. | +| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. | ++```azurecli +# Define some variables. +RgName="<ResourceGroupName>" +VnetName="<VnetName>" +SubnetName="<SubnetName>" +EsanName="<ElasticSanName>" +EsanVgName="<ElasticSanVolumeGroupName>" +EndpointName="<PrivateEndpointName>" +PLSvcConnectionName="<PrivateLinkSvcConnectionName>" +Location="<Location>" +ApprovalDesc="<ApprovalDesc>" ++# Get the id of the Elastic SAN. +id=$(az elastic-san show \ + --elastic-san-name $EsanName \ + --resource-group $RgName \ + --query 'id' \ + --output tsv) ++# Create the private endpoint. 
+az network private-endpoint create \ + --connection-name $PLSvcConnectionName \ + --name $EndpointName \ + --private-connection-resource-id $id \ + --resource-group $RgName \ + --vnet-name $VnetName \ + --subnet $SubnetName \ + --location $Location \ + --group-id $EsanVgName # --manual-request ++# Verify the status of the private endpoint connection. +PLConnectionName=$(az network private-endpoint-connection list \ + --name $EsanName \ + --resource-group $RgName \ + --type Microsoft.ElasticSan/elasticSans \ + --query "[?properties.groupIds[0]=='$EsanVgName'].name" -o tsv) ++az network private-endpoint-connection show \ + --resource-name $EsanName \ + --resource-group $RgName \ + --type Microsoft.ElasticSan/elasticSans \ + --name $PLConnectionName +``` ++Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample: + ```azurecli-az network vnet subnet update --resource-group "myresourcegroup" --vnet-name "myvnet" --name "mysubnet" --service-endpoints "Microsoft.Storage.Global" +az network private-endpoint-connection approve \ + --resource-name $EsanName \ + --resource-group $RgName \ + --name $PLConnectionName \ + --type Microsoft.ElasticSan/elasticSans \ + --description $ApprovalDesc ```+ -### Available virtual network regions +### Configure an Azure Storage service endpoint ++To configure an Azure Storage service endpoint from the virtual network where access is required, you must have permission to the `Microsoft.Network/virtualNetworks/subnets/joinViaServiceEndpoint/action` [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftnetwork) via a custom Azure role to configure a service endpoint. -Service endpoints for Azure Storage work between virtual networks and service instances in any region. 
They also work between virtual networks and service instances in [paired regions](../../availability-zones/cross-region-replication-azure.md) to allow continuity during a regional failover. When planning for disaster recovery during a regional outage, you should create the virtual networks in the paired region in advance. Enable service endpoints for Azure Storage, with network rules granting access from these alternative virtual networks. Then apply these rules to your zone-redundant SANs. +Virtual network service endpoints are public and accessible via the internet. You can [Configure virtual network rules](#configure-virtual-network-rules) to control access to your volume group when using storage service endpoints. -#### Azure Storage cross-region service endpoints +> [!NOTE] +> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Azure Active Directory tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal. ++# [Portal](#tab/azure-portal) ++1. Navigate to your virtual network and select **Service Endpoints**. +1. Select **+ Add**. +1. On the **Add service endpoints** screen: + 1. For **Service** select **Microsoft.Storage.Global** to add a [cross-region service endpoint](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints). ++ > [!NOTE] + > You might see **Microsoft.Storage** listed as an available storage service endpoint. That option is for intra-region endpoints which exist for backward compatibility only. Always use cross-region endpoints unless you have a specific reason for using intra-region ones. ++1. For **Subnets** select all the subnets where you want to allow access. +1. Select **Add**. +++# [PowerShell](#tab/azure-powershell) ++Use this sample code to create a storage service endpoint for your Elastic SAN volume group with PowerShell. 
++```powershell +# Define some variables +$RgName = "<ResourceGroupName>" +$VnetName = "<VnetName>" +$SubnetName = "<SubnetName>" ++# Get the virtual network and subnet +$Vnet = Get-AzVirtualNetwork -ResourceGroupName $RgName -Name $VnetName +$Subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $Vnet -Name $SubnetName ++# Enable the storage service endpoint +$Vnet | Set-AzVirtualNetworkSubnetConfig -Name $SubnetName -AddressPrefix $Subnet.AddressPrefix -ServiceEndpoint "Microsoft.Storage.Global" | Set-AzVirtualNetwork +``` ++# [Azure CLI](#tab/azure-cli) -Cross-region service endpoints for Azure became generally available in April of 2023. With cross-region service endpoints, subnets will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect. +Use this sample code to create a storage service endpoint for your Elastic SAN volume group with the Azure CLI. ++```azurecli +# Define some variables +RgName="<ResourceGroupName>" +VnetName="<VnetName>" +SubnetName="<SubnetName>" ++# Enable the storage service endpoint +az network vnet subnet update --resource-group $RgName --vnet-name $VnetName --name $SubnetName --service-endpoints "Microsoft.Storage.Global" +``` ++ -To use cross-region service endpoints, it might be necessary to delete existing **Microsoft.Storage** endpoints and recreate them as cross-region (**Microsoft.Storage.Global**). +#### Configure virtual network rules -## Managing virtual network rules +All incoming requests for data over a service endpoint are blocked by default. Only applications that request data from allowed sources that you configure in your network rules will be able to access your data. You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI. 
-> [!NOTE] +> [!IMPORTANT] > If you want to enable access to your storage account from a virtual network/subnet in another Azure AD tenant, you must use PowerShell or the Azure CLI. The Azure portal does not show subnets in other Azure AD tenants.+> +> If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group. ### [Portal](#tab/azure-portal) You can manage virtual network rules for volume groups through the Azure portal, - List virtual network rules. ```azurepowershell- $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $rgName -ElasticSanName $sanName -Name $volGroupName + $Rules = Get-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $sanName -Name $volGroupName $Rules.NetworkAclsVirtualNetworkRule ``` You can manage virtual network rules for volume groups through the Azure portal, - Add a network rule for a virtual network and subnet. ```azurepowershell- $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $subnet.Id -Action Allow + $rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $Subnet.Id -Action Allow - Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule + Add-AzElasticSanVolumeGroupNetworkRule -ResourceGroupName $RgName -ElasticSanName $sanName -VolumeGroupName $volGroupName -NetworkAclsVirtualNetworkRule $rule ``` > [!TIP] You can manage virtual network rules for volume groups through the Azure portal, - List information from a particular volume group, including their virtual network rules. 
```azurecli- az elastic-san volume-group show -e $sanName -g $resourceGroupName -n $volumeGroupName + az elastic-san volume-group show -e $sanName -g $RgName -n $volumeGroupName ``` - Enable service endpoint for Azure Storage on an existing virtual network and subnet. You can manage virtual network rules for volume groups through the Azure portal, ```azurecli # First, get the current length of the list of virtual networks. This is needed to ensure you append a new network instead of replacing existing ones.- virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $resourceGroupName --query 'length(networkAcls.virtualNetworkRules)' + virtualNetworkListLength = az elastic-san volume-group show -e $sanName -n $volumeGroupName -g $RgName --query 'length(networkAcls.virtualNetworkRules)' - az elastic-san volume-group update -e $sanName -g $resourceGroupName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/vnetName/subnets/default, action:Allow}]}" + az elastic-san volume-group update -e $sanName -g $RgName --name $volumeGroupName --network-acls virtual-network-rules[$virtualNetworkListLength] "{virtualNetworkRules:[{id:/subscriptions/subscriptionID/resourceGroups/RGName/providers/Microsoft.Network/virtualNetworks/$VnetName/subnets/default, action:Allow}]}" ``` - Remove a network rule. The following command removes the first network rule, modify it to remove the network rule you'd like. 
```azurecli- az elastic-san volume-group update -e $sanName -g $resourceGroupName -n $volumeGroupName --network-acls virtual-network-rules[1]=null + az elastic-san volume-group update -e $sanName -g $RgName -n $volumeGroupName --network-acls virtual-network-rules[1]=null ``` -++## Configure client connections ++After you have enabled the desired endpoints and granted access in your network rules, you are ready to configure your clients to connect to the appropriate Elastic SAN volumes. ++> [!NOTE] +> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart. ## Next steps -[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md) +- [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md) +- [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md) +- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md) |
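The client-connection behavior described in the section above (a lost connection between a VM and an Elastic SAN volume retries for 90 seconds before terminating) can be pictured as a deadline loop. The following Python sketch is purely illustrative — the real retry is performed by the iSCSI initiator, not user code — and `try_connect` is a hypothetical stand-in for a single connection attempt:

```python
import time

RETRY_WINDOW_SECONDS = 90  # a lost Elastic SAN connection is retried for 90 seconds

def reconnect(try_connect, window=RETRY_WINDOW_SECONDS, interval=5,
              clock=time.monotonic, sleep=time.sleep):
    """Retry try_connect() until it succeeds or the retry window elapses.

    Returns True if the connection is re-established, or False once the
    window expires and the connection is terminated. clock and sleep are
    injectable so the behavior can be exercised without real waiting.
    """
    deadline = clock() + window
    while clock() < deadline:
        if try_connect():
            return True
        sleep(interval)
    return False
```

Note that losing the connection does not restart the VM; only the iSCSI session is affected, so a model like this one captures everything the client observes.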
storage | Elastic San Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md | description: Understand planning for an Azure Elastic SAN deployment. Learn abou Previously updated : 05/02/2023 Last updated : 06/09/2023 Using the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s. Sa ## Networking -In Preview, Elastic SAN supports public access from selected virtual networks, restricting access to specified virtual networks. You configure volume groups to allow network access only from specific vnet subnets. Once a volume group is configured to allow access from a subnet, this configuration is inherited by all volumes belonging to the volume group. You can then mount volumes from any clients in the subnet, with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. You must enable [service endpoint for Azure Storage](../../virtual-network/virtual-network-service-endpoints-overview.md) in your virtual network before setting up the network rule on volume group. +In the Elastic SAN Preview, you can configure access to volume groups over both public [Azure Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. -If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart. 
+To allow network access, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [setup a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. ## Redundancy Elastic SAN supports the [internet Small Computer Systems Interface](https://en. - VERIFY (16) - SYNCHRONIZE CACHE (10) - SYNCHRONIZE CACHE (16)+- RESERVE +- RELEASE +- PERSISTENT RESERVE IN +- PERSISTENT RESERVE OUT The following iSCSI features aren't currently supported: - CHAP authorization The following iSCSI features aren't currently supported: For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san). +[Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md) [Deploy an Elastic SAN Preview](elastic-san-create.md) |
storage | Elastic San Shared Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-shared-volumes.md | + + Title: Use clustered applications on Azure Elastic SAN +description: Learn more about using clustered applications on an Elastic SAN volume and sharing volumes between compute clients. +++ Last updated : 08/15/2023+++++# Use clustered applications on Azure Elastic SAN ++Azure Elastic SAN volumes can be simultaneously attached to multiple compute clients, allowing you to deploy or migrate cluster applications to Azure. You need to use a cluster manager to share an Elastic SAN volume, like Windows Server Failover Cluster (WSFC), or Pacemaker. The cluster manager handles cluster node communications and write locking. Elastic SAN doesn't natively offer a fully managed filesystem that can be accessed over SMB or NFS. ++When used as a shared volume, elastic SAN volumes can be shared across availability zones or regions. If you share a volume across availability zones, you should select [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) when deploying your SAN. Sharing a volume in a local-redundant storage SAN across zones reduces your performance due to increased latency between the volume and clients. ++## Limitations ++- Volumes in an Elastic SAN using [ZRS](elastic-san-planning.md#redundancy) can't be used as shared volumes. +- Elastic SAN connection scripts can be used to attach shared volumes to virtual machines in Virtual Machine Scale Sets or virtual machines in Availability Sets. Fault domain alignment isn't supported. +- The maximum number of sessions a shared volume supports is 128. + - An individual client can create multiple sessions to an individual volume for increased performance. For example, if you create 32 sessions on each of your clients, only four clients could connect to a single volume. 
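The session arithmetic in the limitation above (128 sessions per shared volume, divided among however many sessions each client opens) can be checked with a small helper. This is an illustrative sketch; the constant and function names are ours, not part of any SDK:

```python
MAX_SESSIONS_PER_SHARED_VOLUME = 128  # documented maximum for a shared volume

def max_clients(sessions_per_client: int) -> int:
    """Number of clients that can attach to one shared volume when each
    client opens the given number of iSCSI sessions."""
    if sessions_per_client < 1:
        raise ValueError("each client needs at least one session")
    return MAX_SESSIONS_PER_SHARED_VOLUME // sessions_per_client

# Example from the text: 32 sessions per client leaves room for only 4 clients.
```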
++See [Support for Azure Storage features](elastic-san-introduction.md#support-for-azure-storage-features) for other limitations of Elastic SAN. ++## Regional availability ++All regions that Elastic SAN is available in can use shared volumes. ++## How it works ++Elastic SAN shared volumes use [SCSI-3 Persistent Reservations](https://www.t10.org/members/w_spc3.htm) to allow initiators (clients) to control access to a shared elastic SAN volume. This protocol enables an initiator to reserve access to an elastic SAN volume, limit write (or read) access by other initiators, and persist the reservation on a volume beyond the lifetime of a session by default. ++SCSI-3 PR has a pivotal role in maintaining data consistency and integrity within shared volumes in cluster scenarios. Compute nodes in a cluster can read or write to their attached elastic SAN volumes based on the reservation chosen by their cluster applications. ++## Persistent reservation flow ++The following diagram illustrates a sample 2-node clustered database application that uses SCSI-3 PR to enable failover from one node to the other. +++The flow is as follows: ++1. The clustered application running on both Azure VM1 and VM2 registers its intent to read or write to the elastic SAN volume. +1. The application instance on VM1 then takes an exclusive reservation to write to the volume. +1. This reservation is enforced on your volume and the database can now exclusively write to the volume. Any writes from the application instance on VM2 fail. +1. If the application instance on VM1 goes down, the instance on VM2 can initiate a database failover and take over control of the volume. +1. This reservation is now enforced on the volume, and it won't accept writes from VM1. It only accepts writes from VM2. +1. The clustered application can complete the database failover and serve requests from VM2. 
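The two-node failover flow above can be made concrete with a toy model. This Python sketch only imitates the register/reserve/preempt semantics for illustration — it is not a SCSI-3 implementation, and the class and method names are ours:

```python
class SharedVolume:
    """Toy model of SCSI-3 PR write-exclusive failover (illustration only)."""

    def __init__(self):
        self.registered = set()  # nodes that registered a reservation key
        self.holder = None       # current write-exclusive reservation holder

    def register(self, node):
        """Step 1: a node registers its intent to access the volume."""
        self.registered.add(node)

    def reserve(self, node):
        """Step 2: a registered node takes an exclusive write reservation."""
        if node not in self.registered:
            raise PermissionError("node must register before reserving")
        if self.holder is not None and self.holder != node:
            raise PermissionError("volume is already reserved")
        self.holder = node

    def preempt(self, node):
        """Step 4: on failover, another registered node takes over."""
        if node not in self.registered:
            raise PermissionError("node must register before preempting")
        self.holder = node

    def write(self, node, data):
        """Steps 3 and 5: only the reservation holder may write."""
        if self.holder != node:
            raise PermissionError(f"{node} does not hold the reservation")
        return f"{node} wrote {data}"
```

Walking a `SharedVolume` through register, reserve, a failed write from the standby node, preempt, and a successful write reproduces the numbered flow above.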
++The following diagram illustrates another common clustered workload consisting of multiple nodes reading data from an elastic SAN volume for running parallel processes, such as training of machine learning models. +++The flow is as follows: +1. The clustered application running on all VMs registers its intent to read or write to the elastic SAN volume. +1. The application instance on VM1 takes an exclusive reservation to write to the volume while opening up reads to the volume from other VMs. +1. This reservation is enforced on the volume. +1. All nodes in the cluster can now read from the volume. Only one node writes back results to the volume, on behalf of all nodes in the cluster. ++## Supported SCSI PR commands ++The following commands are supported with Elastic SAN volumes: ++To interact with the volume, start with the appropriate persistent reservation action: +- PR_REGISTER_KEY +- PR_REGISTER_AND_IGNORE +- PR_GET_CONFIGURATION +- PR_RESERVE +- PR_PREEMPT_RESERVATION +- PR_CLEAR_RESERVATION +- PR_RELEASE_RESERVATION ++When using PR_RESERVE, PR_PREEMPT_RESERVATION, or PR_RELEASE_RESERVATION, provide one of the following persistent reservation types: +- PR_NONE +- PR_WRITE_EXCLUSIVE +- PR_EXCLUSIVE_ACCESS +- PR_WRITE_EXCLUSIVE_REGISTRANTS_ONLY +- PR_EXCLUSIVE_ACCESS_REGISTRANTS_ONLY +- PR_WRITE_EXCLUSIVE_ALL_REGISTRANTS +- PR_EXCLUSIVE_ACCESS_ALL_REGISTRANTS ++The persistent reservation type determines access to the volume from each node in the cluster. 
++|Persistent Reservation Type |Reservation Holder |Registered |Others | +||||| +|NO RESERVATION |N/A |Read-Write |Read-Write | +|WRITE EXCLUSIVE |Read-Write |Read-Only |Read-Only | +|EXCLUSIVE ACCESS |Read-Write |No Access |No Access | +|WRITE EXCLUSIVE - REGISTRANTS ONLY |Read-Write |Read-Write |Read-Only | +|EXCLUSIVE ACCESS - REGISTRANTS ONLY |Read-Write |Read-Write |No Access | +|WRITE EXCLUSIVE - ALL REGISTRANTS |Read-Write |Read-Write |Read-Only | +|EXCLUSIVE ACCESS - ALL REGISTRANTS |Read-Write |Read-Write |No Access | ++You also need to provide a persistent reservation key when using: +- PR_RESERVE +- PR_REGISTER_AND_IGNORE +- PR_REGISTER_KEY +- PR_PREEMPT_RESERVATION +- PR_CLEAR_RESERVATION +- PR_RELEASE_RESERVATION |
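The access table above translates directly into a lookup. The following sketch encodes it; the dictionary mirrors the table rows, and the `access` helper and role names ("holder", "registered", "others") are illustrative choices of ours:

```python
# role -> access level, per persistent reservation type (mirrors the table above)
ACCESS = {
    "NO RESERVATION":                      {"holder": "N/A",        "registered": "Read-Write", "others": "Read-Write"},
    "WRITE EXCLUSIVE":                     {"holder": "Read-Write", "registered": "Read-Only",  "others": "Read-Only"},
    "EXCLUSIVE ACCESS":                    {"holder": "Read-Write", "registered": "No Access",  "others": "No Access"},
    "WRITE EXCLUSIVE - REGISTRANTS ONLY":  {"holder": "Read-Write", "registered": "Read-Write", "others": "Read-Only"},
    "EXCLUSIVE ACCESS - REGISTRANTS ONLY": {"holder": "Read-Write", "registered": "Read-Write", "others": "No Access"},
    "WRITE EXCLUSIVE - ALL REGISTRANTS":   {"holder": "Read-Write", "registered": "Read-Write", "others": "Read-Only"},
    "EXCLUSIVE ACCESS - ALL REGISTRANTS":  {"holder": "Read-Write", "registered": "Read-Write", "others": "No Access"},
}

def access(reservation_type: str, role: str) -> str:
    """Return the access a node has, given the reservation type in effect
    on the volume and the node's role relative to that reservation."""
    return ACCESS[reservation_type][role]
```

For example, under WRITE EXCLUSIVE - REGISTRANTS ONLY, a registered non-holder still gets read-write access, while unregistered initiators are read-only.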
storage | Files Data Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-data-protection-overview.md | Azure Files gives you many tools to protect your data, including soft delete, sh :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube.com/embed/TOHaNJpAOfc" title="How Azure Files can help protect against ransomware and accidental data loss" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> + > [!VIDEO https://www.youtube.com/embed/TOHaNJpAOfc] :::column-end::: :::column::: Watch this video to learn how Azure Files advanced data protection helps enterprises stay protected against ransomware and accidental data loss while delivering greater business continuity. |
storage | Geo Redundant Storage For Large File Shares | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md | description: Azure Files geo-redundancy for large file shares (preview) signific Previously updated : 08/13/2023 Last updated : 08/28/2023 Azure Files geo-redundancy for large file shares preview is currently available - China North 2 - China North 3 - East Asia+- East US - East US 2 - France Central - France South Azure Files geo-redundancy for large file shares preview is currently available - Korea Central - Korea South - North Central US+- North Europe - Norway East - Norway West - South Africa North Azure Files geo-redundancy for large file shares preview is currently available - US Gov Texas - US Gov Virginia - West Central US+- West Europe - West India+- West US - West US 2+- West US 3 ## Pricing |
storage | Storage Files Migration Nas Cloud Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md | To save time, you should proceed with this phase while you wait for your DataBox :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> + > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ] :::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br> |
storage | Storage Files Migration Robocopy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-robocopy.md | With the information in this phase, you will be able to decide how your servers :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> + > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ] :::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br> |
storage | Storage Files Migration Storsimple 8000 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-storsimple-8000.md | Your registered on-premises Windows Server instance must be ready and connected :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> + > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ] :::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps.</br> |
storage | Storage Files Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-monitoring.md | description: Learn how to monitor the performance and availability of Azure File -+ Last updated 08/07/2023 ms.devlang: csharp |
storage | Storage Files Networking Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-networking-overview.md | Configuring public and private endpoints for Azure Files is done on the top-leve :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/jd49W33DxkQ" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> + > [!VIDEO https://www.youtube-nocookie.com/embed/jd49W33DxkQ] :::column-end::: :::column::: This video is a guide and demo for how to securely expose Azure file shares directly to information workers and apps in five simple steps. The sections below provide links and additional context to the documentation referenced in the video. |
storage | Storage How To Create File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md | To create an Azure file share, you need to answer three questions about how you Premium file shares are available with local redundancy and zone redundancy in a subset of regions. To find out if premium file shares are available in your region, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=storage). For more information, see [Azure Files redundancy](files-redundancy.md). - **What size file share do you need?** - In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB. + In local and zone redundant storage accounts, Azure file shares can span up to 100 TiB. However, in geo- and geo-zone redundant storage accounts, Azure file shares can span only up to 5 TiB unless you sign up for [Geo-redundant storage for large file shares (preview)](geo-redundant-storage-for-large-file-shares.md). For more information on these three choices, see [Planning for an Azure Files deployment](storage-files-planning.md). To create a FileStorage storage account, ensure the **Performance** radio button :::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-performance-premium.png" alt-text="A screenshot of the performance radio button with premium selected and account kind with FileStorage selected."::: The other basics fields are independent from the choice of storage account:-- **Storage account name**: The name of the storage account resource to be created. This name must be globally unique. The storage account name will be used as the server name when you mount an Azure file share via SMB. 
Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only.+- **Storage account name**: The name of the storage account resource to be created. This name must be globally unique. The storage account name will be used as the server name when you mount an Azure file share via SMB. Storage account names must be between 3 and 24 characters in length. They may contain numbers and lowercase letters only. - **Location**: The region for the storage account to be deployed into. This can be the region associated with the resource group, or any other available region.-- **Replication**: Although this is labeled replication, this field actually means **redundancy**; this is the desired redundancy level: locally redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-access geo-zone redundancy (RA-GZRS), which do not apply to Azure file shares; any file share created in a storage account with these selected will actually be either geo-redundant or geo-zone-redundant, respectively. +- **Replication**: Although this is labeled replication, this field actually means **redundancy**; this is the desired redundancy level: locally redundancy (LRS), zone redundancy (ZRS), geo-redundancy (GRS), and geo-zone-redundancy (GZRS). This drop-down list also contains read-access geo-redundancy (RA-GRS) and read-access geo-zone redundancy (RA-GZRS), which don't apply to Azure file shares; any file share created in a storage account with these selected will be either geo-redundant or geo-zone-redundant, respectively. #### Networking The networking section allows you to configure networking options. These settings are optional for the creation of the storage account and can be configured later if desired. 
For more information on these options, see [Azure Files networking considerations](storage-files-networking-overview.md). The advanced section contains several important settings for Azure file shares: :::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-secure-transfer.png" alt-text="A screenshot of secure transfer enabled in the advanced settings for the storage account."::: -- **Large file shares**: This field enables the storage account for file shares spanning up to 100 TiB. Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options. Once a GPv2 storage account has been enabled for large file shares, you cannot disable the large file share capability. FileStorage storage accounts (storage accounts for premium file shares) don't have this option, as all premium file shares can scale up to 100 TiB. +- **Large file shares**: This field enables the storage account for file shares spanning up to 100 TiB. Enabling this feature will limit your storage account to only locally redundant and zone redundant storage options. Once a GPv2 storage account has been enabled for large file shares, you can't disable the large file share capability. FileStorage storage accounts (storage accounts for premium file shares) don't have this option, as all premium file shares can scale up to 100 TiB. :::image type="content" source="media/storage-how-to-create-file-share/files-create-smb-share-large-file-shares.png" alt-text="A screenshot of the large file share setting in the storage account's advanced blade."::: The other settings that are available in the advanced tab (hierarchical namespac Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups. These are optional and can be applied after storage account creation. 
#### Review + create-The final step to create the storage account is to select the **Create** button on the **Review + create** tab. This button won't be available if all of the required fields for a storage account are not filled. +The final step to create the storage account is to select the **Create** button on the **Review + create** tab. This button won't be available unless all the required fields for a storage account are filled. # [PowerShell](#tab/azure-powershell)-To create a storage account using PowerShell, we will use the `New-AzStorageAccount` cmdlet. This cmdlet has many options; only the required options are shown. To learn more about advanced options, see the [`New-AzStorageAccount` cmdlet documentation](/powershell/module/az.storage/new-azstorageaccount). +To create a storage account using PowerShell, use the `New-AzStorageAccount` cmdlet. This cmdlet has many options; only the required options are shown. To learn more about advanced options, see the [`New-AzStorageAccount` cmdlet documentation](/powershell/module/az.storage/new-azstorageaccount). -To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique. +To simplify creating the storage account and subsequent file share, we'll store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique. ```powershell $resourceGroupName = "myResourceGroup" $storageAccountName = "mystorageacct$(Get-Random)" $region = "westus2" ``` -To create a storage account capable of storing standard Azure file shares, we will use the following command. 
The `-SkuName` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, you must also remove the `-EnableLargeFileShare` parameter. +To create a storage account capable of storing standard Azure file shares, use the following command. The `-SkuName` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, remove the `-EnableLargeFileShare` parameter. ```powershell $storAcct = New-AzStorageAccount ` $storAcct = New-AzStorageAccount ` -EnableLargeFileShare ``` -To create a storage account capable of storing premium Azure file shares, we will use the following command. Note that the `-SkuName` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant (`LRS`). The `-Kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account. +To create a storage account capable of storing premium Azure file shares, use the following command. Note that the `-SkuName` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant storage (`LRS`). The `-Kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account. ```powershell $storAcct = New-AzStorageAccount ` $storAcct = New-AzStorageAccount ` ``` # [Azure CLI](#tab/azure-cli)-To create a storage account using Azure CLI, we will use the az storage account create command. This command has many options; only the required options are shown. To learn more about the advanced options, see the [`az storage account create` command documentation](/cli/azure/storage/account). +To create a storage account using Azure CLI, use the az storage account create command. 
This command has many options; only the required options are shown. To learn more about the advanced options, see the [`az storage account create` command documentation](/cli/azure/storage/account). -To simplify the creation of the storage account and subsequent file share, we will store several parameters in variables. You may replace the variable contents with whatever values you wish, however note that the storage account name must be globally unique. +To simplify the creation of the storage account and subsequent file share, we'll store several parameters in variables. You may replace the variable contents with whatever values you wish; however, note that the storage account name must be globally unique. ```azurecli resourceGroupName="myResourceGroup" storageAccountName="mystorageacct$RANDOM" region="westus2" ``` -To create a storage account capable of storing standard Azure file shares, we will use the following command. The `--sku` parameter relates to the type of redundancy desired; if you desire a geo-redundant or geo-zone-redundant storage account, you must also remove the `--enable-large-file-share` parameter. +To create a storage account capable of storing standard Azure file shares, use the following command. The `--sku` parameter relates to the type of redundancy desired; if you want a geo-redundant or geo-zone-redundant storage account, remove the `--enable-large-file-share` parameter. ```azurecli az storage account create \ az storage account create \ --output none ``` -To create a storage account capable of storing premium Azure file shares, we will use the following command. Note that the `--sku` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant (`LRS`). The `--kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account. 
+To create a storage account capable of storing premium Azure file shares, use the following command. Note that the `--sku` parameter has changed to include both `Premium` and the desired redundancy level of locally redundant storage (`LRS`). The `--kind` parameter is `FileStorage` instead of `StorageV2` because premium file shares must be created in a FileStorage storage account instead of a GPv2 storage account. ```azurecli az storage account create \ az storage account update --name <yourStorageAccountName> -g <yourResourceGroup> ## Create a file share Once you've created your storage account, you can create your file share. This process is mostly the same regardless of whether you're using a premium file share or a standard file share. You should consider the following differences: -Standard file shares may be deployed into one of the standard tiers: transaction optimized (default), hot, or cool. This is a per file share tier that is not affected by the **blob access tier** of the storage account (this property only relates to Azure Blob storage - it does not relate to Azure Files at all). You can change the tier of the share at any time after it has been deployed. Premium file shares cannot be directly converted to any standard tier. +Standard file shares can be deployed into one of the standard tiers: transaction optimized (default), hot, or cool. This is a per file share tier that isn't affected by the **blob access tier** of the storage account (this property only relates to Azure Blob storage - it doesn't relate to Azure Files at all). You can change the tier of the share at any time after it has been deployed. Premium file shares can't be directly converted to any standard tier. > [!Important] > You can move file shares between tiers within GPv2 storage account types (transaction optimized, hot, and cool). 
Share moves between tiers incur transactions: moving from a hotter tier to a cooler tier will incur the cooler tier's write transaction charge for each file in the share, while a move from a cooler tier to a hotter tier will incur the cool tier's read transaction charge for each file in the share. The **quota** property means something slightly different between premium and standard file shares: -- For standard file shares, it's an upper boundary of the Azure file share, beyond which end-users cannot go. If a quota isn't specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property is not set for a storage account). If you did not create your storage account with large file shares enabled, see [Enable large files shares on an existing account](#enable-large-file-shares-on-an-existing-account) for how to enable 100 TiB file shares.+- For standard file shares, it's an upper boundary of the Azure file share. If a quota isn't specified, standard file shares can span up to 100 TiB (or 5 TiB if the large file shares property isn't set for a storage account). If you didn't create your storage account with large file shares enabled, see [Enable large file shares on an existing account](#enable-large-file-shares-on-an-existing-account) for how to enable 100 TiB file shares. - For premium file shares, quota means **provisioned size**. The provisioned size is the amount that you will be billed for, regardless of actual usage. The IOPS and throughput available on a premium file share are based on the provisioned size. For more information on how to plan for a premium file share, see [provisioning premium file shares](understanding-billing.md#provisioned-model). |
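The quota rule for standard file shares can be summarized as a small helper. This is an illustrative sketch only, not part of any Azure tooling; the function name and sample values are invented for the example.

```shell
# Illustrative helper capturing the quota rule above: with no explicit
# quota, a standard file share can span 100 TiB when the large file
# shares property is enabled on the account, otherwise 5 TiB.
effective_max_tib() {
  local quota_tib="$1" large_file_shares="$2"
  if [ -n "$quota_tib" ]; then
    echo "$quota_tib"           # an explicit quota is the upper boundary
  elif [ "$large_file_shares" = "enabled" ]; then
    echo 100
  else
    echo 5
  fi
}

effective_max_tib "" enabled    # 100
effective_max_tib "" disabled   # 5
effective_max_tib 10 enabled    # 10
```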
storage | Storage How To Use Files Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md | You don't need to mount the Azure file share to a particular drive letter to use `\\storageaccountname.file.core.windows.net\myfileshare` -You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share. +You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share. If you're not prompted for credentials, you can add them by using the following command: ++`cmdkey /add:StorageAccountName.file.core.windows.net /user:localhost\StorageAccountName /pass:StorageAccountKey` For Azure Government Cloud, simply change the server name to: |
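To see how the pieces of the `cmdkey` command and the UNC path fit together, here is an illustrative sketch. `cmdkey` itself is a Windows command; this POSIX-shell snippet only assembles the strings for hypothetical account and share names, so the structure is easy to inspect.

```shell
# Illustrative only: assemble the UNC path and the cmdkey command shown
# above for a hypothetical storage account and share.
account="mystorageacct"        # hypothetical storage account name
share="myfileshare"            # hypothetical share name
key="<yourStorageAccountKey>"  # placeholder; never hard-code a real key

unc_path="\\\\${account}.file.core.windows.net\\${share}"
cmdkey_cmd="cmdkey /add:${account}.file.core.windows.net /user:localhost\\${account} /pass:${key}"

echo "$unc_path"
echo "$cmdkey_cmd"
```

Note that the user name is `localhost\` followed by the storage account name, and the server name is the account name plus the `file.core.windows.net` suffix.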
storage | Understanding Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md | Azure Files provides two distinct billing models: provisioned and pay-as-you-go. :::row::: :::column:::- <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/m5_-GsKv4-o" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe> + > [!VIDEO https://www.youtube-nocookie.com/embed/m5_-GsKv4-o] :::column-end::: :::column::: This video is an interview that discusses the basics of the Azure Files billing model. It covers how to optimize Azure file shares to achieve the lowest costs possible, and how to compare Azure Files to other file storage offerings on-premises and in the cloud. |
storage | Assign Azure Role Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/assign-azure-role-data-access.md | Title: Assign an Azure role for access to queue data description: Learn how to assign permissions for queue data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. -+ Last updated 07/13/2021-+ |
storage | Authorize Access Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-access-azure-active-directory.md | Title: Authorize access to queues using Active Directory description: Authorize access to Azure queues using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account.-+ Last updated 03/17/2023-+ |
storage | Authorize Data Operations Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-cli.md | Title: Choose how to authorize access to queue data with Azure CLI description: Specify how to authorize data operations against queue data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token. -+ -+ Last updated 02/10/2021 |
storage | Authorize Data Operations Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-portal.md | Title: Choose how to authorize access to queue data in the Azure portal description: When you access queue data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key.-+ -+ Last updated 12/13/2021 |
storage | Authorize Data Operations Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-powershell.md | Title: Run PowerShell commands with Azure AD credentials to access queue data description: PowerShell supports signing in with Azure AD credentials to run commands on Azure Queue Storage data. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal.-+ -+ Last updated 02/10/2021 |
storage | Scalability Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/scalability-targets.md | Title: Scalability and performance targets for Queue Storage description: Learn about scalability and performance targets for Queue Storage.-+ -+ Last updated 12/18/2019 |
storage | Security Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md | Title: Security recommendations for Queue Storage description: Learn about security recommendations for Queue Storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model.-+ -+ Last updated 05/12/2022 |
storage | Komprise Quick Start Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/komprise-quick-start-guide.md | Title: Analyze and migrate your file data to Azure with Komprise Intelligent Data Manager -description: Getting started guide to implement Komprise Intelligent Data Manager. Guide shows how to analyze your file infrastructure, and migrate your data to Azure Files, Azure NetApp Files, Azure Blob Storage, or any available ISV NAS solution -- Previously updated : 05/20/2021-+description: Getting started guide to implement Komprise Intelligent Data Manager. This guide shows how to analyze your file infrastructure and migrate your data to Azure Files, Azure NetApp Files, Azure Blob Storage, or any available ISV NAS solution ++ Last updated : 06/01/2023 -+ -# Analyze and migrate to Azure with Komprise +# Quickstart: Analyze and migrate to Azure with Komprise -This article helps you integrate the Komprise Intelligent Data Management infrastructure with Azure storage services. It includes considerations and implementation guidance on how to analyze, and migrate your data. +This article describes using Komprise Intelligent Data Management to identify and place the right data in the right Azure Storage Service. -Komprise provides analytics and insights into file, and object data stored in network attached storage systems (NAS), and object stores, both on-premises and in the cloud. It enables migration of data to Azure storage services like Azure Files, Azure NetApp Files, Azure Blob Storage, or other ISV NAS solution. Learn more on [verified partner solutions for primary and secondary storage](../primary-secondary-storage/partner-overview.md). +Moving data can be intimidating. There are often numerous challenges, beginning with identifying what to move, matching data value to the proper storage class, and then moving it promptly, all while minimizing end-user impact. 
-Common use cases for Komprise include: +Komprise makes it easy to move your data to Azure storage services like Azure Files, Azure NetApp Files, Azure Blob Storage or other ISV NAS solutions. -- Analysis of unstructured file and object data to gain insights for data management, movement, positioning, archiving, protection, and confinement,-- Migration of file data to Azure Files, Azure NetApp Files, or ISV NAS solution,-- Policy based tiering and archiving of file data to Azure Blob Storage while retaining transparent access from the original NAS solution and allowing native object access in Azure,-- Copy file data to Azure Blob Storage on configurable schedules while retaining native object access in Azure-- Migration of object data to Azure Blob Storage,-- Tiering and data lifecycle management of objects across Hot, Cool, and Archive tiers of Azure Blob Storage based on last access time+Learn more about other ISV NAS solutions in the [verified partner solutions article](/azure/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview). -## Reference architecture +This article reviews where to get started, along with considerations and recommendations for moving data to Azure. Use the following links to jump to the topics that matter to you. 
+- [Know first, move smarter analyze, tier, move what matters](#know-first-move-smarter-analyze-tier-move-what-matters) +- [Assessing network and storage performance](#assessing-network-and-storage-performance) +- [Intelligent data management architecture](#intelligent-data-management-architecture) +- [Getting started with Komprise](#getting-started-with-komprise) +- [Getting started with Azure](#getting-started-with-azure) +- [Migration guide](#migration-guide) +- [Deployment instructions for migrating object data](#deployment-instructions-for-migrating-object-data) +- [Migration API](#migration-api) +- [Next steps](#next-steps) -The following diagram provides a reference architecture for on-premises to Azure and in-Azure deployments. +## Where to start +Visit [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.azure_data_migration_program?tab=PlansAndPrice) to learn more about Komprise and Azure together. Learn how you can get an introduction, reach out to ask questions, arrange to meet your local Komprise field team or sign up for a trial. -The following diagram provides a reference architecture for migrating cloud and on-premises object workloads to Azure Blob Storage. +[Visit Komprise directly](https://www.komprise.com/azure-migration) for more information about our solution, including white papers and reference architectures! +## Know first, move smarter (analyze, tier, move what matters) -Komprise is a software solution that is easily deployed in a virtual environment. The solutions consist of: -- **Director** - The administration console for the Komprise Grid. 
It is used to configure the environment, monitor activities, view reports and graphs, and set policies.-- **Observers** - Manage and analyze shares, summarize reports, communicate with the Director, and handle object and NFS data traffic.-- **Proxies** - Simplify and accelerate SMB/CIFS data flow, easily scale to meet performance requirements of a growing environment.+Komprise provides quick insights into your unstructured data across all storage platforms with Plan Analysis and Deep Analytics capabilities. Plan Analysis immediately gives summary results with usage graphs, and the Analysis Activities page surfaces important file system issues it discovers. Deep Analytics allows customers to dig deeper into understanding their data with custom querying capabilities and graphs to find select data sets, orphaned files and more. -## Before you begin +Understanding your data is the first step in selecting the appropriate Azure storage service. It's important to know the type of data, amount, file count, owners, and other information to help determine if the data should be in Azure Files or Azure NetApp Files. This information can also help you understand if the data should be migrated or tiered to Azure Blob for long-term storage and significant cost savings. -Upfront planning will help in migrating the data with less risk. +With a quick install of a local Komprise data Observer, in 30 minutes or less you can see: -### Get started with Azure +- Immediate results on capacity, file count and temperature with the Komprise heat map. The data can be filtered to show results for all shares, groups or individual shares. +- Komprise includes a cost comparison tool with the ability to edit cost models of current on-premises storage and Azure Storage Solutions costs to determine the best savings and return on investment. +- Usage graphs provide quick summary/comparisons of file types, file sizes, file counts, top owners, groups, shares and directories. 
Use this information to determine the order of the migration and assess the business impact of migrating data. -Microsoft offers a framework to follow to get you started with Azure. The [Cloud Adoption Framework](/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and comprehensive guide to planning a production grade cloud adoption. The CAF includes a step-by-step [Azure setup guide](/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise. + :::image type="content" source="./media/komprise-quick-start-guide-v2/sample-analysis-charts.png" alt-text="Analysis by file type and storage consumed" lightbox="./media/komprise-quick-start-guide-v2/sample-analysis-charts.png"::: -### Considerations for migrations +- Look for opportunities to clean up expired data, which reduces the migration effort and the cost of the destination storage. +- Identify cold data, not accessed in six months or more, that could be cost-effectively tiered or moved to Azure Blob storage. +- The Analysis Activities page helps identify potential issues upfront, before moving data. The issues you don't want to encounter after starting to move data include: + - Files and directories with restricted access or resolution issues + - Data set too large for the destination storage service in file count or capacity + - Data sets with an exceedingly large number of tiny files or with a large number of empty directories + - Slow-performing shares + - Lack of destination support for sparse files or symbolic links -Several aspects are important when considering migrations of file data to Azure. 
Before proceeding learn more: +Komprise knows it can be challenging to find just the right data across billions of files. Komprise Deep Analytics builds a Global File Index of all your files' metadata, giving a unified way to search, tag and create select data sets across storage silos. You can identify orphan data, data by name, location, owner, date, application type or extension. Administrators can use these queries and tagged data sets to move, copy, confine, or feed your data pipelines. They can also set data workflow policies. This allows businesses to use other Azure cloud data services like personal data identification, running cloud data analytics, and culling and feeding edge data to cloud data lakes. -- [Storage migration overview](../../../common/storage-migration-overview.md)-- latest supported features by Komprise Intelligent Data Management in [migration tools comparison matrix](./migration-tools-comparison.md).+Learn more at [Komprise Deep Analytics](https://www.komprise.com/use-cases/deep-analytics/). -Remember, you'll require enough network capacity to support migrations without impacting production applications. This section outlines the tools and techniques that are available to assess your network needs. -#### Determine unutilized internet bandwidth +Use all this information when selecting the appropriate Azure storage service. Komprise helps identify key factors like shares, protocol, logical size, file count, data type and performance type. -It's important to know how much typically unutilized bandwidth (or *headroom*) you have available on a day-to-day basis. 
To help you assess whether you can meet your goals for: +- Azure Files + - [Azure Files Documentation Site](/azure/storage/files/) + - [Planning for an Azure Files deployment](/azure/storage/files/storage-files-planning) +- Azure Block Blob + - [Azure Blob Documentation Site](/azure/storage/blobs/) + - [Access Tiers for Azure Blob](/azure/storage/blobs/access-tiers-overview?source=recommendations) +- Azure Storage Accounts + - [Azure Storage Account Overview](/azure/storage/common/storage-account-overview?toc=/azure/storage/blobs/toc.json) + - [Create a Storage Account](/azure/storage/common/storage-account-create) +- Azure NetApp Files + - [Azure NetApp Files Documentation Site](/azure/azure-netapp-files/) + - [Service Levels for Azure NetApp Files](/azure/azure-netapp-files/azure-netapp-files-service-levels) -- initial time for migrations when you're not using Azure Data Box for offline method-- time required to do incremental resync before final switch-over to the target file service+## Assessing network and storage performance -Use the following methods to identify the bandwidth headroom to Azure that is free to consume. +Migrations move only as fast as the infrastructure allows. It's vital to know the combined performance abilities of the network and storage systems together. Measuring network and storage performance individually may not reveal hidden limitations in port configurations, routing, file system overloading and more. 
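Once you have a headroom measurement, a back-of-the-envelope calculation turns it into a rough migration duration. The numbers below are examples only, not from any real measurement; real transfers also lose time to protocol overhead, and data sets with many small files move more slowly than raw bandwidth suggests.

```shell
# Back-of-the-envelope sketch: estimate how long a data set takes to
# move over the unused "headroom" bandwidth you measured.
data_tib=100          # size of the data set, in TiB (example value)
headroom_mbps=500     # measured unused bandwidth, in megabits/s (example)

data_bits=$(( data_tib * 1024 * 1024 * 1024 * 1024 * 8 ))
seconds=$(( data_bits / (headroom_mbps * 1000000) ))
days=$(( seconds / 86400 ))
echo "~${days} days to move ${data_tib} TiB at ${headroom_mbps} Mbps of headroom"
```

An estimate like this also helps size the incremental-resync window before the final switch-over to the target file service.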
-- If you're an existing Azure ExpressRoute customer, view your [circuit usage](../../../../expressroute/expressroute-monitoring-metrics-alerts.md#circuits-metrics) in the Azure portal.-- Contact your ISP and request reports to show your existing daily and monthly utilization.-- There are several tools that can measure utilization by monitoring your network traffic at the router/switch level:- - [SolarWinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS) - - [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring) - - [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/https://docsupdatetracker.net/index.html) - - [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring) +Komprise assesses the network and storage performance, combined, to identify any connectivity issues between your datacenter and Azure storage. + +The Komprise Assessment of Customer Environment (ACE) is easy to deploy and run. The tool simulates a series of data movement scenarios between on-premises source NAS shares and destination Azure NAS storage services like Azure Files and Azure NetApp Files. It performs a set of reading, writing and checksum operations collecting overall performance numbers. The results can highlight potential performance losses to investigate. This list details some tools and services to isolate issues. 
-## Migration planning guide +- [SolarWinds Bandwidth Analyzer Pack](https://www.solarwinds.com/network-bandwidth-analyzer-pack?CMP=ORG-BLG-DNS) +- [Paessler PRTG](https://www.paessler.com/bandwidth_monitoring) +- [Cisco Network Assistant](https://www.cisco.com/c/en/us/products/cloud-systems-management/network-assistant/index.html) +- [WhatsUp Gold](https://www.whatsupgold.com/network-traffic-monitoring) -Komprise is simple to set up and enables running multiple migrations simultaneously in three steps: +If you're using a public network connection, consider changing to a private VPN or contracting with an Azure ExpressRoute service provider. Making this change can improve security and performance, and provide greater opportunity to identify and resolve any connectivity issues. -1. Analyze your data to identify files and objects to migrate or archive, -1. Define policies to migrate, move, or copy unstructured data to Azure Storage, -1. Activate policies that automatically move your data. +To learn more about ExpressRoute: +- [What is Azure ExpressRoute?](/azure/expressroute/expressroute-introduction) +- [ExpressRoute connectivity models](/azure/expressroute/expressroute-connectivity-models) +- [Extend an on-premises network using ExpressRoute](/azure/architecture/reference-architectures/hybrid-networking/expressroute) -The first step is critical in finding and prioritizing the right data to migrate. 
Komprise analysis provides: +Other performance items to investigate with secure networks: +- If you're an existing Azure ExpressRoute customer, review [circuit usage](/azure/expressroute/expressroute-monitoring-metrics-alerts#circuits-metrics) in the Azure portal +- Work with your ISP and request reports showing existing daily and monthly utilization -- Information on access time to identify:- - Less frequently accessed files that you can cache on-premises or store on fast file service - - Cold data you can archive to blob storage -- Information on top users, groups, or shares to determine the order of the migration and the most impacted group within the organization to assess business impact-- Number of files, or capacity per file type to determine type of stored files and if there are any possibilities to clean up the content. Cleaning up will reduce the migration effort, and reduce the cost of the target storage. Similar analytics is available for object data.-- Number of files, or capacity per file size to determine the duration of migration. Large number of small files will take longer to migrate than small number of large files. Similar analytics is available for object data.-- Cost of objects by storage tier to determine if cold data is incorrectly placed in expensive tiers, or hot data is incorrectly placed in cheaper tiers with high access costs. Right placing data based on access patterns enables optimizing overall cloud storage costs.+## Intelligent data management architecture - :::image type="content" source="./media/komprise-quick-start-guide/komprise-analyze-1.png" alt-text="Analysis by file type and access time"::: +Komprise provides a highly scalable infrastructure to meet every need. Begin assessing your environment with one data Observer, then rapidly scale up and out to move terabytes to petabytes of data with more data movers. 
+Example Komprise architecture overview - :::image type="content" source="./media/komprise-quick-start-guide/komprise-analyze-shares.png" alt-text="Example of share analysis"::: -- Custom query capability filter to filter exact set of files and objects for your specific needs+Komprise software is easy to set up in virtual environments for complete resource flexibility. For optimum performance, flexibility, and cost control, Komprise data managers (Observers) and data movers (Proxies) can be deployed on-premises or in the cloud to fit your unique requirements. +- Director - The administration console for the Komprise Grid. It's used to configure the environment, monitor activities, view reports and graphs, and set policies. +- Observers - Komprise data managers analyze storage systems, summarize reports, communicate with the Director, manage migrations, and handle data movement. +- Proxies - These scalable data movers simplify and accelerate SMB/CIFS data flow. Proxy data movers can easily scale to meet the performance requirements of a growing environment or tight timeline. - :::image type="content" source="./media/komprise-quick-start-guide/komprise-analyze-custom.png" alt-text="Analysis for custom query"::: -## Deployment guide +## Getting started with Komprise +1. Contact Komprise, and meet the local team who will set up your own Komprise Director console and assist with a preinstallation call and installation. With preparation, installation should be ~30 minutes from power-up to the first analysis results. + Sign up at [https://www.komprise.com/azure-migration](https://www.komprise.com/azure-migration) +2. After logging in to the Director, the wizard Install page will provide links to download the Komprise Observer virtual appliance. Power up the Observer VM and configure it with a static IP and general network and domain information. The last step in the setup script is to sign in to the Director to establish communication. 
-Before deploying Komprise, the target service must be deployed. You can learn more here: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-komprise-download-page.png" alt-text="Screenshot of the Komprise download page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-komprise-download-page.png"::: -- How to create [Azure File Share](../../../files/storage-how-to-create-file-share.md)-- How to create an [SMB volume](../../../../azure-netapp-files/azure-netapp-files-create-volumes-smb.md) or [NFS export](../../../../azure-netapp-files/azure-netapp-files-create-volumes.md) in Azure NetApp Files+3. Add shares for analysis on the Specify Shares page. Use Discover shares to identify a NAS system and automatically import all share information. + - Enter File System Information: + - A platform for the source NAS + - Hostname or IP address + - Display Name + - Credentials (for SMB shares) -The Komprise Grid is deployed in a virtual environment (Hyper-V, VMware, KVM) for speed, scalability, and resilience. Alternatively, you may set up the environment in your Azure subscription using [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management). + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-enter-credentials.png" alt-text="Screenshot of the dialog box to enter credentials" lightbox="./media/komprise-quick-start-guide-v2/screenshot-enter-credentials.png"::: -1. Open the Azure portal, and search for **storage accounts**. + - Repeat these steps to add other source and destination systems. From Menu choose Shares > Sources > Add File Server + - Once a File Server is added, drill down to the share level and Enable share to start an analysis. 
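When the add-share steps above are repeated across several NAS systems, it can help to keep the share inventory in a simple structured form before entering it into the console. A minimal, hypothetical bookkeeping sketch — the field names here are illustrative only, not part of the Komprise console or any Komprise API:

```python
# Hypothetical inventory records for shares to be added for analysis.
# Field names are illustrative assumptions, not a Komprise schema.
from dataclasses import dataclass

@dataclass
class NasShare:
    platform: str            # source NAS platform, e.g. "NetApp"
    host: str                # hostname or IP address
    display_name: str
    protocol: str = "SMB"    # SMB shares also need credentials
    enabled_for_analysis: bool = False

def enable_all(shares):
    """Mark every discovered share as enabled for analysis."""
    for share in shares:
        share.enabled_for_analysis = True
    return shares

inventory = enable_all([
    NasShare("NetApp", "10.0.0.5", "engineering"),
    NasShare("Isilon", "nas01.corp.local", "finance", protocol="NFS"),
])
```

A flat record like this also makes it easy to track which sources still need credentials entered, and which shares have been enabled for analysis so far.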
See the Plan page for analysis results - :::image type="content" source="./media/komprise-quick-start-guide/azure-locate-storage-account.png" alt-text="Shows where you've typed storage in the search box of the Azure portal."::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-plan-page.png" alt-text="Screenshot of the Komprise Plan page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-plan-page.png"::: - You can also click on the default **Storage accounts** icon. + - Pause to analyze the newly added shares, reviewing the Plan page, Usage graphs, and Analysis Activities results to uncover any issues to address, and to size and select the appropriate Azure Storage services. See the next section, Getting Started with Azure, to create the destination Azure storage services. + - Use the Komprise ACE tool to identify and resolve any infrastructure network and storage performance issues before engaging Komprise migration engines. Once everything looks good, continue to the next step with adding Azure Storage Services as destination sources for Komprise Migration. + - Add Azure Files as a migration destination and configure it on the Sources tab, not the Targets tab. Target systems are for Komprise Plan operations like seamless tiering with Komprise Transparent Movement Technology™ (TMT) and Deep Analytics Actions. - :::image type="content" source="./media/komprise-quick-start-guide/azure-portal.png" alt-text="Shows adding a storage account in the Azure portal."::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-server-analysis.png" alt-text="Screenshot of the Add Server to Sources page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-server-analysis.png"::: -2. Select **Create** to add an account: - 1. Select existing resource group or **Create new** - 2. Provide a unique name for your storage account - 3. Choose the region - 4. 
Select **Standard** or **Premium** performance, depending on your needs. If you select **Premium**, select **File shares** under **Premium account type**. - 5. Choose the **[Redundancy](../../../common/storage-redundancy.md)** that meets your data protection requirements - - :::image type="content" source="./media/komprise-quick-start-guide/azure-account-create-1.png" alt-text="Shows storage account settings in the portal."::: + Example of adding Azure Files as a migration destination on the Sources tab: -3. Next, we recommend the default settings from the **Advanced** screen. If you are migrating to Azure Files, we recommend enabling **Large file shares** if available. + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-server-destination.png" alt-text="Screenshot of the Add Destination to Sources page" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-server-destination.png"::: -4. Keep the default networking options for now and move on to **Data protection**. +## Getting started with Azure +Microsoft offers a framework to get you started with Azure. The [Cloud Adoption Framework](/azure/architecture/cloud-adoption/) (CAF) is a detailed approach to enterprise digital transformation and a comprehensive guide to planning a production-grade cloud adoption. The CAF includes a step-by-step [Azure setup guide](/azure/cloud-adoption-framework/ready/azure-setup-guide/) to help you get up and running quickly and securely. You can find an interactive version in the [Azure portal](https://portal.azure.com/?feature.quickstart=true#blade/Microsoft_Azure_Resources/QuickstartCenterBlade). You'll find sample architectures, specific best practices for deploying applications, and free training resources to put you on the path to Azure expertise. 
You can choose to enable soft delete, which allows you to recover an accidentally deleted data within the defined retention period. Soft delete offers protection against accidental or malicious deletion. +Before starting your project, the target service must be deployed. You can learn more here: +- How to create [Azure File Share](/azure/storage/files/storage-how-to-create-file-share) +- How to create an [SMB volume](/azure/azure-netapp-files/azure-netapp-files-create-volumes-smb) or [NFS export](/azure/azure-netapp-files/azure-netapp-files-create-volumes) in Azure NetApp Files - :::image type="content" source="./media/komprise-quick-start-guide/azure-account-create-3.png" alt-text="Shows the Data Protection settings in the portal."::: +The Komprise Grid is deployed in a virtual environment (Hyper-V, VMware, KVM) for speed, scalability, and resilience. Alternatively, you may set up the environment in your Azure subscription using [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management). -5. Add tags for organization if you use tagging and **Create** your account. - -6. Two quick steps are all that are now required before you can add the account to your Komprise environment. Navigate to the account you created in the Azure portal and select File shares under the File service menu. Add a File share and choose a meaningful name. Then, navigate to the Access keys item under Settings and copy the Storage account name and one of the two access keys. If the keys are not showing, click on the **Show keys**. +1. 
Open the Azure portal and search for storage accounts - :::image type="content" source="./media/komprise-quick-start-guide/azure-access-key.png" alt-text="Shows access key settings in the portal."::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-portal-search.png" alt-text="Screenshot of the Azure Portal Search Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-portal-search.png"::: -7. Navigate to the **Properties** of the Azure File share. Write down the URL address, it will be required to add the Azure connection into the Komprise target file share: + You can also click on the default Storage accounts icon - :::image type="content" source="./media/komprise-quick-start-guide/azure-files-endpoint.png" alt-text="Find Azure files endpoint."::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-storage-accounts.png" alt-text="Screenshot of the Azure Storage Account Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-storage-accounts.png"::: -8. (_Optional_) You can add extra layers of security to your deployment. - - 1. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](../../../common/authorization-resource-provider.md#built-in-roles-for-management-operations). - - 2. Restrict access to the account to specific network segments with [storage firewall settings](../../../common/storage-network-security.md). Configure firewall settings to prevent access from outside of your corporate network. +2. Select Create to add an account: + a. Select an existing resource group or Create New. + b. Provide a unique name for your storage account. + c. Choose the region. + d. Select Standard or Premium performance, depending on your needs. If you select Premium, select File shares under Premium account type. + e. 
Choose the [Redundancy](/azure/storage/common/storage-redundancy) that meets your data protection requirements - :::image type="content" source="./media/komprise-quick-start-guide/azure-storage-firewall.png" alt-text="Shows storage firewall settings in the portal."::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account.png" alt-text="Screenshot of the Azure Create Storage Account Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account.png"::: - 3. Set a [delete lock](../../../../azure-resource-manager/management/lock-resources.md) on the account to prevent accidental deletion of the storage account. +3. Next, consider keeping the recommended default settings from the Advanced screen. If you're migrating to Azure Files, it's recommended to enable large file shares if available - :::image type="content" source="./media/komprise-quick-start-guide/azure-resource-lock.png" alt-text="Shows setting a delete lock in the portal."::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-advanced.png" alt-text="Screenshot of the Azure Create Storage Account Advanced Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-advanced.png"::: - 4. Configure extra [security best practices](../../../blobs/security-recommendations.md). +4. Keep the default networking options for now and move on to data protection. You can choose to enable soft delete, which allows you to recover accidentally deleted data within the defined retention period. Soft delete offers protection against accidental or malicious deletion. 
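Soft delete works as a simple retention window: a deleted item remains recoverable until the configured retention period (between 1 and 365 days for Azure file shares) elapses, after which it is permanently removed. A small sketch of that recoverability arithmetic:

```python
# Soft-delete recoverability window. The 1-365 day range applies to
# Azure file share soft delete; dates here are illustrative examples.
from datetime import date, timedelta

def recoverable_until(deleted_on: date, retention_days: int) -> date:
    """Last day a soft-deleted item can still be restored."""
    return deleted_on + timedelta(days=retention_days)

def is_recoverable(deleted_on: date, retention_days: int, today: date) -> bool:
    return today <= recoverable_until(deleted_on, retention_days)

# With a 14-day retention period, a share deleted on August 1
# is still restorable on August 10, but not on September 1.
assert is_recoverable(date(2023, 8, 1), 14, today=date(2023, 8, 10))
assert not is_recoverable(date(2023, 8, 1), 14, today=date(2023, 9, 1))
```

Choosing a longer retention period widens this safety net at the cost of retaining deleted data (and its storage charges) for longer.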
-### Deployment instructions for managing file data + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-data-protection.png" alt-text="Screenshot of the Azure Create Storage Account Data Protection Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-create-storage-account-data-protection.png"::: -1. **Download** the Komprise Observer virtual appliance from the Director, deploy it to your hypervisor and configure it with the network and domain. Director is provided as a cloud service managed by Komprise. Information needed to access Director is sent with the welcome email once you purchase the solution. +5. Add tags for organizational purposes if you use tagging, and create your account - :::image type="content" source="./media/komprise-quick-start-guide/komprise-setup-1.png" alt-text="Download appropriate image for Komprise Observer from Director"::: +6. Two quick steps are all that are now required before you can add the account to your Komprise environment. Navigate to the account you created in the Azure portal and select File shares under the File Service menu. Add a file share, providing a meaningful name. Then, navigate to the Access keys item under Settings and copy the Storage account name and one of the two access keys. If the keys aren't showing, select Show keys -1. To add the shares to analyze and migrate, you have two options: - 1. **Discover** all the shares in your storage environment by entering: - - Platform for the source NAS - - Hostname or IP address - - Display name - - Credentials (for SMB shares) + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-manage-access-keys.png" alt-text="Screenshot of the Manage Access Keys dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-manage-access-keys.png"::: - :::image type="content" source="./media/komprise-quick-start-guide/komprise-setup-2.png" alt-text="Specify NAS system to discover"::: +7. 
Navigate to the Properties of the Azure File share. Write down the URL address, which is required to add the Azure connection into the Komprise target file share - 1. **Specify** a file share by entering: - - Storage information - - Protocol - - Path - - Display Name - - Credentials (for SMB shares) - - :::image type="content" source="./media/komprise-quick-start-guide/komprise-setup-3.png" alt-text="Specify NAS solutions to discover"::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-azure-file-share-properties.png" alt-text="Screenshot of Azure File Share Properties dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-azure-file-share-properties.png"::: - This step must be repeated to add other source and destination shares. To add Azure Files as a destination, you need to provide the Azure storage account and file share details: +8. (Optional) You can add extra layers of security to your deployment + + a. Configure role-based access to limit who can make changes to your storage account. For more information, see [Built-in roles for management operations](/azure/storage/common/authorization-resource-provider#built-in-roles-for-management-operations) + + b. Restrict access to the account to specific network segments with [storage firewall settings](/azure/storage/common/storage-network-security). Configure firewall settings to prevent access from outside of your corporate network - :::image type="content" source="./media/komprise-quick-start-guide/komprise-azure-files-1.png" alt-text="Select Azure Files as a target service"::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-network-security.png" alt-text="Screenshot of Azure Network Security dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-network-security.png"::: - :::image type="content" source="./media/komprise-quick-start-guide/komprise-azure-files-2.png" alt-text="Enter details for Azure Files"::: + c. 
Set a [delete lock](/azure/azure-resource-manager/management/lock-resources) on the account to prevent accidental deletion of the storage account. + + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-delete-lock.png" alt-text="Screenshot of Azure Delete Lock dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-delete-lock.png"::: -### Deployment instructions for managing object data + d. Review this document for other [security best practices](/azure/storage/blobs/security-recommendations) -Managing object provides different experience. The Director and Observer are provided as a cloud services, managed by Komprise. If you only need to analyze and archive data in Azure Blob Storage, no further deployment is required. If you need to perform migrations into Azure Blob Storage, get the Komprise Observer virtual appliance sent with the welcome email, and deploy it in a Linux virtual machine in your Azure cloud infrastructure. After deploying, follow the steps on the Komprise Director. -1. Navigate to **Data Stores** and **Add New Object Store**. Select **Microsoft Azure** as the provider. ++## Migration guide +### Organizing the migration +Simplify migration planning tasks by organizing them into a few operational classes. Review the number of files, capacity per file size, file ages, and the time required to complete the initial analysis to identify where to begin. Starting with the easy migrations and building to the complex ones helps build experience and confidence, and confirms the cutover processes before tackling the harder migrations. These steps can be summarized as: +- Tiering type: data that can move at any time. Since the data is typically cold data that no one is accessing, it can be sent to Azure Blob Archive for long-term storage. Data included could be an entire share, or part of a share. With Transparent Tiering, Komprise leaves a symbolic link so end users never lose access to their files and data. 
+- Easy type: fairly static shares with few users that move in one or two iterations. Minimal migration time and short cutover time required. +- Moderate type: little to moderately active individual shares of average file size (~1 MB). Should need minimal migration time; may require scheduling a specific cutover window. +- Active type: shares whose data changes daily, which can have a significant effect on data verification, operations, costs, Observer and Proxy system placement (on-premises or in the cloud), and final cutover time. It may require multiple migration iterations and scheduling longer final cutover times. +- Complex type: represents moving shares with various dependencies, from multiple shares migrating in unison, to shares with many small files, or shares with many empty directories. Complex shares may require advance coordination, possibly several iterations, and longer cutover windows depending on the situation. ++### Migration administration +Komprise provides live migration, where end users and applications have continuous data access while the data is moving. With Komprise elastic migrations, multiple migration activities automatically use the full architecture for maximum parallelization. The Director console simplifies the administration of all the migration tasks with one interface. +Komprise's migration process automates moving directories, files, and links from a source to a destination. At each step, data integrity is checked. All attributes, permissions, and access controls from the source are applied. In an object migration, objects, prefixes, and metadata of each object are migrated too. +To configure and run a migration, follow these steps: +1. Once you have completed your analysis and confirmed that the storage and network performance are optimally configured, you're ready to start with the Archive and Easy migration types. +2. 
Navigate to Migrate and select Add Migration - :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-object-store.png" alt-text="Screenshot that shows adding new object store"::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-migration-dialog.png" alt-text="Screenshot of Komprise Add Migration Task" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-migration-dialog.png"::: -1. Add shares to analyze and migrate. These steps must be repeated for every source, and target share, or container. There are two options to perform the same action: - 1. **Discover** all the containers by entering: - - Storage account name - - Primary access key - - Display name - - :::image type="content" source="./media/komprise-quick-start-guide/komprise-discover-storage-account.png" alt-text="Screenshot that shows how to discover containers in storage account"::: +3. Add a migration task by selecting the proper source and destination shares. Provide a migration name. Once configured, select Start Migration. This step is slightly different for file and object data migrations as you're selecting data stores instead of shares. Review the following steps. +You may also choose to verify each data transfer using MD5 checksums. Depending on the position of Komprise data movement components, egress costs may occur when cloud objects are retrieved to calculate the MD5 values. - Required information can be found in **[Azure Portal](https://portal.azure.com/)** by navigating to the **Access keys** item under **Settings** for the storage account. If the keys are not showing, click on the **Show keys**. + File Migration - 1. 
**Specify** a container by entering: - - Container name - - Storage account name - - Primary access key - - Display name + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-file-migration-dialog.png" alt-text="Screenshot of Komprise Add File Migration Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-file-migration-dialog.png"::: - :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-container.png" alt-text="Screenshot that shows how to add containers in storage account"::: + File migration provides options to preserve access time and SMB ACLs on the destination. This option depends on the selected source and destination file service and protocol. - Container name represents the target container for the migration and needs to be created before migration. Other required information can be found in **[Azure Portal](https://portal.azure.com/)** by navigating to the **Access keys** item under **Settings** for the storage account. If the keys are not showing, click on the **Show keys**. + Object Migration -## Migration guide + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-object-migration-dialog.png" alt-text="Screenshot of Komprise Add Object Migration Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-object-migration-dialog.png"::: -Komprise provides live migration, where end users and applications are not disrupted and can continue to access data during the migration. The migration process automates migrating directories, files, and links from a source to a destination. At each step data integrity is checked. All attributes, permissions, and access controls from the source are applied. In an object migration, objects, prefixes, and metadata of each object are migrated. + Object migration provides options to choose the destination Azure storage tier (Hot, Cool, Archive). -To configure and run a migration, follow these steps: +4. 
Once the migration has started, you can go to Migrate to monitor the progress. -1. Log into your Komprise console. Information needed to access the console is sent with the welcome email once you purchase the solution. -1. Navigate to **Migrate** and click on **Add Migration**. + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-migration-management-dialog.png" alt-text="Screenshot of Komprise Migration Management Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-migration-management-dialog.png"::: - :::image type="content" source="./media/komprise-quick-start-guide/komprise-new-migrate.png" alt-text="Add new migration job"::: +5. Once all changes have been migrated, run one final migration by clicking on Actions and selecting Start final iteration. Before final migration, we recommend stopping access to source file shares or moving them to read-only mode (for users and applications). This step makes sure no changes happen on the source. -1. Add migration task by selecting proper source and destination share. Provide a migration name. Once configured, click on **Start Migration**. This step is slightly different for file and object data migrations. - - 1. File migration + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-migration-overview.png" alt-text="Screenshot of Komprise Migration Management Overview" lightbox="./media/komprise-quick-start-guide-v2/screenshot-migration-overview.png"::: ++ Once the final migration finishes, transition all users and applications to the destination share. Switching over to the new file service usually requires changing the configuration of DNS servers and DFS servers or changing the mount points to the new destination. ++6. As the last step, mark the migration completed. ++7. There is a full migration audit folder containing all the information about files moved and deleted, attributes, and errors encountered for every iteration. 
The data is written to the ".komprise-audit" folder on the destination, or in a specified system log folder configured in System | Settings of the console. - :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-migration.png" alt-text="Specify details for the migration job"::: - File migration provides options to preserve access time and SMB ACLs on the destination. This option depends on the selected source and destination file service and protocol. - 1. Object migration +## Deployment instructions for migrating object data +Migrating Object storage systems to Azure Blob is an easy process as well. The Director and Observer are provisioned by Komprise as cloud services. Similar to on-premises deployment, you can analyze and understand the data on the source system, identify any issues, and then efficiently move data to Azure Blob Storage. +The flexibility of the Komprise architecture allows deploying the Observers where they provide the highest performance while keeping data movement costs/charges low. +To get started, sign in to the Director and do the following: +1. Navigate to Data Stores and Add Object Store. Here you can choose to add systems by Add Account or by Add Bucket. - :::image type="content" source="./media/komprise-quick-start-guide/komprise-add-object-migration.png" alt-text="Screenshot that shows adding object migration"::: + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-object-store.png" alt-text="Screenshot of Komprise Add Object Store Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-object-store.png"::: - Object migration provides options to choose the destination Azure storage tier (Hot, Cool, Archive). You may also choose to verify each data transfer using MD5 checksum. Egress costs can occur with MD5 checksums as cloud objects must be retrieved to calculate the MD5 checksum. +2. Continue adding source data stores +3. Enable buckets for Analysis. 
Review the data stores to build a migration plan. +4. Add Azure Blob Destination data stores, either by Account or Bucket. -2. Once the migration started, you can go to **Migrate** to monitor the progress. + :::image type="content" source="./media/komprise-quick-start-guide-v2/screenshot-add-object-destination.png" alt-text="Screenshot of Komprise Add Object Destination Dialog" lightbox="./media/komprise-quick-start-guide-v2/screenshot-add-object-destination.png"::: - :::image type="content" source="./media/komprise-quick-start-guide/komprise-monitor-migrations.png" alt-text="Monitor all migration jobs"::: + With Add Account, discover all the containers by entering: + - Storage account name + - Primary access key + - Display Name -3. Once all changes have been migrated, run one final migration by clicking on **Actions** and selecting **Start final iteration**. Before final migration, we recommend stopping access to source file shares or moving them to read-only mode (for users and applications). This step will make sure no changes happen on the source. + Required information can be found in the [Azure portal](https://portal.azure.com/) by navigating to the Access keys item under Settings for the storage account. If the keys aren't showing, select Show keys. - :::image type="content" source="./media/komprise-quick-start-guide/komprise-final-migration.png" alt-text="Do one last migration before switching over"::: + Or, specify a container by entering: + - Container Name + - Storage Account Name + - Primary Access Key + - Display Name - Once the final migration finishes, transition all users and applications to the destination share. Switch over to the new file service usually requires changing the configuration of DNS servers, DFS servers, or changing the mount points to the new destination. +The container name represents the destination container for the migration and needs to be created before migration. 
Other required information can be found in the [Azure portal](https://portal.azure.com/) by navigating to the Access keys item under Settings for the storage account. If the keys aren't showing, select Show keys. -4. As a last step, mark the migration completed. +5. Migrating Object Data Stores uses the same iterative process to move data as the NAS migration steps described previously. -## Support -To open a case with Komprise, sign in to the [Komprise support site](https://komprise.freshdesk.com/) +## Migration API +Komprise offers full migration API support, so everything described in this document can be controlled via scripts. Komprise has an example script that customers use to move large numbers of shares effectively. Consult your Komprise team if you require the API. -## Marketplace +### Maximize your data value with Azure and Komprise +Komprise helps you plan and execute your file and object data migrations to Azure. Once your migrations are complete, you can use the full Komprise Intelligent Data Management service to manage the data lifecycle, seamlessly tier data from on-premises to Azure, and to search, find, and execute new data workflows. -Get Komprise listing on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview). ## Next steps -Various resources are available to learn more: +### Marketplace ++Get the Komprise Data Migration listing on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview). +Get the Komprise full suite listing on [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=Overview). 
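Since migrations can be driven through the API, large batches of shares are usually handled with a script that generates one migration task per source/destination pair. The sketch below only builds task payloads in memory; the endpoint path and field names are placeholders invented for illustration, not the actual Komprise API — consult your Komprise team for the real interface:

```python
# Build migration-task payloads for a batch of share pairs.
# Endpoint and field names are PLACEHOLDERS, not the real Komprise API.

API_BASE = "https://director.example.com/api/v1"   # hypothetical Director URL

def build_migration_tasks(pairs, verify_md5=False):
    """One task payload per (source, destination) share pair."""
    tasks = []
    for source, destination in pairs:
        tasks.append({
            "endpoint": f"{API_BASE}/migrations",
            "name": f"migrate-{source.split('/')[-1]}",
            "source": source,
            "destination": destination,
            # MD5 verification may incur egress costs for cloud objects.
            "verify_md5": verify_md5,
        })
    return tasks

tasks = build_migration_tasks(
    [("nas01:/projects", "azurefiles:/projects"),
     ("nas01:/archive", "azurefiles:/archive")],
    verify_md5=True,
)
```

Generating payloads separately from submitting them makes the batch easy to review (and dry-run) before any data movement is requested.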
++### Education -- [Storage migration overview](../../../common/storage-migration-overview.md)-- Features supported by Komprise Intelligent Data Management in [migration tools comparison matrix](./migration-tools-comparison.md)+Various resources are available to learn more: +- [Storage migration overview](/azure/storage/common/storage-migration-overview) +- Features supported by Komprise Intelligent Data Management in [migration tools comparison matrix](/azure/storage/solution-integration/validated-partners/data-management/migration-tools-comparison) - [Komprise compatibility matrix](https://www.komprise.com/partners/microsoft-azure/)+ |
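The Komprise row above notes that migrations can also be driven entirely through Komprise's migration API, for example to queue large numbers of shares from a script. The sketch below only illustrates the general shape of such a bulk script: the job fields are hypothetical placeholders, not the actual Komprise API schema, and the real endpoint and payload format should come from your Komprise team.

```python
# Hypothetical sketch of a bulk-migration script. The field names below are
# ILLUSTRATIVE ONLY -- they are not the real Komprise API schema. Only the
# payload-building logic is shown, so no network access is needed.

def build_migration_jobs(shares, account, container):
    """Create one job definition per source share, mirroring the iterative
    copy model described above (initial copy, then incremental iterations)."""
    return [
        {
            "source_share": share,               # e.g. an SMB/NFS share path
            "destination_account": account,      # Azure storage account name
            "destination_container": container,  # must exist before migration
            "mode": "iterative",
        }
        for share in shares
    ]

jobs = build_migration_jobs(["//nas01/eng", "//nas01/hr"], "mystorageacct", "migrated-data")
print(len(jobs))  # 2
```

In a real script, each job definition would then be submitted to the migration API and polled until its final iteration completes.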
storage | Migration Tools Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md | Title: Azure Storage migration tools comparison - Unstructured data description: Basic functionality and comparison between tools used for migration of unstructured data--++ Previously updated : 02/21/2022 Last updated : 08/25/2023 +> [!TIP] +> Azure File Sync can be utilized for migrating data to Azure Files, even if you don't intend to use a hybrid solution for on-premises caching or syncing. This migration process is efficient and causes no downtime. To use Azure File Sync as a migration tool, [simply deploy it](../../../file-sync/file-sync-deployment-guide.md) and, after the migration is finished, [remove the server endpoint](../../../file-sync/file-sync-server-endpoint-delete.md). Ideally Azure File Sync would be used long-term, while Storage Mover and AzCopy are intended for migration-focused activities. 
+ ## Supported Azure services -| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | -| |--|--|||| -| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | -| **Support provided by** | Microsoft | [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| -| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes | -| **Azure NetApp Files support** | No | Yes | Yes | Yes | Yes | -| **Azure Blob Hot / Cool support** | No | Yes (via NFS ) | Yes | Yes | Yes | -| **Azure Blob Archive tier support** | No | No | No | Yes | Yes | -| **Azure Data Lake Storage support** | No | No | Yes | Yes | No | -| **Supported Sources** | Windows Server 2012 R2 and up | NAS & cloud file systems | Any NAS, and S3 | Any NAS, Cloud File Storage, or S3 | Any NAS, S3, PFS, and Swift | +| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) | +| |--|--|--|||| +| 
**Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | +| **Support provided by** | Microsoft | Microsoft | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| [Datadobi](https://support.datadobi.com/s/)<sub>1</sub> | +| **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes | Yes | +| **Azure NetApp Files support** | No | No | Yes | Yes | Yes | Yes | +| **Azure Blob Hot / Cool support** | Yes | Yes | Yes | Yes | Yes | Yes (via NFS) | +| **Azure Blob Archive tier support** | Yes | Yes | No | Yes | Yes | No | +| **Azure Data Lake Storage support** | Yes | No | Yes | Yes | No | No | +| **Supported Sources** | Any NAS, Azure Blob, Azure Files, Google Cloud Storage, and AWS S3 | NAS & cloud file systems | Any NAS, and S3 | Any NAS, Cloud File Storage, or S3 | Any NAS, S3, PFS, and Swift | NAS & cloud file systems | ## Supported protocols (source / destination) -| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | -| |--|--|||| -| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | 
[DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)| -| **SMB 2.1** | Yes | Yes | Yes | Yes | Yes | -| **SMB 3.0** | Yes | Yes | Yes | Yes | Yes | -| **SMB 3.1** | Yes | Yes | Yes | Yes | Yes | -| **NFS v3** | No | Yes | Yes | Yes | Yes | -| **NFS v4.1** | No | Yes | No | Yes | Yes | -| **Blob REST API** | No | No | Yes | Yes | Yes | -| **S3** | No | Yes | Yes | Yes | Yes | +| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) | +| |--|--|--|||| +| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | +| **SMB 2.1** | Source | Source | Yes | Yes | Yes | Yes | +| **SMB 3.0** | Source | Source | Yes | Yes | Yes | Yes | +| **SMB 3.1** | Source/Destination (Azure Files SMB) | Source/Destination (Azure Files SMB) | Yes | Yes | Yes | Yes | +| **NFS v3** | 
Source/Destination (Azure Blob NFSv3) | Source/Destination (Azure Blob NFSv3) | Yes | Yes | Yes | Yes | +| **NFS v4.1** | Source | Source | Yes | No | Yes | Yes | +| **Blob REST API** | Yes | Destination | Yes | Yes | Yes | No | +| **S3** | Source | No | Yes | Yes | Yes | Yes | +| **Google Cloud Storage** | Source | No | Yes | Yes | Yes | Yes | ## Extended features -| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | -| |--|--|||| -| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)| -| **UID / SID remapping** | No | Yes | Yes | No | No | -| **Protocol ACL remapping** | No | No | No | No | No | -| **DFS Support** | Yes | Yes | Yes | Yes | No | -| **Throttling support** | Yes | Yes | Yes | Yes | Yes | -| **File pattern exclusions** | No | Yes | Yes | Yes | Yes | -| **Support for selective file attributes** | Yes | Yes | Yes | Yes | Yes | -| **Delete propagations** | Yes | Yes | Yes | Yes | Yes | -| **Follow NTFS junctions** | No | Yes | No | Yes | Yes | -| **Override SMB Owner and Group Owner** | Yes | Yes | Yes | No | Yes | -| **Chain of custody reporting** | No | Yes | Yes | Yes | Yes | -| **Support for alternate data streams** | No | Yes | Yes | No | Yes | -| **Scheduling for migration** | No | Yes | Yes | Yes | Yes | -| **Preserving ACL** | Yes | Yes | Yes | Yes | Yes | -| **DACL 
support** | Yes | Yes | Yes | Yes | Yes | -| **SACL support** | Yes | Yes | Yes | No | Yes | -| **Preserving access time** | Yes | Yes | Yes | Yes | Yes | -| **Preserving modified time** | Yes | Yes | Yes | Yes | Yes | -| **Preserving creation time** | Yes | Yes | Yes | Yes | Yes | -| **Azure Data Box support** | Yes | Yes | Yes | No | Yes | -| **Migration of snapshots** | No | Manual | Yes | No | No | -| **Symbolic link support** | No | Yes | No | Yes | Yes | -| **Hard link support** | No | Migrated as separate files | Yes | Yes | Yes | -| **Support for open / locked files** | Yes | Yes | Yes | Yes | Yes | -| **Incremental migration** | Yes | Yes | Yes | Yes | Yes | -| **Switchover support** | No | Yes | Yes | No (manual only) | Yes | -| **[Other features](#other-features)** | [Link](#azure-file-sync)| [Link](#datadobi-dobimigrate) | [Link](#data-dynamics-data-mobility-and-migration) | [Link](#komprise-elastic-data-migration) | [Link](#atempo-miria) | +| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) | +| |--|--|--|||| +| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | +| **UID / SID remapping** | No | No | Yes | No | No | Yes | +| **Protocol ACL remapping** | No | No 
| No | No | No | No | +| **Azure Data Lake Storage Gen2** | Yes | No | Yes | Yes | No | Yes | +| **Throttling support** | Yes | No | Yes | Yes | Yes | Yes | +| **File pattern exclusions** | Yes | No | Yes | Yes | Yes | Yes | +| **Support for selective file attributes** | No | No | Yes | Yes | Yes | Yes | +| **Delete propagations** | No | No | Yes | Yes | Yes | Yes | +| **Follow NTFS junctions** | No | No | No | Yes | Yes | Yes | +| **Override SMB Owner and Group Owner** | No | No | Yes | No | Yes | Yes | +| **Chain of custody reporting** | No | No | Yes | Yes | Yes | Yes | +| **Support for alternate data streams** | No | No | Yes | No | Yes | Yes | +| **Scheduling for migration** | No | No | Yes | Yes | Yes | Yes | +| **Preserving ACL** | Yes | Yes | Yes | Yes | Yes | Yes | +| **DACL support** | Yes | Yes | Yes | Yes | Yes | Yes | +| **SACL support** | Yes | Yes | Yes | No | Yes | Yes | +| **Preserving access time** | Yes (Azure Files) | Yes | Yes | Yes | Yes | Yes | +| **Preserving modified time** | Yes (Azure Files) | Yes | Yes | Yes | Yes | Yes | +| **Preserving creation time** | Yes (Azure Files) | Yes | Yes | Yes | Yes | Yes | +| **Azure Data Box support** | Yes | No | Yes | No | Yes | Yes | +| **Migration of snapshots** | No | No | Yes | No | No | Manual | +| **Symbolic link support** | Yes | Yes | No | Yes | Yes | Yes | +| **Hard link support** | Migrated as separate files | Migrated as separate files | Yes | Yes | Yes | Migrated as separate files | +| **Support for open / locked files** | No | No | Yes | Yes | Yes | Yes | +| **Incremental migration** | Yes | No | Yes | Yes | Yes | Yes | +| **Switchover support** | No | No | Yes | No (manual only) | Yes | Yes | +| **[Other features](#other-features)** | [Link](#azcopy)| | [Link](#data-dynamics-data-mobility-and-migration) | [Link](#komprise-elastic-data-migration) | [Link](#atempo-miria) | [Link](#datadobi-dobimigrate) | ## Assessment and reporting -| | [Microsoft](https://www.microsoft.com/) | 
[Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | -| |--|--|||| -| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)| -| **Capacity** | No | Yes | Yes | Yes | Yes | -| **# of files / folders** | No | Yes | Yes | Yes | Yes | -| **Age distribution over time** | No | Yes | Yes | Yes | Yes | -| **Access time** | No | Yes | Yes | Yes | Yes | -| **Modified time** | No | Yes | Yes | Yes | Yes | -| **Creation time** | No | Yes | Yes | Yes | Yes | -| **Per file / object report status** | Partial | Yes | Yes | Yes | Yes | +| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) | +| |--|--|--|||| +| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | 
[DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | +| **Capacity** | No | Reporting | Yes | Yes | Yes | Yes | +| **# of files / folders** | Yes | Reporting | Yes | Yes | Yes | Yes | +| **Age distribution over time** | No | No | Yes | Yes | Yes | Yes | +| **Access time** | No | No | Yes | Yes | Yes | Yes | +| **Modified time** | No | No | Yes | Yes | Yes | Yes | +| **Creation time** | No | No | Yes | Yes | Yes | Yes | +| **Per file / object report status** | Yes | Reporting | Yes | Yes | Yes | Yes | ## Licensing -| | [Microsoft](https://www.microsoft.com/) | [Datadobi](https://www.datadobi.com) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | -| |--|--||| | -| **Solution name** | [Azure File Sync](../../../file-sync/file-sync-deployment-guide.md) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview ) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Atempo](https://www.atempo.com/support-en/contacting-support/)| -| **BYOL** | N / A | Yes | Yes | Yes | Yes | -| **Azure Commitment** | Yes | Yes | Yes | Yes | No | +| | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Datadobi](https://www.datadobi.com) | +| |--|--|--|||| +| **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Mover](/azure/storage-mover/) | [Data Mobility and 
Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [DobiMigrate](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/datadobi1602192408529.datadobi_license_purchase?tab=Overview) | +| **BYOL** | N / A | N / A | Yes | Yes | Yes | Yes | +| **Azure Commitment** | N / A | Yes | Yes | Yes | No | Yes | ## Other features -### Azure File Sync +### AzCopy -- Internal hash validation+- Multi-platform support +- Windows 32-bit / 64-bit +- Linux x86-64 and ARM64 +- macOS Intel and ARM64 +- Benchmarking [azcopy bench](/azure/storage/common/storage-ref-azcopy-bench) +- Supports block blobs, page blobs, and append blobs +- MD5 checks for downloads +- Customizable transfer rate to preserve bandwidth on the client +- Tagging -> [!TIP] -> Azure File Sync can be utilized for migrating data to Azure Files, even if you don't intend to use a hybrid solution for on-premises caching or syncing. This migration process is efficient and causes no downtime. To use Azure File Sync as a migration tool, [simply deploy it](../../../file-sync/file-sync-deployment-guide.md) and, after the migration is finished, [remove the server endpoint](../../../file-sync/file-sync-server-endpoint-delete.md). 
--### Datadobi DobiMigrate --- Migration pre checks-- Migration Planning-- Dry Run for cut over testing-- Detect and alert on target side user activity prior to cut over-- Policy driven migrations-- Scheduled copy iterations-- Configurable options for handling root directory security-- On-demand verification runs-- Data read back verification on source and destination-- Graphical, interactive error handling workflow-- Ability to restrict certain operations from propagating like deletes and updates-- Ability to preserve access time on the source (in addition to destination)-- Ability to execute rollback to source during migration switchover-- Ability to migrate selected SMB file attributes-- Ability to clean NTFS security descriptors-- Ability to override NFSv3 permissions and write new mode bits to target-- Ability to convert NFSv3 POSIX draft ACLS to NFSv4 ACLS-- SMB 1 (CIFS)-- Browser-based access-- REST API support for configuration, and migration management-- Support 24 x 7 x 365 ### Data Dynamics Data Mobility and Migration The following comparison matrix shows basic functionality of different tools tha - Petabyte-scale data movements - Hash validation +### Datadobi DobiMigrate ++- Migration pre checks +- Migration Planning +- Dry Run for cut over testing +- Detect and alert on target side user activity prior to cut over +- Policy driven migrations +- Scheduled copy iterations +- Configurable options for handling root directory security +- On-demand verification runs +- Data read back verification on source and destination +- Graphical, interactive error handling workflow +- Ability to restrict certain operations from propagating like deletes and updates +- Ability to preserve access time on the source (in addition to destination) +- Ability to execute rollback to source during migration switchover +- Ability to migrate selected SMB file attributes +- Ability to clean NTFS security descriptors +- Ability to override NFSv3 permissions and write new mode bits to 
target +- Ability to convert NFSv3 POSIX draft ACLS to NFSv4 ACLS +- SMB 1 (CIFS) +- Browser-based access +- REST API support for configuration, and migration management +- Support 24 x 7 x 365 + > [!NOTE]-> List was last verified on February, 21st 2022. +> List was last verified on August 24, 2023. ## See also |
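Among the AzCopy features listed in the comparison above is an MD5 check for downloads: the blob's stored `Content-MD5` property (a base64-encoded raw MD5 digest) is compared with a hash computed over the downloaded bytes. This is a minimal standard-library sketch of that check's logic, not AzCopy's actual implementation:

```python
import base64
import hashlib

def content_md5(data: bytes) -> str:
    """Base64-encoded raw MD5 digest, the form stored in a blob's Content-MD5 property."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

def verify_download(data: bytes, stored_content_md5: str) -> bool:
    """Compare a locally computed hash against the stored property,
    as a post-download integrity check would."""
    return content_md5(data) == stored_content_md5

payload = b"hello"
print(content_md5(payload))                            # XUFAKrxLKna5cZ2REBfFkg==
print(verify_download(payload, content_md5(payload)))  # True
```

Note that the property is the base64 of the 16 raw digest bytes, not of the hex string; mixing the two is a common source of false mismatches.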
storage | Assign Azure Role Data Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/assign-azure-role-data-access.md | Title: Assign an Azure role for access to table data description: Learn how to assign permissions for table data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. -+ Last updated 03/03/2022-+ |
storage | Authorize Access Azure Active Directory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/authorize-access-azure-active-directory.md | Title: Authorize access to tables using Active Directory description: Authorize access to Azure tables using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account. -+ Last updated 02/09/2023-+ |
storage | Scalability Targets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/scalability-targets.md | Title: Scalability and performance targets for Table storage description: Learn about scalability and performance targets for Table storage. -+ Last updated 03/09/2020-+ # Scalability and performance targets for Table storage |
storage | Table Storage Design For Modification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-for-modification.md | Title: Design Azure Table storage for data modification description: Design tables for data modification in Azure Table storage. Optimize insert, update, and delete operations. Ensure consistency in your stored entities. --++ Last updated 04/23/2018 |
storage | Table Storage Design For Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-for-query.md | Title: Design Azure Table storage for queries description: Design tables for queries in Azure Table storage. Choose an appropriate partition key, optimize queries, and sort data for the Table service. --++ Last updated 05/19/2023 |
storage | Table Storage Design Guidelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-guidelines.md | Title: Guidelines for Azure storage table design description: Understand guidelines for designing your Azure storage table service to support read and write operations efficiently. --++ Last updated 04/23/2018 |
storage | Table Storage Design Modeling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-modeling.md | Title: Modeling relationships in Azure Table storage design description: Understand the modeling process when designing your Azure Table storage solution. Read about one-to-many, one-to-one, and inheritance relationships. --++ Last updated 04/23/2018 |
storage | Table Storage Design Patterns | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design-patterns.md | Title: Azure storage table design patterns description: Review design patterns that are appropriate for use with Table service solutions in Azure. Address issues and trade-offs that are discussed in other articles. -+ Last updated 06/24/2021-+ ms.devlang: csharp |
storage | Table Storage Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-design.md | Title: Design scalable and performant tables in Azure Table storage. description: Learn to design scalable and performant tables in Azure Table storage. Review table partitions, Entity Group Transactions, and capacity and cost considerations. --++ Last updated 03/09/2020 |
storage | Table Storage How To Use Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-how-to-use-powershell.md | Title: Perform Azure Table storage operations with PowerShell description: Learn how to run common tasks such as creating, querying, deleting data from Azure Table storage account by using PowerShell.-+ Last updated 06/23/2022-+ |
storage | Table Storage Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-overview.md | Title: Introduction to Table storage - Object storage in Azure description: Store structured data in the cloud using Azure Table storage, a NoSQL data store. --++ Last updated 05/27/2021 |
storage | Table Storage Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/table-storage-quickstart-portal.md | Title: Create a table in the Azure portal description: Learn how to use the Azure portal to create a new table in Azure Table storage. -+ -+ Last updated 01/25/2023 |
storsimple | Storsimple Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storsimple/storsimple-overview.md | The following resources are available to help you migrate backup files or to cop Use the following steps to copy data to your environment and then decommission your StorSimple 8000 appliance. If your data has already been migrated to your own environment, you can proceed with decommissioning your appliance. -**Step 1. Copy backup files or live data to your own environment.** +**Step 1: Copy backup files or live data to your own environment.** - **Backup files.** If you have backup files, use the Azure StorSimple 8000 Series Copy Utility to migrate backup files to your environment. For more information, see [Copy Utility documentation](https://aka.ms/storsimple-copy-utility-docs). - **Live data.** If you have live data to copy, you can access and copy live data to your environment via iSCSI. -**Step 2. Decommission your device.** +**Step 2: Decommission your device.** After you complete your data migration, use the following steps to decommission the device. Before you decommission your device, make sure to copy all data from your appliance, using either local host copy operations or using the Utility. Decommission operations can't be undone. We recommend that you complete your dat The system reboots multiple times. You're notified when the reset has successfully completed. Depending on the system model, it can take 45-60 minutes for an 8100 device and 60-90 minutes for an 8600 to finish this process. -**Step 3. Shut down the device.** +**Step 3: Shut down the device.** This section explains how to shut down a running or a failed StorSimple device from a remote computer. A device is turned off after both the device controllers are shut down. A device shutdown is complete when the device is physically moved or is taken out of service. 
-**Step 3.1** - Use the following steps to identify and shut down the passive controller on your device. Perform this operation in Windows PowerShell for StorSimple. +**Step 3a:** Use the following steps to identify and shut down the passive controller on your device. Perform this operation in Windows PowerShell for StorSimple. 1. Access the device via the serial console or a telnet session from a remote computer. To connect to Controller 0 or Controller 1, follow these steps to use PuTTY to connect to the device serial console. This section explains how to shut down a running or a failed StorSimple device f This restarts the controller you're connected to. When you restart the active controller, it fails over to the passive controller before the restart. -**Step 3.2** - Repeat the previous step to shut down the active controller. +**Step 3b:** Repeat the previous step to shut down the active controller. -**Step 3.3** - You must now look at the back plane of the device. After the two controllers are shut down, the status LEDs on both the controllers should be blinking red. To turn off the device completely at this time, flip the power switches on both Power and Cooling Modules (PCMs) to the OFF position. This turns off the device. +**Step 3c:** You must now look at the back plane of the device. After the two controllers are shut down, the status LEDs on both the controllers should be blinking red. To turn off the device completely at this time, flip the power switches on both Power and Cooling Modules (PCMs) to the OFF position. This turns off the device. ## Create a support request |
stream-analytics | Capture Event Hub Data Parquet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md | -This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in Parquet format. You have the flexibility of specifying a time or size interval. +This article explains how to use the no code editor to automatically capture streaming data in Event Hubs in an Azure Data Lake Storage Gen2 account in the Parquet format. ## Prerequisites -- Your Azure Event Hubs and Azure Data Lake Storage Gen2 resources must be publicly accessible and can't be behind a firewall or secured in an Azure Virtual Network.-- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format.+- An Azure Event Hubs namespace with an event hub and an Azure Data Lake Storage Gen2 account with a container to store the captured data. These resources must be publicly accessible and can't be behind a firewall or secured in an Azure virtual network. ++ If you don't have an event hub, create one by following instructions from [Quickstart: Create an event hub](../event-hubs/event-hubs-create.md). ++ If you don't have a Data Lake Storage Gen2 account, create one by following instructions from [Create a storage account](../storage/blobs/create-data-lake-storage-account.md) +- The data in your Event Hubs must be serialized in either JSON, CSV, or Avro format. For testing purposes, select **Generate data (preview)** on the left menu, select **Stocks data** for dataset, and then select **Send**. ++ :::image type="content" source="./media/capture-event-hub-data-parquet/stocks-data.png" alt-text="Screenshot showing the Generate data page to generate sample stocks data." 
lightbox="./media/capture-event-hub-data-parquet/stocks-data.png"::: ## Configure a job to capture data Use the following steps to configure a Stream Analytics job to capture data in Azure Data Lake Storage Gen2. 1. In the Azure portal, navigate to your event hub. -1. Select **Features** > **Process Data**, and select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card. +1. On the left menu, select **Process Data** under **Features**. Then, select **Start** on the **Capture data to ADLS Gen2 in Parquet format** card. + :::image type="content" source="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" alt-text="Screenshot showing the Process Event Hubs data start cards." lightbox="./media/capture-event-hub-data-parquet/process-event-hub-data-cards.png" :::-1. Enter a **name** to identify your Stream Analytics job. Select **Create**. - :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" ::: -1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job will use to connect to Event Hubs. Then select **Connect**. +1. Enter a **name** for your Stream Analytics job, and then select **Create**. ++ :::image type="content" source="./media/capture-event-hub-data-parquet/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." ::: +1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job uses to connect to Event Hubs. Then select **Connect**. + :::image type="content" source="./media/capture-event-hub-data-parquet/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." 
lightbox="./media/capture-event-hub-data-parquet/event-hub-configuration.png" :::-1. When the connection is established successfully, you'll see: +1. When the connection is established successfully, you see: - Fields that are present in the input data. You can choose **Add field** or you can select the three dot symbol next to a field to optionally remove, rename, or change its name. - A live sample of incoming data in the **Data preview** table under the diagram view. It refreshes periodically. You can select **Pause streaming preview** to view a static view of the sample input. + :::image type="content" source="./media/capture-event-hub-data-parquet/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-parquet/edit-fields.png" ::: 1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration. 1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps: 1. Select the subscription, storage account name and container from the drop-down menu. 1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in. + 1. Select **Parquet** for **Serialization** format. + + :::image type="content" source="./media/capture-event-hub-data-parquet/job-top-settings.png" alt-text="Screenshot showing the Data Lake Storage Gen2 configuration page." lightbox="./media/capture-event-hub-data-parquet/job-top-settings.png"::: 1. For streaming blobs, the directory path pattern is expected to be a dynamic value. It's required for the date to be a part of the file path for the blob, referenced as `{date}`. To learn about custom path patterns, see [Azure Stream Analytics custom blob output partitioning](stream-analytics-custom-path-patterns-blob-storage-output.md). 
+ :::image type="content" source="./media/capture-event-hub-data-parquet/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-parquet/blob-configuration.png" ::: 1. Select **Connect**-1. When the connection is established, you'll see fields that are present in the output data. +1. When the connection is established, you see fields that are present in the output data. 1. Select **Save** on the command bar to save your configuration.++ :::image type="content" source="./media/capture-event-hub-data-parquet/save-configuration.png" alt-text="Screenshot showing the Save button selected on the command bar." ::: 1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window: 1. Choose the output start time.+ 1. Select the pricing plan. 1. Select the number of Streaming Units (SU) that the job runs with. SU represents the computing resources that are allocated to execute a Stream Analytics job. For more information, see [Streaming Units in Azure Stream Analytics](stream-analytics-streaming-unit-consumption.md).- 1. In the **Choose Output data error handling** list, select the behavior you want when the output of the job fails due to data error. Select **Retry** to have the job retry until it writes successfully or select another option. + :::image type="content" source="./media/capture-event-hub-data-parquet/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-parquet/start-job.png" :::+1. You should see the Stream Analytics job in the **Stream Analytics job** tab of the **Process data** page for your event hub. -## Verify output -Verify that the Parquet files are generated in the Azure Data Lake Storage container. 
-+ :::image type="content" source="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" alt-text="Screenshot showing the Stream Analytics job on the Process data page." lightbox="./media/capture-event-hub-data-parquet/process-data-page-jobs.png" ::: + +## Verify output -The new job is shown on the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it. +1. On the Event Hubs instance page for your event hub, select **Generate data**, select **Stocks data** for dataset, and then select **Send** to send some sample data to the event hub. +1. Verify that the Parquet files are generated in the Azure Data Lake Storage container. + :::image type="content" source="./media/capture-event-hub-data-parquet/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the ADLS container." lightbox="./media/capture-event-hub-data-parquet/verify-captured-data.png" ::: +1. Select **Process data** on the left menu. Switch to the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it. -Here's an example screenshot of metrics showing input and output events. + :::image type="content" source="./media/capture-event-hub-data-parquet/open-metrics-link.png" alt-text="Screenshot showing Open Metrics link selected." lightbox="./media/capture-event-hub-data-parquet/open-metrics-link.png" ::: + + Here's an example screenshot of metrics showing input and output events. + :::image type="content" source="./media/capture-event-hub-data-parquet/job-metrics.png" alt-text="Screenshot showing metrics of the Stream Analytics job." lightbox="./media/capture-event-hub-data-parquet/job-metrics.png" ::: ## Next steps |
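The `{date}` and `{time}` tokens in the directory path pattern above expand to date and time segments of the event time in the blob path. As a rough illustration, here is a hypothetical Python helper (not the service's implementation; it assumes the default `YYYY/MM/DD` date format and `HH` time format — see the linked partitioning article for the actual options):

```python
from datetime import datetime, timezone

def expand_path_pattern(pattern: str, ts: datetime) -> str:
    """Illustrative expansion of the {date} and {time} tokens in a
    Stream Analytics blob directory path pattern.
    Assumes the default formats: {date} -> YYYY/MM/DD, {time} -> HH."""
    return (pattern
            .replace("{date}", ts.strftime("%Y/%m/%d"))
            .replace("{time}", ts.strftime("%H")))

ts = datetime(2023, 8, 25, 10, 30, tzinfo=timezone.utc)
print(expand_path_pattern("capture/{date}/{time}", ts))
# capture/2023/08/25/10
```

With a pattern such as `capture/{date}/{time}`, an event captured at 10:30 UTC on 25 August 2023 would land under the `capture/2023/08/25/10` prefix, which is why the date must be part of the path for streaming blobs.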
stream-analytics | No Code Transform Filter Ingest Sql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md | |
stream-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
stream-analytics | Powerbi Output Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/powerbi-output-managed-identity.md | -[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it is no longer required for a user to interactively log in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you will not need to periodically reauthorize the job. +[Managed Identity authentication](../active-directory/managed-identities-azure-resources/overview.md) for output to Power BI gives Stream Analytics jobs direct access to a workspace within your Power BI account. This feature allows for deployments of Stream Analytics jobs to be fully automated, since it's no longer required for a user to interactively sign in to Power BI via the Azure portal. Additionally, long running jobs that write to Power BI are now better supported, since you won't need to periodically reauthorize the job. This article shows you how to enable Managed Identity for the Power BI output(s) of a Stream Analytics job through the Azure portal and through an Azure Resource Manager deployment. +> [!NOTE] +> Only **system-assigned** managed identities are supported with the Power BI output. Currently, using user-assigned managed identities with the Power BI output isn't supported. + ## Prerequisites -The following are required for using this feature: +You must have the following prerequisites before you use this feature: - A Power BI account with a [Pro license](/power-bi/service-admin-purchasing-power-bi-pro).--- An upgraded workspace within your Power BI account. 
See [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/) of this feature for more details.+- An upgraded workspace within your Power BI account. For more information, see [Power BI's announcement](https://powerbi.microsoft.com/blog/announcing-new-workspace-experience-general-availability-ga/). ## Create a Stream Analytics job using the Azure portal -1. Create a new Stream Analytics job or open an existing job in the Azure portal. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Configure**. Ensure that "Use System-assigned Managed Identity" is selected and then select the **Save** button on the bottom of the screen. +1. Create a new Stream Analytics job or open an existing job in the Azure portal. +1. From the menu bar located on the left side of the screen, select **Managed Identity** located under **Settings**. - ![Configure Stream Analytics managed identity](./media/common/stream-analytics-enable-managed-identity.png) + :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png" alt-text="Screenshot showing the Managed Identity page with Select identity button selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/managed-identity-select-button.png"::: +1. On the **Select identity** page, select **System assigned identity**, and then select **Save**. + :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png" alt-text="Screenshot showing the Select identity page with System assigned identity selected." lightbox="./media/stream-analytics-powerbi-output-managed-identity/system-assigned-identity.png"::: +1. 
On the **Managed identity** page, confirm that you see the **Principal ID** and **Principal name** assigned to your Stream Analytics job. The principal name should be the same as your Stream Analytics job name. 2. Before configuring the output, give the Stream Analytics job access to your Power BI workspace by following the directions in the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article.+3. Navigate to the **Outputs** section of your Stream Analytics job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and sign in with your Power BI account. -3. Navigate to the **Outputs** section of your Stream Analytic's job, select **+ Add**, and then choose **Power BI**. Then, select the **Authorize** button and log in with your Power BI account. -- ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png) + [ ![Authorize with Power BI account](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-authorize-powerbi.png#lightbox) 4. Once authorized, a dropdown list will be populated with all of the workspaces you have access to. Select the workspace that you authorized in the previous step. Then select **Managed Identity** as the "Authentication mode". Finally, select the **Save** button. - ![Configure Power BI output with Managed Identity](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png) + :::image type="content" source="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png" alt-text="Screenshot showing the Power BI output configuration with Managed identity authentication mode selected." 
lightbox="./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-configure-powerbi-with-managed-id.png"::: ## Azure Resource Manager deployment Azure Resource Manager allows you to fully automate the deployment of your Strea } ``` - If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned "principalId". + If you plan to use the Power BI REST API to add the Stream Analytics job to your Power BI workspace, make note of the returned `principalId`. 3. Now that the job is created, continue to the [Give the Stream Analytics job access to your Power BI workspace](#give-the-stream-analytics-job-access-to-your-power-bi-workspace) section of this article. Now that the Stream Analytics job has been created, it can be given access to a ### Use the Power BI UI > [!Note]- > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. See [Get started with a service principal](/power-bi/developer/embed-service-principal) for more details. + > In order to add the Stream Analytics job to your Power BI workspace using the UI, you also have to enable service principal access in the **Developer settings** in the Power BI admin portal. For more information, see [Get started with a service principal](/power-bi/developer/embed-service-principal). -1. Navigate to the workspace's access settings. See this article for more details: [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace). +1. Navigate to the workspace's access settings. For more information, see [Give access to your workspace](/power-bi/service-create-the-new-workspaces#give-access-to-your-workspace). 2. Type the name of your Stream Analytics job in the text box and select **Contributor** as the access level. 3. Select **Add** and close the pane. 
- ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png) + [ ![Add Stream Analytics job to Power BI workspace](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png) ](./media/stream-analytics-powerbi-output-managed-identity/stream-analytics-add-job-to-powerbi-workspace.png#lightbox) ### Use the Power BI PowerShell cmdlets Now that the Stream Analytics job has been created, it can be given access to a > [!Important] > Please ensure you are using version 1.0.821 or later of the cmdlets. -```powershell -Install-Module -Name MicrosoftPowerBIMgmt -``` --2. Log in to Power BI. --```powershell -Login-PowerBI -``` + ```powershell + Install-Module -Name MicrosoftPowerBIMgmt + ``` +2. Sign in to Power BI. + ```powershell + Login-PowerBI + ``` 3. Add your Stream Analytics job as a Contributor to the workspace. -```powershell -Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor -``` + ```powershell + Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -PrincipalType App -AccessRight Contributor + ``` ### Use the Power BI REST API Request Body ### Use a Service Principal to grant permission for an ASA job's Managed Identity -For automated deployments, using an interactive login to give an ASA job access to a Power BI workspace is not possible. This can be done be using service principal to grant permission for an ASA job's managed identity. This is possible using PowerShell: +For automated deployments, using an interactive sign-in to give an ASA job access to a Power BI workspace isn't possible. It can be done by using a service principal to grant permission for an ASA job's managed identity. 
This is possible using PowerShell: ```powershell Connect-PowerBIServiceAccount -ServicePrincipal -TenantId "<tenant-id>" -CertificateThumbprint "<thumbprint>" -ApplicationId "<app-id>" Add-PowerBIWorkspaceUser -WorkspaceId <group-id> -PrincipalId <principal-id> -Pr ## Remove Managed Identity -The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There is no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to used Managed Identity authentication again. +The Managed Identity created for a Stream Analytics job is deleted only when the job is deleted. There's no way to delete the Managed Identity without deleting the job. If you no longer want to use the Managed Identity, you can change the authentication method for the output. The Managed Identity will continue to exist until the job is deleted, and will be used if you decide to use Managed Identity authentication again. ## Limitations Below are the limitations of this feature: -- Classic Power BI workspaces are not supported.+- Classic Power BI workspaces aren't supported. - Azure accounts without Azure Active Directory. -- Multi-tenant access is not supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and cannot be used with a resource that resides in a different Azure Active Directory tenant.+- Multi-tenant access isn't supported. The Service principal created for a given Stream Analytics job must reside in the same Azure Active Directory tenant in which the job was created, and can't be used with a resource that resides in a different Azure Active Directory tenant. 
-- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) is not supported. This means you are not able to enter your own service principal to be used by their Stream Analytics job. The service principal must be generated by Azure Stream Analytics.+- [User Assigned Identity](../active-directory/managed-identities-azure-resources/overview.md) isn't supported. This means you aren't able to enter your own service principal to be used by your Stream Analytics job. The service principal must be generated by Azure Stream Analytics. ## Next steps |
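The REST-based grant referenced above ("Use the Power BI REST API") boils down to a single call that adds the job's service principal to the workspace. Here is a minimal sketch in Python — the `build_add_user_request` helper is hypothetical; it assumes the Power BI "Add Group User" endpoint and that you already hold an Azure AD access token for the Power BI API:

```python
import json
import urllib.request

def build_add_user_request(group_id: str, principal_id: str,
                           access_token: str) -> urllib.request.Request:
    """Builds (but does not send) the Power BI 'Add Group User' request that
    grants a Stream Analytics job's managed identity Contributor access to a
    workspace. The principal_id is the job's principalId; token acquisition
    is out of scope for this sketch."""
    body = {
        "identifier": principal_id,
        "groupUserAccessRight": "Contributor",
        "principalType": "App",
    }
    return urllib.request.Request(
        url=f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/users",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# To actually send it:
# urllib.request.urlopen(build_add_user_request(group_id, principal_id, token))
```

This mirrors what the `Add-PowerBIWorkspaceUser` cmdlet does under the hood: the `principalType` of `App` and access right of `Contributor` correspond to the `-PrincipalType App -AccessRight Contributor` parameters shown earlier.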
stream-analytics | Quick Create Azure Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-resource-manager.md | |
stream-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
stream-analytics | Sql Reference Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-reference-data.md | Use the following steps to add Azure SQL Database as a reference input source us 1. Create a Stream Analytics job. -2. Create a storage account to be used by the Stream Analytics job. +2. Create a storage account to be used by the Stream Analytics job. + > [!IMPORTANT] + > Azure Stream Analytics retains snapshots within this storage account. When you configure the retention policy, ensure that the chosen timespan covers the desired recovery duration for your Stream Analytics job. -3. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job. +3. Create your Azure SQL Database with a data set to be used as reference data by the Stream Analytics job. ### Define SQL Database reference data input Use the following steps to add Azure SQL Database as a reference input source us 2. Become familiar with the [Stream Analytics tools for Visual Studio](stream-analytics-quick-create-vs.md) quickstart. 3. Create a storage account.+ > [!IMPORTANT] + > Azure Stream Analytics retains snapshots within this storage account. When you configure the retention policy, ensure that the chosen timespan covers the desired recovery duration for your Stream Analytics job. ### Create a SQL Database table |
stream-analytics | Stream Analytics Get Started With Azure Stream Analytics To Process Data From Iot Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-get-started-with-azure-stream-analytics-to-process-data-from-iot-devices.md | In this article, you learn how to create stream-processing logic to gather data ## Scenario -Contoso, which is a company in the industrial automation space, has completely automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data. +Contoso, a company in the industrial automation space, has automated its manufacturing process. The machinery in this plant has sensors that are capable of emitting streams of data in real time. In this scenario, a production floor manager wants to have real-time insights from the sensor data to look for patterns and take actions on them. You can use Stream Analytics Query Language (SAQL) over the sensor data to find interesting patterns from the incoming stream of data. -In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format and looks like the following: +In this example, the data is generated from a Texas Instruments sensor tag device. The payload of the data is in JSON format as shown in the following sample snippet: ```json { In this example, the data is generated from a Texas Instruments sensor tag devic } ``` -In a real-world scenario, you could have hundreds of these sensors generating events as a stream. 
Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or Iot Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md). +In a real-world scenario, you could have hundreds of these sensors generating events as a stream. Ideally, a gateway device would run code to push these events to [Azure Event Hubs](https://azure.microsoft.com/services/event-hubs/) or [Azure IoT Hubs](https://azure.microsoft.com/services/iot-hub/). Your Stream Analytics job would ingest these events from Event Hubs or IoT Hubs and run real-time analytics queries against the streams. Then, you could send the results to one of the [supported outputs](stream-analytics-define-outputs.md). -For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you will learn how to connect your job to inputs and outputs and deploy them to the Azure service. +For ease of use, this getting started guide provides a sample data file, which was captured from real sensor tag devices. You can run queries on the sample data and see results. In subsequent tutorials, you learn how to connect your job to inputs and outputs and deploy them to the Azure service. ## Create a Stream Analytics job -1. In the [Azure portal](https://portal.azure.com), select **+ Create a resource** from the left navigation menu. Then, select **Stream Analytics job** from **Analytics**. +1. Navigate to the [Azure portal](https://portal.azure.com). +1. 
On the left navigation menu, select **All services**, select **Analytics**, hover the mouse over **Stream Analytics jobs**, and then select **Create**. - ![Create a new Stream Analytics job](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png) --1. Enter a unique job name and verify the subscription is the correct one for your job. Create a new resource group or select an existing one from your subscription. --1. Select a location for your job. Use the same location for your resource group and all resources to increased processing speed and reduced of costs. After you've made the configurations, select **Create**. + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png" alt-text="Screenshot that shows the selection of Create button for a Stream Analytics job." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-02.png"::: +1. On the **New Stream Analytics job** page, follow these steps: + 1. For **Subscription**, select your **Azure subscription**. + 1. For **Resource group**, select an existing resource group or create a resource group. + 1. For **Name**, enter a unique name for the Stream Analytics job. + 1. Select the **Region** in which you want to deploy the Stream Analytics job. Use the same location for your resource group and all resources to increase the processing speed and reduce costs. + 1. Select **Review + create**. - ![Create a new Stream Analytics job details](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png) + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-03.png" alt-text="Screenshot that shows the New Stream Analytics job page."::: +1. On the **Review + create** page, review settings, and select **Create**. +1. 
After the deployment succeeds, select **Go to resource** to navigate to the **Stream Analytics job** page for your Stream Analytics job. ## Create an Azure Stream Analytics query-The next step after your job is created is to write a query. You can test queries against sample data without connecting an input or output to your job. --Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json -) from GitHub. Then, navigate to your Azure Stream Analytics job in the Azure portal. --Select **Query** under **Job topology** from the left menu. Then select **Upload sample input**. Upload the `HelloWorldASA-InputStream.json` file, and select **Ok**. +After your job is created, write a query. You can test queries against sample data without connecting an input or output to your job. -![Stream Analytics dashboard query tile](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png) +1. Download the [HelloWorldASA-InputStream.json](https://github.com/Azure/azure-stream-analytics/blob/master/Samples/GettingStarted/HelloWorldASA-InputStream.json +) from GitHub. +1. On the **Azure Stream Analytics job** page in the Azure portal, select **Query** under **Job topology** from the left menu. +1. Select **Upload sample input**, select the `HelloWorldASA-InputStream.json` file you downloaded, and select **OK**. -Notice that a preview of the data is automatically populated in the **Input preview** table. + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png" alt-text="Screenshot that shows the **Query** page with **Upload sample input** selected." lightbox="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-05.png"::: +1. 
Notice that a preview of the data is automatically populated in the **Input preview** table. -![Preview of sample input data](./media/stream-analytics-get-started-with-iot-devices/input-preview.png) + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/input-preview.png" alt-text="Screenshot that shows sample input data in the Input preview tab."::: ### Query: Archive your raw data The simplest form of query is a pass-through query that archives all input data to its designated output. This query is the default query populated in a new Azure Stream Analytics job. -```sql -SELECT - * -INTO - Output -FROM - InputStream -``` +1. In the **Query** window, enter the following query, and then select **Test query** on the toolbar. -Select **Test query** and view the results in the **Test results** table. + ```sql + SELECT + * + INTO + youroutputalias + FROM + yourinputalias + ``` +2. View the results in the **Test results** tab in the bottom pane. -![Test results for Stream Analytics query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png) + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-07.png" alt-text="Screenshot that shows the sample query and its results."::: ### Query: Filter the data based on a condition -Let's try to filter the results based on a condition. We would like to show results for only those events that come from "sensorA." --```sql -SELECT - time, - dspl AS SensorName, - temp AS Temperature, - hmdt AS Humidity -INTO - Output -FROM - InputStream -WHERE dspl='sensorA' -``` +Let's update the query to filter the results based on a condition. For example, the following query shows events that come from `sensorA`. ++1. Update the query with the following sample: -Paste the query in the editor and select **Test query** to review the results. 
+ ```sql + SELECT + time, + dspl AS SensorName, + temp AS Temperature, + hmdt AS Humidity + INTO + youroutputalias + FROM + yourinputalias + WHERE dspl='sensorA' + ``` +2. Select **Test query** to see the results of the query. -![Filtering a data stream](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png) + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-08.png" alt-text="Screenshot that shows the query results with the filter."::: ### Query: Alert to trigger a business workflow Let's make our query more detailed. For every type of sensor, we want to monitor average temperature per 30-second window and display results only if the average temperature is above 100 degrees. -```sql -SELECT - System.Timestamp AS OutputTime, - dspl AS SensorName, - Avg(temp) AS AvgTemperature -INTO - Output -FROM - InputStream TIMESTAMP BY time -GROUP BY TumblingWindow(second,30),dspl -HAVING Avg(temp)>100 -``` +1. Update the query to: ++ ```sql + SELECT + System.Timestamp AS OutputTime, + dspl AS SensorName, + Avg(temp) AS AvgTemperature + INTO + youroutputalias + FROM + yourinputalias TIMESTAMP BY time + GROUP BY TumblingWindow(second,30),dspl + HAVING Avg(temp)>100 + ``` +1. Select **Test query** to see the results of the query. -![30-second filter query](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png) + :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-10.png" alt-text="Screenshot that shows the query with a tumbling window."::: -You should see results that contain only 245 rows and names of sensors where the average temperate is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. 
Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you have specified the **OUTPUTTIME** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics). + You should see results that contain only 245 rows and names of sensors where the average temperature is greater than 100. This query groups the stream of events by **dspl**, which is the sensor name, over a **Tumbling Window** of 30 seconds. Temporal queries must state how you want time to progress. By using the **TIMESTAMP BY** clause, you have specified the **OUTPUTTIME** column to associate times with all temporal calculations. For detailed information, read about [Time Management](/stream-analytics-query/time-management-azure-stream-analytics) and [Windowing functions](/stream-analytics-query/windowing-azure-stream-analytics). ### Query: Detect absence of events -How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then did not send events for the next 5 seconds. --```sql -SELECT - t1.time, - t1.dspl AS SensorName -INTO - Output -FROM - InputStream t1 TIMESTAMP BY time -LEFT OUTER JOIN InputStream t2 TIMESTAMP BY time -ON - t1.dspl=t2.dspl AND - DATEDIFF(second,t1,t2) BETWEEN 1 and 5 -WHERE t2.dspl IS NULL -``` +How can we write a query to find a lack of input events? Let's find the last time that a sensor sent data and then didn't send events for the next 5 seconds. ++1. Update the query to: ++ ```sql + SELECT + t1.time, + t1.dspl AS SensorName + INTO + youroutputalias + FROM + yourinputalias t1 TIMESTAMP BY time + LEFT OUTER JOIN yourinputalias t2 TIMESTAMP BY time + ON + t1.dspl=t2.dspl AND + DATEDIFF(second,t1,t2) BETWEEN 1 and 5 + WHERE t2.dspl IS NULL + ``` +2. 
Select **Test query** to see the results of the query. ++ :::image type="content" source="./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png" alt-text="Screenshot that shows the query that detects absence of events."::: -![Detect absence of events](./media/stream-analytics-get-started-with-iot-devices/stream-analytics-get-started-with-iot-devices-11.png) -Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is very useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics). + Here we use a **LEFT OUTER** join to the same data stream (self-join). For an **INNER** join, a result is returned only when a match is found. For a **LEFT OUTER** join, if an event from the left side of the join is unmatched, a row that has NULL for all the columns of the right side is returned. This technique is useful to find an absence of events. For more information, see [JOIN](/stream-analytics-query/join-azure-stream-analytics). ## Conclusion -The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. However, this is just to get you started. Stream Analytics supports a variety of inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md). +The purpose of this article is to demonstrate how to write different Stream Analytics Query Language queries and see results in the browser. 
However, this article is just to get you started. Stream Analytics supports various inputs and outputs and can even use functions in Azure Machine Learning to make it a robust tool for analyzing data streams. For more information about how to write queries, read the article about [common query patterns](stream-analytics-stream-analytics-query-patterns.md). |
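The 30-second tumbling-window query in the row above can be illustrated offline in plain Python. This is a minimal sketch of the same grouping and `HAVING` logic with made-up events, not the Stream Analytics engine; the function name and sample data are hypothetical:

```python
from collections import defaultdict

# Sketch of GROUP BY TumblingWindow(second,30), dspl ... HAVING Avg(temp) > 100:
# bucket events into fixed, non-overlapping 30-second windows per sensor,
# then keep only the windows whose average temperature exceeds the threshold.
def tumbling_window_alerts(events, window_seconds=30, threshold=100):
    buckets = defaultdict(list)  # (window_start, sensor) -> list of temps
    for event in events:
        window_start = (event["time"] // window_seconds) * window_seconds
        buckets[(window_start, event["dspl"])].append(event["temp"])
    return {
        key: sum(temps) / len(temps)
        for key, temps in buckets.items()
        if sum(temps) / len(temps) > threshold
    }

events = [
    {"time": 1, "dspl": "sensorA", "temp": 120},
    {"time": 12, "dspl": "sensorA", "temp": 90},
    {"time": 14, "dspl": "sensorB", "temp": 99},
    {"time": 31, "dspl": "sensorA", "temp": 150},
]
alerts = tumbling_window_alerts(events)  # {(0, 'sensorA'): 105.0, (30, 'sensorA'): 150.0}
```

Here each window is identified by its start time; in the real query, `System.Timestamp` plays the analogous role of identifying the window a result belongs to.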
stream-analytics | Stream Analytics Streaming Unit Consumption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-streaming-unit-consumption.md | There's an automatic conversion of Streaming Units which occurs from REST API la | ... | ... | ... | -## Understanding consumption and memory utilization +## Understand consumption and memory utilization To achieve low latency stream processing, Azure Stream Analytics jobs perform all processing in memory. When running out of memory, the streaming job fails. As a result, for a production job, it's important to monitor a streaming job's resource usage, and make sure there's enough resource allocated to keep the jobs running 24/7. The SU % utilization metric, which ranges from 0% to 100%, describes the memory consumption of your workload. For a streaming job with minimal footprint, this metric is usually between 10% to 20%. If SU% utilization is high (above 80%), or if input events get backlogged (even with a low SU% utilization since it doesn't show CPU usage), your workload likely requires more compute resources, which requires you to increase the number of streaming units. It's best to keep the SU metric below 80% to account for occasional spikes. To react to increased workloads and increase streaming units, consider setting an alert of 80% on the SU Utilization metric. Also, you can use watermark delay and backlogged events metrics to see if there's an impact. |
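The 80% guidance in the row above reduces to a simple threshold check. As a hedged sketch (the function and sample values are illustrative, not part of any Azure SDK):

```python
# Alert when any SU % utilization sample breaches the recommended 80% threshold;
# keeping headroom below 80% absorbs occasional spikes.
def should_add_streaming_units(su_percent_samples, alert_threshold=80.0):
    return any(sample > alert_threshold for sample in su_percent_samples)

minimal_footprint = [12.0, 15.5, 18.2]   # typical 10%-20% range, no action needed
overloaded = [42.0, 79.9, 86.5]          # spike above 80% suggests scaling up
```

A low SU% with backlogged input events can still mean the job is CPU-bound, so the watermark delay and backlogged events metrics should be checked alongside this threshold.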
stream-analytics | Stream Analytics User Assigned Managed Identity Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-user-assigned-managed-identity-overview.md | With support for both system-assigned identity and user-assigned identity, here 2. You can switch from an existing user-assigned identity to a newly created user-assigned identity. The previous identity is not removed from storage access control list. 3. You cannot add multiple identities to your stream analytics job. 4. Currently we do not support deleting an identity from a stream analytics job. You can replace it with another user-assigned or system-assigned identity.+5. You cannot use user-assigned identity to authenticate via allow-trusted services. ## Next steps |
synapse-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
synapse-analytics | Quickstart Apache Spark Notebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-apache-spark-notebook.md | To ensure the Spark instance is shut down, end any connected sessions(notebooks) In this quickstart, you learned how to create a serverless Apache Spark pool and run a basic Spark SQL query. - [Azure Synapse Analytics](overview-what-is.md)-- [.NET for Apache Spark documentation](/dotnet/spark)+- [.NET for Apache Spark documentation](/previous-versions/dotnet/spark/what-is-apache-spark-dotnet) |
synapse-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
synapse-analytics | How To Set Up Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/how-to-set-up-access-control.md | This document uses standard names to simplify instructions. Replace them with na | **Container** | `container1` | The container in STG1 that the workspace will use by default. | | **Active directory tenant** | `contoso` | the active directory tenant name.| -## STEP 1: Set up security groups +## Step 1: Set up security groups >[!Note] >During the preview, you were encouraged to create security groups and to map them to Azure Synapse **Synapse SQL Administrator** and **Synapse Apache Spark Administrator** roles. With the introduction of new finer-grained Synapse RBAC roles and scopes, you are now encouraged to use newer options to control access to your workspace. They give you greater configuration flexibility and they acknowledge that developers often use a mix of SQL and Spark to create analytics applications. So developers may need access to individual resources rather than an entire workspace. [Learn more](./synapse-workspace-synapse-rbac.md) about Synapse RBAC. These five groups are sufficient for a basic setup. Later, you can add security >[!Tip] >Individual Synapse users can use Azure Active Directory in the Azure portal to view their group memberships. This allows them to determine which roles they've been granted. 
-## STEP 2: Prepare your ADLS Gen2 storage account +## Step 2: Prepare your ADLS Gen2 storage account Synapse workspaces use default storage containers for: - Storage of backing data files for Spark tables Identify the following information about your storage: ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) -## STEP 3: Create and configure your Synapse workspace +## Step 3: Create and configure your Synapse workspace In Azure portal, create a Synapse workspace: In Azure portal, create a Synapse workspace: - Assign the **Synapse Contributor** role to `workspace1_SynapseContributors` - Assign the **Synapse Compute Operator** role to `workspace1_SynapseComputeOperators` -## STEP 4: Grant the workspace MSI access to the default storage container +## Step 4: Grant the workspace MSI access to the default storage container To run pipelines and perform system tasks, Azure Synapse requires managed service identity (MSI) to have access to `container1` in the default ADLS Gen2 account, for the workspace. For more information, see [Azure Synapse workspace managed identity](../../data-factory/data-factory-service-identity.md?context=/azure/synapse-analytics/context/context&tabs=synapse-analytics). To run pipelines and perform system tasks, Azure Synapse requires managed servic ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) -## STEP 5: Grant Synapse administrators an Azure Contributor role for the workspace +## Step 5: Grant Synapse administrators an Azure Contributor role for the workspace To create SQL pools, Apache Spark pools and Integration runtimes, users need an Azure Contributor role for the workspace, at minimum. A Contributor role also allows users to manage resources, including pausing and scaling. 
To use Azure portal or Synapse Studio to create SQL pools, Apache Spark pools and Integration runtimes, you need a Contributor role at the resource group level. To create SQL pools, Apache Spark pools and Integration runtimes, users need an ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png) -## STEP 6: Assign an SQL Active Directory Admin role +## Step 6: Assign an SQL Active Directory Admin role The *workspace creator* is automatically assigned as *SQL Active Directory Admin* for the workspace. Only a single user or a group can be granted this role. In this step, you assign the SQL Active Directory Admin for the workspace to the `workspace1_SQLAdmins` security group. This gives the group highly privileged admin access to all SQL pools and databases in the workspace. The *workspace creator* is automatically assigned as *SQL Active Directory Admin >[!Note] >Step 6 is optional. You might choose to grant the `workspace1_SQLAdmins` group a less privileged role. To assign `db_owner` or other SQL roles, you must run scripts on each SQL database. -## STEP 7: Grant access to SQL pools +## Step 7: Grant access to SQL pools The Synapse Administrator is by default given the SQL `db_owner` role for serverless SQL pools in the workspace as well. Access to SQL pools for other users is controlled by SQL permissions. Assigning > [!TIP] >You can grant access to all SQL databases by taking the following steps for **each** SQL pool. Section [Configure-Workspace-scoped permissions](#configure-workspace-scoped-permissions) is an exception to the rule and it allows you to assign a user a sysadmin role at the workspace level. -### STEP 7.1: Serverless SQL pool, Built-in +### Step 7a: Serverless SQL pool, Built-in You can use the script examples in this section to give users permission to access an individual database or all databases in the serverless SQL pool, `Built-in`. 
CREATE LOGIN [alias@domain.com] FROM EXTERNAL PROVIDER; ALTER SERVER ROLE sysadmin ADD MEMBER [alias@domain.com]; ``` -### STEP 7.2: configure Dedicated SQL pools +### Step 7b: Configure Dedicated SQL pools You can grant access to a **single**, dedicated, SQL pool database. Use these steps in the Azure Synapse SQL script editor: You can grant access to a **single**, dedicated, SQL pool database. Use these st You can run queries to confirm that serverless SQL pools can query storage accounts, after you have created your users. -## STEP 8: Add users to security groups +## Step 8: Add users to security groups The initial configuration for your access control system is now complete. You can now add and remove users to the security groups you've set up, to manage access to them. You can manually assign users to Azure Synapse roles, but this sets permissions inconsistently. Instead, only add or remove users to your security groups. -## STEP 9: Network security +## Step 9: Network security As a final step to secure your workspace, you should secure network access, using the [workspace firewall](./synapse-workspace-ip-firewall.md). As a final step to secure your workspace, you should secure network access, usin - Access from public networks can be controlled by enabling the [public network access feature](connectivity-settings.md#public-network-access) or the [workspace firewall](./synapse-workspace-ip-firewall.md). - Alternatively, you can connect to your workspace using a [managed private endpoint](synapse-workspace-managed-private-endpoints.md) and [private Link](/azure/azure-sql/database/private-endpoint-overview). Azure Synapse workspaces without the [Azure Synapse Analytics Managed Virtual Network](synapse-workspace-managed-vnet.md) do not have the ability to connect via managed private endpoints. -## STEP 10: Completion +## Step 10: Completion Your workspace is now fully configured and secured. |
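The access model in the steps above can be sketched as a lookup: Synapse RBAC roles are granted to security groups once, and ongoing administration only changes group membership. A minimal sketch with hypothetical group rosters (the group names follow the article's `workspace1_*` naming pattern; the function is illustrative, not an Azure API):

```python
# Synapse RBAC roles assigned to security groups when the workspace is configured.
role_assignments = {
    "workspace1_SynapseAdministrators": "Synapse Administrator",
    "workspace1_SynapseContributors": "Synapse Contributor",
    "workspace1_SynapseComputeOperators": "Synapse Compute Operator",
}

# Day-to-day access management: edit group membership, not role grants.
group_members = {
    "workspace1_SynapseContributors": {"dev1@contoso.com"},
    "workspace1_SynapseComputeOperators": {"ops1@contoso.com"},
}

def synapse_roles(user):
    """Roles a user effectively holds through group membership."""
    return {role for group, role in role_assignments.items()
            if user in group_members.get(group, set())}
```

Granting roles to individuals directly scatters permissions and sets them inconsistently; routing everything through groups keeps the role mapping in one place.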
synapse-analytics | Synapse Workspace Synapse Rbac Roles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-synapse-rbac-roles.md | The following table describes the built-in roles and the scopes at which they ca |Synapse Administrator |Full Synapse access to SQL pools, Data Explorer pools, Apache Spark pools, and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts. Includes Compute Operator, Linked Data Manager, and Credential User permissions on the workspace system identity credential. Includes assigning Synapse RBAC roles. In addition to Synapse Administrator, Azure Owners can also assign Synapse RBAC roles. Azure permissions are required to create, delete, and manage compute resources. </br></br>_Can read and write artifacts</br> Can do all actions on Spark activities.</br> Can view Spark pool logs</br> Can view saved notebook and pipeline output </br> Can use the secrets stored by linked services or credentials</br>Can assign and revoke Synapse RBAC roles at current scope_|Workspace </br> Spark pool<br/>Integration runtime </br>Linked service</br>Credential | |Synapse Apache Spark Administrator</br>|Full Synapse access to Apache Spark Pools. Create, read, update, and delete access to published Spark job definitions, notebooks and their outputs, and to libraries, linked services, and credentials.  Includes read access to all other published code artifacts. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can do all actions on Spark artifacts</br>Can do all actions on Spark activities_|Workspace</br>Spark pool| |Synapse SQL Administrator|Full Synapse access to serverless SQL pools. Create, read, update, and delete access to published SQL scripts, credentials, and linked services.  Includes read access to all other published code artifacts.  
Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>*Can do all actions on SQL scripts<br/>Can connect to SQL serverless endpoints with SQL `db_datareader`, `db_datawriter`, `connect`, and `grant` permissions*|Workspace|-|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime| -|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs. Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace +|Synapse Contributor|Full Synapse access to Apache Spark pools and Integration runtimes. Includes create, read, update, and delete access to all published code artifacts and their outputs, including scheduled pipelines, credentials and linked services.  Includes compute operator permissions. Doesn't include permission to use credentials and run pipelines. Doesn't include granting access. </br></br>_Can read and write artifacts</br>Can view saved notebook and pipeline output</br>Can do all actions on Spark activities</br>Can view Spark pool logs_|Workspace </br> Spark pool<br/> Integration runtime| +|Synapse Artifact Publisher|Create, read, update, and delete access to published code artifacts and their outputs, including scheduled pipelines. 
Doesn't include permission to run code or pipelines, or to grant access. </br></br>_Can read published artifacts and publish artifacts</br>Can view saved notebook, Spark job, and pipeline output_|Workspace |Synapse Artifact User|Read access to published code artifacts and their outputs. Can create new artifacts but can't publish changes or run code without additional permissions.|Workspace |Synapse Compute Operator |Submit Spark jobs and notebooks and view logs.  Includes canceling Spark jobs submitted by any user. Requires additional use credential permissions on the workspace system identity to run pipelines, view pipeline runs and outputs. </br></br>_Can submit and cancel jobs, including jobs submitted by others</br>Can view Spark pool logs_|Workspace</br>Spark pool</br>Integration runtime| |Synapse Monitoring Operator |Read published code artifacts, including logs and outputs for pipeline runs and completed notebooks. Includes ability to list and view details of Apache Spark pools, Data Explorer pools, and Integration runtimes. Requires additional permissions to run/cancel pipelines, Spark notebooks, and Spark jobs.|Workspace | |
synapse-analytics | Synapse Workspace Understand What Role You Need | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-understand-what-role-you-need.md | You can pause or scale a dedicated SQL pool, configure a Spark pool, or an integ With access to Synapse Studio, you can create new code artifacts, such as SQL scripts, KQL scripts, notebooks, spark jobs, linked services, pipelines, dataflows, triggers, and credentials. These artifacts can be published or saved with additional permissions. -If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator you can list, open, and edit already published code artifacts. +If you're a Synapse Artifact User, Synapse Artifact Publisher, Synapse Contributor, or Synapse Administrator, you can list, open, and edit already published code artifacts, including scheduled pipelines. ### Execute your code |
synapse-analytics | Apache Spark Development Using Notebooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md | We provide rich operations to develop notebooks: + [Collapse a cell output](#collapse-a-cell-output) + [Notebook outline](#notebook-outline) +> [!NOTE] +> +> In the notebooks, a SparkSession is automatically created for you and stored in a variable called `spark`. There is also a variable for the SparkContext, called `sc`. Users can access these variables directly but should not change their values. ++ <h3 id="add-a-cell">Add a cell</h3> There are multiple ways to add a new cell to your notebook. Select the **Undo** / **Redo** button or press **Z** / **Shift+Z** to revoke the ![Screenshot of Synapse undo cells of aznb](./media/apache-spark-development-using-notebooks/synapse-undo-cells-aznb.png) Supported undo cell operations:-+ Insert/Delete cell: You could revoke the delete operations by selecting **Undo**, the text content will be kept along with the cell. ++ Insert/Delete cell: You can revoke the delete operations by selecting **Undo**; the text content is kept along with the cell. + Reorder cell. + Toggle parameter. + Convert between Code cell and Markdown cell. Select the **Cancel All** button to cancel the running cells or cells waiting in ### Notebook reference -You can use ```%run <notebook path>``` magic command to reference another notebook within current notebook's context. All the variables defined in the reference notebook are available in the current notebook. ```%run``` magic command supports nested calls but not support recursive calls. You will receive an exception if the statement depth is larger than **five**. +You can use ```%run <notebook path>``` magic command to reference another notebook within the current notebook's context. All the variables defined in the reference notebook are available in the current notebook.
```%run``` magic command supports nested calls but doesn't support recursive calls. You receive an exception if the statement depth is larger than **five**. Example: ``` %run /<path>/Notebook1 { "parameterInt": 1, "parameterFloat": 2.5, "parameterBool": true, "parameterString": "abc" } ```. Notebook reference works in both interactive mode and Synapse pipeline. ### Variable explorer -Synapse notebook provides a built-in variables explorer for you to see the list of the variables name, type, length, and value in the current Spark session for PySpark (Python) cells. More variables will show up automatically as they are defined in the code cells. Clicking on each column header will sort the variables in the table. +Synapse notebook provides a built-in variables explorer for you to see the list of the variables name, type, length, and value in the current Spark session for PySpark (Python) cells. More variables show up automatically as they are defined in the code cells. Clicking on each column header sorts the variables in the table. You can select the **Variables** button on the notebook command bar to open or hide the variable explorer. Parameterized session configuration allows you to replace the value in %%configu } ``` -Notebook will use default value if run a notebook in interactive mode directly or no parameter that match "activityParameterName" is given from Pipeline Notebook activity. +The notebook uses the default value if you run the notebook in interactive mode directly or if no parameter that matches "activityParameterName" is given from the Pipeline Notebook activity. During the pipeline run mode, you can configure pipeline Notebook activity settings as below: ![Screenshot of parameterized session configuration](./media/apache-spark-development-using-notebooks/parameterized-session-config.png) You can access data in the primary storage account directly.
There's no need to ## IPython Widgets -Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider, textbox etc. IPython Widgets only works in Python environment, it's not supported in other languages (e.g. Scala, SQL, C#) yet. +Widgets are eventful Python objects that have a representation in the browser, often as a control like a slider or textbox. IPython Widgets only work in the Python environment; they're not supported in other languages (for example, Scala, SQL, C#) yet. ### To use IPython Widget 1. You need to import `ipywidgets` module first to use the Jupyter Widget framework. Widgets are eventful Python objects that have a representation in the browser, o slider ``` -3. Run the cell, the widget will display at the output area. +3. Run the cell; the widget displays in the output area. ![Screenshot of ipython widgets slider](./media/apache-spark-development-using-notebooks/ipython-widgets-slider.png) -4. You can use multiple `display()` calls to render the same widget instance multiple times, but they will remain in sync with each other. +4. You can use multiple `display()` calls to render the same widget instance multiple times, but they remain in sync with each other. ```python slider = widgets.IntSlider() Widgets are eventful Python objects that have a representation in the browser, o |`widgets.jslink()`|You can use `widgets.link()` function to link two similar widgets.| |`FileUpload` widget| Not supported yet.| -2. Global `display` function provided by Synapse does not support displaying multiple widgets in 1 call (i.e. `display(a, b)`), which is different from IPython `display` function. +2. Global `display` function provided by Synapse does not support displaying multiple widgets in one call (that is, `display(a, b)`), which is different from IPython `display` function. 3.
If you close a notebook that contains IPython Widget, you will not be able to see or interact with it until you execute the corresponding cell again. Available cell magics: <h2 id="reference-unpublished-notebook">Reference unpublished notebook</h2> -Reference unpublished notebook is helpful when you want to debug "locally", when enabling this feature, notebook run will fetch the current content in web cache, if you run a cell including a reference notebooks statement, you will reference the presenting notebooks in the current notebook browser instead of a saved versions in cluster, that means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published(Live mode) or committed(Git mode), by leveraging this approach you can easily avoid common libraries getting polluted during developing or debugging process. +Reference unpublished notebook is helpful when you want to debug "locally". When you enable this feature, a notebook run fetches the current content from the web cache. If you run a cell that includes a reference notebook statement, you reference the presenting notebooks in the current notebook browser instead of a saved version in the cluster. This means the changes in your notebook editor can be referenced immediately by other notebooks without having to be published (Live mode) or committed (Git mode). By using this approach, you can easily avoid common libraries getting polluted during the developing or debugging process. You can enable Reference unpublished notebook from Properties panel: You can reuse your notebook sessions conveniently now without having to start ne ![Screenshot of notebook-manage-sessions](./media/apache-spark-development-using-notebooks/synapse-notebook-manage-sessions.png) -In the **Active sessions** list you can see the session information and the corresponding notebook that is currently attached to the session.
You can operate Detach with notebook, Stop the session, and View in monitoring from here. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook, the session will be detached from the previous notebook (if it's not idle) then attach to the current one. +In the **Active sessions** list, you can see the session information and the corresponding notebook that is currently attached to the session. You can operate Detach with notebook, Stop the session, and View in monitoring from here. Moreover, you can easily connect your selected notebook to an active session in the list started from another notebook; the session is detached from the previous notebook (if it's not idle) and then attached to the current one. ![Screenshot of notebook-sessions-list](./media/apache-spark-development-using-notebooks/synapse-notebook-sessions-list.png) To parameterize your notebook, select the ellipses (...) to access the **more co -Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine will add a new cell beneath the parameters cell with input parameters in order to overwrite the default values. +Azure Data Factory looks for the parameters cell and treats this cell as defaults for the parameters passed in at execution time. The execution engine adds a new cell beneath the parameters cell with input parameters in order to overwrite the default values. ### Assign parameters values from a pipeline |
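The parameters-cell behavior described in the row above amounts to a merge: the parameters cell supplies defaults, and the injected cell overwrites any matching names at execution time. A minimal sketch with illustrative names (these dictionaries model the behavior; they are not a pipeline API):

```python
# Defaults declared in the notebook's parameters cell.
defaults = {"parameterInt": 1, "parameterFloat": 2.5,
            "parameterBool": True, "parameterString": "abc"}

# Values the execution engine injects in a new cell beneath the parameters cell.
injected = {"parameterInt": 8, "parameterString": "def"}

# Injected values win; parameters not passed in keep their defaults.
effective = {**defaults, **injected}
```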
synapse-analytics | Apache Spark Secure Credentials With Tokenlibrary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md | While Azure Synapse Analytics supports a variety of linked service connections ( - Azure SQL Data Warehouse (Dedicated and Serverless) - Azure Storage - #### mssparkutils.credenials.getToken() + #### mssparkutils.credentials.getToken() When you need an OAuth bearer token to access services directly, you can use the `getToken` method. The following resources are supported: | Service Name | String literal to be used in API call | |
synapse-analytics | Sql Data Warehouse Manage Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-manage-monitor.md | All queries executed on SQL pool are logged to [sys.dm_pdw_exec_requests](/sql/r Here are steps to follow to investigate query execution plans and times for a particular query. -### STEP 1: Identify the query you wish to investigate +### Step 1: Identify the query you wish to investigate ```sql -- Monitor active queries FROM sys.dm_pdw_exec_requests WHERE [label] = 'My Query'; ``` -### STEP 2: Investigate the query plan +### Step 2: Investigate the query plan Use the Request ID to retrieve the query's distributed SQL (DSQL) plan from [sys.dm_pdw_request_steps](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-request-steps-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) To investigate further details about a single step, inspect the `operation_type` * For **SQL operations** (OnOperation, RemoteOperation, ReturnOperation), proceed with [STEP 3](#step-3-investigate-sql-on-the-distributed-databases) * For **Data Movement operations** (ShuffleMoveOperation, BroadcastMoveOperation, TrimMoveOperation, PartitionMoveOperation, MoveOperation, CopyOperation), proceed with [STEP 4](#step-4-investigate-data-movement-on-the-distributed-databases). 
-### STEP 3: Investigate SQL on the distributed databases +### Step 3: Investigate SQL on the distributed databases Use the Request ID and the Step Index to retrieve details from [sys.dm_pdw_sql_requests](/sql/t-sql/database-console-commands/dbcc-pdw-showexecutionplan-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true), which contains execution information of the query step on all of the distributed databases. When the query step is running, [DBCC PDW_SHOWEXECUTIONPLAN](/sql/t-sql/database DBCC PDW_SHOWEXECUTIONPLAN(1, 78); ``` -### STEP 4: Investigate data movement on the distributed databases +### Step 4: Investigate data movement on the distributed databases Use the Request ID and the Step Index to retrieve information about a data movement step running on each distribution from [sys.dm_pdw_dms_workers](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-dms-workers-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true). |
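The routing between steps 2, 3, and 4 in the row above depends only on the `operation_type` of each DSQL plan step. As a small sketch of that decision (the function is illustrative; the operation-type names come from the article):

```python
# operation_type values from sys.dm_pdw_request_steps, grouped by how to
# investigate them further (step 3 for SQL steps, step 4 for data movement).
SQL_OPERATIONS = {"OnOperation", "RemoteOperation", "ReturnOperation"}
DATA_MOVEMENT_OPERATIONS = {
    "ShuffleMoveOperation", "BroadcastMoveOperation", "TrimMoveOperation",
    "PartitionMoveOperation", "MoveOperation", "CopyOperation",
}

def investigation_step(operation_type):
    if operation_type in SQL_OPERATIONS:
        return "step 3"  # inspect sys.dm_pdw_sql_requests
    if operation_type in DATA_MOVEMENT_OPERATIONS:
        return "step 4"  # inspect sys.dm_pdw_dms_workers
    return "unknown"
```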
synapse-analytics | Develop Storage Files Storage Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-storage-files-storage-access-control.md | To query a file located in Azure Storage, your serverless SQL pool endpoint need To grant the ability to manage credentials: -- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to the user. For example:+- To allow a user to create or drop a server-level credential, an administrator must grant the `ALTER ANY CREDENTIAL` permission to its login in the master database. For example: ```sql- GRANT ALTER ANY CREDENTIAL TO [user_name]; + GRANT ALTER ANY CREDENTIAL TO [login_name]; ``` -- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the user. For example:+- To allow a user to create or drop a database scoped credential, an administrator must grant the `CONTROL` permission on the database to the database user in the user database. For example: ```sql GRANT CONTROL ON DATABASE::[database_name] TO [user_name]; To grant the ability to manage credentials: Database users who access external storage must have permission to use credentials. To use the credential, a user must have the `REFERENCES` permission on a specific credential.
-To grant the `REFERENCES` permission on a server-level credential for a user, use the following T-SQL query: +To grant the `REFERENCES` permission on a server-level credential for a login, use the following T-SQL query in the master database: ```sql-GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [user]; +GRANT REFERENCES ON CREDENTIAL::[server-level_credential] TO [login_name]; ``` -To grant a `REFERENCES` permission on a database-scoped credential for a user, use the following T-SQL query: +To grant a `REFERENCES` permission on a database-scoped credential for a database user, use the following T-SQL query in the user database: ```sql-GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user]; +GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[database-scoped_credential] TO [user_name]; ``` ## Server-level credential These articles help you learn how query different folder types, file types, and - [Query Parquet files](query-parquet-files.md) - [Create and use views](create-use-views.md) - [Query JSON files](query-json-files.md)-- [Query Parquet nested types](query-parquet-nested-types.md)+- [Query Parquet nested types](query-parquet-nested-types.md) |
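Taken together, the grants above can be sketched end to end. This is an illustrative example only; the login, user, credential, and database names (`Mike`, `WasbCredential`, `MyDbCredential`, `MyDatabase`) are hypothetical:

```sql
-- In the master database: server-level permissions for a hypothetical login [Mike]
GRANT ALTER ANY CREDENTIAL TO [Mike];                        -- create or drop server-level credentials
GRANT REFERENCES ON CREDENTIAL::[WasbCredential] TO [Mike];  -- use an existing server-level credential

-- In the user database: permissions for the database user [Mike]
GRANT CONTROL ON DATABASE::[MyDatabase] TO [Mike];           -- create or drop database scoped credentials
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[MyDbCredential] TO [Mike];  -- use that credential
```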
synapse-analytics | Get Started Power Bi Professional | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-power-bi-professional.md | Open the Power BI desktop application and select the **Get data** option. ![Open Power BI desktop application and select get data.](./media/get-started-power-bi-professional/step-0-open-powerbi.png) -### Step 1 - Select data source +### Step 1: Select data source Select **Azure** in the menu and then **Azure SQL Database**. ![Select data source.](./media/get-started-power-bi-professional/step-1-select-data-source.png) -### Step 2 - Select database +### Step 2: Select database Write the URL for the database and the name of the database where the view resides. ![Select database on the endpoint.](./media/get-started-power-bi-professional/step-2-db.png) |
synapse-analytics | How To Pause Resume Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/how-to-pause-resume-pipelines.md | Evaluate the desired state, Pause or Resume, and the current status, Online, or 1. On the Activities tab, select **+ Add Case**. Add the cases `Paused-Resume` and `Online-Pause`. ![Check status condition of the dedicated SQL pool](./media/how-to-pause-resume-pipelines/check-condition.png) -### Step 5c: Pause or Resume dedicated SQL pools +## Step 5c: Pause or Resume dedicated SQL pools The final and only relevant step for some requirements is to initiate the pause or resume of your dedicated SQL pool. This step again uses a Web activity, calling the [Pause or Resume compute REST API for Azure Synapse](../sql-data-warehouse/sql-data-warehouse-manage-compute-rest-api.md#pause-compute). 1. Select the activity edit pencil and add a **Web** activity to the State-PauseorResume canvas. |
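For orientation, the Web activity in step 5c issues a POST against the dedicated SQL pool's management endpoint. The URIs below are a sketch; the placeholders and the `api-version` value are illustrative, so check the linked Pause or Resume compute REST API article for the current version:

```
POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}/pause?api-version=2021-11-01

POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Sql/servers/{server-name}/databases/{database-name}/resume?api-version=2021-11-01
```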
time-series-insights | Time Series Insights How To Scale Your Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-how-to-scale-your-environment.md | However, changing the pricing tier SKU is not allowed. For example, an environme - For more information, review [Understanding retention in Azure Time Series Insights](time-series-insights-concepts-retention.md). -- Learn about [configuring data retention in Azure Azure Time Series Insights](time-series-insights-how-to-configure-retention.md).+- Learn about [configuring data retention in Azure Time Series Insights](time-series-insights-how-to-configure-retention.md). - Learn about [planning out your environment](time-series-insights-environment-planning.md). |
update-center | Assessment Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/assessment-options.md | Title: Assessment options in update management center (preview). -description: The article describes the assessment options available in Update management center (preview). -+ Title: Assessment options in Update Manager (preview). +description: The article describes the assessment options available in Update Manager (preview). + Last updated 05/23/2023 -# Assessment options in update management center (preview) +# Assessment options in Update Manager (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article provides an overview of the assessment options available by update management center (preview). +This article provides an overview of the assessment options available by Update Manager (preview). -Update management center (preview) provides you the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. +Update Manager (preview) provides you the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. ## Periodic assessment - Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by update management center (preview). We recommend that you enable this property on your machines as it allows update management center (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). 
+ Periodic assessment is an update setting on a machine that allows you to enable automatic periodic checking of updates by Update Manager (preview). We recommend that you enable this property on your machines as it allows Update Manager (preview) to fetch latest updates for your machines every 24 hours and enables you to view the latest compliance status of your machines. You can enable this setting using update settings flow as detailed [here](manage-update-settings.md#configure-settings-on-single-vm) or enable it at scale by using [Policy](periodic-assessment-at-scale.md). :::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png"::: ## Check for updates now/On-demand assessment -Update management center (preview) allows you to check for latest updates on your machines at any time, on-demand. You can view the latest update status and act accordingly. Go to **Updates** blade on any VM and select **Check for updates** or select multiple machines from update management center (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md). +Update Manager (preview) allows you to check for latest updates on your machines at any time, on-demand. You can view the latest update status and act accordingly. Go to **Updates** blade on any VM and select **Check for updates** or select multiple machines from Update Manager (preview) and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md). ## Update assessment scan You can initiate a software updates compliance scan on a machine to get a current list of operating system updates available. 
In the **Scheduling** section, you can either **create a maintenance configurati ## Next steps -* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
update-center | Configure Wu Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/configure-wu-agent.md | Title: Configure Windows Update settings in Update management center (Preview) -description: This article tells how to configure Windows update settings to work with Update management center (Preview). -+ Title: Configure Windows Update settings in Azure Update Manager (preview) +description: This article tells how to configure Windows update settings to work with Azure Update Manager (preview). + Last updated 05/02/2023 -# Configure Windows update settings for update management center (preview) +# Configure Windows update settings for Azure Update Manager (preview) -Update management center (Preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by: +Azure Update Manager (preview) relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. Many of these settings can be managed by: - Local Group Policy Editor - Group Policy - PowerShell - Directly editing the Registry -The Update management center (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, the Update management center (preview) will also manage those updates. If you want to enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window. 
+The Update Manager (preview) respects many of the settings specified to control the Windows Update client. If you use settings to enable non-Windows updates, the Update Manager (preview) will also manage those updates. If you want to enable downloading of updates before an update deployment occurs, update deployment can be faster, more efficient, and less likely to exceed the maintenance window. For additional recommendations on setting up WSUS in your Azure subscription and to secure your Windows virtual machines up to date, review [Plan your deployment for updating Windows virtual machines in Azure using WSUS](/azure/architecture/example-scenario/wsus). ## Pre-download updates -To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, update management center (Preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. This behavior prevents `Maintenance window exceeded` errors in update management center (preview). +To configure the automatic downloading of updates without automatically installing them, you can use Group Policy to [configure the Automatic Updates setting](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates) to 3. This setting enables downloads of the required updates in the background, and notifies you that the updates are ready to install. In this way, Update Manager (preview) remains in control of schedules, but allows downloading of updates outside the maintenance window. 
This behavior prevents `Maintenance window exceeded` errors in Update Manager (preview) You can enable this setting in PowerShell: By default, the Windows Update client is configured to provide updates only for Use one of the following options to perform the settings change at scale: -- For Servers configured to patch on a schedule from Update management center (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.+- For Servers configured to patch on a schedule from Update Manager (preview) (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change. ```powershell $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") Use one of the following options to perform the settings change at scale: $ServiceManager.AddService2($ServiceId,7,"") ``` -- For servers running Server 2016 or later which are not using Update management center scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).+- For servers running Server 2016 or later which are not using Update Manager (preview) scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store). 
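The pre-download behavior described earlier (Automatic Updates option 3: download the updates but wait to install) can also be set directly in the registry. A minimal PowerShell sketch, assuming you manage the policy key locally rather than through Group Policy:

```powershell
# Set Automatic Updates to option 3 ("auto download and notify for install")
# via the Windows Update policy registry key.
$au = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
if (-not (Test-Path $au)) { New-Item -Path $au -Force | Out-Null }
Set-ItemProperty -Path $au -Name 'AUOptions' -Value 3 -Type DWord
```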
## Make WSUS configuration settings -Update management center (Preview) supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails, if the updates aren't approved in WSUS. +Update Manager (preview) supports WSUS settings. You can specify sources for scanning and downloading updates using instructions in [Specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings#specify-intranet-microsoft-update-service-location). By default, the Windows Update client is configured to download updates from Windows Update. When you specify a WSUS server as a source for your machines, the update deployment fails, if the updates aren't approved in WSUS. To restrict machines to the internal update service, see [do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#do-not-connect-to-any-windows-update-internet-locations). |
update-center | Deploy Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/deploy-updates.md | Title: Deploy updates and track results in update management center (preview). -description: The article details how to use update management center (preview) in the Azure portal to deploy updates and view results for supported machines. -+ Title: Deploy updates and track results in Azure Update Manager (preview). +description: The article details how to use Azure Update Manager (preview) in the Azure portal to deploy updates and view results for supported machines. + Last updated 08/08/2023 -# Deploy updates now and track results with update management center (preview) +# Deploy updates now and track results with Azure Update Manager (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -The article describes how to perform an on-demand update on a single VM or multiple VMs using update management center (preview). +The article describes how to perform an on-demand update on a single VM or multiple VMs using Update Manager (preview). See the following sections for detailed information: - [Install updates on a single VM](#install-updates-on-single-vm) See the following sections for detailed information: ## Supported regions -Update management center (preview) is available in all [Azure public regions](support-matrix.md#supported-regions). +Update Manager (preview) is available in all [Azure public regions](support-matrix.md#supported-regions). 
++## Configure reboot settings ++The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment. ## Install updates on single VM >[!NOTE]-> You can install the updates from the Overview or Machines blade in update management center (preview) page or from the selected VM. +> You can install the updates from the Overview or Machines blade in Update Manager (preview) page or from the selected VM. # [From Overview blade](#tab/install-single-overview) To install one time updates on a single VM, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update management center (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates. +1. In **Update Manager (preview)**, **Overview**, choose your **Subscription** and select **One-time update** to install updates. :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png"::: To install one time updates on a single VM, follow these steps: - In **Select resources**, choose the machine and select **Add**. -1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. 
If your deployment is meant to apply only for a select set of updates, its necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine. +1. In **Updates**, specify the updates to include in the deployment. For each product, select or deselect all supported update classifications and specify the ones to include in your update deployment. If your deployment is meant to apply only for a select set of updates, it's necessary to deselect all the pre-selected update classifications when configuring the **Inclusion/exclusion** updates described below. This ensures only the updates you've specified to include in this deployment are installed on the target machine. > [!NOTE]- > - Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in update center management (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB Ids may be available for the category. - > - Update management center (preview) doesn't support driver updates. + > - Selected Updates shows a preview of OS updates which may be installed based on the last OS update assessment information available. If the OS update assessment information in Update Manager (preview) is obsolete, the actual updates installed would vary. Especially if you have chosen to install a specific update category, where the OS updates applicable may vary as new packages or KB Ids may be available for the category. + > - Update Manager (preview) doesn't support driver updates. 
- Select **+Include update classification**, in the **Include update classification** select the appropriate classification(s) that must be installed on your machines. :::image type="content" source="./media/deploy-updates/include-update-classification-inline.png" alt-text="Screenshot on including update classification." lightbox="./media/deploy-updates/include-update-classification-expanded.png"::: - - Select **Include KB ID/package** to include in the updates. Enter a comma-separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, update management center (preview) shows a preview of OS updates under the **Selected Updates** section. + - Select **Include KB ID/package** to include in the updates. Enter a comma separated list of Knowledge Base article ID numbers to include or exclude for Windows updates. For example, `3103696, 3134815`. For Windows, you can refer to the [MSRC link](https://msrc.microsoft.com/update-guide/deployments) to get the details of the latest Knowledge Base released. For supported Linux distros, you specify a comma separated list of packages by the package name, and you can include wildcards. For example, `kernel*, glibc, libc=1.0.1`. Based on the options specified, Update Manager (preview) shows a preview of OS updates under the **Selected Updates** section. - To exclude updates that you don't want to install, select **Exclude KB ID/package**. We recommend checking this option because updates that are not displayed here might be installed, as newer updates might be available.
- - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date** and in the Include by maximum patch publish date , choose the date and select **Add** and **Next**. + - To ensure that the updates published are on or before a specific date, select **Include by maximum patch publish date** and in the Include by maximum patch publish date, choose the date and select **Add** and **Next**. :::image type="content" source="./media/deploy-updates/include-patch-publish-date-inline.png" alt-text="Screenshot on including patch publish date." lightbox="./media/deploy-updates/include-patch-publish-date-expanded.png"::: To install one time updates on a single VM, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update management center (Preview)**, **Machine**, choose your **Subscription**, choose your machine and select **One-time update** to install updates. +1. In **Update Manager (Preview)**, **Machine**, choose your **Subscription**, choose your machine and select **One-time update** to install updates. 1. Select to **Install now** to proceed with installing updates. To install one time updates on a single VM, follow these steps: 1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**.-1. In **Updates**, select **Go to Updates using Update Center**. +1. In **Updates**, select **Go to Updates using Azure Update Manager**. 1. In **Updates (Preview)**, select **One-time update** to install the updates. 1. In **Install one-time updates** page, the selected machine appears. Choose the machine, select **Next** and follow the procedure from step 4 listed in **From Overview blade** of [Install updates on single VM](#install-updates-on-single-vm). You can schedule updates 1. Sign in to the [Azure portal](https://portal.azure.com). -1. 
In **Update management center (Preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates. +1. In **Update Manager (Preview)**, **Overview**, choose your **Subscription**, select **One-time update**, and **Install now** to install updates. :::image type="content" source="./media/deploy-updates/install-updates-now-inline.png" alt-text="Example of installing one-time updates." lightbox="./media/deploy-updates/install-updates-now-expanded.png"::: A notification appears to inform you the activity has started and another is cre You can browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions. For more information, see [Update deployment history](manage-multiple-machines.md#update-deployment-history). -After your scheduled deployment starts, you can see it's status on the **History** tab. It displays the total number of deployments including the successful and failed deployments. +After your scheduled deployment starts, you can see its status on the **History** tab. It displays the total number of deployments including the successful and failed deployments. :::image type="content" source="./media/deploy-updates/updates-history-inline.png" alt-text="Screenshot showing updates history." lightbox="./media/deploy-updates/updates-history-expanded.png"::: > [!NOTE]-> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update management center (preview)** > **Manage** > **History**. +> The **Windows update history** currently doesn't show the updates summary that are installed from Azure Update Management. To view a summary of the updates applied on your machines, go to **Update manager (preview)** > **Manage** > **History**. 
A list of the deployments created are shown in the update deployment grid and include relevant information about the deployment. Every update deployment has a unique GUID, represented as **Operation ID**, which is listed along with **Status**, **Updates Installed** and **Time** details. You can filter the results listed in the grid. Select any one of the update deployments from the list to open the **Update depl ## Next steps -* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
update-center | Dynamic Scope Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/dynamic-scope-overview.md | Title: An overview of dynamic scoping (preview) description: This article provides information about dynamic scoping (preview), its purpose and advantages.-+ Last updated 07/05/2023 The criteria will be evaluated at the scheduled run time, which will be the fina > [!NOTE] > You can associate one dynamic scope to one schedule. -## Prerequisites [!INCLUDE [dynamic-scope-prerequisites.md](includes/dynamic-scope-prerequisites.md)] |
update-center | Guidance Migration Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/guidance-migration-azure.md | + + Title: Patching guidance overview for Microsoft Configuration Manager to Azure +description: Patching guidance overview for Microsoft Configuration Manager to Azure. View on how to get started with Azure Update Manager, mapping capabilities of MCM software and FAQs. +++ Last updated : 08/23/2023++++# Guidance on patching while migrating from Microsoft Configuration Manager to Azure ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides the details on how to patch your migrated virtual machines on Azure. ++Microsoft Configuration Manager (MCM) helps you to manage PCs and servers, keep software up-to-date, set configuration and security policies, and monitor system status. ++ The [Azure Migration tool](https://learn.microsoft.com/mem/configmgr/core/support/azure-migration-tool) helps you to programmatically create Azure virtual machines (VMs) for Configuration Manager and installs the various site roles with default settings. The validation of new roles and removal of the on-premises site system role enables MCM to provide all the on-premises capabilities and experiences in Azure. ++Additionally, you can use the native [Azure Update Manager](overview.md) to manage and govern update compliance for Windows and Linux machines across your deployments in Azure, on-premises, and on the other cloud platforms, from a single dashboard, with no operational cost for managing the patching infrastructure. Azure Update Manager is similar to the update management component of MCM that is designed as a standalone Azure service to provide SaaS experience on Azure to manage hybrid environments. 
++Both MCM in Azure and Azure Update Manager can fulfill your patching requirements. +- Using MCM, you can continue with the existing investments in MCM and the processes to maintain the patch update management cycle for Windows VMs. +- Using Azure Update Manager, you can achieve a consistent management of VMs and operating system updates across your cloud and hybrid environments. You don't need to maintain Azure virtual machines for hosting the different Configuration Manager roles and don't need an MCM license, thereby reducing the total cost for maintaining the patch update management cycle for all the machines in your environment. [Learn more](https://techcommunity.microsoft.com/t5/windows-it-pro-blog/what-s-uup-new-update-style-coming-next-week/ba-p/3773065). +++## Manage software updates using Azure Update Manager ++1. Sign in to the [Azure portal](https://portal.azure.com) and search for Azure Update Manager (preview). ++ :::image type="content" source="./media/guidance-migration-azure/update-manager-service-selection-inline.png" alt-text="Screenshot of selecting the Azure Update Manager from Azure portal." lightbox="./media/guidance-migration-azure/update-manager-service-selection-expanded.png"::: ++1. In the **Azure Update Manager (Preview)** home page, under **Manage** > **Machines**, select your subscription to view all your machines. +1. Filter using the available options to check the status of your specific machines. ++ :::image type="content" source="./media/guidance-migration-azure/filter-machine-status-inline.png" alt-text="Screenshot of selecting the filters in Azure Update Manager to view the machines." lightbox="./media/guidance-migration-azure/filter-machine-status-expanded.png"::: ++1. Select the suitable [assessment](assessment-options.md) and [patching](updates-maintenance-schedules.md) options as per your requirement.
++## Map MCM capabilities to Azure Update Manager ++The following table explains the mapping capabilities of MCM software Update Management to Azure Update Manager. ++| **Capability** | **Microsoft Configuration Manager** | **Azure Update Manager**| +| | | | +|Synchronize software updates between sites(Central Admin site, Primary, Secondary sites)| The top site (either central admin site or stand-alone primary site) connects to Microsoft Update to retrieve software update. [Learn more](https://learn.microsoft.com/mem/configmgr/sum/understand/software-updates-introduction). After the top sites are synchronized, the child sites are synchronized. | There's no hierarchy of machines in Azure and therefore all machines connected to Azure receive updates from the source repository. | +|Synchronize software updates/check for updates (retrieve patch metadata). | You can scan for updates periodically by setting configuration on the Software update point. [Learn more](https://learn.microsoft.com/mem/configmgr/sum/get-started/synchronize-software-updates#to-schedule-software-updates-synchronization). | You can enable periodic assessment to enable scan of patches every 24 hours. [Learn more](assessment-options.md). | +|Configuring classifications/products to synchronize/scan/assess | You can choose the update classifications (security or critical updates) to synchronize/scan/assess. [Learn more](https://learn.microsoft.com/mem/configmgr/sum/get-started/configure-classifications-and-products). | There's no such capability here. 
The entire software metadata is scanned.| +|Deploy software updates (install patches)| Provides three modes of deploying updates: <br> Manual deployment <br> Automatic deployment <br> Phased deployment [Learn more](https://learn.microsoft.com/mem/configmgr/sum/deploy-use/deploy-software-updates).| Manual deployment maps to deploying [one-time updates](deploy-updates.md), and automatic deployment maps to [scheduled updates](scheduled-patching.md). (The [Automatic Deployment Rules (ADRs)](https://learn.microsoft.com/mem/configmgr/sum/deploy-use/automatically-deploy-software-updates#BKMK_CreateAutomaticDeploymentRule) can be mapped to schedules.) There's no phased deployment option. | ++## Limitations in Azure Update Manager (preview) ++The following are the current limitations: ++- **Orchestration groups with pre/post scripts** - [Orchestration groups](https://learn.microsoft.com/mem/configmgr/sum/deploy-use/orchestration-groups) can't be created in Azure Update Manager to specify a maintenance sequence, allow some machines to be updated at the same time, and so on. (Orchestration groups allow you to use pre/post scripts to run tasks before and after a patch deployment.) ++### Patching machines +After you set up configurations for assessment and patching, you can deploy or install updates through [on-demand updates](deploy-updates.md) (one-time or manual update) or [scheduled updates](scheduled-patching.md) (automatic update). You can also deploy updates using [Azure Update Manager's API](manage-vms-programmatically.md). ++## Frequently asked questions ++### Where does Azure Update Manager get its updates from? ++Azure Update Manager uses the repository that the machines point to. By default, most Windows machines point to the Windows Update catalog, and Linux machines are configured to get updates from the `apt` or `yum` repositories.
If the machines point to another repository, such as [WSUS](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or a local repository, then Azure Update Manager gets the updates from that repository. ++### Can Azure Update Manager patch OS, SQL, and third-party software? ++Azure Update Manager uses the repositories that the VMs point to. If the repository contains third-party and SQL patches, Azure Update Manager can install SQL and third-party patches. +> [!NOTE] +> By default, Windows VMs point to the Windows Update repository, which doesn't contain SQL and third-party patches. If the VMs point to Microsoft Update, Azure Update Manager installs OS, SQL, and third-party updates. ++### Do I need to configure WSUS to use Azure Update Manager? ++You don't need WSUS to deploy patches in Azure Update Manager. Typically, all the machines connect to the internet repository to get updates (unless the machines point to WSUS or a local repository that isn't connected to the internet). [Learn more](https://learn.microsoft.com/mem/configmgr/sum/). + +## Next steps +- [An overview of Azure Update Manager](overview.md) +- [Check update compliance](view-updates.md) +- [Deploy updates now (on-demand) for a single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
update-center | Manage Arc Enabled Servers Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-arc-enabled-servers-programmatically.md | Title: Programmatically manage updates for Azure Arc-enabled servers in Update management center (preview) -description: This article tells how to use Update management center (preview) using REST API with Azure Arc-enabled servers. -+ Title: Programmatically manage updates for Azure Arc-enabled servers in Azure Update Manager (preview) +description: This article describes how to use the Azure REST API with Azure Update Manager (preview) on Azure Arc-enabled servers. + Last updated 06/15/2023-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with update management (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md). +This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure Arc-enabled servers with Azure Update Manager (preview). If you're new to Azure Update Manager (preview) and you want to learn more, see [overview of Update Manager (preview)](overview.md). To use the Azure REST API to manage Azure virtual machines, see [How to programmatically work with Azure virtual machines](manage-vms-programmatically.md). -Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure) for access programmatically. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure).
+Update Manager (preview) enables you to use the [Azure REST API](/rest/api/azure) for programmatic access. Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure) and [Azure CLI](/cli/azure). -Support for Azure REST API to manage Azure Arc-enabled servers is available through the update management center (preview) virtual machine extension. +Support for the Azure REST API to manage Azure Arc-enabled servers is available through the Update Manager (preview) virtual machine extension. ## Update assessment DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur ## Next steps -* To view update assessment and deployment logs generated by Update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager (preview). |
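As a sketch of the REST pattern the article above walks through, triggering an on-demand assessment on an Arc-enabled server is a single POST against the machine's resource ID. The resource path shape comes from the Microsoft.HybridCompute provider, but the api-version shown here is an assumption and may differ from what your environment requires, so verify it against the current REST reference:

```rest
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.HybridCompute/machines/{machineName}/assessPatches?api-version=2020-08-15-preview
```

This kind of call typically returns `202 Accepted` with an `Azure-AsyncOperation` header that you can poll for the assessment result, and the same request can be issued from Azure CLI with `az rest --method post --url "<the URL above>"`.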
update-center | Manage Dynamic Scoping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-dynamic-scoping.md | Title: Manage various operations of dynamic scoping (preview). description: This article describes how to manage dynamic scoping (preview) operations -+ Last updated 07/05/2023 -## Prerequisites - [!INCLUDE [dynamic-scope-prerequisites.md](includes/dynamic-scope-prerequisites.md)] ## Add a Dynamic scope (preview) To add a Dynamic scope to an existing configuration, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to add a Dynamic scope. 1. In the given maintenance configuration page, select **Dynamic scopes** > **Add a dynamic scope**. To add a Dynamic scope to an existing configuration, follow these steps: To view the list of Dynamic scopes (preview) associated with a given maintenance configuration, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update management center (preview)**. +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to **Update Manager (preview)**. 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to view the Dynamic scope. 1. In the given maintenance configuration page, select **Dynamic scopes** to view all the Dynamic scopes that are associated with the maintenance configuration. ## Edit a Dynamic scope (preview) -1.
Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to edit an existing Dynamic scope. 1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to edit. Under the **Actions** column, select the edit icon. To view the list of Dynamic scopes (preview) associated to a given maintenance c ## Delete a Dynamic scope (preview) -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). 1. Select **Machines** > **Browse maintenance configurations** > **Maintenance configurations**. 1. In the **Maintenance configurations** page, select the name of the maintenance configuration for which you want to delete an existing Dynamic scope. 1. In the given maintenance configuration page, select **Dynamic scopes** and select the scope you want to delete. Select **Remove dynamic scope** and then select **Ok**. ## View patch history of a Dynamic scope (preview) -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). 1. Select **History** > **Browse maintenance configurations** > **Maintenance configurations** to view the patch history of a dynamic scope. Obtaining consent to apply updates is an important step in the workflow of dynam #### [From Update Settings](#tab/us) -1. In **Update management center**, go to **Overview** > **Update settings**. +1.
In **Update Manager**, go to **Overview** > **Update settings**. 1. In **Change Update settings**, select **+Add machine** to add the machines. 1. In the list of machines sorted by operating system, go to the **Patch orchestration** option and select **Azure-orchestrated with user managed schedules (Preview)** to confirm that: Obtaining consent to apply updates is an important step in the workflow of dynam * [Deploy updates now (on-demand) for single machine](deploy-updates.md) * [Schedule recurring updates](scheduled-patching.md) * [Manage update settings via Portal](manage-update-settings.md)-* [Manage multiple machines using update management center](manage-multiple-machines.md) +* [Manage multiple machines using Azure Update Manager](manage-multiple-machines.md) |
update-center | Manage Multiple Machines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-multiple-machines.md | Title: Manage multiple machines in update management center (preview) -description: The article details how to use Update management center (preview) in Azure to manage multiple supported machines and view their compliance state in the Azure portal. -+ Title: Manage multiple machines in Azure Update Manager (preview) +description: The article details how to use Azure Update Manager (preview) to manage multiple supported machines and view their compliance state in the Azure portal. + Last updated 05/02/2023 -# Manage multiple machines with update management center (Preview) +# Manage multiple machines with Azure Update Manager (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article describes the various features that update management center (Preview) offers to manage the system updates on your machines. Using the update management center (preview), you can: +This article describes the various features that Update Manager (preview) offers to manage system updates on your machines. Using Update Manager (preview), you can: - Quickly assess the status of available operating system updates. - Deploy updates. This article describes the various features that update management center (Previ Instead of performing these actions from a selected Azure VM or Arc-enabled server, you can manage all your machines in the Azure subscription. -## View update management center (Preview) status +## View Update Manager (preview) status 1. Sign in to the [Azure portal](https://portal.azure.com). -1. To view update assessment across all machines, including Azure Arc-enabled servers navigate to **Update management center(Preview)**. +1.
To view update assessment across all machines, including Azure Arc-enabled servers, navigate to **Update Manager (preview)**. - :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of update management center overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png"::: + :::image type="content" source="./media/manage-multiple-machines/overview-page-inline.png" alt-text="Screenshot of the Update Manager overview page in the Azure portal." lightbox="./media/manage-multiple-machines/overview-page-expanded.png"::: In the **Overview** page - the summary tiles show the following status: Instead of performing these actions from a selected Azure VM or Arc-enabled serv - **Update status of machines**: shows the update status information for assessed machines that had applicable or needed updates. You can filter the results based on classification types. By default, all [classifications](../automation/update-management/overview.md#update-classifications) are selected, and the tile updates according to the classification selection. - The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used update management center (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days. + The graph provides a snapshot for all your machines in your subscription, regardless of whether you have used Update Manager (preview) for that machine. This assessment data comes from Azure Resource Graph, and it stores the data for seven days.
From the assessment data available, machines are classified into the following categories: Instead of performing these actions from a selected Azure VM or Arc-enabled serv ## Summary of machine status -Update management center (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to update management center (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings. +Update Manager (preview) in Azure enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). The section shows how you can filter information to understand the update status of your machine resources, and for multiple machines, initiate an update assessment, update deployment, and manage their update settings. - In the update management center (preview) page, select **Machines** from the left menu. + In the Update Manager (preview) page, select **Machines** from the left menu. - :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of update management center(preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png"::: + :::image type="content" source="./media/manage-multiple-machines/update-center-machines-page-inline.png" alt-text="Screenshot of the Update Manager (preview) Machines page in the Azure portal." lightbox="./media/manage-multiple-machines/update-center-machines-page-expanded.png"::: On the page, the table lists all the machines in the specified subscription, and for each machine it helps you understand the following details that show up based on the latest assessment.
- **Update status**: the total number of updates available that are identified as applicable to the machine's OS. For machines that haven't had a compliance assessment scan for the first time, y :::image type="content" source="./media/manage-multiple-machines/update-center-assess-now-complete-banner-inline.png" alt-text="Screenshot of assessment banner on Manage Machines page." lightbox="./media/manage-multiple-machines/update-center-assess-now-complete-banner-expanded.png"::: -Select a machine from the list to open update management center (Preview) scoped to that machine. +Select a machine from the list to open Update Manager (preview) scoped to that machine. Here, you can view its detailed assessment status, update history, configure its patch orchestration options, and initiate an update deployment. ### Deploy the updates You can create a recurring update deployment for your machines. Select your mach ## Update deployment history -Update management center (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update management center (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update management center (preview), select **History** from the left menu. +Update Manager (preview) enables you to browse information about your Azure VMs and Arc-enabled servers across your Azure subscriptions relevant to Update Manager (preview). You can filter information to understand the update assessment and deployment history for multiple machines. In Update Manager (preview), select **History** from the left menu.
## Update deployment history by machines When you select any one maintenance run ID record, you can view an expanded stat The update assessment and deployment data are available for querying in Azure Resource Graph. You can apply this data to scenarios that include security compliance, security operations, and troubleshooting. Select **Go to resource graph** to go to the Azure Resource Graph Explorer. It enables running Resource Graph queries directly in the Azure portal. Resource Graph supports Azure CLI, Azure PowerShell, Azure SDK for Python, and more. For more information, see [First query with Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md). -When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update management center (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included. +When the Resource Graph Explorer opens, it is automatically populated with the same query used to generate the results presented in the table on the **History** page in Update Manager (preview). Ensure that you review the [query Update logs](query-logs.md) article to learn about the log records and their properties, and the sample queries included. ## Next steps * To set up and manage recurring deployment schedules, see [Schedule recurring updates](scheduled-patching.md)-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). |
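As an illustration of the kind of Resource Graph query the **History** page generates, a minimal sketch is shown below. The table name `patchinstallationresources` and the `properties` field names are assumptions based on the patch-related Resource Graph tables the service populates and may need adjusting; the query logs article referenced above has the authoritative samples:

```kusto
patchinstallationresources
| where type =~ "microsoft.compute/virtualmachines/patchinstallationresults"
| extend installationStatus = tostring(properties.status),
         startedAt = todatetime(properties.startDateTime)
| project id, installationStatus, startedAt
| order by startedAt desc
```

You can run a query like this in Azure Resource Graph Explorer in the portal, or through `az graph query` from Azure CLI.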
update-center | Manage Update Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md | Title: Manage update configuration settings in Update management center (preview) -description: The article describes how to manage the update settings for your Windows and Linux machines managed by Update management center (preview). -+ Title: Manage update configuration settings in Azure Update Manager (preview) +description: The article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview). + Last updated 05/30/2023-The article describes how to configure update settings from Update management center (preview) in Azure, to control the update settings on your Azure VMs and Arc-enabled servers for one or more machines. +The article describes how to configure update settings from Azure Update Manager (preview) to control the update settings on your Azure VMs and Arc-enabled servers, for one or more machines. ## Configure settings on single VM The article describes how to configure update settings from Update management ce To configure update settings on your machines on a single VM, follow these steps: >[!NOTE]-> You can schedule updates from the Overview blade or Machines blade in update management center (preview) page or from the selected VM. +> You can schedule updates from the Overview blade or Machines blade in the Update Manager (preview) page, or from the selected VM. # [From Overview blade](#tab/manage-single-overview) 1. Sign in to the [Azure portal](https://portal.azure.com).-1. In **Update management center**, select **Overview**, select your **Subscription**, and select **Update settings**. +1. In **Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**. 1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. 1.
In **Select resources**, select the machine and select **Add**. 1. In the **Change update settings** page, you'll see the machines grouped by operating system, with the list of updates that you can select and apply. To configure update settings on your machines on a single VM, follow these steps - **Periodic assessment** - Periodic assessment is set to run every 24 hours. You can either enable or disable this setting. - - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use update management center (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable or reset this setting. + - **Hot patch** - You can enable [hot patching](../automanage/automanage-hotpatch.md) for Windows Server Azure Edition Virtual Machines (VMs). Hot patching is a new way to install updates on supported *Windows Server Azure Edition* virtual machines that doesn't require a reboot after installation. You can use Update Manager (preview) to install other patches by scheduling patch installation or triggering immediate patch deployment. You can enable, disable, or reset this setting. - **Patch orchestration** option provides the following: To configure update settings on your machines on a single VM, follow these steps # [From Machines blade](#tab/manage-single-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).-1. In **Update management center**, select **Machines** > your **subscription**. +1. In **Update Manager**, select **Machines** > your **subscription**. 1. Select the checkbox of your machine from the list and select **Update settings**. 1.
Select **Update Settings** to choose the type of update for your machine. 1. In **Change update settings**, select **+Add machine** to select the machine for which you want to change the update settings. To configure update settings on your machines at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update management center**, select **Overview**, select your **Subscription** and select **Update settings**. +1. In **Update Manager**, select **Overview**, select your **Subscription**, and select **Update settings**. 1. In **Change update settings**, select the update settings that you want to change for your machines. Follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm). # [From Machines blade](#tab/manage-scale-machines) 1. Sign in to the [Azure portal](https://portal.azure.com).-1. In **Update management center**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list. +1. In **Update Manager**, select **Machines** > your **subscription**, and select the checkbox for all your machines from the list. 1. Select **Update Settings** to choose the type of update for your machines. 1. In **Change update settings**, you can select the update settings that you want to change for your machine and follow the procedure from step 3 listed in **From Overview blade** of [Configure settings on single VM](#configure-settings-on-single-vm).
A notification appears to confirm that the update settings are successfully chan ## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
update-center | Manage Updates Customized Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-updates-customized-images.md | Title: Overview of customized images in Update management center (preview). + Title: Overview of customized images in Azure Update Manager (preview). description: The article describes about customized images, how to register, validate the customized images for public preview and its limitations.-+ Last updated 05/02/2023 This article describes the customized image support, how to enable the subscript ## Asynchronous check to validate customized image support -If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update management Center (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate if the virtual machines are supported for guest patching and then initiate patching if the VMs are supported. +If you're using the Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager (preview) operations such as Check for updates, One-time update, Schedule updates, or Periodic assessment to validate if the virtual machines are supported for guest patching and then initiate patching if the VMs are supported. -Unlike marketplace images where support is validated even before Update management center operation is triggered. Here, there are no pre-existing validations in place and the Update management center operations are triggered and only their success or failure determines support. +Unlike marketplace images where support is validated even before Update Manager operation is triggered. Here, there are no pre-existing validations in place and the Update Manager operations are triggered and only their success or failure determines support. 
For instance, the assessment call attempts to fetch the latest patch available for the image's OS family to check support. It stores this support-related data in an Azure Resource Graph (ARG) table, which you can query to see the support status for your Azure Compute Gallery image. |
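For example, a minimal Resource Graph query sketch for inspecting the stored assessment data for such a machine is shown below. The table name, type filter, and property names are assumptions about the schema the service writes and may differ from the actual one, so treat this as a starting point rather than a definitive query:

```kusto
patchassessmentresources
| where type =~ "microsoft.compute/virtualmachines/patchassessmentresults"
| extend assessmentStatus = tostring(properties.status)
| project id, assessmentStatus, lastModified = todatetime(properties.lastModifiedDateTime)
```

A failed or errored `assessmentStatus` on a customized-image VM is the signal, described above, that the image isn't supported for guest patching.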
update-center | Manage Vms Programmatically | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-vms-programmatically.md | Title: Programmatically manage updates for Azure VMs -description: This article tells how to use update management center (preview) in Azure using REST API with Azure virtual machines. -+description: This article describes how to use the Azure REST API with Azure Update Manager (preview) for Azure virtual machines. + Last updated 06/15/2023-This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with update management center (preview) in Azure. If you're new to update management center (preview) and you want to learn more, see [overview of update management center (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md). +This article walks you through the process of using the Azure REST API to trigger an assessment and an update deployment on your Azure virtual machine with Azure Update Manager (preview). If you're new to Update Manager (preview) and you want to learn more, see [overview of Azure Update Manager (preview)](overview.md). To use the Azure REST API to manage Arc-enabled servers, see [How to programmatically work with Arc-enabled servers](manage-arc-enabled-servers-programmatically.md). -Update management center (preview) in Azure enables you to use the [Azure REST API](/rest/api/azure/) for access programmatically.
Additionally, you can use the appropriate REST commands from [Azure PowerShell](/powershell/azure/) and [Azure CLI](/cli/azure/). -Support for Azure REST API to manage Azure VMs is available through the update management center (preview) virtual machine extension. +Support for the Azure REST API to manage Azure VMs is available through the Update Manager (preview) virtual machine extension. ## Update assessment DELETE on `<ARC or Azure VM resourceId>/providers/Microsoft.Maintenance/configur ## Next steps -* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Update Manager (preview). |
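As an illustration of the REST calls this article walks through: the `assessPatches` and `installPatches` actions exist on the Microsoft.Compute virtual machine resource, but the api-version and body fields below are a sketch that you should verify against the current REST reference before relying on them:

```rest
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/virtualMachines/{vmName}/assessPatches?api-version=2020-12-01

POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Compute/virtualMachines/{vmName}/installPatches?api-version=2020-12-01

{
  "maximumDuration": "PT2H",
  "rebootSetting": "IfRequired",
  "windowsParameters": {
    "classificationsToInclude": [ "Critical", "Security" ]
  }
}
```

Both are long-running operations: the service responds with an operation URL (for example, in the `Azure-AsyncOperation` header) that you poll until the assessment or installation completes.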
update-center | Manage Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-workbooks.md | Title: Create reports using workbooks in update management center (preview).. + Title: Create reports using workbooks in Azure Update Manager (preview). description: This article describes how to create and manage workbooks for VM insights.-+ Last updated 05/23/2023 -# Create reports in update management center (preview) +# Create reports in Azure Update Manager (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. This article describes how to create a workbook and how to edit a workbook to cr ## Create a workbook -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). -1. Under **Monitoring**, select **Workbooks** to view the Update management center (Preview)| Workbooks|Gallery. +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview)| Workbooks|Gallery. 1. Select **Quick start** tile > **Empty** or alternatively, you can select **+New** to create a workbook. 1. Select **+Add** to select any [elements](../azure-monitor/visualize/workbooks-create-workbook.md#create-a-new-azure-workbook) to add to the workbook. This article describes how to create a workbook and how to edit a workbook to cr 1. Select **Done Editing**. ## Edit a workbook-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). -1. Under **Monitoring**, select **Workbooks** to view the Update management center (Preview)| Workbooks|Gallery. -1. Select **Update management center** tile > **Overview** to view the Update management center (Preview)|Workbooks|Overview page. +1.
Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). +1. Under **Monitoring**, select **Workbooks** to view the Update Manager (preview)| Workbooks|Gallery. +1. Select **Update Manager** tile > **Overview** to view the Update Manager (preview)|Workbooks|Overview page. 1. Select your subscription, and select **Edit** to enable the edit mode for all the four options. - Machines overall status & configuration This article describes how to create a workbook and how to edit a workbook to cr * [Deploy updates now (on-demand) for single machine](deploy-updates.md) * [Schedule recurring updates](scheduled-patching.md) * [Manage update settings via Portal](manage-update-settings.md)-* [Manage multiple machines using update management center](manage-multiple-machines.md) +* [Manage multiple machines using update manager](manage-multiple-machines.md) |
update-center | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/overview.md | Title: Update management center (preview) overview -description: The article tells what update management center (preview) in Azure is and the system updates for your Windows and Linux machines in Azure, on-premises, and other cloud environments. -+ Title: Azure Update Manager (preview) overview +description: The article tells what Azure Update Manager (preview) in Azure is and the system updates for your Windows and Linux machines in Azure, on-premises, and other cloud environments. + Last updated 07/05/2023 -# About Update management center (preview) +# About Azure Update Manager (preview) > [!Important]-> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update management center (Preview) is the v2 version of Automation Update management and the future of Update management in Azure. UMC is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md). -> - Guidance for migrating from Automation Update management to Update management center will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to UMC.
+> - [Automation Update management](../automation/update-management/overview.md) relies on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (aka MMA agent), which is on a deprecation path and won't be supported after **August 31, 2024**. Update manager (preview) is the v2 version of Automation Update management and the future of Update management in Azure. Azure Update Manager (preview) is a native service in Azure and does not rely on [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) or [Azure Monitor agent](../azure-monitor/agents/agents-overview.md). +> - Guidance for migrating from Automation Update management to Update manager (preview) will be provided to customers once the latter is Generally Available. For customers using Automation Update management, we recommend continuing to use the Log Analytics agent and **NOT** migrate to Azure Monitoring agent until migration guidance is provided for Update management or else Automation Update management will not work. Also, the Log Analytics agent would not be deprecated before moving all Automation Update management customers to Update Manager (preview). -Update management center (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. In addition, you can use the Update management center (preview) to make real-time updates or schedule them within a defined maintenance window. +Update Manager (preview) is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. In addition, you can use the Update Manager (preview) to make real-time updates or schedule them within a defined maintenance window. 
-You can use the update management center (preview) in Azure to: +You can use the Update Manager (preview) in Azure to: - Oversee update compliance for your entire fleet of machines in Azure, on-premises, and other cloud environments. - Instantly deploy critical updates to help secure your machines. You can use the update management center (preview) in Azure to: We also offer other capabilities to help you manage updates for your Azure Virtual Machines (VM) that you should consider as part of your overall update management strategy. Review the Azure VM [Update options](../virtual-machines/updates-maintenance-overview.md) to learn more about the options available. -Before you enable your machines for update management center (preview), make sure that you understand the information in the following sections. +Before you enable your machines for Update Manager (preview), make sure that you understand the information in the following sections. > [!IMPORTANT]-> - Update management center (preview) doesn't store any customer data. -> - Update management center (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA). -> - While update management center is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> - Update Manager (preview) doesn't store any customer data. 
+> - Update Manager (preview) can manage machines that are currently managed by Azure Automation [Update management](../automation/update-management/overview.md) feature without interrupting your update management process. However, we don't recommend migrating from Automation Update Management since this preview gives you a chance to evaluate and provide feedback on features before it's generally available (GA). +> - While update manager is in **preview**, the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Key benefits -Update management center (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update management center (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation and some of those benefits are listed below: +Update Manager (preview) has been redesigned and doesn't depend on Azure Automation or Azure Monitor Logs, as required by the [Azure Automation Update Management feature](../automation/update-management/overview.md). Update Manager (preview) offers many new features and provides enhanced functionality over the original version available with Azure Automation and some of those benefits are listed below: - Provides native experience with zero on-boarding. - Built as native functionality on Azure Compute and Azure Arc for Servers platform for ease of use. Update management center (preview) has been redesigned and doesn't depend on Azu - Global availability in all Azure Compute and Azure Arc regions. - Works with Azure roles and identity. 
- Granular access control at per resource level instead of access control at Automation account and Log Analytics workspace level.- - Update management center now as Azure Resource Manager based operations. It allows RBAC and roles based of ARM in Azure. + - Azure Update Manager now has Azure Resource Manager-based operations. It allows RBAC and roles based on ARM in Azure. - Enhanced flexibility - Ability to take immediate action either by installing updates immediately or schedule them for a later date. - Check updates automatically or on demand. - Helps secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md) or custom maintenance schedules. - Sync patch cycles in relation to Patch Tuesday—the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month. -The following diagram illustrates how update management center (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux. +The following diagram illustrates how Update Manager (preview) assesses and applies updates to all Azure machines and Arc-enabled servers for both Windows and Linux. -![Update center workflow](./media/overview/update-management-center-overview.png) +![Update Manager workflow](./media/overview/update-management-center-overview.png) -To support management of your Azure VM or non-Azure machine, update management center (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any update management center operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. 
The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The update management center (preview) extension is installed and managed using the following: +To support management of your Azure VM or non-Azure machine, Update Manager (preview) relies on a new [Azure extension](../virtual-machines/extensions/overview.md) designed to provide all the functionality required to interact with the operating system to manage the assessment and application of updates. This extension is automatically installed when you initiate any Update manager (preview) operations such as **check for updates**, **install one time update**, **periodic assessment** on your machine. The extension supports deployment to Azure VMs or Arc-enabled servers using the extension framework. The Update Manager (preview) extension is installed and managed using the following: - [Azure virtual machine Windows agent](../virtual-machines/extensions/agent-windows.md) or [Azure virtual machine Linux agent](../virtual-machines/extensions/agent-linux.md) for Azure VMs. - [Azure arc-enabled servers agent](../azure-arc/servers/agent-overview.md) for non-Azure Linux and Windows machines or physical servers. - The extension agent installation and configuration are managed by the update management center (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. The update management center (preview) extension runs code locally on the machine to interact with the operating system, and it includes: + The extension agent installation and configuration are managed by the Update Manager (preview). There's no manual intervention required as long as the Azure VM agent or Azure Arc-enabled server agent is functional. 
The Update Manager (preview) extension runs code locally on the machine to interact with the operating system, and it includes: - Retrieving the assessment information about status of system updates for it specified by the Windows Update client or Linux package manager. - Initiating the download and installation of approved updates with Windows Update client or Linux package manager. -All assessment information and update installation results are reported to update management center (preview) from the extension and is available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results. +All assessment information and update installation results are reported to Update Manager (preview) from the extension and is available for analysis with [Azure Resource Graph](../governance/resource-graph/overview.md). You can view up to the last seven days of assessment data, and up to the last 30 days of update installation results. -The machines assigned to update management center (preview) report how up to date they're based on what source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update which is by default, and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft update, the results in update management center (preview) might differ from what Microsoft update shows. 
This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository. +The machines assigned to Update Manager (preview) report how up to date they're based on what source they're configured to synchronize with. [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) on Windows machines can be configured to report to [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) or Microsoft Update which is by default, and Linux machines can be configured to report to a local or public YUM or APT package repository. If the Windows Update Agent is configured to report to WSUS, depending on when WSUS last synchronized with Microsoft update, the results in Update Manager (preview) might differ from what Microsoft update shows. This behavior is the same for Linux machines that are configured to report to a local repository instead of a public package repository. >[!NOTE]-> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with update management center (preview). +> You can manage your Azure VMs or Arc-enabled servers directly, or at-scale with Update Manager (preview). ## Prerequisites-Along with the prerequisites listed below, see [support matrix](support-matrix.md) for update management center (preview). +Along with the prerequisites listed below, see [support matrix](support-matrix.md) for Update Manager (preview). ### Role Arc enabled server | [Azure Connected Machine Resource Administrator](../azure-a ### Permissions -You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the update management center (preview). +You need the following permissions to create and manage update deployments. The following table shows the permissions needed when using the Update Manager (preview). 
**Actions** |**Permission** |**Scope** | | | | You need the following permissions to create and manage update deployments. The For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems). > [!NOTE]-> Currently, update management center (preview) has the following limitations regarding the operating system support: +> Currently, Update Manager (preview) has the following limitations regarding the operating system support: > - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.-> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in update management center (preview). +> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview). > -> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update management center (preview). [Learn more](support-matrix.md#supported-operating-systems). +> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update Manager (preview). [Learn more](support-matrix.md#supported-operating-systems). 
## VM Extensions To view the available extensions for a VM in the Azure portal, follow these step ### Network planning -To prepare your network to support update management center (preview), you may need to configure some infrastructure components. +To prepare your network to support Update Manager (preview), you may need to configure some infrastructure components. For Windows machines, you must allow traffic to any endpoints required by Windows Update agent. You can find an updated list of required endpoints in [Issues related to HTTP/Proxy](/windows/deployment/update/windows-update-troubleshooting#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) (WSUS) deployment, you must also allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../v - [Deploy updates now (on-demand) for single machine](deploy-updates.md) - [Schedule recurring updates](scheduled-patching.md) - [Manage update settings via Portal](manage-update-settings.md)-- [Manage multiple machines using update management center](manage-multiple-machines.md)+- [Manage multiple machines using Update manager](manage-multiple-machines.md) |
update-center | Periodic Assessment At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/periodic-assessment-at-scale.md | Title: Enable periodic assessment using policy -description: This article describes how to manage the update settings for your Windows and Linux machines managed by update management center (preview). -+description: This article describes how to manage the update settings for your Windows and Linux machines managed by Azure Update Manager (preview). + Last updated 04/21/2022-This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, update management center (preview) fetches updates on your machine once every 24 hours. +This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, Update Manager (preview) fetches updates on your machine once every 24 hours. ## Enable Periodic assessment for your Azure machines using Policy You can monitor the compliance of resources under **Compliance** and remediation ## Enable Periodic assessment for your Arc machines using Policy 1. Go to **Policy** from the Azure portal and under **Authoring**, **Definitions**. -1. From the **Category** dropdown, select **Update management center**. Select *[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers* for Arc-enabled machines. +1. From the **Category** dropdown, select **Update Manager**. 
Select *[Preview]: Configure periodic checking for missing system updates on Azure Arc-enabled servers* for Arc-enabled machines. 1. When the Policy Definition opens, select **Assign**. 1. In **Basics**, select your subscription as your scope. You can also specify a resource group within subscription as the scope and select **Next**. 1. In **Parameters**, uncheck **Only show parameters that need input or review** so that you can see the values of parameters. In **Assessment** mode, select *AutomaticByPlatform*, select *Operating system* and select **Next**. You need to create separate policies for Windows and Linux. You can monitor compliance of resources under **Compliance** and remediation sta ## Monitor if Periodic Assessment is enabled for your machines (both Azure and Arc-enabled machines) 1. Go to **Policy** from the Azure portal and under **Authoring**, go to **Definitions**. -1. From the Category dropdown above, select **Update management center**. Select *[Preview]: Machines should be configured to periodically check for missing system updates*. +1. From the Category dropdown above, select **Update Manager**. Select *[Preview]: Machines should be configured to periodically check for missing system updates*. 1. When the Policy Definition opens, select **Assign**. 1. In **Basics**, select your subscription as your scope. You can also specify a resource group within subscription as the scope. Select **Next.** 1. In **Parameters** and **Remediation**, select **Next.** You can monitor compliance of resources under **Compliance** and remediation sta ## Next steps * [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.-* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). 
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
update-center | Prerequsite For Schedule Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/prerequsite-for-schedule-patching.md | Title: Configure schedule patching on Azure VMs to ensure business continuity in update management center (preview). -description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Update management center (preview). -+ Title: Configure schedule patching on Azure VMs to ensure business continuity in Azure Update Manager (preview). +description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager (preview). + Last updated 05/09/2023 Additionally, in some instances, when you remove the schedule from a VM, there i To identify the list of VMs with the associated schedules for which you have to enable new VM property, follow these steps: -1. Go to **Update management center (Preview)** home page and select **Machines** tab. +1. Go to **Update Manager (preview)** home page and select **Machines** tab. 1. In **Patch orchestration** filter, select **Azure Managed - Safe Deployment**. 1. Use the **Select all** option to select the machines and then select **Export to CSV**. 1. Open the CSV file and in the column **Associated schedules**, select the rows that have an entry. You can update the patch orchestration option for existing VMs that either alrea To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Go to **Update management center (Preview)**, select **Update Settings**. +1. Go to **Update Manager (preview)**, select **Update Settings**. 1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select *Customer Managed Schedules* and then select **Save**. 
To update the patch mode, follow these steps: To update the patch mode, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).-1. Go to **Update management center (Preview)**, select **Update Settings**. +1. Go to **Update Manager (preview)**, select **Update Settings**. 1. In **Change update settings**, select **+Add machine**. 1. In **Select resources**, select your VMs and then select **Add**. 1. In **Change update settings**, under **Patch orchestration**, select ***Azure Managed - Safe Deployment*** and then select **Save**. Scenario 8 | No | False | No | Neither the autopatch nor the schedule patch will ## Next steps -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
update-center | Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/query-logs.md | Title: Query logs and results from Update management center (preview) -description: The article provides details on how you can review logs and search results from update management center (preview) in Azure using Azure Resource Graph + Title: Query logs and results from Update Manager (preview) +description: The article provides details on how you can review logs and search results from update manager (preview) in Azure using Azure Resource Graph Last updated 04/21/2022 -# Overview of query logs in update management center (Preview) +# Overview of query logs in Azure Update Manager (preview) -Logs created from operations like update assessments and installations are stored by Update management center (preview) in an [Azure Resource Graph](../governance/resource-graph/overview.md). The Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update management center (preview) uses the Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources. +Logs created from operations like update assessments and installations are stored by Update Manager (preview) in an [Azure Resource Graph](../governance/resource-graph/overview.md). The Azure Resource Graph is a service in Azure designed to be the store for Azure service details without any cost or deployment requirements. Update Manager (preview) uses the Azure Resource Graph to store its results, and you can view the update history of the last 30 days from the resources. Azure Resource Graph's query language is based on the [Kusto query language](../governance/resource-graph/concepts/query-language.md) used by Azure Data Explorer. 
-The article describes the structure of the logs from Update management center (Preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs. +The article describes the structure of the logs from Update Manager (preview) and how you can use [Azure Resource Graph Explorer](../governance/resource-graph/first-query-portal.md) to analyze them in support of your reporting, visualizing, and export needs. ## Log structure -Update management center (preview) sends the results of all its operation into Azure Resource Graph as logs, which are available for 30 days. Listed below are the structure of logs being sent to Azure Resource Graph. +Update Manager (preview) sends the results of all its operation into Azure Resource Graph as logs, which are available for 30 days. Listed below are the structure of logs being sent to Azure Resource Graph. ### Patch assessment results If the `PROPERTIES` property for the resource type is `patchassessmentresults/so |`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by OS vendor, then the value is null.| |`classifications` |Category of which the specific update belongs to as per the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of category, then the value is `Others` (for Linux) or `Updates` (for Windows Server). | |`rebootRequired` |Value indicates if the specific update requires the OS to reboot to complete the installation. Information is generated by the machine's OS update service or package manager. 
If your OS package manager or update service doesn't require a reboot, then the value is `false`.|-|`rebootBehavior` |Behavior set in the OS update installation runs job when configuring the update deployment if update management center (preview) can reboot the target machine. | +|`rebootBehavior` |Behavior set in the OS update installation runs job when configuring the update deployment if Update Manager (preview) can reboot the target machine. | |`patchName` |Name or label for the specific update generated by the machine's OS package manager or update service.| |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service.| |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by Linux package manager. For example, `1.0.1.el7.3`.| If the `PROPERTIES` property for the resource type is `patchinstallationresults/ |`publishedDateTime` |Timestamp representing when the specific update was made available by the OS vendor. Information is generated by the machine's OS update service or package manager. If your OS package manager or update service doesn't provide the detail of when an update was provided by OS vendor, then the value is null. | |`classifications` |Category that the specific update belongs to as per the OS vendor. As provided by machine's OS update service or package manager. If your OS package manager or update service, doesn't provide the detail of category, then the value of the field will be Others (for Linux) and Updates (for Windows Server). | |`rebootRequired` |Flag to specify if the specific update requires the OS to reboot to complete installation. As provided by machine's OS update service or package manager. If your OS package manager or update service doesn't provide information regarding need of OS reboot, then the value of the field will be set to 'false'. 
|-|`rebootBehavior` |Behavior set in the OS update installation runs job by user, regarding allowing update management center (preview) to reboot the OS. | +|`rebootBehavior` |Behavior set by the user in the OS update installation run job, regarding whether Update Manager (preview) is allowed to reboot the OS. | |`patchName` |Name or label for the specific update as provided by the machine's OS package manager or update service. | |`Kbid` |If the machine's OS is Windows Server, the value includes the unique KB ID for the update provided by the Windows Update service. | |`version` |If the machine's OS is Linux, the value includes the version details for the update as provided by the Linux package manager. For example, `1.0.1.el7.3`. | If the `PROPERTIES` property for the resource type is `configurationassignments` ## Next steps - For details of sample queries, see [Sample query logs](sample-query-logs.md).-- To troubleshoot issues, see [Troubleshoot](troubleshoot.md) update management center (preview).+- To troubleshoot issues, see [Troubleshoot](troubleshoot.md) for Update Manager (preview). |
update-center | Quickstart On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/quickstart-on-demand.md | Title: Quickstart - deploy updates in using update management center in the Azure portal -description: This quickstart helps you to deploy updates immediately and view results for supported machines in update management center (preview) using the Azure portal. + Title: Quickstart - deploy updates using Update Manager in the Azure portal +description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager (preview) using the Azure portal. Last updated 04/21/2022 -Using the Update management center (preview) you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis or you can also take control by checking and installing updates manually. +Using the Update Manager (preview), you can update automatically at scale with the help of built-in policies and schedule updates on a recurring basis, or you can take control by checking and installing updates manually. This quickstart details how to perform manual assessment and apply updates on selected Azure virtual machines or Arc-enabled servers on-premises or in cloud environments. This quickstart details how to perform manual assessment and apply updates o ## Check updates -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). 1. Select **Getting started**, **On-demand assessment and updates**, select **Check for updates**. For the assessed machines that are reporting updates, you can configure [hotpatc To configure the settings on your machines, follow these steps: -1.
In **Update management center (Preview)|Getting started**, in **On-demand assessment and updates**, select **Update settings**. +1. In **Update Manager (preview)|Getting started**, in **On-demand assessment and updates**, select **Update settings**. In the **Change update settings** page, by default **Properties** is selected. 1. Select from the list of update settings to apply them to the selected machines. To configure the settings on your machines, follow these steps: As per the last assessment performed on the selected machines, you can now select resources and machines to install the updates. -1. In the **Update management center(Preview)|Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**. +1. In the **Update Manager (preview)|Getting started** page, in **On-demand assessment and updates**, select **Install updates by machines**. 1. In the **Install one-time updates** page, select one or more machines from the list in the **Machines** tab and click **Next**. As per the last assessment performed on the selected machines, you can now selec 1. In **Review + install**, verify the update deployment options and select **Install**. -A notification confirms that the installation of updates is in progress and after completion, you can view the results in the **Update management center**, **History** page. +A notification confirms that the installation of updates is in progress, and after completion, you can view the results in the **Update Manager**, **History** page. ## Next steps |
update-center | Sample Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/sample-query-logs.md | Title: Sample query logs and results from Update management center (preview) -description: The article provides details of sample query logs from update management center (preview) in Azure using Azure Resource Graph -+ Title: Sample query logs and results from Azure Update Manager (preview) +description: The article provides details of sample query logs from Azure Update Manager (preview) in Azure using Azure Resource Graph + Last updated 04/21/2022 maintenanceresources ``` ## Next steps-- Review logs and search results from update management center (preview) in Azure using [Azure Resource Graph](query-logs.md).-- Troubleshoot issues in update management center (preview), see the [Troubleshoot](troubleshoot.md).+- Review logs and search results from Update Manager (preview) in Azure using [Azure Resource Graph](query-logs.md). +- To troubleshoot issues in Update Manager (preview), see [Troubleshoot](troubleshoot.md). |
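The sample-query-logs change above centers on Azure Resource Graph queries for Update Manager data. As a companion sketch (not taken from the article itself), a query over the patch assessment logs described earlier might look like the following. The `patchassessmentresources` table and the property names are assumptions based on the log structure documented in the query-logs article; verify them in Azure Resource Graph Explorer before relying on them.

```kusto
// Sketch: list succeeded patch assessments with their available-update counts.
// Table and property names are assumptions -- confirm them in Resource Graph Explorer.
patchassessmentresources
| where type has "patchassessmentresults"
| where properties.status =~ "Succeeded"
| project id,
          lastModifiedDateTime = properties.lastModifiedDateTime,
          availablePatchCountByClassification = properties.availablePatchCountByClassification
```

You can run such a query in Resource Graph Explorer in the portal, or from the command line with `az graph query -q "<query>"` (requires the `resource-graph` CLI extension).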
update-center | Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/scheduled-patching.md | Title: Scheduling recurring updates in Update management center (preview) -description: The article details how to use update management center (preview) in Azure to set update schedules that install recurring updates on your machines. -+ Title: Scheduling recurring updates in Azure Update Manager (preview) +description: The article details how to use Azure Update Manager (preview) in Azure to set update schedules that install recurring updates on your machines. + Last updated 05/30/2023 -You can use update management center (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule for single VM and at scale. +You can use Update Manager (preview) in Azure to create and save recurring deployment schedules. You can create a schedule on a daily, weekly, or hourly cadence, specify the machines that must be updated as part of the schedule, and the updates to be installed. This schedule will then automatically install the updates as per the created schedule, for a single VM and at scale. -Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +Update Manager (preview) uses the maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see [Maintenance control documentation](/azure/virtual-machines/maintenance-control). ## Prerequisites for scheduled patching -1.
See [Prerequisites for Update management center (preview)](./overview.md#prerequisites) +1. See [Prerequisites for Update Manager (preview)](./overview.md#prerequisites). 1. Patch orchestration of the Azure machines should be set to **Customer Managed Schedules (Preview)**. For more information, see [how to enable schedule patching on existing VMs](prerequsite-for-schedule-patching.md#enable-schedule-patching-on-azure-vms). For Azure Arc-enabled machines, it isn't a requirement. > [!Note] Update management center (preview) uses maintenance control schedule instead of 1. All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently. 1. VMs in a common availability set are updated within Update Domain boundaries, and VMs across multiple Update Domains aren't updated concurrently. +## Configure reboot settings ++The registry keys listed in [Configuring Automatic Updates by editing the registry](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry) and [Registry keys used to manage restart](/windows/deployment/update/waas-restart#registry-keys-used-to-manage-restart) can cause your machines to reboot, even if you specify **Never Reboot** in the **Schedule** settings. Configure these registry keys to best suit your environment. + ## Service limits The following are the recommended limits for the mentioned indicators: The following are the recommended limits for the mentioned indicators: ## Schedule recurring updates on single VM >[!NOTE]-> You can schedule updates from the Overview or Machines blade in update management center (preview) page or from the selected VM. +> You can schedule updates from the Overview or Machines blade on the Update Manager (preview) page, or from the selected VM. # [From Overview blade](#tab/schedule-updates-single-overview) To schedule recurring updates on a single VM, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In **Update management center (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**. +1. In **Update Manager (preview)**, **Overview**, select your **Subscription**, and select **Schedule updates**. 1. In **Create new maintenance configuration**, you can create a schedule for a single VM. To schedule recurring updates on a single VM, follow these steps: 1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. > [!Note] - > Update management center (preview) doesn't support driver updates. + > Update Manager (preview) doesn't support driver updates. 1. In the **Tags** page, assign tags to maintenance configurations. To schedule recurring updates on a single VM, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update management center (Preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**. +1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machine and select **Schedule updates**. 1. In **Create new maintenance configuration**, you can create a schedule for a single VM, assign machine and tags. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. To schedule recurring updates at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update management center (Preview)**, **Overview**, select your **Subscription** and select **Schedule updates**. +1. In **Update Manager (preview)**, **Overview**, select your **Subscription** and select **Schedule updates**. 1. In the **Create new maintenance configuration** page, you can create a schedule for multiple machines. 
To schedule recurring updates at scale, follow these steps: 1. In the **Updates** page, specify the updates to include in the deployment such as update classification(s) or KB ID/ packages that must be installed when you trigger your schedule. > [!Note] - > Update management center (preview) doesn't support driver updates. + > Update Manager (preview) doesn't support driver updates. 1. In the **Tags** page, assign tags to maintenance configurations. To schedule recurring updates at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In **Update management center (Preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**. +1. In **Update Manager (preview)**, **Machines**, select your **Subscription**, select your machines and select **Schedule updates**. In **Create new maintenance configuration**, you can create a schedule for a single VM. Follow the procedure from step 3 listed in **From Overview blade** of [Schedule recurring updates on single VM](#schedule-recurring-updates-on-single-vm) to create a maintenance configuration and assign a schedule. A notification appears that the deployment is created. ## Attach a maintenance configuration A maintenance configuration can be attached to multiple machines. It can be attached to machines at the time of creating a new maintenance configuration or even after you've created one. - 1. In **Update management center**, select **Machines** and select your **Subscription**. + 1. In **Update Manager**, select **Machines** and select your **Subscription**. 1. Select your machine and in **Updates (Preview)**, select **Scheduled updates** to create a maintenance configuration or attach existing maintenance configuration to the scheduled recurring updates. 1. In **Scheduling**, select **Attach maintenance configuration**. 1. Select the maintenance configuration that you would want to attach and select **Attach**. 
You can create a new Guest OS update maintenance configuration or modify an exis ## Onboarding to Schedule using Policy -The update management center (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. The grouping using policy, keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags or regions to define the scope and use this feature for the built-in policies which you can customize as per your use-case. +Update Manager (preview) allows you to target a group of Azure or non-Azure VMs for update deployment via Azure Policy. Grouping with policy keeps you from having to edit your deployment to update machines. You can use subscription, resource group, tags, or regions to define the scope, and use this feature for the built-in policies, which you can customize as per your use case. > [!NOTE] > This policy also ensures that the patch orchestration property for Azure machines is set to **Customer Managed Schedules (Preview)** as it is a prerequisite for scheduled patching. Policy allows you to assign standards and assess compliance at scale. [Learn mor 1. Under **Basics**, in the **Assign policy** page: - In **Scope**, choose your subscription, resource group, and choose **Select**. - Select **Policy definition** to view a list of policies.- - In **Available Definitions**, select **Built in** for Type and in search, enter - *[Preview] Schedule recurring updates using Update Management Center* and click **Select**. + - In **Available Definitions**, select **Built in** for Type and in search, enter *[Preview] Schedule recurring updates using Update Manager*, and click **Select**.
:::image type="content" source="./media/scheduled-updates/dynamic-scoping-defintion.png" alt-text="Screenshot that shows on how to select the definition."::: To view the current compliance state of your existing resources: :::image type="content" source="./media/scheduled-updates/dynamic-scoping-policy-compliance.png" alt-text="Screenshot that shows on policy compliance."::: ## Check your scheduled patching run-You can check the deployment status and history of your maintenance configuration runs from the Update management center portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id). +You can check the deployment status and history of your maintenance configuration runs from the Update Manager portal. Follow [Update deployment history by maintenance run ID](./manage-multiple-machines.md#update-deployment-history-by-maintenance-run-id). ## Next steps -* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
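The scheduled-patching changes above describe the portal flow for creating a maintenance configuration. For reference, the equivalent can be sketched with the Azure CLI; the resource names below are placeholders, and the flag names come from the `az maintenance` extension (verify them against your installed CLI version, as this example is not taken from the article itself).

```azurecli
# Sketch: create an InGuestPatch maintenance configuration that recurs weekly.
# Names are placeholders; confirm flags with `az maintenance configuration create --help`.
az maintenance configuration create \
  --resource-group myMaintenanceRG \
  --resource-name myWeeklyPatchConfig \
  --location eastus \
  --maintenance-scope InGuestPatch \
  --maintenance-window-start-date-time "2023-12-30 07:00" \
  --maintenance-window-time-zone "UTC" \
  --maintenance-window-duration "03:55" \
  --maintenance-window-recur-every "Week Saturday" \
  --reboot-setting IfRequired \
  --extension-properties InGuestPatchMode="User"
```

Machines are then associated to the configuration (the portal's **Attach maintenance configuration** step) via configuration assignments, which is what the scheduled run acts on.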
update-center | Security Awareness Ubuntu Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/security-awareness-ubuntu-support.md | + + Title: Security awareness and Ubuntu Pro support in Azure Update Manager +description: Guidance on security awareness and Ubuntu Pro support in Azure Update Manager. +++ Last updated : 08/24/2023++++# Guidance on security awareness and Ubuntu Pro support ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. +++This article provides the details on security vulnerabilities and Ubuntu Pro support in Azure Update Manager. ++If you are using Ubuntu 18.04 LTS, you must take the necessary steps against security vulnerabilities, as the Ubuntu 18.04 image reached the end of its [standard security maintenance](https://ubuntu.com/blog/18-04-end-of-standard-support) in May 2023. As Canonical stopped publishing new security or critical updates after May 2023, the exposure of systems and data to potential security threats is high. Without software updates, you may experience performance or compatibility issues whenever new hardware or software is released. ++You can either upgrade to [Ubuntu Pro](https://ubuntu.com/azure/pro) or migrate to a newer LTS version to avoid any future disruption to the patching mechanisms. When you [upgrade to Ubuntu Pro](https://ubuntu.com/blog/enhancing-the-ubuntu-experience-on-azure-introducing-ubuntu-pro-updates-awareness), you can avoid any security or performance issues. +++## Ubuntu Pro on Azure Update Manager + +Azure Update Manager (AUM) assesses both Azure and Arc-enabled VMs to indicate any required action. AUM helps identify Ubuntu instances that don't have the available security updates and allows an upgrade to Ubuntu Pro from the Azure portal.
For example, an Ubuntu Server 18.04 LTS instance on Azure Update Manager has information about upgrading to Ubuntu Pro. +++You can continue to use the Azure Update Manager [capabilities](updates-maintenance-schedules.md) to remain secure after migrating to a supported model from Canonical. ++> [!NOTE] +> - [Ubuntu Pro](https://ubuntu.com/azure/pro) will provide support for 18.04 LTS from Canonical until 2028 through Expanded Security Maintenance (ESM). You can also [upgrade to Ubuntu Pro from Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/canonical.0001-com-ubuntu-pro-bionic?tab=Overview). +> - Ubuntu offers 20.04 LTS and 22.04 LTS as migration paths from 18.04 LTS. [Learn more](https://ubuntu.com/18-04/azure). ++ +## Next steps +- [An overview of Azure Update Manager](overview.md) +- [View updates for single machine](view-updates.md) +- [Deploy updates now (on-demand) for single machine](deploy-updates.md) +- [Schedule recurring updates](scheduled-patching.md) |
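The Ubuntu Pro guidance above stops at the portal upgrade. On the machine itself, ESM coverage can be checked with Canonical's `pro` client (ubuntu-advantage-tools); this is a sketch, assuming the client is installed as it is by default on recent Ubuntu images, and it is not taken from the article itself.

```bash
# Sketch: verify Expanded Security Maintenance (ESM) after moving to Ubuntu Pro.
# On Azure, Pro entitlement attaches automatically once the VM carries an
# Ubuntu Pro license; elsewhere, `sudo pro attach <token>` is needed first.
pro status                 # shows whether esm-infra / esm-apps are entitled and enabled
sudo pro enable esm-infra  # enable ESM for infrastructure packages if entitled
sudo apt update            # ESM patches then surface through the normal apt channels
```

Once ESM updates flow through apt, Update Manager assesses and installs them like any other package updates.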
update-center | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/support-matrix.md | Title: Update management center (preview) support matrix + Title: Azure Update Manager (preview) support matrix description: Provides a summary of supported regions and operating system settings.-+ Last updated 07/11/2023-# Support matrix for update management center (preview) +# Support matrix for Azure Update Manager (preview) -This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by update management center (preview) including the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers. +This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Update Manager (preview) including the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure VMs or machines managed by Arc-enabled servers. ## Update sources supported -**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the WSUS's last synchronization with Microsoft update, the results in the update management center (preview) might differ to what the Microsoft update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). 
To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations) +**Windows**: [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) reports to Microsoft Update by default, but you can configure it to report to [Windows Server Update Services (WSUS)](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). If you configure WUA to report to WSUS, based on the WSUS's last synchronization with Microsoft update, the results in the Update Manager (preview) might differ from what the Microsoft update shows. You can specify sources for scanning and downloading updates using [specify intranet Microsoft Update service location](/windows/deployment/update/waas-wu-settings?branch=main#specify-intranet-microsoft-update-service-location). To restrict machines to the internal update service, see [Do not connect to any Windows Update Internet locations](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates?branch=main#do-not-connect-to-any-windows-update-internet-locations) -**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in update management center (preview) depend on where the machines are configured to report. +**Linux**: You can configure Linux machines to report to a local or public YUM or APT package repository. The results shown in Update Manager (preview) depend on where the machines are configured to report. ## Types of updates supported ### Operating system updates-Update management center (preview) supports operating system updates for both Windows and Linux.
+Update Manager (preview) supports operating system updates for both Windows and Linux. > [!NOTE]-> Update management center (preview) doesn't support driver Updates. +> Update Manager (preview) doesn't support driver updates. ### First party updates on Windows By default, the Windows Update client is configured to provide updates only for the Windows operating system. If you enable the **Give me updates for other Microsoft products when I update Windows** setting, you also receive updates for other Microsoft products, including security patches for Microsoft SQL Server and other Microsoft software. Use one of the following options to perform the settings change at scale: -- For Servers configured to patch on a schedule from Update management center (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than server 2016, Run the following PowerShell script on the server you want to change.+- For servers configured to patch on a schedule from Update Manager (that has the VM PatchSettings set to AutomaticByPlatform = Azure-Orchestrated), and for all Windows Servers running on an earlier operating system than Server 2016, run the following PowerShell script on the server you want to change.
```powershell $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager") Use one of the following options to perform the settings change at scale: $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" $ServiceManager.AddService2($ServiceId,7,"") ```-- For servers running Server 2016 or later which are not using Update management center scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store).+- For servers running Server 2016 or later which aren't using Update Manager scheduled patching (that has the VM PatchSettings set to AutomaticByOS = Azure-Orchestrated) you can use Group Policy to control this by downloading and using the latest Group Policy [Administrative template files](https://learn.microsoft.com/troubleshoot/windows-client/group-policy/create-and-manage-central-store). > [!NOTE] > Run the following PowerShell script on the server to disable first party updates. Use one of the following options to perform the settings change at scale: ### Third-party updates -**Windows**: Update Management relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows update management to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher). +**Windows**: Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. 
Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher). -**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it is scanned when it performs software update operations. The package won't be available for assessment and installation if you remove it. +**Linux**: If you include a specific third party software repository in the Linux package manager repository location, it's scanned when it performs software update operations. The package won't be available for assessment and installation if you remove it. ++> [!NOTE] +> Update Manager does not support managing the Microsoft Configuration Manager client. ## Supported regions -Update management center (preview) will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud where you can use update management center (preview). +Update Manager (preview) will scale to all regions for both Azure VMs and Azure Arc-enabled servers. Listed below are the Azure public cloud where you can use Update Manager (preview). # [Azure virtual machine](#tab/azurevm) -Update management center (preview) is available in all Azure public regions where compute virtual machines are available. +Update Manager (preview) is available in all Azure public regions where compute virtual machines are available. # [Azure Arc-enabled servers](#tab/azurearc)-Update management center (preview) is supported in the following regions currently. It implies that VMs must be in below regions: +Update Manager (preview) is supported in the following regions currently. 
It implies that VMs must be in below regions: **Geography** | **Supported Regions** | Africa | South Africa North Asia Pacific | East Asia </br> South East Asia Australia | Australia East Brazil | Brazil South-Canada | Canada Central +Canada | Canada Central </br> Canada East Europe | North Europe </br> West Europe France | France Central India | Central India Japan | Japan East Korea | Korea Central+Sweden | Sweden Central Switzerland | Switzerland North United Kingdom | UK South </br> UK West United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3 United States | Central US </br> East US </br> East US 2</br> North Central US < > [!NOTE] > - All operating systems are assumed to be x64. x86 isn't supported for any operating system.-> - Update management center (preview) doesn't support CIS hardened images. +> - Update Manager (preview) doesn't support CIS hardened images. # [Azure VMs](#tab/azurevm-os) > [!NOTE]-> Currently, update management center has the following limitations regarding the operating system support: -> - Marketplace images other than the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported. -> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. However, you can **use on-demand operations such as one-time update and check for updates** in update management center (preview). +> Currently, Update Manager has the following limitation regarding the operating system support: +> - [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) and **VMs created by Azure Migrate, Azure Backup, Azure Site Recovery** aren't fully supported for now. 
However, you can **use on-demand operations such as one-time update and check for updates** in Update Manager (preview). >-> For the above limitations, we recommend that you use [Automation update management](../automation/update-management/overview.md) till the support is available in Update management center (preview). --**Marketplace/PIR images** --Currently, we support a combination of Offer, Publisher, and Sku of the image. Ensure that you match all the three to confirm support. For more information, see [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images). --**Custom images** --We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. Table below lists the operating systems that we support for generalized images. Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update manage center to manage updates on custom images. +> For the above limitation, we recommend that you use [Automation Update management](../automation/update-management/overview.md) till the support is available in Update Manager (preview). +++### Marketplace/PIR images ++The Marketplace image in Azure has the following attributes: +- **Publisher** - The organization that creates the image. Examples: Canonical, MicrosoftWindowsServer +- **Offer**- The name of the group of related images created by the publisher. Examples: UbuntuServer, WindowsServer +- **SKU**- An instance of an offer, such as a major release of a distribution. Examples: 18.04LTS, 2019-Datacenter +- **Version** - The version number of an image SKU. ++Azure Update Manager supports the following operating system versions. However, you could experience failures if there are any configuration changes on the VMs such as package or repository. 
++#### Windows operating systems ++| **Publisher**| **Version(s)** +|-|-| +|Microsoft Windows Server | 1709, 1803, 1809, 2012, 2016, 2019, 2022| +|Microsoft Windows Server HPC Pack | 2012, 2016, 2019 | +|Microsoft SQL Server | 2008, 2012, 2014, 2016, 2017, 2019, 2022 | +|Microsoft Visual Studio | ws2012r2, ws2016, ws2019, ws2022 | +|Microsoft Azure Site Recovery | Windows 2012 | +|Microsoft Biz Talk Server | 2016, 2020 | +|Microsoft DynamicsAx | ax7 | +|Microsoft Power BI | 2016, 2017, 2019, 2022 | +|Microsoft Sharepoint | sp* | ++#### Linux operating systems ++| **Publisher**| **Version(s)** +|-|-| +|Canonical | Ubuntu 16.04, 18.04, 20.04, 22.04 | +|RedHat | RHEL 7, 8, 9| +|Openlogic | CentOS 7| +|SUSE 12 |sles, sles-byos, sap, sap-byos, sapcal, sles-standard | +|SUSE 15 | basic, hpc, opensuse, sles, sap, sapcal| +|Oracle Linux | 7*, ol7*, ol8*, ol9* | +|Oracle Database | 21, 19-0904, 18.*| ++#### Unsupported operating systems ++The following table lists the operating systems for marketplace images that aren't supported: ++| **Publisher**| **OS Offer** | **SKU**| +|-|-|--| +|OpenLogic | CentOS | 8* | +|OpenLogic | centos-hpc| * | +|Oracle | Oracle-Linux | 8, 8-ci, 81, 81-ci, 81-gen2, ol82, ol8_2-gen2, ol82-gen2, ol83-lvm, ol83-lvm-gen2, ol84-lvm, ol84-lvm-gen2 | +|Red Hat | RHEL | 74-gen2 | +|Red Hat | RHEL-HANA | 7.4, 7.5, 7.6, 8.1, 81_gen2 | +|Red Hat | RHEL-SAP | 7.4, 7.5, 7.7 | +|Red Hat | RHEL-SAP-HANA | 7.5 | +|Microsoft SQL Server | SQL 2019-SLES* | * | +|Microsoft SQL Server | SQL 2019-RHEL7 | * | +|Microsoft SQL Server | SQL 2017-RHEL7 | * | +|Microsoft | microsoft-ads |*.* | +|SUSE| sles-sap-15-*-byos | gen *| ++### Custom images ++We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. The table below lists the operating systems that we support for generalized images. 
Refer to [custom images (preview)](manage-updates-customized-images.md) for instructions on how to start using Update Manager (preview) to manage updates on custom images. |**Windows Operating System**| |-- | The following table lists the operating systems that aren't supported: | Azure Kubernetes Nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/azure/aks/node-updates-kured).| -As the Update management center (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager, or Windows Update client are enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md). +As Update Manager (preview) depends on your machine's OS package manager or update service, ensure that the Linux package manager or the Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [configure Windows Update settings](configure-wu-agent.md). ## Next steps |
update-center | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md | Title: Troubleshoot known issues with update management center (preview) -description: The article provides details on the known issues and troubleshooting any problems with update management center (preview). -+ Title: Troubleshoot known issues with Azure Update Manager (preview) +description: The article provides details on known issues and how to troubleshoot problems with Azure Update Manager (preview). + Last updated 05/30/2023 -# Troubleshoot issues with update management center (preview) +# Troubleshoot issues with Azure Update Manager (preview) -This article describes the errors that might occur when you deploy or use update management center (preview), how to resolve them and the known issues and limitations of scheduled patching. +This article describes the errors that might occur when you deploy or use Update Manager (preview), how to resolve them, and the known issues and limitations of scheduled patching. ## General troubleshooting If you don't want any patch installation to be orchestrated by Azure or aren't u ### Cause -The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Management relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Management can't properly report on the patches that are needed or installed. +The Update Agent (Windows Update Agent on Windows; the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed. 
### Resolution To review the logs related to all actions performed by the extension, on Windows - For concurrent/conflicting schedules, only one schedule will be triggered. The other schedule will be triggered once a schedule is finished. - If a machine is newly created, the schedule might have 15 minutes of schedule trigger delay in case of Azure VMs.-- Policy definition *[Preview]: Schedule recurring updates using Update Management Center* with version 1.0.0-preview successfully remediates resources however, it will always show them as non-compliant. The current value of the existence condition is a placeholder that will always evaluate to false.+- Policy definition *[Preview]: Schedule recurring updates using Update Manager* with version 1.0.0-preview successfully remediates resources; however, it will always show them as non-compliant. The current value of the existence condition is a placeholder that will always evaluate to false. ### Scenario: Unable to apply patches for the shutdown machines Setting a longer time range for maximum duration when triggering an [on-demand u ## Next steps -* To learn more about Azure Update management center (preview), see the [Overview](overview.md). -* To view logged results from all your machines, see [Querying logs and results from update management center (preview)](query-logs.md). +* To learn more about Azure Update Manager (preview), see the [Overview](overview.md). +* To view logged results from all your machines, see [Querying logs and results from Update Manager (preview)](query-logs.md). |
update-center | Tutorial Dynamic Grouping For Scheduled Patching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/tutorial-dynamic-grouping-for-scheduled-patching.md | Title: Schedule updates on Dynamic scoping (preview). description: In this tutorial, you learn how to group machines dynamically and apply updates at scale.-+ Last updated 07/05/2023 In this tutorial, you learn how to: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. -## Prerequisites --- Patch Orchestration must be set to Customer Managed Schedules (Preview). This sets patch mode to AutomaticByPlatform and the **BypassPlatformSafetyChecksOnUserSchedule** = *True*.-- Associate a Schedule with the VM. ## Create a Dynamic scope To create a dynamic scope, follow these steps: -1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update management center (preview). +1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to Update Manager (preview). 1. Select **Overview** > **Schedule updates** > **Create a maintenance configuration**. 1. On the **Create a maintenance configuration** page, enter the details in the **Basics** tab and select **Maintenance scope** as *Guest* (Azure VM, Arc-enabled VMs/servers). 1. Select **Dynamic Scopes** and follow the steps to [Add Dynamic scope](manage-dynamic-scoping.md#add-a-dynamic-scope-preview). |
update-center | Updates Maintenance Schedules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/updates-maintenance-schedules.md | Title: Updates and maintenance in update management center (preview). -description: The article describes the updates and maintenance options available in Update management center (preview). -+ Title: Updates and maintenance in Azure Update Manager (preview). +description: The article describes the updates and maintenance options available in Azure Update Manager (preview). + Last updated 05/23/2023 -# Update options in update management center (preview) +# Update options in Azure Update Manager (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article provides an overview of the various update and maintenance options available by update management center (preview). +This article provides an overview of the various update and maintenance options available in Update Manager (preview). -Update management center (preview) provides you the flexibility to take an immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext) and so on. +Update Manager (preview) provides you with the flexibility to take immediate action or schedule an update within a defined maintenance window. It also supports new patching methods such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md), [Hotpatching](../automanage/automanage-hotpatch.md?context=%2fazure%2fvirtual-machines%2fcontext%2fcontext), and so on. 
## Update Now/One-time update -Update management center (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm). +Update Manager (preview) allows you to secure your machines immediately by installing updates on demand. To perform the on-demand updates, see [Check and install one time updates](deploy-updates.md#install-updates-on-single-vm). + ## Scheduled patching You can create a schedule on a daily, weekly, or hourly cadence as per your requirement, specify the machines that must be updated as part of the schedule, and the updates that you must install. The schedule will then automatically install the updates as per the specifications. -Update management center (preview) uses maintenance control schedule instead of creating its own schedules. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). +Update Manager (preview) uses maintenance control schedules instead of creating its own. Maintenance control enables customers to manage platform updates. For more information, see the [Maintenance control documentation](/azure/virtual-machines/maintenance-control). Start using [scheduled patching](scheduled-patching.md) to create and save recurring deployment schedules. > [!NOTE] Start using [scheduled patching](scheduled-patching.md) to create and save recur This mode of patching lets the Azure platform automatically download and install all the security and critical updates on your machines every month and apply them on your machines following the availability-first principles. For more information, see [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). 
-In **Update management center** home page, go to **Update Settings** blade, select Patch orchestration as **Azure Managed - Safe Deployment** value to enable this VM property. +On the **Update Manager** home page, go to the **Update Settings** blade and select **Azure Managed - Safe Deployment** as the **Patch orchestration** value to enable this VM property. ## Windows automatic updates This mode of patching allows operating system to automatically install updates a Hotpatching allows you to install updates on supported Windows Server Azure Edition virtual machines without requiring a reboot after installation. It reduces the number of reboots required on your mission critical application workloads running on Windows Server. For more information, see [Hotpatch for new virtual machines](../automanage/automanage-hotpatch.md) -Hotpatching property is available as a setting in Update management center (preview) which you can enable by using Update settings flow. Refer to detailed instructions [here](manage-update-settings.md#configure-settings-on-single-vm) +The Hotpatching property is available as a setting in Update Manager (preview), which you can enable by using the Update settings flow. Refer to the detailed instructions [here](manage-update-settings.md#configure-settings-on-single-vm). :::image type="content" source="media/updates-maintenance/hot-patch-inline.png" alt-text="Screenshot that shows the hotpatch option." lightbox="media/updates-maintenance/hot-patch-expanded.png"::: ## Next steps -* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview). +* To view update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) Update Manager (preview). |
update-center | View Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/view-updates.md | Title: Check update compliance in Update management center (preview) -description: The article details how to use Azure Update management center (preview) in the Azure portal to assess update compliance for supported machines. -+ Title: Check update compliance in Azure Update Manager (preview) +description: The article details how to use Azure Update Manager (preview) in the Azure portal to assess update compliance for supported machines. + Last updated 05/31/2023 -# Check update compliance with update management center (preview) +# Check update compliance with Azure Update Manager (preview) **Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. -This article details how to check the status of available updates on a single VM or multiple VMs using update management center (preview). +This article details how to check the status of available updates on a single VM or multiple VMs using Update Manager (preview). ## Check updates on single VM >[!NOTE]-> You can check the updates from the Overview or Machines blade in update management center (preview) page or from the selected VM. +> You can check the updates from the Overview or Machines blade in Update Manager (preview) page or from the selected VM. # [From Overview blade](#tab/singlevm-overview) 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update management center (Preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. +1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. 1. In **Select resources and check for updates**, choose the machine for which you want to check the updates and select **Check for updates**. 
This article details how to check the status of available updates on a single VM 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update management center (preview), **Machines**, select your **Subscription** to view all your machines. +1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines. 1. Select your machine from the checkbox and select **Check for updates**, **Assess now** or alternatively, you can select your machine, in **Updates Preview**, select **Assess updates**, and in **Trigger assess now**, select **OK**. This article details how to check the status of available updates on a single VM 1. Select your virtual machine and the **virtual machines | Updates** page opens. 1. Under **Operations**, select **Updates**.-1. In **Updates**, select **Go to Updates using Update Management Center**. +1. In **Updates**, select **Go to Updates using Update Manager**. :::image type="content" source="./media/view-updates/resources-check-updates.png" alt-text="Screenshot showing selection of updates from Home page."::: To check the updates on your machines at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update management center (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. +1. In Update Manager (preview), **Overview**, select your **Subscription** to view all your machines and select **Check for updates**. 1. In **Select resources and check for updates**, choose your machines for which you want to check the updates and select **Check for updates**. To check the updates on your machines at scale, follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com). -1. In Update management center (preview), **Machines**, select your **Subscription** to view all your machines. +1. In Update Manager (preview), **Machines**, select your **Subscription** to view all your machines. 1. 
Select **Select all** to choose all your machines and select **Check for updates**. 1. Select **Assess now** to perform the assessment. - A notification appears when the operation is initiated and completed. After a successful scan, the **Update management center (Preview) | Machines** page is refreshed to display the updates. + A notification appears when the operation is initiated and completed. After a successful scan, the **Update Manager (preview) | Machines** page is refreshed to display the updates. > [!NOTE]-> In update management center (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which is retrieved from a local or remote repository. +> In Update Manager (preview), you can initiate a software updates compliance scan on the machine to get the current list of operating system (guest) updates including the security and critical updates. On Windows, the software update scan is performed by the Windows Update Agent. On Linux, the software update scan is performed using OVAL-compatible tools to test for the presence of vulnerabilities based on the OVAL Definitions for that platform, which are retrieved from a local or remote repository. ## Next steps * Learn about deploying updates on your machines to maintain security compliance by reading [deploy updates](deploy-updates.md).-* To view the update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md). -* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update management center (preview). 
+* To view the update assessment and deployment logs generated by Update Manager (preview), see [query logs](query-logs.md). +* To troubleshoot issues, see [Troubleshoot](troubleshoot.md) Azure Update Manager (preview). |
update-center | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-new.md | Title: What's new in Update management center (Preview) -description: Learn about what's new and recent updates in the Update management center (Preview) service. -+ Title: What's new in Azure Update Manager (preview) +description: Learn about what's new and recent updates in the Azure Update Manager (preview) service. + Last updated 07/05/2023 -# What's new in Update management center (Preview) +# What's new in Azure Update Manager (preview) -[Update management center (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on the other cloud platforms from a single dashboard. This article summarizes new releases and features in Update management center (Preview). +[Azure Update Manager (preview)](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. This article summarizes new releases and features in Update Manager (preview). ## July 2023 Dynamic scope (preview) is an advanced capability of schedule patching. You can ### Customized image support -Update management center (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images.See the [list of supported marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images). +Update Manager (preview) now supports [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images, and a combination of offer, publisher, and SKU for Marketplace/PIR images. See the [list of supported operating systems](support-matrix.md#supported-operating-systems). 
### Multi-subscription support -The limit on the number of subscriptions that you can manage to use the Update management center (preview) portal has now been removed. You can now manage all your subscriptions using the update management center (preview) portal. +The limit on the number of subscriptions that you can manage using the Update Manager (preview) portal has now been removed. You can now manage all your subscriptions using the Update Manager (preview) portal. ## April 2023 A new patch orchestration - **Customer Managed Schedules (Preview)** is introduc ### New region support -Update management center (Preview) now supports new five regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). +Update Manager (preview) now supports five new regions for Azure Arc-enabled servers. [Learn more](support-matrix.md#supported-regions). ## October 2022 |
update-center | Whats Upcoming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/whats-upcoming.md | Title: What's upcoming in Update management center (Preview) -description: Learn about what's upcoming and updates in the Update management center (Preview) service. -+ Title: What's upcoming in Azure Update Manager (preview) +description: Learn about what's upcoming and updates in the Azure Update Manager (preview) service. + Last updated 06/01/2023 -# What's upcoming in Update management center (Preview) +# What are the upcoming features in Azure Update Manager (preview) -The primary [what's New in Update management center (preview)](whats-new.md) contains updates of feature releases and this article lists all the upcoming features. +The primary [What's new in Azure Update Manager (preview)](whats-new.md) article contains updates of feature releases, and this article lists all the upcoming features. ## Expanded support for operating system and VM images Expanded support for [specialized images](../virtual-machines/linux/imaging.md#specialized-images), VMs created by Azure Migrate, Azure Backup, Azure Site Recovery, and marketplace images is upcoming in Q3 CY 2023. Until then, we recommend that you continue using [Automation update management](../automation/update-management/overview.md) for these images. [Learn more](support-matrix.md#supported-operating-systems). -## Update management center will be GA soon -Update management center will be declared GA soon. +## Update Manager will be GA soon -Update Manager will be declared GA soon. ++## Prescript and postscript ++The prescript and postscript will be available soon. ++## SQL Server patching +SQL Server patching using Update Manager will be available soon. ## Next steps |
update-center | Workbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/workbooks.md | Title: An overview of Workbooks description: This article provides information on how workbooks provide a flexible canvas for data analysis and the creation of rich visual reports.-+ Last updated 01/16/2023 -Workbooks help you to create visual reports that help in data analysis. This article describes the various features that Workbooks offer in Update management center (preview). +Workbooks help you create visual reports for data analysis. This article describes the various features that Workbooks offer in Update Manager (preview). ## Key benefits - Provides a canvas for data analysis and creation of visual reports The gallery lists all the saved workbooks and templates for your workspace. You - In the **Recently modified** tile, you can view and edit the workbooks. -- In the **Update management center** tile, you can view the following summary:+- In the **Update Manager** tile, you can view the following summary: :::image type="content" source="./media/workbooks/workbooks-summary-inline.png" alt-text="Screenshot of workbook summary." lightbox="./media/workbooks/workbooks-summary-expanded.png"::: |
virtual-desktop | Administrative Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/administrative-template.md | Title: Administrative template for Azure Virtual Desktop description: Learn how to use the administrative template (ADMX) for Azure Virtual Desktop with Intune or Group Policy to configure certain settings on your session hosts. Previously updated : 06/28/2023 Last updated : 08/25/2023 We've created an administrative template for Azure Virtual Desktop to configure You can configure the following features with the administrative template: - [Graphics related data logging](connection-latency.md#connection-graphics-data-preview)-- [Screen capture protection](screen-capture-protection.md) - [RDP Shortpath for managed networks](rdp-shortpath.md?tabs=managed-networks)+- [Screen capture protection](screen-capture-protection.md) - [Watermarking](watermarking.md) ## Prerequisites To configure the administrative template, select a tab for your scenario and fol # [Intune](#tab/intune) -> [!IMPORTANT] -> The administrative template for Azure Virtual Desktop is only available with the *templates* profile type, not the *settings catalog*. You can use the templates profile type with Windows 10 and Windows 11, but you can't use this with multi-session versions of these operating systems as they only support the settings catalog. You'll need to use one of the other methods with multi-session. - 1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/). -1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Templates** profile type and **Administrative templates** template name. +1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type. -1. 
Browse to **Computer configuration** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see policy settings for Azure Virtual Desktop available for you to configure, as shown in the following screenshot: +1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Azure Virtual Desktop**. You should see settings in the Azure Virtual Desktop subcategory available for you to configure, as shown in the following screenshot: - :::image type="content" source="media/administrative-template/azure-virtual-desktop-intune-template.png" alt-text="Screenshot of the Intune admin center showing Azure Virtual Desktop policy settings." lightbox="media/administrative-template/azure-virtual-desktop-intune-template.png"::: + :::image type="content" source="media/administrative-template/azure-virtual-desktop-intune-settings-catalog.png" alt-text="Screenshot of the Intune admin center showing Azure Virtual Desktop settings." lightbox="media/administrative-template/azure-virtual-desktop-intune-settings-catalog.png"::: -1. Apply the configuration profile to your session hosts, then restart your clients. +1. Once you've configured settings, apply the configuration profile to your session hosts, then restart your session hosts for the settings to take effect. # [Group Policy (AD)](#tab/group-policy-domain) To configure the administrative template, select a tab for your scenario and fol :::image type="content" source="media/administrative-template/azure-virtual-desktop-gpo.png" alt-text="Screenshot of the Group Policy Management Editor showing Azure Virtual Desktop policy settings." lightbox="media/administrative-template/azure-virtual-desktop-gpo.png"::: -1. Apply the policy to your session hosts, then restart your session hosts. +1. 
Once you've configured settings, apply the policy to your session hosts, then restart your session hosts for the settings to take effect. # [Local Group Policy](#tab/local-group-policy) To configure the administrative template, select a tab for your scenario and fol :::image type="content" source="media/administrative-template/azure-virtual-desktop-gpo.png" alt-text="Screenshot of the Local Group Policy Editor showing Azure Virtual Desktop policy settings." lightbox="media/administrative-template/azure-virtual-desktop-gpo.png"::: -1. Restart your session hosts for the settings to take effect. +1. Once you've configured settings, restart your session hosts for the settings to take effect. |
virtual-desktop | Host Pool Load Balancing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/host-pool-load-balancing.md | Title: Azure Virtual Desktop host pool load-balancing - Azure -description: Learn about host pool load-balancing algorithms for a Azure Virtual Desktop environment. -+ Title: Host pool load balancing algorithms in Azure Virtual Desktop - Azure +description: Learn about the host pool load balancing algorithms available for pooled host pools in Azure Virtual Desktop. + Previously updated : 09/19/2022-- Last updated : 08/25/2023+ -# Host pool load-balancing algorithms ->[!IMPORTANT] ->This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/host-pool-load-balancing-2019.md). +# Host pool load balancing algorithms in Azure Virtual Desktop -Azure Virtual Desktop supports two load-balancing algorithms. Each algorithm determines which session host will host a user's session when they connect to a resource in a pooled host pool. The information in this article only applies to pooled host pools. +Azure Virtual Desktop supports two load balancing algorithms for pooled host pools. Each algorithm determines which session host is used when a user starts a remote session. Load balancing doesn't apply to personal host pools because users always have a 1:1 mapping to a session host within the host pool. -The following load-balancing algorithms are available in Azure Virtual Desktop: +The following load balancing algorithms are available for pooled host pools: -- Breadth-first load balancing allows you to evenly distribute user sessions across the session hosts in a host pool. 
You don't have to specify a maximum session limit for the number of sessions.-- Depth-first load balancing allows you to saturate a session host with user sessions in a host pool. You have to specify a maximum session limit for the number of sessions. Once the first session host reaches its session limit threshold, the load balancer directs any new user connections to the next session host in the host pool until it reaches its limit, and so on.+- **Breadth-first**, which aims to evenly distribute new user sessions across the session hosts in a host pool. You don't have to specify a maximum session limit for the number of sessions. -Each host pool can only configure one type of load-balancing specific to it. However, both load-balancing algorithms share the following behaviors no matter which host pool they're in: -- If a user already has an active or disconnected session in the host pool and signs in again, the load balancer will successfully redirect them to the session host with their existing session. This behavior applies even if that session host's AllowNewConnections property is set to False (drain mode is enabled).-- If a user doesn't already have a session in the host pool, then the load balancer won't consider session hosts whose AllowNewConnections property is set to False during load balancing.-- If you lower the maximum session limit on a session host while it has active user sessions, the change won't affect the active user sessions.+You can only configure one load balancing algorithm at a time per pooled host pool, but you can change which one is used after a host pool is created.
However, both load balancing algorithms share the following behaviors: -## Breadth-first load-balancing algorithm +- If a user already has an active or disconnected session in the host pool and signs in again, the load balancer will successfully redirect them to the session host with their existing session. This behavior applies even if [drain mode](drain-mode.md) has been enabled for that session host. -The breadth-first load-balancing algorithm allows you to distribute user sessions across session hosts to optimize for session performance. This algorithm is ideal for organizations that want to provide the best experience for users connecting to their pooled virtual desktop environment. +- If a user doesn't already have a session on a session host in the host pool, the load balancer doesn't consider a session host where drain mode has been enabled. -The breadth-first algorithm first queries session hosts that allow new connections. The algorithm then selects a session host randomly from half the set of session hosts with the least number of sessions. For example, if there are nine machines with 11, 12, 13, 14, 15, 16, 17, 18, and 19 sessions, a new session you create won't automatically go to the first machine. Instead, it can go to any of the first five machines with the lowest number of sessions (11, 12, 13, 14, 15). +- If you lower the maximum session limit on a session host while it has active user sessions, the change doesn't affect existing user sessions. -## Depth-first load-balancing algorithm +## Breadth-first load balancing algorithm -The depth-first load-balancing algorithm allows you to saturate one session host at a time to optimize for scale down scenarios. This algorithm is ideal for cost-conscious organizations that want more granular control on the number of virtual machines they've allocated for a host pool. +The breadth-first load balancing algorithm aims to distribute user sessions across session hosts to optimize for session performance. 
Breadth-first is ideal for organizations that want to provide the best experience for users connecting to their remote resources as session host resources, such as CPU, memory, and disk, are generally less contended. -The depth-first algorithm first queries session hosts that allow new connections and haven't gone over their maximum session limit. The algorithm then selects the session host with highest number of sessions. If there's a tie, the algorithm selects the first session host in the query. +The breadth-first algorithm first queries session hosts in a host pool that allow new connections. The algorithm then selects a session host randomly from half the set of available session hosts with the fewest sessions. For example, if there are nine session hosts with 11, 12, 13, 14, 15, 16, 17, 18, and 19 sessions, a new session doesn't automatically go to the session host with the fewest sessions. Instead, it can go to any of the first five session hosts with the fewest sessions at random. Due to the randomization, some sessions may not be evenly distributed across all session hosts. ++## Depth-first load balancing algorithm ++The depth-first load balancing algorithm aims to saturate one session host at a time. This algorithm is ideal for cost-conscious organizations that want more granular control on the number of session hosts available in a host pool, enabling you to more easily scale down when there are fewer users. ++The depth-first algorithm first queries session hosts that allow new connections and haven't reached their maximum session limit. The algorithm then selects the session host with the most sessions. If there's a tie, the algorithm selects the first session host from the query. ++You must [set a maximum session limit](configure-host-pool-load-balancing.md#configure-depth-first-load-balancing) when using the depth-first algorithm.
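As an illustration only, the selection logic described for the two algorithms can be sketched in Python. The broker's actual implementation isn't published, so details such as the rounding of "half the set" and the shape of the host records are assumptions here:

```python
import random

def breadth_first(session_hosts):
    """Pick a host at random from the half of the available hosts with the fewest sessions."""
    available = [h for h in session_hosts if h["allow_new_connections"]]
    available.sort(key=lambda h: h["sessions"])
    # Rounding the half up on odd counts matches the nine-host example above,
    # where any of the first five hosts can be chosen.
    candidates = available[: (len(available) + 1) // 2]
    return random.choice(candidates)

def depth_first(session_hosts, max_session_limit):
    """Pick the fullest available host that is still under the session limit."""
    available = [
        h for h in session_hosts
        if h["allow_new_connections"] and h["sessions"] < max_session_limit
    ]
    if not available:
        return None  # Every host is full: raise the limit or add session hosts.
    # max() keeps the first host on a tie, matching "selects the first session
    # host from the query" in the description above.
    return max(available, key=lambda h: h["sessions"])

hosts = [{"name": f"sh{i}", "sessions": s, "allow_new_connections": True}
         for i, s in enumerate([11, 12, 13, 14, 15, 16, 17, 18, 19])]

print(breadth_first(hosts)["sessions"])    # one of 11..15, chosen at random
print(depth_first(hosts, 20)["sessions"])  # 19, the fullest host under the limit
```

Run against the nine-host example from the article, breadth-first only ever picks from the five least-loaded hosts, while depth-first always returns the fullest host that hasn't reached the limit.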
You can use Azure Virtual Desktop Insights to monitor [the number of sessions on each session host](insights-use-cases.md#session-host-utilization) and [session host performance](insights-use-cases.md#session-host-performance) to help determine the best maximum session limit for your environment. > [!IMPORTANT]-> The maximum session limit parameter is required when you use the depth-first load balancing algorithm. For the best possible user experience, make sure to change the maximum session host limit parameter to a number that best suits your environment. -> -> Once all session hosts have reached the maximum session limit, you will need to increase the limit or deploy more session hosts. +> Once all session hosts have reached the maximum session limit, you need to increase the limit or [add more session hosts to the host pool](add-session-hosts-host-pool.md). ++## Next steps ++- Learn how to [configure load balancing for a host pool](configure-host-pool-load-balancing.md). ++- Understand how [autoscale](autoscale-scenarios.md) can automatically scale the number of available session hosts in a host pool. |
virtual-desktop | Insights Use Cases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights-use-cases.md | + + Title: Use cases for Azure Virtual Desktop Insights - Azure Virtual Desktop +description: Learn about how using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop, including some use cases and example scenarios. +++ Last updated : 08/24/2023+++# Use cases for Azure Virtual Desktop Insights ++Using Azure Virtual Desktop Insights can help you understand your deployments of Azure Virtual Desktop. It can help with checks such as which client versions are connecting, opportunities for cost saving, or knowing if you have resource limitations or connectivity issues. If you make changes, you can continually validate that the changes have had the intended effect, and iterate if needed. This article provides some use cases for Azure Virtual Desktop Insights and example scenarios using the Azure portal. ++## Prerequisites ++- An existing host pool with session hosts, and a workspace [configured to use Azure Virtual Desktop Insights](insights.md). ++- You need to have had active sessions for a period of time before you can make informed decisions. ++## Connectivity ++Connectivity issues can have a severe impact on the quality and reliability of the end-user experience with Azure Virtual Desktop. Azure Virtual Desktop Insights can help you identify connectivity issues and understand where improvements can be made. ++### High latency ++High latency can cause poor quality and slowness of a remote session. Maintaining ideal interaction times requires latency to generally be below 100 milliseconds, with a session broadly becoming low quality above 200 ms. Azure Virtual Desktop Insights can help pinpoint gateway regions and users impacted by latency by looking at the *round-trip time*, so that you can more easily find cases of user impact that are related to connectivity.
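The latency thresholds above (under 100 ms ideal, over 200 ms low quality) lend themselves to simple bucketing when post-processing exported RTT data. This sketch is not part of Azure Virtual Desktop Insights; the function, bucket names, and sample median values are all illustrative:

```python
def classify_rtt(rtt_ms: float) -> str:
    """Bucket a round-trip time using the thresholds described above:
    under 100 ms is ideal for interaction; over 200 ms a session broadly
    becomes low quality. The bucket names are illustrative only."""
    if rtt_ms < 100:
        return "good"
    if rtt_ms <= 200:
        return "marginal"
    return "poor"

# Flag gateway regions whose median RTT exceeds the ideal threshold
# (hypothetical sample data, keyed by gateway region code).
medians = {"WUS": 45, "EUS": 95, "SAN": 210}
flagged = {region: classify_rtt(ms) for region, ms in medians.items()
           if classify_rtt(ms) != "good"}
print(flagged)  # {'SAN': 'poor'}
```

A report like this could be generated from the exported **RTT by gateway region** table to decide which regions and users to investigate first.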
++To view round-trip time: ++1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi). ++1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Connection Performance** tab. ++1. Review the section for **Round-trip time** and focus on the table for **RTT by gateway region** and the graph **RTT median and 95th percentile for all regions**. In the example below, most median latencies are under the ideal threshold of 100 ms, but several are higher. In many cases, the 95th percentile (p95) is substantially higher than the median, meaning that there are some users experiencing periods of higher latency. + + :::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-1.png" alt-text="A screenshot of a table and graph showing the round-trip time." lightbox="media/insights-use-cases/insights-connection-performance-latency-1.png"::: + +1. For the table **RTT by gateway region**, select **Median**, until the arrow next to it points down, to sort by the median latency in descending order. This order highlights gateways your users are reaching with the highest latency that could be having the most impact. Select a gateway to view the graph of its RTT median and 95th percentile, and filter the list of 20 top users by RTT median to the specific region. + + In this example, the **SAN** gateway region has the highest median latency, and the graph indicates that over time users are substantially over the threshold for poor connection quality. + + :::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-2.png" alt-text="A screenshot of a table and graph showing the round-trip time for a selected gateway." 
lightbox="media/insights-use-cases/insights-connection-performance-latency-2.png"::: + + The list of users can be used to identify who is being impacted by these issues. You can select the magnifying glass icon in the **Details** column to drill down further into the data. + + :::image type="content" source="media/insights-use-cases/insights-connection-performance-latency-3.png" alt-text="A screenshot of a table showing the round-trip time per user." lightbox="media/insights-use-cases/insights-connection-performance-latency-3.png"::: ++There are several possibilities for why latency may be higher than anticipated for some users, such as a poor Wi-Fi connection, or issues with their Internet Service Provider (ISP). However, with a list of impacted users, you have the ability to proactively contact and attempt to resolve end-user experience problems by understanding their network connectivity. ++You should periodically review the round-trip time in your environment and the overall trend to identify potential performance concerns. ++## Session host performance ++Issues with session hosts, such as where session hosts have too many sessions to cope with the workload end-users are running, can be a major cause of poor end-user experience. Azure Virtual Desktop Insights can provide detailed information about resource utilization and [user input delay](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json) to allow you to more easily and quickly find if users are impacted by limitations for resources like CPU or memory. ++To view session host performance: ++1. Sign in to the [Azure portal](https://portal.azure.com/). ++1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry to go to the Azure Virtual Desktop overview. ++1. Select **Host pools**, then select the name of the host pool for which you want to view session host performance. ++1. 
Select **Insights**, specify a **time range**, then select the **Host Performance** tab. ++1. Review the table for **Input delay by host** and the graph **Median input delay over time** to find a summary of the median and 95th percentile user input delay values for each session host in the host pool. Ideally the user input delay for each host should be below 100 milliseconds, and a lower value is better. ++ In the following example, the session hosts have a reasonable median user input delay, but occasionally values peak above the threshold of 100 ms, implying potential for impacting end-users. ++ :::image type="content" source="media/insights-use-cases/insights-session-host-performance-1.png" alt-text="A screenshot of a table and graph showing the input delay of session hosts." lightbox="media/insights-use-cases/insights-session-host-performance-1.png"::: + +1. If you find higher than expected user input delay (>100 ms), it can be useful to then look at the aggregated statistics for CPU, memory, and disk activity for the session hosts to see if there are periods of higher-than-expected utilization. The graphs for **Host CPU and memory metrics**, **Host disk timing metrics**, and **Host disk queue length** show either the aggregate across session hosts, or a selected session host's resource metrics. + + In this example there are some periods of higher disk read times that correlate with the higher user input delay above. + + :::image type="content" source="media/insights-use-cases/insights-session-host-performance-2.png" alt-text="A screenshot of graphs showing session host metrics." lightbox="media/insights-use-cases/insights-session-host-performance-2.png"::: + +1. For more information about a specific session host, select the **Host Diagnostics** tab. ++1. 
Review the section for **Performance counters** to see a quick summary of any devices that have crossed the specified thresholds for: + - Available MBytes (available memory) + - Page Faults/sec + - CPU Utilization + - Disk Space + - Input Delay per Session + + Selecting a parameter allows you to drill down and see the trend for a selected session host. In the following example, one session host had higher CPU usage (> 60%) for the selected duration (1 minute). ++ :::image type="content" source="media/insights-use-cases/insights-session-host-performance-3.png" alt-text="A screenshot showing values from the performance counters of session hosts." lightbox="media/insights-use-cases/insights-session-host-performance-3.png"::: ++In cases where a session host has extended periods of high resource utilization, it's worth considering increasing the [Azure VM size](../virtual-machines/sizes.md) of the session host to better accommodate user workloads. ++## Client version usage ++A common source of issues for end-users of Azure Virtual Desktop is using older clients that may either be missing new or updated features, or have known issues that have been resolved with more recent versions. Azure Virtual Desktop Insights contains a list of the different clients in use, as well as identifying clients that may be out of date. ++To view a list of users with outdated clients: ++1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi). ++1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Clients** tab. ++1. Review the section for **Users with potentially outdated clients (all activity types)**. A summary table shows the highest version level of each client found connecting to your environment (marked as **Newest**) in the selected time range, and the count of users using outdated versions (in parentheses).
++ In the below example, the newest version of the Microsoft Remote Desktop Client for Windows (MSRDC) is 1.2.4487.0, and 993 users are currently using a version older than that. It also shows a count of connections and the number of days behind the latest version the older clients are. ++ :::image type="content" source="media/insights-use-cases/insights-client-version-usage-1.png" alt-text="A screenshot showing a table of outdated clients." lightbox="media/insights-use-cases/insights-client-version-usage-1.png"::: ++1. To find more information, expand a client for a list of users using an outdated version of that client, their versions, and the date last seen connecting with that version. You can export the data using the button in the top right-hand corner of the table for communication with the users or monitor the propagation of updates. ++ :::image type="content" source="media/insights-use-cases/insights-client-version-usage-2.png" alt-text="A screenshot showing a table of users with outdated clients." lightbox="media/insights-use-cases/insights-client-version-usage-2.png"::: ++You should periodically review the versions of clients in use to ensure your users are getting the best experience. ++## Cost saving opportunities ++Understanding the utilization of session hosts can help illustrate where there's potential to reduce spend by using a scaling plan, resize virtual machines, or reduce the number of session hosts in the pool. Azure Virtual Desktop Insights can provide visibility into usage patterns to help you make the most informed decisions about how best to manage your resources based on real user usage. ++### Session host utilization ++Knowing when your session hosts are in peak demand, or when there are few or no sessions can help you make decisions about how to manage your session hosts. You can use [autoscale](autoscale-scenarios.md) to scale session hosts based on usage patterns. 
Azure Virtual Desktop Insights can help you identify broad patterns of user activity across multiple host pools. If you find opportunities to scale session hosts, you can use this information to [create a scaling plan](autoscale-scaling-plan.md). ++To view session host utilization: ++1. Sign in to Azure Virtual Desktop Insights in the Azure portal by browsing to [https://aka.ms/avdi](https://aka.ms/avdi). ++1. From the drop-down lists, select one or more **subscriptions**, **resource groups**, **host pools**, and specify a **time range**, then select the **Utilization** tab. ++1. Review the **Session history** chart, which displays the number of active and idle (disconnected) sessions over time. Identify any periods of high activity, and periods of low activity from the peak user session count and the time period in which the peaks occur. If you find a regular, repeated pattern of activity, this usually implies there's a good opportunity to implement a scaling plan. + + In this example, the graph shows the number of users sessions over the course of a week. Peaks occur at around midday on weekdays, and there's a noticeable lack of activity over the weekend. This suggests that there's an opportunity to scale session hosts to meet demand during the week, and reduce the number of session hosts over the weekend. + + :::image type="content" source="media/insights-use-cases/insights-session-count-over-time.png" alt-text="A screenshot of a graph showing the number of users sessions over the course of a week." lightbox="media/insights-use-cases/insights-session-count-over-time.png"::: + +1. Use the **Session host count** chart to note the average number of active session hosts over time, and particularly the average number of session hosts that are idle (no sessions). Ideally session hosts should be actively supporting connected sessions and active workloads, and powered off when not in use by using a scaling plan. 
You'll likely need to keep a minimum number of session hosts powered on to ensure availability for users at irregular times, so understanding usage over time can help find an appropriate number of session hosts to keep powered on as a buffer. ++ Even if a scaling plan is ultimately not a good fit for your usage patterns, there's still an opportunity to balance the total number of session hosts available as a buffer by analyzing the session demand and potentially reducing the number of idle devices. ++ In this example, the graph shows there are long periods over the course of a week where idle session hosts are powered on and therefore increasing costs. ++ :::image type="content" source="media/insights-use-cases/insights-session-host-idle-count-over-time.png" alt-text="A screenshot of a graph showing the number of active and idle session hosts over the course of a week." lightbox="media/insights-use-cases/insights-session-host-idle-count-over-time.png"::: ++1. Use the drop-down lists to reduce the scope to a single host pool and repeat the analysis for **session history** and **session host count**. At this scope you can identify patterns that are specific to the session hosts in a particular host pool to help develop a scaling plan for that host pool. ++ In this example, the first graph shows the pattern of user activity throughout a week between 6AM and 10PM. On the weekend, there's minimal activity. The second graph shows the number of active and idle session hosts throughout the same week. There are long periods of time where idle session hosts are powered on. Use this information to help determine optimal ramp-up and ramp-down times for a scaling plan. ++ :::image type="content" source="media/insights-use-cases/insights-session-count-over-time-single-host-pool.png" alt-text="A graph showing the number of users sessions over the course of a week for a single host pool." 
lightbox="media/insights-use-cases/insights-session-count-over-time-single-host-pool.png"::: ++ :::image type="content" source="media/insights-use-cases/insights-session-host-idle-count-over-time-single-host-pool.png" alt-text="A graph showing the number of active and idle session hosts over the course of a week for a single host pool." lightbox="media/insights-use-cases/insights-session-host-idle-count-over-time-single-host-pool.png"::: ++1. [Create a scaling plan](autoscale-scaling-plan.md) based on the usage patterns you've identified, then [assign the scaling plan to your host pool](autoscale-new-existing-host-pool.md). ++After a period of time, you should repeat this process to validate that your session hosts are being utilized effectively. You can make changes to the scaling plan if needed, and continue to iterate until you find the optimal scaling plan for your usage patterns. ++## Next steps ++- [Create a scaling plan](autoscale-scaling-plan.md) |
virtual-desktop | Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/insights.md | Title: Use Azure Virtual Desktop Insights to monitor your deployment - Azure description: How to set up Azure Virtual Desktop Insights to monitor your Azure Virtual Desktop environments. Previously updated : 06/14/2023 Last updated : 08/24/2023 Azure Virtual Desktop Insights is a dashboard built on Azure Monitor Workbooks t Before you start using Azure Virtual Desktop Insights, you'll need to set up the following things: - All Azure Virtual Desktop environments you monitor must be based on the latest release of Azure Virtual Desktop that's compatible with Azure Resource Manager.+ - At least one configured Log Analytics Workspace. Use a designated Log Analytics workspace for your Azure Virtual Desktop session hosts to ensure that performance counters and events are only collected from session hosts in your Azure Virtual Desktop deployment.+ - Enable data collection for the following things in your Log Analytics workspace:- - Diagnostics from your Azure Virtual Desktop environment - - Recommended performance counters from your Azure Virtual Desktop session hosts - - Recommended Windows Event Logs from your Azure Virtual Desktop session hosts - The data setup process described in this article is the only one you'll need to monitor Azure Virtual Desktop. You can disable all other items sending data to your Log Analytics workspace to save costs. + - Diagnostics from your Azure Virtual Desktop environment + - Recommended performance counters from your Azure Virtual Desktop session hosts + - Recommended Windows Event Logs from your Azure Virtual Desktop session hosts ++ The data setup process described in this article is the only one you'll need to monitor Azure Virtual Desktop. You can disable all other items sending data to your Log Analytics workspace to save costs.
++- Anyone monitoring Azure Virtual Desktop Insights for your environment will also need to have the following Azure role-based access control (RBAC) roles assigned as a minimum: -Anyone monitoring Azure Virtual Desktop Insights for your environment will also need the following read-access permissions: + - [Desktop Virtualization Reader](../role-based-access-control/built-in-roles.md#desktop-virtualization-reader) assigned on the resource group or subscription where the host pools, workspaces and session hosts are. + - [Log Analytics Reader](../role-based-access-control/built-in-roles.md#log-analytics-reader) assigned on any Log Analytics workspace used with Azure Virtual Desktop Insights. -- Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources.-- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts.-- Read access to the Log Analytics workspace. In the case that multiple Log Analytics workspaces are used, read access should be granted to each to allow viewing data.+ You can also create a custom role to reduce the scope of assignment on the Log Analytics workspace. For more information, see [Manage access to Log Analytics workspaces](../azure-monitor/logs/manage-access.md). -> [!NOTE] -> Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal. + > [!NOTE] + > Read access only lets admins view data. They'll need different permissions to manage resources in the Azure Virtual Desktop portal. ## Open Azure Virtual Desktop Insights |
virtual-desktop | Multimedia Redirection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md | Before you can use multimedia redirection on Azure Virtual Desktop, you'll need - Windows Desktop client: - To use video playback redirection, you must install [Windows Desktop client, version 1.2.3916 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). This feature is only compatible with version 1.2.3916 or later of the Windows Desktop client. - - To use call redirection, you must install the Windows Desktop client, version 1.2.4237 or later with [Insider releases enabled](./users/client-features-windows.md#enable-insider-releases). + - To use call redirection, you must install the Windows Desktop client, version 1.2.4337 or later with [Insider releases enabled](./users/client-features-windows.md#enable-insider-releases). - Microsoft Visual C++ Redistributable 2015-2022, version 14.32.31332.0 or later installed on your session hosts and Windows client devices. You can download the latest version from [Microsoft Visual C++ Redistributable latest supported downloads](/cpp/windows/latest-supported-vc-redist). The following section will show you how to use advanced features for call redire #### Enable call redirection for all sites -Call redirection is currently limited to the web apps listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. If you're using a listed calling app with an internal URL, you must turn the **Enable WebRTC for all sites** setting to use call redirection. You can also enable call redirection for all sites to test the feature with web apps that aren't officially supported yet. 
+Call redirection is currently limited to the web apps listed in [Websites that work with multimedia redirection](multimedia-redirection-intro.md#websites-that-work-with-multimedia-redirection) by default. If you're using one of the calling apps listed in [Call redirection](multimedia-redirection-intro.md#call-redirection) with an internal URL, you must turn on the **Enable WebRTC for all sites** setting to use call redirection. You can also enable call redirection for all sites to test the feature with web apps that aren't officially supported yet. To enable call redirection for all sites:
virtual-desktop | Private Link Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md | Private Link with Azure Virtual Desktop has the following limitations: - Using both Private Link and [RDP Shortpath](./shortpath.md) at the same time isn't currently supported. -- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You'll need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0.+- Early in the preview of Private Link with Azure Virtual Desktop, the private endpoint for the initial feed discovery (for the *global* sub-resource) shared the private DNS zone name of `privatelink.wvd.microsoft.com` with other private endpoints for workspaces and host pools. In this configuration, users are unable to establish private endpoints exclusively for host pools and workspaces. Starting September 1, 2023, sharing the private DNS zone in this configuration will no longer be supported. You need to create a new private endpoint for the *global* sub-resource to use the private DNS zone name of `privatelink-global.wvd.microsoft.com`. For the steps to do this, see [Initial feed discovery](private-link-setup.md#initial-feed-discovery). ++- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0. ## Next steps |
virtual-desktop | Troubleshoot Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-agent.md | To resolve this issue, first reinstall the side-by-side stack: 1. From a command prompt run `qwinsta.exe` again and verify the *STATE* column for **rdp-tcp** and **rdp-sxs** entries is **Listen**. If not, you must [re-register your VM and reinstall the agent](#your-issue-isnt-listed-here-or-wasnt-resolved) component. -## Error: Session host VMs are stuck in Unavailable state +## Error: Session hosts are stuck in Unavailable state If your session host VMs are stuck in the Unavailable state, your VM didn't pass one of the health checks listed in [Health check](troubleshoot-statuses-checks.md#health-check). You must resolve the issue that's causing the VM to not pass the health check. -## Error: VMs are stuck in the "Needs Assistance" state +## Error: Session hosts are stuck in the Needs Assistance state ++There are several health checks that can cause your session host VMs to be stuck in the **Needs Assistance** state: *UrlsAccessibleCheck*, *MetaDataServiceCheck*, and *MonitoringAgentCheck*. ++### UrlsAccessibleCheck ++If the session host doesn't pass the *UrlsAccessibleCheck* health check, you'll need to identify which [required URL](safe-url-list.md) your deployment is currently blocking. Once you know which URL is blocked, identify which setting is blocking that URL and remove it. If your local hosts file is blocking the required URLs, make sure none of the re **Name:** DataBasePath +### MetaDataServiceCheck + If the session host doesn't pass the *MetaDataServiceCheck* health check, then the service can't access the IMDS endpoint. To resolve this issue, you'll need to do the following things: - Reconfigure your networking, firewall, or proxy settings to unblock the IP address 169.254.169.254.
If your issue is caused by a web proxy, add an exception for 169.254.169.254 in netsh winhttp set proxy proxy-server="http=<customerwebproxyhere>" bypass-list="169.254.169.254" ``` +### MonitoringAgentCheck ++If the session host doesn't pass the *MonitoringAgentCheck* health check, you'll need to check the *Remote Desktop Services Infrastructure Geneva Agent* and verify that it's functioning correctly on the session host: ++1. Verify that the Remote Desktop Services Infrastructure Geneva Agent is installed on the session host. You can verify this in the list of installed programs on the session host. If you see multiple versions of this agent installed, uninstall older versions and only keep the latest version installed. ++1. If you don't find the Remote Desktop Services Infrastructure Geneva Agent installed on the session host, review the log located at *C:\Program Files\Microsoft RDInfra\GenevaInstall.txt* to see whether the installation is failing due to an error. ++1. Verify that the scheduled task *GenevaTask_\<version\>* is created. This scheduled task must be enabled and running. If it's not, reinstall the agent using the `.msi` file named **Microsoft.RDInfra.Geneva.Installer-x64-\<version\>.msi**, which is available at **C:\Program Files\Microsoft RDInfra**. + ## Error: Connection not found: RDAgent does not have an active connection to the broker Your session host VMs may be at their connection limit and can't accept new connections. |
virtual-desktop | Troubleshoot Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md | If your data isn't displaying properly, check the following common solutions: - [Log Analytics Firewall Requirements](../azure-monitor/agents/log-analytics-agent.md#firewall-requirements). - Not seeing data from recent activity? You may want to wait for 15 minutes and refresh the feed. Azure Monitor has a 15-minute latency period for populating log data. To learn more, see [Log data ingestion time in Azure Monitor](../azure-monitor/logs/data-ingestion-time.md). -If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. Review [known issues and limitations](#known-issues-and-limitations). +If you're not missing any information but your data still isn't displaying properly, there may be an issue in the query or the data sources. For more information, see [known issues and limitations](#known-issues-and-limitations). # [Azure Monitor Agent (preview)](#tab/monitor) If this article doesn't have the data point you need to resolve an issue, you ca - To learn how to leave feedback, see [Troubleshooting overview, feedback, and support for Azure Virtual Desktop](troubleshoot-set-up-overview.md). - You can also leave feedback for Azure Virtual Desktop at the [Azure Virtual Desktop feedback hub](https://support.microsoft.com/help/4021566/windows-10-send-feedback-to-microsoft-with-feedback-hub-app). ++ ## Known issues and limitations The following are issues and limitations we're aware of and working to fix: The following are issues and limitations we're aware of and working to fix: - Do you see contradicting or unexpected connection times? While rare, a connection's completion event can go missing and can impact some visuals and metrics. 
- Time to connect includes the time it takes users to enter their credentials; this correlates with the experience but in some cases can show false peaks. -- ## Next steps - To get started, see [Use Azure Virtual Desktop Insights to monitor your deployment](insights.md). |
virtual-desktop | Troubleshoot Statuses Checks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-statuses-checks.md | Title: Azure Virtual Desktop session host statuses and health checks description: How to troubleshoot the failed session host statuses and failed health checks-+ Last updated 05/03/2023--+ # Azure Virtual Desktop session host statuses and health checks The following table lists all statuses for session hosts in the Azure portal eac | Session host status | Description | How to resolve related issues | |||| |Available| This status means that the session host passed all health checks and is available to accept user connections. If a session host has reached its maximum session limit but has passed health checks, it's still listed as "Available." |N/A| -|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. |Follow the directions in [Error: VMs are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state) to resolve the issue.| +|Needs Assistance|The session host didn't pass one or more of the following non-fatal health checks: the Geneva Monitoring Agent health check, the Azure Instance Metadata Service (IMDS) health check, or the URL health check. In this state, users can connect to VMs, but their user experience may degrade. You can find which health checks failed in the Azure portal by going to the **Session hosts** tab and selecting the name of your session host. 
|Follow the directions in [Error: Session hosts are stuck in "Needs Assistance" state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state) to resolve the issue.| |Shutdown| The session host has been shut down. If the agent enters a shutdown state before connecting to the broker, its status changes to *Unavailable*. If you've shut down your session host and see an *Unavailable* status, that means the session host shut down before it could update the status, and doesn't indicate an issue. You should use this status with the [VM instance view API](/rest/api/compute/virtual-machines/instance-view?tabs=HTTP#virtualmachineinstanceview) to determine the power state of the VM. |Turn on the session host. | |Unavailable| The session host is either turned off or hasn't passed fatal health checks, which prevents user sessions from connecting to this session host. |If the session host is off, turn it back on. If the session host didn't pass the domain join check or side-by-side stack listener health checks, refer to the table in [Health check](#health-check) for ways to resolve the issue. If the status is still "Unavailable" after following those directions, open a support case.| |Upgrade Failed| This status means that the Azure Virtual Desktop Agent couldn't update or upgrade. This status doesn't affect new or existing user sessions. |Follow the instructions in the [Azure Virtual Desktop Agent troubleshooting article](troubleshoot-agent.md).| The health check is a test run by the agent on the session host. The following t | Geneva Monitoring Agent | Verifies that the session host has a healthy monitoring agent by checking if the monitoring agent is installed and running in the expected registry location. | If this check fails, it's semi-fatal. There may be successful connections, but they'll contain no logging information. To resolve this issue, make sure a monitoring agent is installed. If it's already installed, contact Microsoft support. 
| | Azure Instance Metadata Service (IMDS) reachable | Verifies that the service can access the IMDS endpoint. | If this check fails, it's semi-fatal. There may be successful connections, but they won't contain logging information. To resolve this issue, you'll need to reconfigure your networking, firewall, or proxy settings. | | Side-by-side (SxS) Stack Listener | Verifies that the side-by-side stack is up and running, listening, and ready to receive connections. | If this check fails, it's fatal, and users won't be able to connect to the session host. Try restarting your virtual machine (VM). If restarting doesn't work, contact Microsoft support. |-| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: VMs are stuck in the Needs Assistance state](troubleshoot-agent.md#error-vms-are-stuck-in-the-needs-assistance-state). | +| UrlsAccessibleCheck | Verifies that the required Azure Virtual Desktop service and Geneva URLs are reachable from the session host, including the RdTokenUri, RdBrokerURI, RdDiagnosticsUri, and storage blob URLs for Geneva agent monitoring. | If this check fails, it isn't always fatal. Connections may succeed, but if certain URLs are inaccessible, the agent can't apply updates or log diagnostic information. To resolve this issue, follow the directions in [Error: Session hosts are stuck in the Needs Assistance state](troubleshoot-agent.md#error-session-hosts-are-stuck-in-the-needs-assistance-state). 
| | TURN (Traversal Using Relay NAT) Relay Access Health Check | When using [RDP Shortpath for public networks](rdp-shortpath.md?tabs=public-networks#how-rdp-shortpath-works) with an indirect connection, TURN uses User Datagram Protocol (UDP) to relay traffic between the client and session host through an intermediate server when direct connection isn't possible. | If this check fails, it's not fatal. Connections revert to the websocket TCP and the session host enters the "Needs assistance" state. To resolve the issue, follow the instructions in [Disable RDP Shortpath on managed and unmanaged windows clients using group policy](configure-rdp-shortpath.md?tabs=public-networks#disable-rdp-shortpath-on-managed-and-unmanaged-windows-clients-using-group-policy). | | App attach health check | Verifies that the [MSIX app attach](what-is-app-attach.md) service is working as intended during package staging or destaging. | If this check fails, it isn't fatal. However, certain apps stop working for end-users. | | Domain reachable | Verifies the domain the session host is joined to is still reachable. | If this check fails, it's fatal. The service won't be able to connect if it can't reach the domain. | |
virtual-desktop | Deploy Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md | Title: Deploy the diagnostics tool for Azure Virtual Desktop (classic) - Azure description: How to deploy the diagnostics UX tool for Azure Virtual Desktop (classic). + Last updated 12/15/2020 You can also interact with users on the session host: ## Next steps - Learn how to monitor activity logs at [Use diagnostics with Log Analytics](diagnostics-log-analytics-2019.md).-- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md).+- Read about common error scenarios and how to fix them at [Identify and diagnose issues](diagnostics-role-service-2019.md). |
virtual-desktop | Manage Resources Using Ui Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md | |
virtual-desktop | Whats New Client Android Chrome Os | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-android-chrome-os.md | description: Learn about recent changes to the Remote Desktop client for Android Previously updated : 01/04/2023 Last updated : 08/21/2023 # What's new in the Remote Desktop client for Android and Chrome OS |
virtual-desktop | Whats New Client Macos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-macos.md | description: Learn about recent changes to the Remote Desktop client for macOS Previously updated : 06/26/2023 Last updated : 08/21/2023 # What's new in the Remote Desktop client for macOS |
virtual-desktop | Whats New Client Windows Azure Virtual Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows-azure-virtual-desktop-app.md | Title: What's new in the Azure Virtual Desktop Store app for Windows (preview) - Azure Virtual Desktop description: Learn about recent changes to the Azure Virtual Desktop Store app for Windows. -- Previously updated : 08/04/2023++ Last updated : 08/29/2023 # What's new in the Azure Virtual Desktop Store app for Windows (preview) Last updated 08/04/2023 In this article you'll learn about the latest updates for the Azure Virtual Desktop Store app for Windows. To learn more about using the Azure Virtual Desktop Store app for Windows with Azure Virtual Desktop, see [Connect to Azure Virtual Desktop with the Azure Virtual Desktop Store app for Windows](users/connect-windows-azure-virtual-desktop-app.md) and [Use features of the Azure Virtual Desktop Store app for Windows when connecting to Azure Virtual Desktop](users/client-features-windows-azure-virtual-desktop-app.md). -## Latest client versions +## Supported client versions -The following table lists the current version available for the public release. To enable Insider releases, see [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases). +The following table lists the current versions available for the public and Insider releases. To enable Insider releases, see [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases). | Release | Latest version | Download | |-||-| | Public | 1.2.4487 | [Microsoft Store](https://aka.ms/AVDStoreClient) |-| Insider | 1.2.4487 | Download the public release, then [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases) and check for updates. 
| +| Insider | 1.2.4577 | Download the public release, then [Enable Insider releases](users/client-features-windows-azure-virtual-desktop-app.md#enable-insider-releases) and check for updates. | ++## Updates for version 1.2.4577 (Insider) ++*Date published: August 29, 2023* ++In this release, we've made the following changes: ++- Teams VDI 2.0 plugin now gets loaded for RDP connections. +- Fixed an issue where, when using the default display settings and a change is made to the system display settings, the connection bar doesn't show when hovering over the top of the screen after it's hidden. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. +- Accessibility improvements: + - Narrator now announces the view mode selector as "*View combo box*", instead of "*Tile view combo box*" or "*List view combo box*". + - Narrator now focuses on and announces **Learn more** hyperlinks. + - Keyboard focus is now set correctly when a warning dialog loads. + - Tooltip for the close button on the **About** panel now dismisses when keyboard focus moves. + - Keyboard focus is now properly displayed for certain drop-down selectors in the **Settings** panel for published desktops. ## Updates for version 1.2.4487 |
virtual-desktop | Whats New Client Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md | description: Learn about recent changes to the Remote Desktop client for Windows Previously updated : 08/01/2023 Last updated : 08/29/2023 # What's new in the Remote Desktop client for Windows The following table lists the current versions available for the public and Insi | Release | Latest version | Download | ||-|-| | Public | 1.2.4487 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |-| Insider | 1.2.4487 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) | +| Insider | 1.2.4577 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) | ++## Updates for version 1.2.4577 (Insider) ++*Date published: August 29, 2023* ++Download: [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139233), [Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139144), [Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139368) ++In this release, we've made the following changes: ++- Teams VDI 2.0 plugin now gets loaded for RDP connections. +- Fixed an issue where, when using the default display settings and a change is made to the system display settings, the connection bar doesn't show when hovering over the top of the screen after it's hidden. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. 
+- Accessibility improvements: + - Narrator now announces the view mode selector as "*View combo box*", instead of "*Tile view combo box*" or "*List view combo box*". + - Narrator now focuses on and announces **Learn more** hyperlinks. + - Keyboard focus is now set correctly when a warning dialog loads. + - Tooltip for the close button on the **About** panel now dismisses when keyboard focus moves. + - Keyboard focus is now properly displayed for certain drop-down selectors in the **Settings** panel for published desktops. ## Updates for version 1.2.4487 In this release, we've made the following changes: *Date published: July 11, 2023* -Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17f1J), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17mKo), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17sgF) +Download: [Windows 64-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17Yn9), [Windows 32-bit](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17VPy), [Windows ARM64](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RW17VPx) -In this release, we've made the following changes: +In this release, we've made the following changes: -- Added a new RDP file property called *allowed security protocols*. This property restricts the list of security protocols the client can negotiate. -- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. +- Added a new RDP file property called *allowed security protocols*. This property restricts the list of security protocols the client can negotiate. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. 
- Accessibility improvements: - Narrator now describes the toggle button in the display settings side panel as *toggle button* instead of *button*.- - Control types for text now correctly say that they're *text* and not *custom*. - - Fixed an issue where Narrator didn't read the error message that appears after the user selects **Delete**. - - Added heading-level description to **Subscribe with URL**. + - Control types for text now correctly say that they're *text* and not *custom*. + - Fixed an issue where Narrator didn't read the error message that appears after the user selects **Delete**. + - Added heading-level description to **Subscribe with URL**. - Dialog improvements:- - Updated **file** and **URI launch** dialog error handling messages to be more specific and user-friendly. + - Updated **file** and **URI launch** dialog error handling messages to be more specific and user-friendly. - The client now displays an error message after unsuccessfully checking for updates instead of incorrectly notifying the user that the client is up to date. - Fixed an issue where, after having been automatically reconnected to the remote session, the **connection information** dialog gave inconsistent information about identity verification. In this release, we've made the following changes: *Date published: July 6, 2023* -In this release, we've made the following changes: +In this release, we've made the following changes: - General improvements to Narrator experience. - Fixed an issue that caused the text in the message for subscribing to workspaces to be cut off when the user increases the text size. - Fixed an issue that caused the client to sometimes stop responding when attempting to start new connections.-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. 
-## Updates for version 1.2.4337 +## Updates for version 1.2.4337 -*Date published: June 13, 2023* +*Date published: June 13, 2023* -In this release, we've made the following changes: +In this release, we've made the following changes: - Fixed the vulnerability known as [CVE-2023-29362](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-29362). - Fixed the vulnerability known as [CVE-2023-29352](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-29352). In this release, we've made the following changes: - Fixed an application compatibility issue that affected preview versions of Windows. - Moved the identity verification method from the lock window message in the connection bar to the end of the connection info message. - Changed the error message that appears when the session host can't reach the authenticator to validate a user's credentials to be clearer.-- Added a reconnect button to the disconnect message boxes that appear whenever the local PC goes into sleep mode or the session is locked. +- Added a reconnect button to the disconnect message boxes that appear whenever the local PC goes into sleep mode or the session is locked. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. -## Updates for version 1.2.4240 +## Updates for version 1.2.4240 *Date published: May 16, 2023* -In this release, we've made the following changes: +In this release, we've made the following changes: - Fixed an issue where the connection bar remained visible on local sessions when the user changed their contrast themes.-- Made minor changes to connection bar UI, including improved button sizing. -- Fixed an issue where the client stopped responding if closed from the system tray. -- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. +- Made minor changes to connection bar UI, including improved button sizing. 
+- Fixed an issue where the client stopped responding if closed from the system tray. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. ## Updates for version 1.2.4159 In this release, we've made the following changes: - Fixed a bug where users aren't able to update the client if the client is installed with the flags *ALLUSERS=2* and *MSIINSTALLPERUSER=1* - Fixed an issue that made the client disconnect and display error message 0x3000018 instead of showing a prompt to reconnect if the endpoint doesn't let users save their credentials. - Fixed the vulnerability known as [CVE-2023-28267](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-28267).-- Fixed an issue that generated duplicate Activity IDs for unique connections. +- Fixed an issue that generated duplicate Activity IDs for unique connections. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Fixed an application compatibility issue for preview versions of Windows. In this release, we've made the following changes: In this release, we've made the following changes: - Fixed a bug where refreshes increased memory usage.-- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. +- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues. - Updates to Teams for Azure Virtual Desktop, including the following: - Bug fix for Background Effects persistence between Teams sessions. - Updates to MMR for Azure Virtual Desktop, including the following: - Various bug fixes for multimedia redirection (MMR) video playback redirection.- - [Multimedia redirection for Azure Virtual Desktop](multimedia-redirection.md) is now generally available. 
+ - [Multimedia redirection for Azure Virtual Desktop](multimedia-redirection.md) is now generally available. >[!IMPORTANT] >This is the final version of the Remote Desktop client with Windows 7 support. After this version, if you try to use the Remote Desktop client with Windows 7, it may not work as expected. For more information about which versions of Windows the Remote Desktop client currently supports, see [Prerequisites](./users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json&tabs=subscribe#prerequisites). |
virtual-desktop | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md | Title: What's new in Azure Virtual Desktop? - Azure description: New features and product updates for Azure Virtual Desktop. Previously updated : 07/18/2023 Last updated : 08/22/2023 Make sure to check back here often to keep up with new updates. Here's what changed in July 2023: +### Watermarking is now generally available ++[Watermarking](watermarking.md), when used with [screen capture protection](#screen-capture-protection), helps protect your sensitive information from capture on client endpoints. When you enable watermarking, QR code watermarks appear as part of remote desktops. The QR code contains the connection ID of a remote session that admins can use to trace the session. You can configure watermarking on session hosts and enforce it with the Remote Desktop client. ++### Audio call redirection for Azure Virtual Desktop in preview ++Call redirection, which optimizes audio calls for WebRTC-based calling apps, is now in preview. Multimedia redirection redirects media content from Azure Virtual Desktop to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature when using the Windows Desktop client. ++For more information about which sites are compatible with this feature, see [Call redirection](multimedia-redirection-intro.md#call-redirection). + ### Autoscale for personal host pools is currently in preview Autoscale for personal host pools is now in preview. Autoscale lets you scale your session host virtual machines (VMs) in a host pool up or down according to a schedule to optimize deployment costs. |
virtual-machine-scale-sets | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md | |
virtual-machine-scale-sets | Virtual Machine Scale Sets Orchestration Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md | The following table compares the Flexible orchestration mode, Uniform orchestrat | Monitor Application Health | Application health extension | Application health extension or Azure load balancer probe | Application health extension | | Instance Repair (Virtual Machine Scale Set) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | Yes, read [Instance Repair documentation](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md) | N/A | | Instance Protection | No, use [Azure resource lock](../azure-resource-manager/management/lock-resources.md) | Yes | No |-| Scale In Policy | No | Yes | No | +| Scale In Policy | Yes | Yes | No | | VMSS Get Instance View | No | Yes | N/A | | VM Batch Operations (Start all, Stop all, delete subset, etc.) | Yes | Yes | No | The following Virtual Machine Scale Set parameters aren't currently supported wi - Application health via SLB health probe - use Application Health Extension on instances - Virtual Machine Scale Set upgrade policy - must be null or empty - Unmanaged disks-- Virtual Machine Scale Set Scale in Policy - Virtual Machine Scale Set Instance Protection - Basic Load Balancer - Port Forwarding via Standard Load Balancer NAT Pool - you can configure NAT rules |
virtual-machine-scale-sets | Virtual Machine Scale Sets Scale In Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md | The scale-in policy feature provides users a way to configure the order in which 2. NewestVM 3. OldestVM -> [!IMPORTANT] -> Flexible orchestration for Virtual Machine Scale Sets does not currently support scale-in policy. - ### Default scale-in policy +#### Flexible orchestration +With this policy, virtual machines are scaled in after balancing across availability zones (if the scale set is in a zonal configuration), and the oldest virtual machine, as determined by `createdTime`, is scaled in first. +Balancing across fault domains isn't available with the Default policy in Flexible orchestration mode. ++#### Uniform orchestration By default, a Virtual Machine Scale Set applies this policy to determine which instance(s) will be scaled in. With the *Default* policy, VMs are selected for scale-in in the following order: 1. Balance virtual machines across availability zones (if the scale set is deployed in zonal configuration) |
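The scale-in policy entry above describes the *Default*, *NewestVM*, and *OldestVM* options. As a hedged sketch only (the scale set and resource group names are placeholders, and the `--scale-in-policy` parameter is assumed to be available in your Azure CLI version), applying and then inspecting a policy on an existing scale set might look like this:

```azurecli
# Placeholder names; replace with your own scale set and resource group.
# Set the scale-in policy so the newest VMs are removed first.
az vmss update \
  --name myScaleSet \
  --resource-group myResourceGroup \
  --scale-in-policy NewestVM

# Inspect the configured policy rules on the scale set.
az vmss show \
  --name myScaleSet \
  --resource-group myResourceGroup \
  --query "scaleInPolicy.rules"
```

These commands require an authenticated Azure CLI session and an existing scale set, so treat them as a starting point rather than a copy-paste recipe.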
virtual-machines | Disks Incremental Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md | description: Learn about incremental snapshots for managed disks, including how Previously updated : 08/11/2023 Last updated : 08/17/2023 ms.devlang: azurecli ms.devlang: azurecli # [Azure CLI](#tab/azure-cli) -You can use the Azure CLI to create an incremental snapshot. You'll need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI. +You can use the Azure CLI to create an incremental snapshot. You need the latest version of the Azure CLI. See the following articles to learn how to either [install](/cli/azure/install-azure-cli) or [update](/cli/azure/update-azure-cli) the Azure CLI. -The following script will create an incremental snapshot of a particular disk: +The following script creates an incremental snapshot of a particular disk: ```azurecli # Declare variables yourDiskID=$(az disk show -n $diskName -g $resourceGroupName --query "id" --outp az snapshot create -g $resourceGroupName -n $snapshotName --source $yourDiskID --incremental true ``` -> [!IMPORTANT] -> After taking a snapshot of an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details. - You can identify incremental snapshots from the same disk with the `SourceResourceId` property of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. You can use `SourceResourceId` to create a list of all snapshots associated with a particular disk. 
Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots: az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremen # [Azure PowerShell](#tab/azure-powershell) -You can use the Azure PowerShell module to create an incremental snapshot. You'll need the latest version of the Azure PowerShell module. The following command will either install it or update your existing installation to latest: +You can use the Azure PowerShell module to create an incremental snapshot. You need the latest version of the Azure PowerShell module. The following command either installs it or updates your existing installation to the latest version: ```PowerShell Install-Module -Name Az -AllowClobber -Scope CurrentUser $snapshotConfig=New-AzSnapshotConfig -SourceUri $yourDisk.Id -Location $yourDisk New-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName -Snapshot $snapshotConfig ``` -> [!IMPORTANT] -> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details. - You can identify incremental snapshots from the same disk with the `SourceResourceId` and the `SourceUniqueId` properties of snapshots. `SourceResourceId` is the Azure Resource Manager resource ID of the parent disk. `SourceUniqueId` is the value inherited from the `UniqueId` property of the disk. If you delete a disk and then create a new disk with the same name, the value of the `UniqueId` property changes. You can use `SourceResourceId` and `SourceUniqueId` to create a list of all snapshots associated with a particular disk. 
Replace `yourResourceGroupNameHere` with your value and then you can use the following example to list your existing incremental snapshots: $incrementalSnapshots # [Portal](#tab/azure-portal) [!INCLUDE [virtual-machines-disks-incremental-snapshots-portal](../../includes/virtual-machines-disks-incremental-snapshots-portal.md)] -> [!IMPORTANT] -> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details. - # [Resource Manager Template](#tab/azure-resource-manager) You can also use Azure Resource Manager templates to create an incremental snapshot. You'll need to make sure the apiVersion is set to **2022-03-22** and that the incremental property is also set to true. The following snippet is an example of how to create an incremental snapshot with Resource Manager templates: You can also use Azure Resource Manager templates to create an incremental snaps ] } ```-> [!IMPORTANT] -> After taking a snapshot of a Premium SSD v2 or an Ultra Disk, you must wait for the snapshot to complete before you can use it. See the [Check status of snapshots or disks](#check-status-of-snapshots-or-disks) section for details. ---## Check status of snapshots or disks +## Check snapshot status -Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed. Similarly, Premium SSD v2 or Ultra Disks created from incremental snapshots can't be attached to a VM until the background process copying the data into the disk has completed. +Incremental snapshots of Premium SSD v2 or Ultra Disks can't be used to create new disks until the background process copying the data into the snapshot has completed. 
-You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot and you can use the [Check disk creation status](#check-disk-creation-status) section to check the status of a background copy from a snapshot to a disk. +You can use either the [CLI](#cli) or [PowerShell](#powershell) sections to check the status of the background copy from a disk to a snapshot. ### CLI The following script returns a list of all snapshots associated with a particula subscriptionId="yourSubscriptionId" resourceGroupName="yourResourceGroupNameHere" diskName="yourDiskNameHere"- az account set --subscription $subscriptionId- diskId=$(az disk show -n $diskName -g $resourceGroupName --query [id] -o tsv)- az snapshot list --query "[?creationData.sourceResourceId=='$diskId' && incremental]" -g $resourceGroupName --output table ``` The following script returns a list of all incremental snapshots associated with $resourceGroupName = "yourResourceGroupNameHere" $snapshots = Get-AzSnapshot -ResourceGroupName $resourceGroupName $diskName = "yourDiskNameHere"- $yourDisk = Get-AzDisk -DiskName $diskName -ResourceGroupName $resourceGroupName- $incrementalSnapshots = New-Object System.Collections.ArrayList- foreach ($snapshot in $snapshots) { if($snapshot.Incremental -and $snapshot.CreationData.SourceResourceId -eq $yourDisk.Id -and $snapshot.CreationData.SourceUniqueId -eq $yourDisk.UniqueId) foreach ($snapshot in $snapshots) } } }- $incrementalSnapshots ``` You can check the `CompletionPercent` property of an individual snapshot to get ```azurepowershell $resourceGroupName = "yourResourceGroupNameHere" $snapshotName = "yourSnapshotName"- $targetSnapshot=Get-AzSnapshot -ResourceGroupName $resourceGroupName -SnapshotName $snapshotName- $targetSnapshot.CompletionPercent ``` -### Check disk creation status --When creating a disk from either a Premium SSD v2 or an Ultra Disk snapshot, you must wait for the background copy 
process to complete before you can attach it. Currently, you must use the Azure CLI to check the progress of the copy process. --The following script gives you the status of an individual disk's copy process. The value of `completionPercent` must be 100 before the disk can be attached. --```azurecli -subscriptionId=yourSubscriptionID -resourceGroupName=yourResourceGroupName -diskName=yourDiskName --az account set --subscription $subscriptionId --az disk show -n $diskName -g $resourceGroupName --query [completionPercent] -o tsv -``` + ## Check sector size az snapshot show -g resourcegroupname -n snapshotname --query [creationData.logi See [Copy an incremental snapshot to a new region](disks-copy-incremental-snapshot-across-regions.md) to learn how to copy an incremental snapshot across regions. -If you have additional questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ. +If you have more questions on snapshots, see the [snapshots](faq-for-disks.yml#snapshots) section of the FAQ. If you'd like to see sample code demonstrating the differential capability of incremental snapshots, using .NET, see [Copy Azure Managed Disks backups to another region with differential capability of incremental snapshots](https://github.com/Azure-Samples/managed-disks-dotnet-backup-with-incremental-snapshots). |
virtual-machines | Disks Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md | Title: Select a disk type for Azure IaaS VMs - managed disks description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 07/12/2023 Last updated : 08/17/2023 To deploy a Premium SSD v2, see [Deploy a Premium SSD v2](disks-deploy-premium-v ## Premium SSDs -Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs only supports 512E sector size. +Azure Premium SSDs deliver high-performance and low-latency disk support for virtual machines (VMs) with input/output (IO)-intensive workloads. To take advantage of the speed and performance of Premium SSDs, you can migrate existing VM disks to Premium SSDs. Premium SSDs are suitable for mission-critical production applications, but you can use them only with compatible VM series. Premium SSDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)). To learn more about individual Azure VM types and sizes for Windows or Linux, including size compatibility for premium storage, see [Sizes for virtual machines in Azure](sizes.md). You'll need to check each individual VM size article to determine if it's premium storage-compatible. For Premium SSDs, each I/O operation less than or equal to 256 kB of throughput ## Standard SSDs -Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. 
They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSD only supports 512E sector size. +Azure standard SSDs are optimized for workloads that need consistent performance at lower IOPS levels. They're an especially good choice for customers with varying workloads supported by on-premises hard disk drive (HDD) solutions. Compared to standard HDDs, standard SSDs deliver better availability, consistency, reliability, and latency. Standard SSDs are suitable for web servers, low IOPS application servers, lightly used enterprise applications, and non-production workloads. Like standard HDDs, standard SSDs are available on all Azure VMs. Standard SSDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)). ### Standard SSD size Standard SSDs offer disk bursting, which provides better tolerance for the unpre ## Standard HDDs -Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs only supports 512E sector size. 
+Azure standard HDDs deliver reliable, low-cost disk support for VMs running latency-tolerant workloads. With standard storage, your data is stored on HDDs, and performance may vary more widely than that of SSD-based disks. Standard HDDs are designed to deliver write latencies of less than 10 ms and read latencies of less than 20 ms for most IO operations. Actual performance may vary depending on IO size and workload pattern, however. When working with VMs, you can use standard HDD disks for dev/test scenarios and less critical workloads. Standard HDDs are available in all Azure regions and can be used with all Azure VMs. Standard HDDs support the [512E sector size](https://en.wikipedia.org/wiki/Advanced_Format#512_emulation_(512e)). ### Standard HDD size [!INCLUDE [disk-storage-standard-hdd-sizes](../../includes/disk-storage-standard-hdd-sizes.md)] |
virtual-machines | Ebdsv5 Ebsv5 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ebdsv5-ebsv5-series.md | -The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. Standard SSDs and Standard HDD disk storage aren't supported in the Ebv5 series. +The Ebsv5 and Ebdsv5 VMs offer up to 260000 IOPS and 8000 MBps of remote disk storage throughput. Both series also include up to 672 GiB of RAM. The Ebdsv5 series has local SSD storage up to 3800 GiB. Both series provide a 3X increase in remote storage performance of data-intensive workloads compared to prior VM generations. You can use these series to consolidate existing workloads on fewer VMs or smaller VM sizes while achieving potential cost savings. The Ebdsv5 series comes with a local disk and Ebsv5 is without a local disk. We recommend choosing Premium SSD, Premium SSD v2, or Ultra disks to attain the published disk performance. The Ebdsv5 and Ebsv5 series run on the Intel® Xeon® Platinum 8370C (Ice Lake) processors in a hyper-threaded configuration. The series are ideal for various memory-intensive enterprise applications. 
They feature: Ebdsv5-series sizes run on the Intel® Xeon® Platinum 8370C (Ice Lake) processo - SCSI Interface: Supported on Generation 1 and 2 VMs ## Ebdsv5 Series (SCSI)-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | +| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | ||||||||||||| | Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 | | Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 | | Standard_E8bds_v5 | 8 | 64 | 300 | 16 | 38000/500 | 22000/625 | 40000/1200 |29480/625 |60000/1200 | 4 | 12500 |-| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 4 | 12500 | +| Standard_E16bds_v5 | 16 | 128 | 600 | 32 | 75000/1000 | 44000/1250 | 64000/2000 |58960/1250 |96000/2000 | 8 | 12500 | | Standard_E32bds_v5 | 32 | 256 | 1200 | 32 | 150000/2000 | 88000/2500 | 120000/4000 | 117920/2500|160000/4000| 8 | 16000 | | Standard_E48bds_v5 | 48 | 384 | 1800 | 32 | 225000/3000 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 16000 | | Standard_E64bds_v5 | 64 | 512 | 2400 | 32 | 300000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 
20000 | | Standard_E96bds_v5 | 96 | 672 | 3600 | 32 | 450000/4000 | 120000/4000 | 120000/4000 |160000/4000 | 160000/4000| 8 | 25000 | ## Ebdsv5 Series (NVMe)-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | +| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | Max data disks | Max temp storage throughput: IOPS / MBps | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | ||||||||||||| | Standard_E2bds_v5 | 2 | 16 | 75 | 4 | 9000/125 | 5500/156 | 10000/1200 | 7370/156 | 15000/1200 | 2 | 12500 | | Standard_E4bds_v5 | 4 | 32 | 150 | 8 | 19000/250 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 | Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). 
These V - NVMe Interface: Supported only on Generation 2 VMs - SCSI Interface: Supported on Generation 1 and Generation 2 VMs ## Ebsv5 Series (SCSI)-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | +| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | | | | | | | | | | | | | Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 | | Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 | Ebsv5-series sizes run on the Intel® Xeon® Platinum 8272CL (Ice Lake). 
These V | Standard_E96bs_v5 | 96 | 672 | 32 | 120000/4000 | 120000/4000 | 160000/4000|160000/4000 | 8 | 25000 | ## Ebsv5 Series (NVMe)-| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max burst uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | +| Size | vCPU | Memory: GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max NICs | Network bandwidth | | | | | | | | | | | | | Standard_E2bs_v5 | 2 | 16 | 4 | 5500/156 | 10000/1200 | 7370/156|15000/1200 | 2 | 12500 | | Standard_E4bs_v5 | 4 | 32 | 8 | 11000/350 | 20000/1200 | 14740/350|30000/1200 | 2 | 12500 | |
virtual-machines | Agent Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md | Testing has confirmed that the following systems work with the Azure Linux VM Ag Other supported systems: -- The Agent works on more systems than those listed in the documentation. However, we do not test or provide support for distros that are not on the endorsed list. In particular, FreeBSD is not endorsed. The customer can try FreeBSD 8 and if they run into problems they can open an issue in our [Github repository](https://github.com/Azure/WALinuxAgent) and we may be able to help.+- The Agent works on more systems than those listed in the documentation. However, we do not test or provide support for distros that are not on the endorsed list. In particular, FreeBSD is not endorsed. The customer can try FreeBSD 8 and if they run into problems they can open an issue in our [GitHub repository](https://github.com/Azure/WALinuxAgent) and we may be able to help. The Linux agent depends on these system packages to function properly: |
virtual-machines | Key Vault Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-linux.md | The following JSON shows the schema for the Key Vault VM extension. The extensio | pollingIntervalInS | 3600 | string | | certificateStoreName | It is ignored on Linux | string | | linkOnRenewal | false | boolean |-| certificateStoreLocation | /var/lib/waagent/Microsoft.Azure.KeyVault | string | +| certificateStoreLocation | /var/lib/waagent/Microsoft.Azure.KeyVault.Store | string | | requireInitialSync | true | boolean | | observedCertificates | ["https://myvault.vault.azure.net/secrets/mycertificate", "https://myvault.vault.azure.net/secrets/mycertificate2"] | string array | msiEndpoint | http://169.254.169.254/metadata/identity | string | |
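Tying the schema properties in the table above together, a minimal `settings` fragment for the Key Vault VM extension on Linux might look like the following — a sketch with placeholder vault URLs; verify against the extension's published schema before use:

```json
{
  "secretsManagementSettings": {
    "pollingIntervalInS": "3600",
    "linkOnRenewal": false,
    "certificateStoreLocation": "/var/lib/waagent/Microsoft.Azure.KeyVault.Store",
    "requireInitialSync": true,
    "observedCertificates": [
      "https://myvault.vault.azure.net/secrets/mycertificate",
      "https://myvault.vault.azure.net/secrets/mycertificate2"
    ]
  },
  "authenticationSettings": {
    "msiEndpoint": "http://169.254.169.254/metadata/identity"
  }
}
```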
virtual-machines | Key Vault Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/key-vault-windows.md | By default, Administrators and SYSTEM receive Full Control. The extension relies on the default behavior of the [PFXImportCertStore API](/windows/win32/api/wincrypt/nf-wincrypt-pfximportcertstore). By default, if a certificate has a Provider Name attribute that matches with CAPI1, then the certificate is imported by using CAPI1 APIs. Otherwise, the certificate is imported by using CNG APIs. -#### Does the extension support IIS certificate autobinding? +#### Does the extension support certificate auto-rebinding? -No. The Azure Key Vault VM extension doesn't support IIS automatic rebinding. The automatic rebinding process requires certificate services lifecycle notifications, and the extension doesn't write a certificate-renewal event (event ID 1001) upon newer versions. +Yes, the Azure Key Vault VM extension supports certificate auto-rebinding. The Key Vault VM extension does support S-channel binding on certificate renewal when the `linkOnRenewal` property is set to true. -The recommended approach is to use the Key Vault VM extension schema's `linkOnRenewal` property. Upon installation, when the `linkOnRenewal` property is set to `true`, the previous version of a certificate is chained to its successor via the `CERT_RENEWAL_PROP_ID` certificate extension property. The chaining enables the S-channel to pick up the most recent (latest) valid certificate with a matching SAN. This feature enables autorotation of SSL certificates without necessitating a redeployment or binding. +For IIS, you can configure auto-rebind by enabling automatic rebinding of certificate renewals in IIS. The Azure Key Vault VM extension generates Certificate Lifecycle Notifications when a certificate with a matching SAN is installed. IIS uses this event to auto-rebind the certificate. 
For more information, see [Certificate Rebind in IIS](https://statics.teams.cdn.office.net/evergreen-assets/safelinks/1/atp-safelinks.html) ### View extension status Here are some other options to help you resolve deployment issues: - If you don't find an answer on the site, you can post a question for input from Microsoft or other members of the community. -- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/).+- You can also [Contact Microsoft Support](https://support.microsoft.com/contactus/). For information about using Azure support, read the [Azure support FAQ](https://azure.microsoft.com/support/legal/faq/). |
virtual-machines | Hbv4 Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-performance.md | Performance expectations using common HPC microbenchmarks are as follows: ## Memory bandwidth test -The STREAM memory test can be run using the scripts in this github repository. +The STREAM memory test can be run using the scripts in this GitHub repository. ```bash git clone https://github.com/Azure/woc-benchmarking cd woc-benchmarking/apps/hpc/stream/ sh stream_run_script.sh $PWD "hbrs_v4" ``` ## Compute performance test -The HPL benchmark can be run using the script in this github repository. +The HPL benchmark can be run using the script in this GitHub repository. ```bash git clone https://github.com/Azure/woc-benchmarking cd woc-benchmarking/apps/hpc/hpl |
virtual-machines | Image Builder Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-overview.md | az feature register --namespace Microsoft.VirtualMachineImages --name MooncakePu ## OS support -VM Image Builder supports the following Azure Marketplace base operating system images: -- Ubuntu 18.04-- Ubuntu 16.04-- RHEL 7.6, 7.7-- CentOS 7.6, 7.7-- SLES 12 SP4-- SLES 15, SLES 15 SP1-- Windows 10 RS5 Enterprise/Enterprise multi-session/Professional-- Windows 2016-- Windows 2019-- CBL-Mariner-->[!IMPORTANT] -> These operating systems have been tested and now work with VM Image Builder. However, VM Image Builder should work with any Linux or Windows image in the marketplace. +VM Image Builder is designed to work with all Azure Marketplace base operating system images. ++ > [!NOTE] > You can now use the Azure Image Builder service inside the portal as of March 2023. [Get started](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal. |
virtual-machines | Image Builder Reliability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-reliability.md | - Title: Reliability in Azure Image Builder -description: Find out about reliability in Azure Image Builder ------ Previously updated : 02/03/2023---# Reliability in Azure Image Builder --This article describes reliability support in Azure Image Builder, and covers both regional resiliency with availability zones. For a more detailed overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview). --Azure Image Builder (AIB) is a regional service with cluster serving single regions. The AIB regional setup keeps data and resources within the regional boundary. AIB as a service doesn't do fail over for cluster and SQL database in region down scenarios. ---## Availability zone support --Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. In the event of a local zone failure, availability zones are designed so that if the one zone is affected. Regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview). --Azure availability zones-enabled services are designed to provide the right level of reliability and flexibility. They can be configured in two ways. They can be either zone redundant, with automatic replication across zones, or zonal, with instances pinned to a specific zone. You can also combine these approaches. 
For more information on zonal vs. zone-redundant architecture, see [Build solutions with availability zones](/azure/architecture/high-availability/building-solutions-for-high-availability). --> [!NOTE] -> Azure Image Builder doesn't currently support availability zones at this time. Availability zone outage within a region is considered Regional outage for Azure Image Builder and customers are recommended to follow guidance as per the Disaster Recovery and failover to backup region. ----## Disaster recovery: cross-region failover --In the event of a region-wide disaster, Azure can provide protection from regional or large geography disasters with disaster recovery by making use of another region. For more information on Azure disaster recovery architecture, see [Azure to Azure disaster recovery architecture](../site-recovery/azure-to-azure-architecture.md). --To ensure fast and easy recovery for Azure Image Builder (AIB), it's recommended to run an image template in region pairs or multiple regions when designing your AIB solution. You'll also want to replicate resources from the start when you're setting up your image templates. ---### Cross-region disaster recovery in multi-region geography --Microsoft will be responsible for outage detection, notifications, and support in the event of disaster recovery scenarios for Azure Image Builder. Customers will need to set up disaster recovery for the control plane (service side) and data plane. ---#### Outage detection, notification, and management --Microsoft will send a notification if there's an outage for the Azure Image Builder (AIB) Service. The common outage symptom includes image templates getting 500 errors when attempting to run. 
Customers can review Azure Image Builder outage notifications and status updates through [support request management.](../azure-portal/supportability/how-to-manage-azure-support-request.md) ---#### Set up disaster recovery and outage detection --Customers are responsible for setting up disaster recovery for their Azure Image Builder (AIB) environment, as there isn't a region failover at the AIB service side. Both the control plane (service side) and data plane will need to configure by the customer. --The high level guidelines include creating a AIB resource in another region close by and replicating your resources. For more information, see the [supported regions](./image-builder-overview.md#regions) and what resources are involved in [AIB]( /azure/virtual-machines/image-builder-overview#how-it-works) creation. --### Single-region geography disaster recovery --On supporting single-region geography for Azure Image Builder, the challenge will be to get the image template resource since the region isn't available. For those cases, customers can either maintain a copy of an image template locally or can use [Azure Resource Graph](../governance/resource-graph/index.yml) from the Azure portal or Azure CLI to get an Image template resource. --Below are instructions on how to get an image template resource using Resource Graph from the Azure portal: --1. Go to the search bar in Azure portal and search for *resource graph explorer*. -- ![Screenshot of Azure Resource Graph Explorer in the portal](./media/image-builder-reliability/resource-graph-explorer-portal.png#lightbox) --1. Use the search bar on the far left to search resource by type and name to see how the details will give you properties of the image template. The *See details* option on the bottom right will show the image template's properties attribute and tags separately. Template name, location, ID, and tenant ID can be used to get the correct image template resource. 
-- ![Screenshot of using Azure Resource Graph Explorer search](./media/image-builder-reliability/resource-graph-explorer-search.png#lightbox) ---### Capacity and proactive disaster recovery resiliency --Microsoft and its customers operate under the Shared responsibility model. This means that for customer-enabled DR (customer-responsible services), the customer must address DR for any service they deploy and control. To ensure that recovery is proactive, customers should always pre-deploy secondaries because there's no guarantee of capacity at time of impact for those who haven't pre-allocated. --When planning where to replicate a template, consider: --- AIB region availability:- - Choose [AIB supported regions](./image-builder-overview.md#regions) close to your users. - - AIB continually expands into new regions. -- Azure paired regions:- - For your geographic area, choose two regions paired together. - - Recovery efforts for paired regions where prioritization is needed. --## Additional guidance --In regards to customer data processing information, refer to the Azure Image Builder [data residency](./linux/image-builder-json.md#data-residency) details. ---## Next steps --> [!div class="nextstepaction"] -> [Reliability in Azure](../reliability/overview.md) -> [Enable Azure VM disaster recovery between availability zones](../site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md) -> [Azure Image Builder overview](./image-builder-overview.md) |
virtual-machines | Image Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version.md | $targetSubID = "<subscription ID for the target>" $sourceTenantID = "<tenant ID for the source image>" $sourceImageID = "<resource ID of the source image>" -# Login to the subscription where the new image will be created -Connect-AzAccount -UseDeviceAuthentication -Subscription $targetSubID - # Login to the tenant where the source image is published Connect-AzAccount -Tenant $sourceTenantID -UseDeviceAuthentication  -# Login to the subscription again where the new image will be created and set the context +# Login to the subscription where the new image will be created and set the context Connect-AzAccount -UseDeviceAuthentication -Subscription $targetSubID Set-AzContext -Subscription $targetSubID  # Create the image version from another image version in a different tenant-New-AzGalleryImageVersion \ - -ResourceGroupName myResourceGroup -GalleryName myGallery \ - -GalleryImageDefinitionName myImageDef \ - -Location "West US 2" \ - -Name 1.0.0 \ +New-AzGalleryImageVersion ` + -ResourceGroupName myResourceGroup -GalleryName myGallery ` + -GalleryImageDefinitionName myImageDef ` + -Location "West US 2" ` + -Name 1.0.0 ` -SourceImageId $sourceImageID ``` |
virtual-machines | Disk Encryption Key Vault Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-key-vault-aad.md | If you would like to use certificate authentication and wrap the encryption key ## Next steps -[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md) +[Enable Azure Disk Encryption with Azure AD on Linux VMs (previous release)](disk-encryption-linux-aad.md) |
virtual-machines | Disks Upload Vhd To Managed Disk Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-upload-vhd-to-managed-disk-cli.md | description: Learn how to upload a VHD to an Azure managed disk and copy a manag Previously updated : 01/03/2023 Last updated : 08/25/2023 -This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using AzCopy. This process, direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for standard HDD, standard SSD, and premium SSD managed disks. It isn't supported for ultra disks, yet. +This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using AzCopy. This process, direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD. If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs. Create an empty standard HDD for uploading by specifying both the **--for-uplo Replace `<yourdiskname>`, `<yourresourcegroupname>`, `<yourregion>` with values of your choosing. The `--upload-size-bytes` parameter contains an example value of `34359738880`, replace it with a value appropriate for you. -> [!TIP] +> [!IMPORTANT] > If you're creating an OS disk, add `--hyper-v-generation <yourGeneration>` to `az disk create`. 
> > If you're using Azure AD to secure disk uploads, add `--data-access-auth-mode 'AzureActiveDirectory'`.+> When uploading to an Ultra Disk or Premium SSD v2, you need to select the correct sector size of the target disk. If you're using a VHDX file with a 4k logical sector size, the target disk must be set to 4k. If you're using a VHD file with a 512 logical sector size, the target disk must be set to 512. +> +> VHDX files with logical sector size of 512k aren't supported. ```azurecli+##For Ultra Disk or Premium SSD v2, add --logical-sector-size and specify either 512 or 4096, depending on whether you're using a VHD or VHDX + az disk create -n <yourdiskname> -g <yourresourcegroupname> -l <yourregion> --os-type Linux --for-upload --upload-size-bytes 34359738880 --sku standard_lrs ``` -If you would like to upload either a premium SSD or a standard SSD, replace **standard_lrs** with either **premium_LRS** or **standardssd_lrs**. Ultra disks are not supported for now. +If you would like to upload a different disk type, replace **standard_lrs** with **premium_lrs**, **premium_zrs**, **standardssd_lrs**, **standardssd_zrs**, **premiumv2_lrs**, or **ultrassd_lrs**. ### (Optional) Grant access to the disk Sample returned value: } ``` -## Upload a VHD +## Upload a VHD or VHDX Now that you have a SAS for your empty managed disk, you can use it to set your managed disk as the destination for your upload command. -Use AzCopy v10 to upload your local VHD file to a managed disk by specifying the SAS URI you generated. +Use AzCopy v10 to upload your local VHD or VHDX file to a managed disk by specifying the SAS URI you generated. This upload has the same throughput as the equivalent [standard HDD](../disks-types.md#standard-hdds). For example, if you have a size that equates to S4, you will have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you will have a throughput of up to 500 MiB/s. 
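The sector-size rule described above (VHDX with 4k logical sectors needs a 4k target disk, VHD with 512-byte sectors needs a 512 target) can be sketched as a small helper. This is an illustrative function, not part of the Azure CLI; it simply encodes the mapping stated in the note so a script can pick the `--logical-sector-size` value:

```python
# Sketch: choose the --logical-sector-size value for an Ultra Disk or
# Premium SSD v2 target from the source image format, per the rule above.
# The function name and extension-based detection are assumptions for
# illustration; verify your image's actual logical sector size.

def logical_sector_size(filename: str) -> int:
    """Return the target disk's logical sector size in bytes."""
    name = filename.lower()
    if name.endswith(".vhdx"):
        return 4096   # VHDX with 4k logical sectors -> 4k target disk
    if name.endswith(".vhd"):
        return 512    # VHD with 512-byte logical sectors -> 512 target disk
    raise ValueError(f"unsupported image format: {filename}")
```

For other disk types, `--logical-sector-size` can be omitted entirely, which is why the sample `az disk create` command above doesn't set it.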
sourceDiskSizeBytes=$(az disk show -g $sourceRG -n $sourceDiskName --query '[dis az disk create -g $targetRG -n $targetDiskName -l $targetLocation --os-type $targetOS --for-upload --upload-size-bytes $(($sourceDiskSizeBytes+512)) --sku standard_lrs -targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 -o tsv) +targetSASURI=$(az disk grant-access -n $targetDiskName -g $targetRG --access-level Write --duration-in-seconds 86400 --query [accessSas] -o tsv) sourceSASURI=$(az disk grant-access -n $sourceDiskName -g $sourceRG --duration-in-seconds 86400 --query [accessSas] -o tsv) |
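The `--upload-size-bytes` arithmetic in the copy script above, `$(($sourceDiskSizeBytes+512))`, adds room for the 512-byte VHD footer on the target. A minimal sketch of that calculation, with an illustrative function name:

```python
# Sketch: compute the value to pass to --upload-size-bytes when copying a
# managed disk. The target must be the source size plus the 512-byte VHD
# footer, mirroring $(($sourceDiskSizeBytes+512)) in the script above.

VHD_FOOTER_BYTES = 512

def upload_size_bytes(source_disk_size_bytes: int) -> int:
    return source_disk_size_bytes + VHD_FOOTER_BYTES

# A 32 GiB source disk needs a 32 GiB + 512 byte upload target,
# which is the 34359738880 example value used earlier in this article.
print(upload_size_bytes(32 * 1024**3))  # prints 34359738880
```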
virtual-machines | Image Builder Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md | The `customization.log` file includes the following stages: - Ensure that Azure Policy and Firewall allow connectivity to remote resources. - Output comments to the console by using `Write-Host` or `echo`. Doing so lets you search the *customization.log* file. + ## Troubleshoot common build errors +### The template deployment failed because of policy violation ++#### Error ++```text +{ + "statusCode": "BadRequest", + "serviceRequestId": null, + "statusMessage": "{\"error\":{\"code\":\"InvalidTemplateDeployment\",\"message\":\"The template deployment failed because of policy violation. Please see details for more information.\",\"details\":[{\"code\":\"RequestDisallowedByPolicy\",\"target\":\"<target_name>\",\"message\":\"Resource '<resource_name>' was disallowed by policy. Policy identifiers: '[{\\\"policyAssignment\\\":{\\\"name\\\":\\\"[Initiative] KeyVault (Microsoft.KeyVault)\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policyAssignments/Microsoft.KeyVault\\\"},\\\"policyDefinition\\\":{\\\"name\\\":\\\"Azure Key Vault should disable public network access\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policyDefinitions/KeyVault.disablePublicNetworkAccess_deny_deny\\\"},\\\"policySetDefinition\\\":{\\\"name\\\":\\\"[Initiative] KeyVault (Microsoft.KeyVault)\\\",\\\"id\\\":\\\"/providers/Microsoft.Management/managementGroups/<managementGroup_name>/providers/Microsoft.Authorization/policySetDefinitions/Microsoft.KeyVault\\\"}}]'.\",\"additionalInfo\":[{\"type\":\"PolicyViolation\"}]}]}}", + "eventCategory": "Administrative", + "entity": 
"/subscriptions/<subscription_ID>/<resourcegroups>/<resourcegroupname>/providers/Microsoft.Resources/deployments/<deployment_name>", + "message": "Microsoft.Resources/deployments/validate/action", + "hierarchy": "<subscription_ID>/<resourcegroupname>/<policy_name>/<managementGroup_name>/<deployment_ID>" +} +``` ++#### Cause ++The above policy violation error is a result of using an Azure Key Vault with public access disabled. At this time, Azure Image Builder doesn't support this configuration. ++#### Solution ++The Azure Key Vault must be created with public access enabled. + ### Packer build command failure #### Error |
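Since the cause above is a Key Vault with public network access disabled, one way to catch the problem before starting a build is to inspect the vault's `publicNetworkAccess` property (for example, from `az keyvault show` output). The helper and trimmed JSON payload below are illustrative assumptions, not part of Azure Image Builder:

```python
import json

# Sketch: detect the unsupported configuration (Key Vault with public
# network access disabled) from JSON such as `az keyvault show` returns.
# The payload is trimmed to the one relevant property for clarity.

def public_access_enabled(keyvault_json: str) -> bool:
    vault = json.loads(keyvault_json)
    # Azure reports the setting as the string "Enabled" or "Disabled".
    return vault.get("properties", {}).get("publicNetworkAccess") == "Enabled"

sample = '{"properties": {"publicNetworkAccess": "Disabled"}}'
print(public_access_enabled(sample))  # prints False -> AIB build would be blocked
```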
virtual-machines | Scheduled Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/scheduled-events.md | With Scheduled Events, your application can discover when maintenance will occur Scheduled Events provides events in the following use cases: -- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/linux/breadcrumb/toc.json&toc=/azure/virtual-machines/linux/toc.json) (for example, VM reboot, live migration or memory preserving updates for host).+- [Platform initiated maintenance](../maintenance-and-updates.md?bc=/azure/virtual-machines/windows/breadcrumb/toc.json&toc=/azure/virtual-machines/windows/toc.json) (for example, VM reboot, live migration or memory preserving updates for host). - Virtual machine is running on [degraded host hardware](https://azure.microsoft.com/blog/find-out-when-your-virtual-machine-hardware-is-degraded-with-scheduled-events) that is predicted to fail soon. - Virtual machine was running on a host that suffered a hardware failure. - User-initiated maintenance (for example, a user restarts or redeploys a VM). Scheduled Events provides events in the following use cases: Scheduled events are delivered to and can be acknowledged by: - Standalone Virtual Machines.-- All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml). +- All the VMs in an [Azure cloud service (classic)](../../cloud-services/index.yml). - All the VMs in an availability set. - All the VMs in a scale set placement group. > [!NOTE]-> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a VM Scale Set (VMSS) regardless of Availability Zone usage. 
-> For example, if you have 100 VMs in a availability set and there's an update to one of them, the scheduled event will go to all 100, whereas if there are 100 single VMs in a zone, then event will only go to the VM which is getting impacted. +> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in an FC tenant. An FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a Virtual Machine Scale Set regardless of Availability Zone usage. As a result, check the `Resources` field in the event to identify which VMs are affected. For VNET enabled VMs, Metadata Service is available from a static nonroutable IP > `http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01` -If the VM isn't created within a Virtual Network, the default cases for cloud services and classic VMs, extra logic is required to discover the IP address to use. +If the VM isn't created within a Virtual Network, the default cases for cloud services and classic VMs, additional logic is required to discover the IP address to use. To learn how to [discover the host endpoint](https://github.com/azure-samples/virtual-machines-python-scheduled-events-discover-endpoint-for-non-vnet-vm), see this sample. -### Version and Region Availability +### Version and region availability The Scheduled Events service is versioned. Versions are mandatory; the current version is `2020-07-01`. | Version | Release Type | Regions | Release Notes | The Scheduled Events service is versioned. Versions are mandatory; the current v > [!NOTE] > Previous preview releases of Scheduled Events supported {latest} as the api-version. This format is no longer supported and will be deprecated in the future. -### Enabling and Disabling Scheduled Events -Scheduled Events are enabled for your service the first time you make a request for events. 
You should expect a delayed response in your first call of up to two minutes. Scheduled Events are disabled for your service if it doesn't make a request for 24 hours. +### Enabling and disabling Scheduled Events +Scheduled Events is enabled for your service the first time you make a request for events. You should expect a delayed response in your first call of up to two minutes. Scheduled Events is disabled for your service if it doesn't make a request to the endpoint for 24 hours. -Scheduled events are disabled by default for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). To enable scheduled events for these operations, first enable them using [OSImageNotificationProfile](https://learn.microsoft.com/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#osimagenotificationprofile). --### User-initiated Maintenance +### User-initiated maintenance User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance. -If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. This arrangement will prevent delays in recovering your application back to a good state. +If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. 
We advise having a primary and secondary VM communicating and approving user-generated scheduled events in case the primary VM becomes unresponsive. Immediately approving events prevents delays in recovering your application back to a good state. +Scheduled events for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) are supported for general purpose VM sizes that [support memory preserving updates](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot) only. They don't work for G, M, N, and H series. Scheduled events for VMSS Guest OS upgrades and reimages are disabled by default. To enable scheduled events for these operations on supported VM sizes, first enable them using [OSImageNotificationProfile](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP). + ## Use the API +### High level overview ++There are two major components to handling Scheduled Events: preparation and recovery. All current events impacting the customer will be available via the IMDS Scheduled Events endpoint. When the event has reached a terminal state, it is removed from the list of events. The following diagram shows the various state transitions that a single scheduled event can experience: ++![State diagram showing the various transitions a scheduled event can take.](media/scheduled-events/scheduled-events-states.png) ++For events in the EventStatus:"Scheduled" state, you'll need to take steps to prepare your workload. Once the preparation is complete, you should then approve the event using the scheduled event API. Otherwise, the event will be automatically approved when the NotBefore time is reached. If the VM is on shared infrastructure, the system will then wait for all other tenants on the same hardware to also approve the job or time out. 
Once approvals are gathered from all impacted VMs, or the NotBefore time is reached, Azure generates a new scheduled event payload with EventStatus:"Started" and triggers the start of the maintenance event. When the event has reached a terminal state, it is removed from the list of events, which serves as the signal for the tenant to recover their VM(s). ++Below is pseudocode demonstrating a process for how to read and manage scheduled events in your application: +``` +current_list_of_scheduled_events = get_latest_from_se_endpoint() +#prepare for new events +for each event in current_list_of_scheduled_events: + if event not in previous_list_of_scheduled_events: + prepare_for_event(event) +#recover from completed events +for each event in previous_list_of_scheduled_events: + if event not in current_list_of_scheduled_events: + recover_from_event(event) +#save the current event list for the next poll +previous_list_of_scheduled_events = current_list_of_scheduled_events +``` +As scheduled events are often used for applications with high availability requirements, there are a few exceptional cases that should be considered: ++1. Once a scheduled event is completed and removed from the array, there will be no further impacts without a new event, including another EventStatus:"Scheduled" event +2. Azure monitors maintenance operations across the entire fleet and in rare circumstances determines that a maintenance operation is too high risk to apply. In that case the scheduled event will go directly from "Scheduled" to being removed from the events array +3. In the case of hardware failure, Azure will bypass the "Scheduled" state and immediately move to the EventStatus:"Started" state. +4. While the event is still in EventStatus:"Started" state, there may be additional impacts of a shorter duration than what was advertised in the scheduled event. 
++As part of Azure's availability guarantee, VMs in different fault domains won't be impacted by routine maintenance operations at the same time. However, they may have operations serialized one after another. VMs in one fault domain can receive scheduled events with EventStatus:"Scheduled" shortly after another fault domain's maintenance is completed. Regardless of what architecture you choose, always keep checking for new events pending against your VMs. ++While the exact timings of events vary, the following diagram provides a rough guideline for how a typical maintenance operation proceeds: ++- EventStatus:"Scheduled" to Approval Timeout: 15 minutes +- Impact Duration: 7 seconds +- EventStatus:"Started" to Completed (event removed from Events array): 10 minutes ++![Diagram of a timeline showing the flow of a scheduled event.](media/scheduled-events/scheduled-events-timeline.png) ++ ### Headers When you query Metadata Service, you must provide the header `Metadata:true` to ensure the request wasn't unintentionally redirected. The `Metadata:true` header is required for all scheduled events requests. Failure to include the header in the request results in a "Bad Request" response from Metadata Service. You can query for scheduled events by making the following call: ``` curl -H Metadata:true http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```+#### PowerShell sample +``` +Invoke-RestMethod -Headers @{"Metadata"="true"} -Method GET -Uri "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01" | ConvertTo-Json -Depth 64 +``` #### Python sample ```` import json In the case where there are scheduled events, the response contains an array of } ``` -### Event Properties +### Event properties |Property | Description | | - | - | | Document Incarnation | Integer that increases when the events array changes. 
Documents with the same incarnation contain the same event information, and the incarnation will be incremented when an event changes. | | EventId | Globally unique identifier for this event. <br><br> Example: <br><ul><li>602d9444-d2cd-49c7-8624-8643e7171297 |-| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). This event is made available on a best effort basis <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). This event is delivered on a best effort basis. <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). <li> `Terminate`: The virtual machine is scheduled to be deleted. | +| EventType | Impact this event causes. <br><br> Values: <br><ul><li> `Freeze`: The Virtual Machine is scheduled to pause for a few seconds. CPU and network connectivity may be suspended, but there's no impact on memory or open files.<li>`Reboot`: The Virtual Machine is scheduled for reboot (non-persistent memory is lost). <li>`Redeploy`: The Virtual Machine is scheduled to move to another node (ephemeral disks are lost). <li>`Preempt`: The Spot Virtual Machine is being deleted (ephemeral disks are lost). This event is made available on a best effort basis <li> `Terminate`: The virtual machine is scheduled to be deleted. | | ResourceType | Type of resource this event affects. <br><br> Values: <ul><li>`VirtualMachine`| | Resources| List of resources this event affects. <br><br> Example: <br><ul><li> ["FrontEnd_IN_0", "BackEnd_IN_0"] | | EventStatus | Status of this event. 
<br><br> Values: <ul><li>`Scheduled`: This event is scheduled to start after the time specified in the `NotBefore` property.<li>`Started`: This event has started.</ul> No `Completed` or similar status is ever provided. The event is no longer returned when the event is finished. | NotBefore| Time after which this event can start. The event is guaranteed to not start before this time. Will be blank if the event has already started <br><br> Example: <br><ul><li> Mon, 19 Sep 2016 18:29:47 GMT | | Description | Description of this event. <br><br> Example: <br><ul><li> Host server is undergoing maintenance. | | EventSource | Initiator of the event. <br><br> Example: <br><ul><li> `Platform`: This event is initiated by platform. <li>`User`: This event is initiated by user. |-| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li> `0`: The event won't interrupt the VM or impact its availability (for example, update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. | +| DurationInSeconds | The expected duration of the interruption caused by the event. <br><br> Example: <br><ul><li> `9`: The interruption caused by the event will last for 9 seconds. <li>`0`: The event won't interrupt the VM or impact its availability (eg. update to the network) <li>`-1`: The default value used if the impact duration is either unknown or not applicable. | -### Event Scheduling +### Event scheduling Each event is scheduled a minimum amount of time in the future based on the event type. This time is reflected in an event's `NotBefore` property. 
|EventType | Minimum notice | Each event is scheduled a minimum amount of time in the future based on the even | Redeploy | 10 minutes | | Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes | -Once an event is scheduled it will move into the started state after it is either approved or the not before time passes. However in rare cases the operation will be cancelled by Azure before it starts. In that case the event will be removed from the Events array and the impact will not occur as previously scheduled. +Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be canceled by Azure before it starts. In that case the event will be removed from the Events array, and the impact won't occur as previously scheduled. > [!NOTE] > In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there's a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible.- + >[!NOTE] > In the case the host node experiences a hardware failure, Azure will bypass the minimum notice period and immediately begin the recovery process for affected virtual machines. This reduces recovery time in the case that the affected VMs are unable to respond. 
During the recovery process an event will be created for all impacted VMs with `EventType = Reboot` and `EventStatus = Started`.- + ### Polling frequency You can poll the endpoint for updates as frequently or infrequently as you like. However, the longer the time between requests, the more time you potentially lose to react to an upcoming event. Most events have 5 to 15 minutes of advance notice, although in some cases advance notice might be as little as 30 seconds. To ensure that you have as much time as possible to take mitigating actions, we recommend that you poll the service once per second. ### Start an event -After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible). The event may not start immediately upon approval, in some cases Azure will require the approval of all the VMs hosted on the node before proceeding with the event. +After you learn of an upcoming event and finish your logic for graceful shutdown, you can approve the outstanding event by making a `POST` call to Metadata Service with `EventId`. This call indicates to Azure that it can shorten the minimum notification time (when possible). The event may not start immediately upon approval, in some cases Azure will require the approval of all the VMs hosted on the node before proceeding with the event. The following JSON sample is expected in the `POST` request body. The request should contain a list of `StartRequests`. Each `StartRequest` contains `EventId` for the event you want to expedite:+ ``` { "StartRequests" : [ The following JSON sample is expected in the `POST` request body. The request sh } ``` -The service will always return a 200 success code for a valid event ID, even if it was already approved by a different VM. 
A 400 error code indicates that the request header or payload was malformed. +The service will always return a 200 success code if it is passed a valid event ID, even if the event was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed. +> [!Note] +> Events will not proceed unless they are either approved via a POST message or the NotBefore time elapses. This includes user triggered events such as VM restarts from the Azure portal. #### Bash sample ``` curl -H Metadata:true -X POST -d '{"StartRequests": [{"EventId": "f020ba2e-3bc0-4c40-a10b-86575a9eabd5"}]}' http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 ```+#### PowerShell sample +``` +Invoke-RestMethod -Headers @{"Metadata" = "true"} -Method POST -body '{"StartRequests": [{"EventId": "5DD55B64-45AD-49D3-BBC9-F57D4EA97BD7"}]}' -Uri http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01 | ConvertTo-Json -Depth 64 +``` #### Python sample ```` import json def confirm_scheduled_event(event_id): > [!NOTE] > Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field. -## Example Responses -The following response is an example of a series of events that were seen by two VMs that were live migrated to another node. +## Example responses +The following events are an example that was seen by two VMs that were live migrated to another node. -The `DocumentIncarnation` is changing every time there's new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take. +The `DocumentIncarnation` is changing every time there is new information in `Events`. 
An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take. ```JSON { def advanced_sample(last_document_incarnation): int(event["DurationInSeconds"]) < 9): confirm_scheduled_event(event["EventId"]) - # Events that may be impactful (for example, Reboot or redeploy) may need custom + # Events that may be impactful (eg. Reboot or redeploy) may need custom # handling for your application else: #TODO Custom handling for impactful events |
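The prepare/recover pattern from the samples above can be combined into a single testable function that compares the current `Events` array against the previously seen set. The fetch function is injected so the sketch runs outside a VM; `process_scheduled_events`, `prepare`, and `recover` are illustrative names standing in for your application's own logic:

```python
# Sketch: one iteration of the scheduled-events loop described in this
# article. fetch_events stands in for the GET to
# http://169.254.169.254/metadata/scheduledevents; prepare/recover are
# placeholders for workload-specific preparation and recovery.

def process_scheduled_events(fetch_events, previous_events, prepare, recover):
    """Return the current events (by EventId) after preparing/recovering."""
    doc = fetch_events()
    current = {e["EventId"]: e for e in doc["Events"]}
    for event_id, event in current.items():
        if event_id not in previous_events:
            prepare(event)      # new event: get the workload ready
    for event_id, event in previous_events.items():
        if event_id not in current:
            recover(event)      # event left the array: restore the workload
    return current
```

After `prepare` completes for an event, you would approve it with the `POST` call shown earlier; the returned dictionary becomes `previous_events` for the next poll.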
virtual-machines | Time Sync | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/time-sync.md | cat /sys/class/ptp/ptp0/clock_name This should return `hyperv`, meaning the Azure host. -In Linux VMs with Accelerated Networking enabled, you may see multiple PTP devices listed because the Mellanox mlx5 driver also creates a /dev/ptp device. Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be `/dev/ptp0` or it might be `/dev/ptp1`, which makes it difficult to configure `chronyd` with the correct clock source. To solve this problem, the most recent Linux images have a `udev` rule that creates the symlink `/dev/ptp_hyperv` to whichever `/dev/ptp` entry corresponds to the Azure host. Chrony should be configured to use this symlink instead of `/dev/ptp0` or `/dev/ptp1`. +In some Linux VMs, you may see multiple PTP devices listed. For example, with Accelerated Networking the Mellanox mlx5 driver also creates a /dev/ptp device. Because the initialization order can be different each time Linux boots, the PTP device corresponding to the Azure host might be `/dev/ptp0` or it might be `/dev/ptp1`, which makes it difficult to configure `chronyd` with the correct clock source. To solve this problem, the most recent Linux images have a `udev` rule that creates the symlink `/dev/ptp_hyperv` to whichever `/dev/ptp` entry corresponds to the Azure host. Chrony should always be configured to use the `/dev/ptp_hyperv` symlink instead of `/dev/ptp0` or `/dev/ptp1`. ### chrony |
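The selection the `/dev/ptp_hyperv` udev rule performs can be approximated in user space: scan the registered PTP devices and pick the one whose `clock_name` is `hyperv`. In a real VM the mapping would come from reading `/sys/class/ptp/*/clock_name`; here it's passed in as a plain dict (an assumption for illustration) so the sketch runs anywhere:

```python
# Sketch: find the PTP device backed by the Azure host, mirroring what the
# /dev/ptp_hyperv udev rule does. clock_names maps a device name under
# /sys/class/ptp to the contents of its clock_name file.

def azure_host_ptp_device(clock_names: dict) -> str:
    """Return the /dev/ptp* device whose clock_name is 'hyperv'."""
    for device, clock_name in clock_names.items():
        if clock_name == "hyperv":
            return "/dev/" + device
    raise LookupError("no Hyper-V PTP clock found")

# Boot-order dependent example: the mlx5 driver grabbed ptp0 first,
# so the Azure host clock landed on ptp1.
print(azure_host_ptp_device({"ptp0": "mlx5_ptp", "ptp1": "hyperv"}))  # prints /dev/ptp1
```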
virtual-machines | Migration Classic Resource Manager Errors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-errors.md | This article catalogs the most common errors and mitigations during the migratio | Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it's a PaaS deployment (Web/Worker). |This happens when a deployment contains a web/worker role. Since migration is only supported for Virtual Machines, remove the web/worker role from the deployment and try migration again. | | Template {template-name} deployment failed. CorrelationId={guid} |In the backend of migration service, we use Azure Resource Manager templates to create resources in the Azure Resource Manager stack. Since templates are idempotent, usually you can safely retry the migration operation to get past this error. If this error continues to persist, [contact Azure support](../azure-portal/supportability/how-to-create-azure-support-request.md) and give them the CorrelationId. <br><br> **NOTE:** Once the incident is tracked by the support team, don't attempt any self-mitigation as this might have unintended consequences on your environment. | | The virtual network {virtual-network-name} doesn't exist. |This can happen if you created the Virtual Network in the new Azure portal. The actual Virtual Network name follows the pattern "Group * \<VNET name>" |-| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. If these extensions are left installed on the virtual machine, they're automatically uninstalled before completing the migration. 
| +| VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} which isn't supported in Azure Resource Manager. It's recommended to uninstall it from the VM before continuing with migration. |**NOTE:** This error message is in the process of being updated; moving forward, <b>it is required to uninstall the extension before the migration.</b> XML extensions such as BGInfo 1.\* aren't supported in Azure Resource Manager. Therefore, these extensions can't be migrated. | | VM {vm-name} in HostedService {hosted-service-name} contains Extension VMSnapshot/VMSnapshotLinux, which is currently not supported for Migration. Uninstall it from the VM and add it back using Azure Resource Manager after the Migration is Complete |This is the scenario where the virtual machine is configured for Azure Backup. Since this is currently an unsupported scenario, follow the workaround at https://aka.ms/vmbackupmigration | | VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} whose Status isn't being reported from the VM. Hence, this VM can't be migrated. Ensure that the Extension status is being reported or uninstall the extension from the VM and retry migration. <br><br> VM {vm-name} in HostedService {hosted-service-name} contains Extension {extension-name} reporting Handler Status: {handler-status}. Hence, the VM can't be migrated. Ensure that the Extension handler status being reported is {handler-status} or uninstall it from the VM and retry migration. <br><br> VM Agent for VM {vm-name} in HostedService {hosted-service-name} is reporting the overall agent status as Not Ready. Hence, the VM may not be migrated, if it has a migratable extension. Ensure that the VM Agent is reporting overall agent status as Ready. Refer to https://aka.ms/classiciaasmigrationfaqs. |Azure guest agent & VM Extensions need outbound internet access to the VM storage account to populate their status.
Common causes of status failure include <li> a Network Security Group that blocks outbound access to the internet <li> on-premises DNS servers configured on the VNET when DNS connectivity is lost <br><br> If you continue to see an unsupported status, you can uninstall the extensions to skip this check and move forward with migration. | | Migration isn't supported for Deployment {deployment-name} in HostedService {hosted-service-name} because it has multiple Availabilities Sets. |Currently, only hosted services that have one or fewer availability sets can be migrated. To work around this problem, move the additional availability sets, and the virtual machines in those availability sets, to a different hosted service. | |
virtual-machines | Migration Classic Resource Manager Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-ps.md | Set your Azure subscription for the current session. This example sets the defau ## Step 5: Run commands to migrate your IaaS resources-* [Migrate VMs in a cloud service (not in a virtual network)](#step-51-option-1migrate-virtual-machines-in-a-cloud-service-not-in-a-virtual-network) -* [Migrate VMs in a virtual network](#step-51-option-2migrate-virtual-machines-in-a-virtual-network) -* [Migrate a storage account](#step-52-migrate-a-storage-account) +* [Migrate VMs in a cloud service (not in a virtual network)](#step-5a-option-1migrate-virtual-machines-in-a-cloud-service-not-in-a-virtual-network) +* [Migrate VMs in a virtual network](#step-5a-option-2migrate-virtual-machines-in-a-virtual-network) +* [Migrate a storage account](#step-5b-migrate-a-storage-account) > [!NOTE] > All the operations described here are idempotent. If you have a problem other than an unsupported feature or a configuration error, we recommend that you retry the prepare, abort, or commit operation. The platform then tries the action again. -### Step 5.1: Option 1 - Migrate virtual machines in a cloud service (not in a virtual network) +### Step 5a: Option 1 - Migrate virtual machines in a cloud service (not in a virtual network) Get the list of cloud services by using the following command. Then pick the cloud service that you want to migrate. If the VMs in the cloud service are in a virtual network or if they have web or worker roles, the command returns an error message. 
```powershell If the prepared configuration looks good, you can move forward and commit the re Move-AzureService -Commit -ServiceName $serviceName -DeploymentName $deploymentName ``` -### Step 5.1: Option 2 - Migrate virtual machines in a virtual network +### Step 5a: Option 2 - Migrate virtual machines in a virtual network To migrate virtual machines in a virtual network, you migrate the virtual network. The virtual machines automatically migrate with the virtual network. Pick the virtual network that you want to migrate. > [!NOTE] If the prepared configuration looks good, you can move forward and commit the re Move-AzureVirtualNetwork -Commit -VirtualNetworkName $vnetName ``` -### Step 5.2: Migrate a storage account +### Step 5b: Migrate a storage account After you're done migrating the virtual machines, perform the following prerequisite checks before you migrate the storage accounts. > [!NOTE] |
virtual-machines | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
virtual-machines | Reserved Vm Instance Size Flexibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/reserved-vm-instance-size-flexibility.md | Azure keeps the link and schema updated so that you can use the file programmatically. ## View VM size recommendations -Azure shows VM size recommendations in the purchase experience. To view the smallest size recommendations, select **Group by smallest size**. +Azure shows VM size recommendations in the purchase experience. When enabled, the **Optimize for instance size flexibility (preview)** option groups and sorts recommendations by instance size flexibility. :::image type="content" source="./media/reserved-vm-instance-size-flexibility/select-product-recommended-quantity.png" alt-text="Screenshot showing recommended quantities." lightbox="./media/reserved-vm-instance-size-flexibility/select-product-recommended-quantity.png" ::: |
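Within an instance size flexibility group, the reservation discount is applied according to published ratios, so one reservation can cover a mix of sizes. A sketch of that bookkeeping, using illustrative ratios (the authoritative ratios come from the CSV file the article links to):

```python
# Illustrative instance size flexibility ratios for one flexibility group.
# Real ratio values come from the CSV that Azure publishes.
RATIOS = {"Standard_D2s_v3": 1.0, "Standard_D4s_v3": 2.0, "Standard_D8s_v3": 4.0}

def reserved_units_consumed(running: dict) -> float:
    """Total reserved units a mix of running VM sizes consumes."""
    return sum(RATIOS[size] * count for size, count in running.items())

# One D8s_v3 plus two D2s_v3 consume 4 + 2 = 6 reserved units.
print(reserved_units_consumed({"Standard_D8s_v3": 1, "Standard_D2s_v3": 2}))  # 6.0
```

A reservation purchased for six units of the smallest size in the group would therefore fully cover this mix.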
virtual-machines | Security Controls Policy Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md | Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
virtual-machines | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
virtual-machines | Setup Mpi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md | The following figure illustrates the architecture for the popular MPI libraries. ![Architecture for popular MPI libraries](./media/hpc/mpi-architecture.png) -## UCX --[Unified Communication X (UCX)](https://github.com/openucx/ucx) is a framework of communication APIs for HPC. It is optimized for MPI communication over InfiniBand and works with many MPI implementations such as OpenMPI and MPICH. --```bash -wget https://github.com/openucx/ucx/releases/download/v1.4.0/ucx-1.4.0.tar.gz -tar -xvf ucx-1.4.0.tar.gz -cd ucx-1.4.0 -./configure --prefix=<ucx-install-path> -make -j 8 && make install -``` --> [!NOTE] -> Recent builds of UCX have fixed an [issue](https://github.com/openucx/ucx/pull/5965) whereby the right InfiniBand interface is chosen in the presence of multiple NIC interfaces. For more information, see [Troubleshooting known issues with HPC and GPU VMs](hb-hc-known-issues.md) on running MPI over InfiniBand when Accelerated Networking is enabled on the VM. - ## HPC-X The [HPC-X software toolkit](https://www.mellanox.com/products/hpc-x-toolkit) contains UCX and HCOLL and can be built against UCX. cat /sys/class/infiniband/mlx5_0/ports/1/pkeys/1 0x7fff ``` -Use the partition other than default (0x7fff) partition key. UCX requires the MSB of p-key to be cleared. For example, set UCX_IB_PKEY as 0x000b for 0x800b. +Note that interfaces are named mlx5_ib* inside the HPC VM image. Also note that as long as the tenant (Availability Set or Virtual Machine Scale Set) exists, the PKEYs remain the same, even when nodes are added or deleted. New tenants get different PKEYs. |
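The p-key rule quoted above (UCX requires the most significant bit of the p-key to be cleared before it is passed via `UCX_IB_PKEY`) is a simple bit mask; a quick sketch, with the helper name being illustrative:

```python
# UCX requires the MSB of the InfiniBand partition key to be cleared.
# Masking with 0x7fff clears bit 15, turning e.g. 0x800b into 0x000b.
def ucx_pkey(pkey: int) -> str:
    """Return the UCX_IB_PKEY value for a raw partition key."""
    return f"0x{pkey & 0x7fff:04x}"

print(ucx_pkey(0x800b))  # matches the article's example: 0x800b -> 0x000b
```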
virtual-machines | Share Gallery Direct | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/share-gallery-direct.md | During the preview: - A direct shared gallery can't contain encrypted image versions. Encrypted images can't be created within a gallery that is directly shared. - Only the owner of a subscription, or a user or service principal assigned to the `Compute Gallery Sharing Admin` role at the subscription or gallery level will be able to enable group-based sharing. - You need to create a new gallery, with the property `sharingProfile.permissions` set to `Groups`. When using the CLI to create a gallery, use the `--permissions groups` parameter. You can't use an existing gallery, the property can't currently be updated.-- TrustedLaunch and ConfidentialVM are not supported - PowerShell, Ansible, and Terraform aren't supported at this time. - The image version region in the gallery should be same as the region home region, creating of cross-region version where the home region is different than the gallery is not supported, however once the image is in the home region it can be replicated to other regions - Not available in Government clouds |
virtual-machines | Sizes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes.md | This article describes the available sizes and options for the Azure virtual mac | Type | Sizes | Description | ||-|-|-| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. | +| [General purpose](sizes-general.md) | B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dpdsv5, Dpldsv5, Dpsv5, Dplsv5, Dv4, Dsv4, Ddv4, Ddsv4, Dv5, Dsv5, Ddv5, Ddsv5, Dasv5, Dadsv5, DCasv5, DCadsv5 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. | | [Compute optimized](sizes-compute.md) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. |-| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. | +| [Memory optimized](sizes-memory.md) | Esv3, Ev3, Easv4, Eav4, Epdsv5, Epsv5, Ev4, Esv4, Edv4, Edsv4, Ev5, Esv5, Edv5, Edsv5, Easv5, Eadsv5, Mv2, M, DSv2, Dv2, ECasv5, ECadsv5 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. | | [Storage optimized](sizes-storage.md) | Lsv2, Lsv3, Lasv3 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. 
| | [GPU](sizes-gpu.md) | NC, NCv2, NCv3, NCasT4_v3, NC A100 v4, ND, NDv2, NGads V620, NV, NVv3, NVv4, NDasrA100_v4, NDm_A100_v4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. | | [High performance compute](sizes-hpc.md) | HB, HBv2, HBv3, HBv4, HC, HX | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). | This article describes the available sizes and options for the Azure virtual mac ## REST API -For information on using the REST API to query for VM sizes, see the following: +For information on using the REST API to query for VM sizes, see the following articles: - [List available virtual machine sizes for resizing](/rest/api/compute/virtualmachines/listavailablesizes) - [List available virtual machine sizes for a subscription](/rest/api/compute/resourceskus/list) |
virtual-machines | Virtual Machines Create Restore Points | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md | An individual VM restore point is a resource that stores VM configuration and po VM restore points support both application consistency and crash consistency (in preview). Application consistency is supported for VMs running Windows operating systems; file system consistency is supported for VMs running Linux operating systems. Application consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux) to achieve application consistency. -Crash consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a Virtual Machine. This is same as the status of data in the VM after a power outage or a crash. "consistencyMode" optional parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview. +Multi-disk crash consistent VM restore point stores the VM configuration and point-in-time write-order consistent snapshots for all managed disks attached to a virtual machine. This is the same as the status of data in the VM after a power outage or a crash. The "consistencyMode" optional parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview. ++> [!NOTE] +> For disks configured with read/write host caching, multi-disk crash consistency can't be guaranteed because writes occurring while the snapshot is taken might not have been acknowledged by Azure Storage. If maintaining consistency is crucial, we advise using the application consistency mode.
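For reference, the optional "consistencyMode" parameter mentioned above travels in the restore point creation request body. A minimal sketch of the JSON (the surrounding shape follows the Compute REST API and is illustrative; only the parameter name and value come from the text above):

```json
{
  "properties": {
    "consistencyMode": "crashConsistent"
  }
}
```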
VM restore points are organized into restore point collections. A restore point collection is an Azure Resource Management resource that contains the restore points for a specific VM. If you want to utilize ARM templates for creating restore points and restore point collections, visit the public [Virtual-Machine-Restore-Points](https://github.com/Azure/Virtual-Machine-Restore-Points) repository on GitHub. |
virtual-machines | Disk Encryption Key Vault Aad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-key-vault-aad.md | If you would like to use certificate authentication and wrap the encryption key ## Next steps -[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md) +[Enable Azure Disk Encryption with Azure AD on Windows VMs (previous release)](disk-encryption-windows-aad.md) |
virtual-machines | Disks Upload Vhd To Managed Disk Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-upload-vhd-to-managed-disk-powershell.md | Title: Upload a VHD to Azure or copy a disk across regions - Azure PowerShell description: Learn how to upload a VHD to an Azure managed disk and copy a managed disk across regions, using Azure PowerShell, via direct upload. Previously updated : 03/31/2023 Last updated : 08/25/2023 linux-This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using the Azure PowerShell module. The process of uploading a managed disk, also known as direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for standard HDD, standard SSD, and premium SSDs. It isn't supported for ultra disks, yet. +This article explains how to either upload a VHD from your local machine to an Azure managed disk or copy a managed disk to another region, using the Azure PowerShell module. The process of uploading a managed disk, also known as direct upload, enables you to upload a VHD up to 32 TiB in size directly into a managed disk. Currently, direct upload is supported for Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD. If you're providing a backup solution for IaaS VMs in Azure, you should use direct upload to restore customer backups to managed disks. When uploading a VHD from a source external to Azure, speeds depend on your local bandwidth. When uploading or copying from an Azure VM, your bandwidth would be the same as standard HDDs. 
For detailed steps on assigning a role, see [Assign Azure roles using Azure Powe There are two ways you can upload a VHD with the Azure PowerShell module: You can either use the [Add-AzVHD](/powershell/module/az.compute/add-azvhd) command, which will automate most of the process for you, or you can perform the upload manually with AzCopy. -Generally, you should use [Add-AzVHD](#use-add-azvhd). However, if you need to upload a VHD that is larger than 50 GiB, consider [uploading the VHD manually with AzCopy](#manual-upload). VHDs 50 GiB and larger upload faster using AzCopy. +For Premium SSDs, Standard SSDs, and Standard HDDs, you should generally use [Add-AzVHD](#use-add-azvhd). However, if you're uploading to an Ultra Disk or a Premium SSD v2, or if you need to upload a VHD that is larger than 50 GiB, you must [upload the VHD or VHDX manually with AzCopy](#manual-upload). VHDs 50 GiB and larger upload faster using AzCopy, and Add-AzVhd doesn't currently support uploading to an Ultra Disk or a Premium SSD v2. For guidance on how to copy a managed disk from one region to another, see [Copy a managed disk](#copy-a-managed-disk). Now, on your local shell, create an empty standard HDD for uploading by specifyi Replace `<yourdiskname>`, `<yourresourcegroupname>`, and `<yourregion>` then run the following commands: -> [!TIP] +> [!IMPORTANT] > If you're creating an OS disk, add `-HyperVGeneration '<yourGeneration>'` to `New-AzDiskConfig`. > > If you're using Azure AD to secure your uploads, add `-dataAccessAuthMode 'AzureActiveDirectory'` to `New-AzDiskConfig`. +> When uploading to an Ultra Disk or Premium SSD v2, you need to select the correct sector size of the target disk. If you're using a VHDX file with a 4k logical sector size, the target disk must be set to 4k. If you're using a VHD file with a 512 logical sector size, the target disk must be set to 512. +> +> VHDX files with logical sector size of 512k aren't supported.
```powershell $vhdSizeBytes = (Get-Item "<fullFilePathHere>").length +## For Ultra Disks or Premium SSD v2, add -LogicalSectorSize and specify either 4096 or 512, depending on if you're using a VHDX or a VHD + $diskconfig = New-AzDiskConfig -SkuName 'Standard_LRS' -OsType 'Windows' -UploadSizeInBytes $vhdSizeBytes -Location '<yourregion>' -CreateOption 'Upload' New-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' -Disk $diskconfig ``` -If you would like to upload either a premium SSD or a standard SSD, replace **Standard_LRS** with either **Premium_LRS** or **StandardSSD_LRS**. Ultra disks aren't currently supported. +If you would like to upload a different disk type, replace **Standard_LRS** with **Premium_LRS**, **Premium_ZRS**, **StandardSSD_ZRS**, **StandardSSD_LRS**, or **UltraSSD_LRS**. ### Generate writeable SAS $diskSas = Grant-AzDiskAccess -ResourceGroupName '<yourresourcegroupname>' -Disk $disk = Get-AzDisk -ResourceGroupName '<yourresourcegroupname>' -DiskName '<yourdiskname>' ``` -### Upload a VHD +### Upload a VHD or VHDX Now that you have a SAS for your empty managed disk, you can use it to set your managed disk as the destination for your upload command. -Use AzCopy v10 to upload your local VHD file to a managed disk by specifying the SAS URI you generated. +Use AzCopy v10 to upload your local VHD or VHDX file to a managed disk by specifying the SAS URI you generated. This upload has the same throughput as the equivalent [standard HDD](../disks-types.md#standard-hdds). For example, if you have a size that equates to S4, you will have a throughput of up to 60 MiB/s. But, if you have a size that equates to S70, you will have a throughput of up to 500 MiB/s. |
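Assuming the disk was created with `-CreateOption 'Upload'` and a writeable SAS was granted as shown, the upload itself is a single AzCopy command. This is a sketch only: the local path is a placeholder, and `$diskSas.AccessSAS` comes from the `Grant-AzDiskAccess` call in this article.

```powershell
# Copy the local VHD into the empty managed disk through its writeable SAS.
# PageBlob is the expected blob type when writing to a managed disk SAS.
AzCopy.exe copy "C:\somewhere\mydisk.vhd" $diskSas.AccessSAS --blob-type PageBlob
```

When the copy finishes, revoke the SAS with `Revoke-AzDiskAccess` before attaching the disk.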
virtual-machines | Quick Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-portal.md | Sign in to the [Azure portal](https://portal.azure.com). 1. Enter *virtual machines* in the search. 1. Under **Services**, select **Virtual machines**. 1. In the **Virtual machines** page, select **Create** and then **Azure virtual machine**. The **Create a virtual machine** page opens.-1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2022 Datacenter - Gen 2* for the **Image**. Leave the other defaults. +1. Under **Instance details**, enter *myVM* for the **Virtual machine name** and choose *Windows Server 2022 Datacenter: Azure Edition - x64 Gen 2* for the **Image**. Leave the other defaults. - :::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size."::: + :::image type="content" source="media/quick-create-portal/instance-details.png" alt-text="Screenshot of the Instance details section where you provide a name for the virtual machine and select its region, image and size." lightbox="media/quick-create-portal/instance-details.png"::: > [!NOTE] > Some users will now see the option to create VMs in multiple zones. To learn more about this new capability, see [Create virtual machines in an availability zone](../create-portal-availability-zone.md). Sign in to the [Azure portal](https://portal.azure.com). 1. After validation runs, select the **Create** button at the bottom of the page.- :::image type="content" source="media/quick-create-portal/validation.png" alt-text="Screenshot showing that validation has passed. Select the Create button to create the VM."::: + :::image type="content" source="media/quick-create-portal/validation.png" alt-text="Screenshot showing that validation has passed. 
Select the Create button to create the VM." lightbox="media/quick-create-portal/validation.png"::: 1. After deployment is complete, select **Go to resource**. |
virtual-machines | Quick Create Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-powershell.md | New-AzVm ` -ResourceGroupName 'myResourceGroup' ` -Name 'myVM' ` -Location 'East US' `+ -Image 'MicrosoftWindowsServer:WindowsServer:2022-datacenter-azure-edition:latest' ` -VirtualNetworkName 'myVnet' ` -SubnetName 'mySubnet' ` -SecurityGroupName 'myNetworkSecurityGroup' ` |
virtual-machines | Scheduled Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/scheduled-events.md | Scheduled events are delivered to and can be acknowledged by: - All the VMs in a scale set placement group. > [!NOTE]-> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a VM Scale Set (VMSS) regardless of Availability Zone usage. +> Scheduled Events for all virtual machines (VMs) in a Fabric Controller (FC) tenant are delivered to all VMs in a FC tenant. FC tenant equates to a standalone VM, an entire Cloud Service, an entire Availability Set, and a Placement Group for a Virtual Machine Scale Set regardless of Availability Zone usage. As a result, check the `Resources` field in the event to identify which VMs are affected. Scheduled Events is enabled for your service the first time you make a request f ### User-initiated maintenance User-initiated VM maintenance via the Azure portal, API, CLI, or PowerShell results in a scheduled event. You then can test the maintenance preparation logic in your application, and your application can prepare for user-initiated maintenance. -If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. This will prevent delays in recovering your application back to a good state. +If you restart a VM, an event with the type `Reboot` is scheduled. If you redeploy a VM, an event with the type `Redeploy` is scheduled. 
Typically events with a user event source can be immediately approved to avoid a delay on user-initiated actions. We advise having a primary and secondary VM communicating and approving user generated scheduled events in case the primary VM becomes unresponsive. Immediately approving events prevents delays in recovering your application back to a good state. -Scheduled events are disabled by default for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md). To enable scheduled events for these operations, first enable them using [OSImageNotificationProfile](https://learn.microsoft.com/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP#osimagenotificationprofile). +Scheduled events for [VMSS Guest OS upgrades or reimages](../../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) are supported only for general purpose VM sizes that [support memory preserving updates](../maintenance-and-updates.md#maintenance-that-doesnt-require-a-reboot). This doesn't work for G, M, N, and H series. Scheduled events for VMSS Guest OS upgrades and reimages are disabled by default. To enable scheduled events for these operations on supported VM sizes, first enable them using [OSImageNotificationProfile](/rest/api/compute/virtual-machine-scale-sets/create-or-update?tabs=HTTP). ## Use the API +### High level overview ++There are two major components to handling Scheduled Events: preparation and recovery. All current events impacting the customer will be available via the IMDS Scheduled Events endpoint. When the event has reached a terminal state, it is removed from the list of events.
The following diagram shows the various state transitions that a single scheduled event can experience: ++![State diagram showing the various transitions a scheduled event can take.](media/scheduled-events/scheduled-events-states.png) ++For events in the EventStatus:"Scheduled" state, you'll need to take steps to prepare your workload. Once the preparation is complete, you should then approve the event using the scheduled event API. Otherwise, the event will be automatically approved when the NotBefore time is reached. If the VM is on shared infrastructure, the system will then wait for all other tenants on the same hardware to also approve the job or timeout. Once approvals are gathered from all impacted VMs or the NotBefore time is reached, Azure generates a new scheduled event payload with EventStatus:"Started" and triggers the start of the maintenance event. When the event has reached a terminal state, it is removed from the list of events, which serves as the signal for the tenant to recover their VM(s). ++Below is pseudocode demonstrating how to read and manage scheduled events in your application: +``` +current_list_of_scheduled_events = get_latest_from_se_endpoint() +#prepare for new events +for each event in current_list_of_scheduled_events: + if event not in previous_list_of_scheduled_events: + prepare_for_event(event) +#recover from completed events +for each event in previous_list_of_scheduled_events: + if event not in current_list_of_scheduled_events: + recover_from_event(event) +#prepare for future jobs +previous_list_of_scheduled_events = current_list_of_scheduled_events +``` +As scheduled events are often used for applications with high availability requirements, there are a few exceptional cases that should be considered: ++1. Once a scheduled event is completed and removed from the array, there will be no further impacts without a new event, including another EventStatus:"Scheduled" event +2.
Azure monitors maintenance operations across the entire fleet and in rare circumstances determines that a maintenance operation is too high risk to apply. In that case the scheduled event will go directly from "Scheduled" to being removed from the events array +3. In the case of hardware failure, Azure will bypass the "Scheduled" state and immediately move to the EventStatus:"Started" state. +4. While the event is still in EventStatus:"Started" state, there may be additional impacts of a shorter duration than what was advertised in the scheduled event. ++As part of Azure's availability guarantee, VMs in different fault domains won't be impacted by routine maintenance operations at the same time. However, they may have operations serialized one after another. VMs in one fault domain can receive scheduled events with EventStatus:"Scheduled" shortly after another fault domain's maintenance is completed. Regardless of which architecture you choose, always keep checking for new events pending against your VMs. ++While the exact timings of events vary, the following diagram provides a rough guideline for how a typical maintenance operation proceeds: ++- EventStatus:"Scheduled" to Approval Timeout: 15 minutes +- Impact Duration: 7 seconds +- EventStatus:"Started" to Completed (event removed from Events array): 10 minutes ++![Diagram of a timeline showing the flow of a scheduled event.](media/scheduled-events/scheduled-events-timeline.png) ++ ### Headers When you query Metadata Service, you must provide the header `Metadata:true` to ensure the request wasn't unintentionally redirected. The `Metadata:true` header is required for all scheduled events requests. Failure to include the header in the request results in a "Bad Request" response from Metadata Service.
Each event is scheduled a minimum amount of time in the future based on the even | Redeploy | 10 minutes | | Terminate | [User Configurable](../../virtual-machine-scale-sets/virtual-machine-scale-sets-terminate-notification.md#enable-terminate-notifications): 5 to 15 minutes | -Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be cancelled by Azure before it starts. In that case the event will be removed from the Events array, and the impact will not occur as previously scheduled. +Once an event is scheduled, it will move into the `Started` state after it's been approved or the `NotBefore` time passes. However, in rare cases, the operation will be canceled by Azure before it starts. In that case the event will be removed from the Events array, and the impact won't occur as previously scheduled. > [!NOTE] > In some cases, Azure is able to predict host failure due to degraded hardware and will attempt to mitigate disruption to your service by scheduling a migration. Affected virtual machines will receive a scheduled event with a `NotBefore` that is typically a few days in the future. The actual time varies depending on the predicted failure risk assessment. Azure tries to give 7 days' advance notice when possible, but the actual time varies and might be smaller if the prediction is that there's a high chance of the hardware failing imminently. To minimize risk to your service in case the hardware fails before the system-initiated migration, we recommend that you self-redeploy your virtual machine as soon as possible. The following JSON sample is expected in the `POST` request body. The request sh } ``` -The service will always return a 200 success code in the case of a valid event ID, even if it was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed. 
+The service will always return a 200 success code if it is passed a valid event ID, even if the event was already approved by a different VM. A 400 error code indicates that the request header or payload was malformed. > [!Note] > Events will not proceed unless they are either approved via a POST message or the NotBefore time elapses. This includes user-triggered events such as VM restarts from the Azure portal. def confirm_scheduled_event(event_id): > Acknowledging an event allows the event to proceed for all `Resources` in the event, not just the VM that acknowledges the event. Therefore, you can choose to elect a leader to coordinate the acknowledgement, which might be as simple as the first machine in the `Resources` field. ## Example responses-The following is an example of a series of events that were seen by two VMs that were live migrated to another node. +The following example shows a series of events seen by two VMs that were live migrated to another node. The `DocumentIncarnation` changes every time there is new information in `Events`. An approval of the event would allow the freeze to proceed for both WestNO_0 and WestNO_1. The `DurationInSeconds` of -1 indicates that the platform doesn't know how long the operation will take. |
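The `confirm_scheduled_event` helper referenced above can be sketched as follows. This is a hedged illustration, assuming the documented `StartRequests` approval body; the POST only succeeds from inside an Azure VM that is listed in the event's `Resources`.

```python
import json
import urllib.request

SCHEDULED_EVENTS_URL = (
    "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
)

def build_approval(event_id: str) -> bytes:
    # Approval body: the platform starts the event once any VM in the
    # event's `Resources` acknowledges it this way.
    return json.dumps({"StartRequests": [{"EventId": event_id}]}).encode()

def confirm_scheduled_event(event_id: str) -> int:
    req = urllib.request.Request(
        SCHEDULED_EVENTS_URL,
        data=build_approval(event_id),
        headers={"Metadata": "true"},
        method="POST",
    )
    # Returns 200 for a valid event ID, even if another VM already approved
    # it; 400 indicates a malformed header or payload.
    with urllib.request.urlopen(req) as response:
        return response.status
```

A leader VM (for example, the first machine in `Resources`) would call `confirm_scheduled_event` once on behalf of the group.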
virtual-machines | Ubuntu Pro In Place Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/canonical/ubuntu-pro-in-place-upgrade.md | -Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on your existing Azure Virtual Machines without redeployment or downtime. One of the major use cases includes conversion of Ubuntu 18.04 LTS going EOL to Ubuntu Pro. [Canonical announced that the Ubuntu 18.04 LTS (Bionic Beaver) OS images end-of-life (EOL)....](https://ubuntu.com/18-04/azure) Canonical no longer provides technical support, software updates, or security patches for this version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS. --## What's Ubuntu Pro -Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The secure use of open-source software allows teams to utilize the latest technologies while meeting internal governance and compliance requirements. Ubuntu Pro 18.04 LTS, remains fully compatible with Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu packages due to Extended Security Maintenance (ESM) for Infrastructure and Applications and optional 24/7 phone and ticket support. --Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI. 
--## Why developers and devops choose Ubuntu Pro for Azure -* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt) -* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and help you meet the Azure Linux Security Baseline policy) +Customers can now upgrade from Ubuntu Server (16.04 or higher) to Ubuntu Pro on their existing Azure +Virtual Machines without redeployment or downtime. One of the major use cases includes conversion of +Ubuntu 18.04 LTS going EOL to Ubuntu Pro. +[Canonical announced that the Ubuntu 18.04 LTS (Bionic Beaver) OS images reached end-of-life (EOL)](https://ubuntu.com/18-04/azure). +Canonical no longer provides technical support, software updates, or security patches for this +version. Customers need to upgrade to Ubuntu Pro to continue to be on Ubuntu 18.04 LTS. ++## What is Ubuntu Pro? ++Ubuntu Pro is a cross-cloud OS, optimized for Azure, and security maintained for 10 years. The +secure use of open-source software allows teams to utilize the latest technologies while meeting +internal governance and compliance requirements. Ubuntu Pro 18.04 LTS remains fully compatible with +Ubuntu Server 18.04 LTS, but adds more security enabled by default, including compliance and +management tools in a form suitable for small to large-scale Linux operations. Ubuntu Pro 18.04 LTS +is fully supported until April 2028. Ubuntu Pro also comes with security patching for all Ubuntu +packages through Extended Security Maintenance (ESM) for Infrastructure and Applications and optional +24/7 phone and ticket support. ++Customers using Ubuntu Server 18.04, for example, can upgrade to Ubuntu Pro and continue to receive +security patches from Canonical until 2028. Customers can upgrade to Ubuntu Pro via Azure CLI. 
++## Why developers and devops choose Ubuntu Pro for Azure ++* Access to security updates for 23,000+ packages including Apache Kafka, NGINX, MongoDB, Redis and + PostgreSQL, integrated into normal system tools (for example Azure Update Manager, apt) +* Security hardening and audit tools (CIS) to establish a security baseline across your systems (and + help you meet the Azure Linux Security Baseline policy) * FIPS 140-2 certified modules-* Common Criteria (CC) EAL2 provisioning packages -* Kernel Live patch: kernel patches delivered immediately, without the need to reboot -* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance and advanced device support -* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028 -* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads -* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools to innovate with the latest technologies -* Non-stop security: Canonical publishes images frequently, ensuring security is present from the moment an instance launches -* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go across regions or out to the Internet for updates -* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same experience regardless of the platform. It ensures consistency of your CI/CD pipelines and management mechanisms. --**This document presents the direction to upgrade from an Ubuntu Server (16.04 or higher) image to Ubuntu Pro with zero downtime for upgrade by executing the following steps in your VMs:** --1. Converting to Ubuntu Pro license --2. 
Validating the license +* Common Criteria (CC) EAL2 provisioning packages +* Kernel Live patch: kernel patches delivered immediately, without the need to reboot +* Optimized performance: optimized kernel, with improved boot speed, outstanding runtime performance + and advanced device support +* 10-year security maintenance: Ubuntu Pro 18.04 LTS provides security maintenance until April 2028 +* Production ready: Ubuntu is the leading Linux in the public cloud with > 50% of Linux workloads +* Developer friendly: Ubuntu is the \#1 Linux for developers offering the latest libraries and tools + to innovate with the latest technologies +* Non-stop security: Canonical publishes images frequently, ensuring security is present from the + moment an instance launches +* Portability: Ubuntu is available in all regions with content mirrors to reduce the need to go + across regions or out to the Internet for updates +* Consistent experience across platforms: from edge to multicloud, Ubuntu provides the same + experience regardless of the platform. It ensures consistency of your CI/CD pipelines and + management mechanisms. ++> [!NOTE] +> This document presents the direction to upgrade from an Ubuntu Server (16.04 or higher) image to +> Ubuntu Pro with zero downtime for upgrade by executing the following steps in your VMs: +> +> 1. Converting to Ubuntu Pro license +> 2. Validating the license +> +> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running +> detach. Open a support ticket for any exceptions. ++## Convert to Ubuntu Pro using the Azure CLI ->[!NOTE] -> Converting to UBUNTU_PRO is an irreversible process. You can't even downgrade a VM by running detach. Open a support ticket for any exceptions. 
--## Convert to Ubuntu Pro using the Azure CLI ```azurecli-interactive # The following will enable Ubuntu Pro on a virtual machine-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO +az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO ``` -```In-VM commands +```In-VM commands # The next step is to execute two in-VM commands-sudo apt install ubuntu-advantage-tools -sudo pro auto-attach +sudo apt install ubuntu-advantage-tools +sudo pro auto-attach ```-(Note that "sudo apt install ubuntu-advantage-tools" is only necessary if "pro --version" is lower than 28) -## Validate the license +(Note that `sudo apt install ubuntu-advantage-tools` is only necessary if `pro --version` reports a version lower than 28.) ++## Validate the license + Expected output: ![Screenshot of the expected output.](./expected-output.png) ## Create an Ubuntu Pro VM using the Azure CLI+ You can also create a new VM using the Ubuntu Server images and apply Ubuntu Pro at create time. For example: ```azurecli-interactive # The following will enable Ubuntu Pro on a virtual machine-az vm update -g myResourceGroup -n myVmName --license-type UBUNTU_PRO +az vm create -g myResourceGroup -n myVmName --image Ubuntu2204 --license-type UBUNTU_PRO ``` ```In-VM commands # The next step is to execute two in-VM commands-sudo apt install ubuntu-advantage-tools -sudo pro auto-attach +sudo apt install ubuntu-advantage-tools +sudo pro auto-attach ``` >[!NOTE] > For systems with advantage tools version 28 or higher installed, the system performs a pro attach during a reboot. ## Check licensing model using the Azure CLI+ You can use the az vm get-instance-view command to check the status. Look for a licenseType field in the response. If the licenseType field exists and the value is UBUNTU_PRO, your virtual machine has Ubuntu Pro enabled. 
```azurecli-interactive-az vm get-instance-view -g MyResourceGroup -n MyVm +az vm get-instance-view -g MyResourceGroup -n MyVm ``` ## Check the licensing model of an Ubuntu Pro enabled VM using Azure Instance Metadata Service+ From within the virtual machine itself, you can query the attested metadata in Azure Instance Metadata Service to determine the virtual machine's licenseType value. A licenseType value of UBUNTU_PRO indicates that your virtual machine has Ubuntu Pro enabled. [Learn more about attested metadata](../../instance-metadata-service.md). ## Billing-You are charged for Ubuntu Pro as part of the Preview. Visit the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro pricing. To cancel the Pro subscription during the preview period, open a support ticket through the Azure portal. ++You are charged for Ubuntu Pro as part of the Preview. Visit the +[pricing calculator](https://azure.microsoft.com/pricing/calculator/) for more details on Ubuntu Pro +pricing. To cancel the Pro subscription during the preview period, open a support ticket through the +Azure portal. ## Frequently Asked Questions -#### I launched an Ubuntu Pro VM. Do I need to configure it or enable something else? -With the availability of outbound internet access, Ubuntu Pro automatically enables premium features such as Extended Security Maintenance for [Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and [live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required (for example CIS), check the using 'usg' to [harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview) tutorial. Should you require FIPS, check enabling FIPS tutorials. +### What are the next steps after launching an Ubuntu Pro VM? 
++With the availability of outbound internet access, Ubuntu Pro automatically enables premium features +such as Extended Security Maintenance for +[Main and Universe repositories](https://help.ubuntu.com/community/Repositories) and +[live patch](https://ubuntu.com/security/livepatch/docs). Should any specific hardening be required +(for example CIS), see the tutorial on using 'usg' to +[harden your servers](https://ubuntu.com/tutorials/comply-with-cis-or-disa-stig-on-ubuntu#1-overview). +Should you require FIPS, see the tutorials on enabling FIPS. -For more information about networking requirements for making sure Pro enablement process works (such as egress traffic, endpoints and ports) [check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html). +For more information about networking requirements for making sure the Pro enablement process works +(such as egress traffic, endpoints, and ports), +[check this documentation](https://canonical-ubuntu-pro-client.readthedocs-hosted.com/en/latest/references/network_requirements.html). ++### Does shutting down the machine stop billing? -#### If I shut down the machine, does the billing continue? If you launch Ubuntu Pro from Azure Marketplace you pay as you go, so if you don't have any machine running, you won't pay anything additional. -#### Can I get volume discounts? +### Are there volume discounts? + Yes. Contact your Microsoft sales representative. -#### Are Reserved Instances available? +### Are Reserved Instances available? + Yes. -#### If the customer doesn't do the auto attach will they still get attached to pro on reboot? 
++If the customer doesn't perform the auto attach, they still get the Pro attached upon reboot. +However, this applies only if they have v28 of the Pro client. + * For Jammy and Focal, this process works as expected. * For Bionic and Xenial, this process doesn't work due to the older versions of the Pro client installed. |
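The licensing check described in this article can also be scripted. The helper below is hypothetical (not part of the article); it assumes the JSON shape returned by `az vm get-instance-view`, where a top-level `licenseType` field appears only when a license is applied.

```python
import json

def has_ubuntu_pro(instance_view_json: str) -> bool:
    """Return True when an `az vm get-instance-view` response reports the
    UBUNTU_PRO license type; the field is absent when no license applies."""
    return json.loads(instance_view_json).get("licenseType") == "UBUNTU_PRO"

print(has_ubuntu_pro('{"name": "myVmName", "licenseType": "UBUNTU_PRO"}'))  # True
print(has_ubuntu_pro('{"name": "myVmName"}'))                               # False
```

In practice you would pipe `az vm get-instance-view -g MyResourceGroup -n MyVm` output into this function.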
virtual-machines | Configure Oracle Asm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-asm.md | Complete following steps to setup Oracle ASM. 3. In the **Create Disk Group** dialog box: 1. Enter the disk group name **FRA**.- 2. Under **Select Member Disks**, select **/dev/oracleasm/disks/VOL2** - 3. Under **Allocation Unit Size**, select **4**. - 4. Click **ok** to create the disk group. - 5. Click **ok** to close the confirmation window. + 2. For Redundancy option, select External (None). + 3. Under **Select Member Disks**, select **/dev/oracleasm/disks/VOL2** + 4. Under **Allocation Unit Size**, select **4**. + 5. Click **ok** to create the disk group. + 6. Click **ok** to close the confirmation window. :::image type="content" source="./media/oracle-asm/asm-config-assistant-02.png" alt-text="Screenshot of the Create Disk Group dialog box."::: |
virtual-machines | Oracle Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-overview.md | You can also implement high availability and disaster recovery for Oracle Databa We recommend placing the VMs in the same availability set to allow Azure to place them into separate fault domains and upgrade domains. If you want to have geo-redundancy, set up the two databases to replicate between two different regions and connect the two instances with a VPN Gateway. To walk through the basic setup procedure on Azure, see Implement Oracle Data Guard on an Azure Linux virtual machine. -With Oracle Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see Active Data Guard and GoldenGate. If you need read-write access to the copy of the database, you can use Oracle Active Data Guard. +With Oracle Active Data Guard, you can achieve high availability with a primary database in one VM, a secondary (standby) database in another VM, and one-way replication set up between them. The result is read access to the copy of the database. With Oracle GoldenGate, you can configure bi-directional replication between the two databases. To learn how to set up a high-availability solution for your databases using these tools, see [Active Data Guard and GoldenGate](https://www.oracle.com/docs/tech/database/oow14-con7715-adg-gg-bestpractices.pdf). If you need read-write access to the copy of the database, you can use Oracle Active Data Guard. 
+ To walk through the basic setup procedure on Azure, see [Implement Oracle Golden Gate on an Azure Linux VM](configure-oracle-golden-gate.md). In addition to having a high availability and disaster recovery solution architected in Azure, you should have a backup strategy in place to restore your database. Different [backup strategies](oracle-database-backup-strategies.md) are availabl - Using [Azure backup](oracle-database-backup-azure-backup.md) - Using [Oracle RMAN Streaming data](oracle-rman-streaming-backup.md) backup ## Deploy Oracle applications on Azure-Use Terraform templates to set up Azure infrastructure and install Oracle applications. For more information, see [Terraform on Azure](/azure/developer/terraform). +Use Terraform templates, the Azure CLI, or the Azure portal to set up Azure infrastructure and install Oracle applications. You can also use Ansible to configure the database inside the VM. For more information, see [Terraform on Azure](/azure/developer/terraform). Oracle has certified the following applications to run in Azure when connecting to an Oracle database by using the Azure with Oracle Cloud interconnect solution: - E-Business Suite You can deploy custom applications in Azure that connect with OCI and other Azur According to Oracle Support, JD Edwards EnterpriseOne versions 9.2 and above are supported on any public cloud offering that meets their specific Minimum Technical Requirements (MTR). You need to create custom images that meet their MTR specifications for operating system and software application compatibility. For more information, see [Doc ID 2178595.1](https://support.oracle.com/knowledge/JD%20Edwards%20EnterpriseOne/2178595_1.html). ## Licensing Deployment of Oracle solutions in Azure is based on a bring-your-own-license model. This model assumes that you have licenses to use Oracle software and that you have a current support agreement in place with Oracle. -Microsoft Azure is an authorized cloud environment for running Oracle Database. 
The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license if hyperthreading is enabled, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf). +Microsoft Azure is an authorized cloud environment for running Oracle Database. The Oracle Core Factor table isn't applicable when licensing Oracle databases in the cloud. For more information, see [Oracle Processor Core Factor Table](https://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf). Instead, when using VMs with Hyper-Threading Technology enabled for Enterprise Edition databases, count two vCPUs as equivalent to one Oracle Processor license, as stated in the policy document. The policy details can be found at [Licensing Oracle Software in the Cloud Computing Environment](https://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf). Oracle databases generally require higher memory and I/O. For this reason, we recommend [Memory Optimized VMs](/azure/virtual-machines/sizes-memory) for these workloads. To optimize your workloads further, we recommend [Constrained Core vCPUs](/azure/virtual-machines/constrained-vcpu) for Oracle Database workloads that require high memory, storage, and I/O bandwidth, but not a high core count. When you migrate Oracle software and workloads from on-premises to Microsoft Azure, Oracle provides license mobility as stated in [Oracle and Microsoft Strategic Partnership FAQ](https://www.oracle.com/cloud/azure/interconnect/faq/). ## Next steps |
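As a worked example of the counting rule above (two vCPUs per Oracle Processor license when Hyper-Threading is enabled), here is a hypothetical estimator — use it for rough sizing only and confirm counts against the Oracle policy document:

```python
import math

def oracle_processor_licenses(vcpus: int, hyperthreaded: bool = True) -> int:
    """Estimate Oracle Processor licenses for an Enterprise Edition database
    on an Azure VM: two vCPUs count as one license when Hyper-Threading is
    enabled; otherwise each vCPU counts as one license."""
    return math.ceil(vcpus / 2) if hyperthreaded else vcpus

print(oracle_processor_licenses(8))                       # 4
print(oracle_processor_licenses(8, hyperthreaded=False))  # 8
```

This is also where Constrained Core vCPU sizes help: fewer licensable vCPUs while keeping the memory and I/O bandwidth of the larger VM size.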
virtual-machines | Oracle Rman Streaming Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-rman-streaming-backup.md | Each of these options has advantages or disadvantages in the areas of capacity, | **Managed disk** | Premium SSD v2 | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 | | **Managed disk** | UltraDisk | [Introduction to Azure managed disks](https://learn.microsoft.com/azure/virtual-machines/managed-disks-overview) | SCSI | Microsoft | [Managed Disks pricing](https://azure.microsoft.com/pricing/details/managed-disks/) | 1 | | **Azure blob** | Block blobs | [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](https://learn.microsoft.com/azure/storage/blobs/network-file-system-protocol-support-how-to?tabs=linux) | NFS v3.0 | Microsoft | [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) | 2 |-| **Azure** **blobfuse** | v1 | [How to mount Azure Blob Storage as a file system with BlobFuse v1](https://learn.microsoft.com/azure/storage/blobs/storage-how-to-mount-container-linux?tabs=RHEL) | Fuse | Open source/Github | n/a | 3, 5, 6 | -| **Azure** **blobfuse** | v2 | [What is BlobFuse? - BlobFuse2](https://learn.microsoft.com/azure/storage/blobs/blobfuse2-what-is) | Fuse | Open source/Github | n/a | 3, 5, 6 | +| **Azure** **blobfuse** | v1 | [How to mount Azure Blob Storage as a file system with BlobFuse v1](https://learn.microsoft.com/azure/storage/blobs/storage-how-to-mount-container-linux?tabs=RHEL) | Fuse | Open source/GitHub | n/a | 3, 5, 6 | +| **Azure** **blobfuse** | v2 | [What is BlobFuse? 
- BlobFuse2](https://learn.microsoft.com/azure/storage/blobs/blobfuse2-what-is) | Fuse | Open source/GitHub | n/a | 3, 5, 6 | | **Azure Files** | Standard | [What is Azure Files?](https://learn.microsoft.com/azure/storage/files/storage-files-introduction) | SMB/CIFS | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 6 | | **Azure Files** | Premium | [What is Azure Files?](https://learn.microsoft.com//azure/storage/files/storage-files-introduction) | SMB/CIFS, NFS v4.1 | Microsoft | [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) | 4, 7 | | **Azure NetApp Files** | Standard | [Azure NetApp Files ](https://docs.netapp.com/us-en/cloud-manager-azure-netapp-files/) | SMB/CIFS, NFS v3.0, NFS v4.1 | Microsoft/NetApp | [Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) | 4, 8, 11 | |
virtual-network-manager | Concept Security Admins | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-security-admins.md | Here are some scenarios where security admin rules can be used: | **Enforcing application-level security** | Security admin rules can be used to enforce application-level security by blocking traffic to or from specific applications or services. | With Azure Virtual Network Manager, you have a centralized location to manage security admin rules. Centralization allows you to define security policies at scale and apply them to multiple virtual networks at once.++> [!NOTE] +> Currently, security admin rules do not apply to private endpoints that fall under the scope of a managed virtual network. + ## How do security admin rules work? Security admin rules allow or deny traffic on specific ports, protocols, and source/destination IP prefixes in a specified direction. When you define a security admin rule, you specify the following conditions: Security admin rules allow or deny traffic on specific ports, protocols, and sou - The protocol to be used To enforce security policies across multiple virtual networks, you [create and deploy a security admin configuration](how-to-block-network-traffic-portal.md). This configuration contains a set of rule collections, and each rule collection contains one or more security admin rules. Once created, you associate the rule collection with the network groups requiring security admin rules. The rules are then applied to all virtual networks contained in the network groups when the configuration is deployed. A single configuration provides a centralized and scalable enforcement of security policies across multiple virtual networks.+ ### Evaluation of security admin rules and network security groups (NSGs) Security admin rules and network security groups (NSGs) can be used to enforce network security policies in Azure. 
However, they have different scopes and priorities. |
virtual-network-manager | Concept Virtual Network Flow Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-virtual-network-flow-logs.md | + + Title: Monitoring security admin rules with Virtual Network Flow Logs +description: This article covers using Network Watcher and Virtual Network Flow Logs to monitor traffic through security admin rules in Azure Virtual Network Manager. ++++ Last updated : 08/11/2023+++# Monitoring Azure Virtual Network Manager with VNet flow logs (Preview) ++Monitoring traffic is critical to understanding how your network is performing and to troubleshoot issues. Administrators can utilize VNet flow logs (Preview) to show whether traffic is flowing through or blocked on a VNet by a [security admin rule]. VNet flow logs (Preview) are a feature of Network Watcher. ++Learn more about [VNet flow logs (Preview)](../network-watcher/vnet-flow-logs-overview.md) including usage and how to enable. ++> [!IMPORTANT] +> VNet flow logs is currently in PREVIEW. This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++> [!IMPORTANT] +> Azure Virtual Network Manager is generally available for Virtual Network Manager and hub-and-spoke connectivity configurations. Mesh connectivity configurations and security admin rules remain in public preview. +> +> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. 
For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ++## Enable VNet flow logs (Preview) ++Currently, you need to enable Virtual Network flow logs (Preview) on each VNet you want to monitor. You can enable Virtual Network Flow Logs on a VNet by using [PowerShell](../network-watcher/vnet-flow-logs-powershell.md) or the [Azure CLI](../network-watcher/vnet-flow-logs-cli.md). ++Here's an example of a flow log ++```json +{ + "records": [ + { + "time": "2022-09-14T09:00:52.5625085Z", + "flowLogVersion": 4, + "flowLogGUID": "a1b2c3d4-e5f6-g7h8-i9j0-k1l2m3n4o5p6", + "macAddress": "00224871C205", + "category": "FlowLogFlowEvent", + "flowLogResourceID": "/SUBSCRIPTIONS/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/RESOURCEGROUPS/NETWORKWATCHERRG/PROVIDERS/MICROSOFT.NETWORK/NETWORKWATCHERS/NETWORKWATCHER_EASTUS2EUAP/FLOWLOGS/VNETFLOWLOG", + "targetResourceID": "/subscriptions/1a2b3c4d-5e6f-7g8h-9i0j-1k2l3m4n5o6p7/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet01", + "operationName": "FlowLogFlowEvent", + "flowRecords": { + "flows": [ + { + "aclID": "9a8b7c6d-5e4f-3g2h-1i0j-9k8l7m6n5o4p3", + "flowGroups": [ + { + "rule": "DefaultRule_AllowInternetOutBound", + "flowTuples": [ + "1663146003599,10.0.0.6,52.239.184.180,23956,443,6,O,B,NX,0,0,0,0", + "1663146003606,10.0.0.6,52.239.184.180,23956,443,6,O,E,NX,3,767,2,1580", + "1663146003637,10.0.0.6,40.74.146.17,22730,443,6,O,B,NX,0,0,0,0", + "1663146003640,10.0.0.6,40.74.146.17,22730,443,6,O,E,NX,3,705,4,4569", + "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,B,NX,0,0,0,0", + "1663146004251,10.0.0.6,40.74.146.17,22732,443,6,O,E,NX,3,705,4,4569", + "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,B,NX,0,0,0,0", + "1663146004622,10.0.0.6,40.74.146.17,22734,443,6,O,E,NX,2,134,1,108", + "1663146017343,10.0.0.6,104.16.218.84,36776,443,6,O,B,NX,0,0,0,0", + 
"1663146022793,10.0.0.6,104.16.218.84,36776,443,6,O,E,NX,22,2217,33,32466" + ] + } + ] + }, + { + "aclID": "b1c2d3e4-f5g6-h7i8-j9k0-l1m2n3o4p5q6", + "flowGroups": [ + { + "rule": "BlockHighRiskTCPPortsFromInternet", + "flowTuples": [ + "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0", + "1663146005503,192.241.200.164,10.0.0.6,35276,119,6,I,D,NX,0,0,0,0" + ] + }, + { + "rule": "Internet", + "flowTuples": [ + "1663145989563,20.106.221.10,10.0.0.6,50557,44357,6,I,D,NX,0,0,0,0", + "1663145989679,20.55.117.81,10.0.0.6,62797,35945,6,I,D,NX,0,0,0,0", + "1663145989709,20.55.113.5,10.0.0.6,51961,65515,6,I,D,NX,0,0,0,0", + "1663145990049,13.65.224.51,10.0.0.6,40497,40129,6,I,D,NX,0,0,0,0", + "1663145990145,20.55.117.81,10.0.0.6,62797,30472,6,I,D,NX,0,0,0,0", + "1663145990175,20.55.113.5,10.0.0.6,51961,28184,6,I,D,NX,0,0,0,0", + "1663146015545,20.106.221.10,10.0.0.6,50557,31244,6,I,D,NX,0,0,0,0" + ] + } + ] + } + ] + } + } + ] +} ++``` +++## Next steps +> [!div class="nextstepaction"] +> Learn more about [VNet Flow Logs](../network-watcher/vnet-flow-logs-overview.md) and how to use them. +> Learn more about [Event log options for Azure Virtual Network Manager](concept-event-logs.md). |
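Each entry in `flowTuples` above is a comma-separated record. The sketch below splits one tuple into named fields; the labels are informal names based on the VNet flow log version 4 tuple layout, not official field names.

```python
FIELD_NAMES = [
    "time_ms", "src_ip", "dest_ip", "src_port", "dest_port",
    "protocol", "direction", "state", "encryption",
    "packets_src_to_dest", "bytes_src_to_dest",
    "packets_dest_to_src", "bytes_dest_to_src",
]

def parse_flow_tuple(flow_tuple: str) -> dict:
    """Split one flowTuple string into a field-name -> value mapping."""
    return dict(zip(FIELD_NAMES, flow_tuple.split(",")))

# Tuple taken from the BlockHighRiskTCPPortsFromInternet rule above.
denied = parse_flow_tuple(
    "1663145998065,101.33.218.153,10.0.0.6,55188,22,6,I,D,NX,0,0,0,0"
)
print(denied["state"])  # 'D': the inbound SSH flow was denied
```

Filtering parsed tuples by `state == "D"` is a quick way to confirm that a deny security admin rule is actually blocking traffic.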
virtual-network-manager | Create Virtual Network Manager Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md | Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using the Azure portal' -description: Use this quickstart to learn how to create a mesh network topology with Virtual Network Manager by using the Azure portal. + Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager - Azure portal' +description: Learn to a mesh virtual network topology with Azure Virtual Network Manager by using the Azure portal. Previously updated : 04/12/2023 Last updated : 08/24/2023 -# Quickstart: Create a mesh network topology with Azure Virtual Network Manager by using the Azure portal +# Quickstart: Create a mesh network topology with Azure Virtual Network Manager - Azure portal Get started with Azure Virtual Network Manager by using the Azure portal to manage connectivity for all your virtual networks. |
virtual-network-manager | Create Virtual Network Manager Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-template.md | Title: 'Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template' -description: In this article, you create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template, ARM template. + Title: 'Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template' +description: In this article, you deploy various network topologies with Azure Virtual Network Manager using Azure Resource Manager template(ARM template). -# Quickstart: Create a mesh network topology with Azure Virtual Network Manager using Azure Resource Manager template -ARM template +# Quickstart: Deploy a network topology with Azure Virtual Network Manager using Azure Resource Manager template - ARM template Get started with Azure Virtual Network Manager by using Azure Resource Manager templates to manage connectivity for all your virtual networks. |
virtual-network-manager | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md | Yes. In Azure, VNet peering and connected groups are two methods of establishing connectivity between virtual networks (VNets). While VNet peering works by creating a 1:1 mapping between each peered VNet, connected groups use a new construct that establishes connectivity without such a mapping. In a connected group, all virtual networks are connected without individual peering relationships. For example, if VNetA, VNetB, and VNetC are part of the same connected group, connectivity is enabled between each VNet without the need for individual peering relationships. +### Do security admin rules apply to Azure Private Endpoints? ++Currently, security admin rules don't apply to Azure Private Endpoints that fall under the scope of a virtual network managed by Azure Virtual Network Manager. ### How can I explicitly allow Azure SQL Managed Instance traffic before having deny rules? Azure SQL Managed Instance has some network requirements. If your security admin rules could block these network requirements, you can use the following sample rules to allow SQL Managed Instance traffic with higher priority than the deny rules that would otherwise block it. |
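The scaling point behind connected groups can be made concrete with a small counting sketch. This is illustrative Python only, not an Azure API: a full mesh of n virtual networks built from 1:1 peerings needs n(n-1)/2 peering links, while a connected group is a single construct covering all members.

```python
# Illustrative only (not an Azure API): compare pairwise VNet peering
# against a connected group for the VNetA/VNetB/VNetC example above.
from itertools import combinations

vnets = ["VNetA", "VNetB", "VNetC"]

# Traditional VNet peering: one 1:1 mapping per pair of VNets.
peering_links = list(combinations(vnets, 2))
print(len(peering_links))  # 3 pairwise peerings for 3 VNets

# Connected group: every member is connected through one construct,
# with no per-pair peering relationships to manage.
connected_group = {"members": vnets, "pairwise_peerings": 0}
print(connected_group["pairwise_peerings"])  # 0
```

With 10 VNets the pairwise count grows to 45, which is why a single connected-group construct is easier to manage at scale.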
virtual-network-manager | How To Define Network Group Membership Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-define-network-group-membership-azure-policy.md | List of supported operators: ## Basic editor -Assume you have the following virtual networks in your subscription. Each virtual network has an associated tag named **environment** with the respective value of *Production* or *Test*. +Assume you have the following virtual networks in your subscription. Each virtual network has an associated tag named **environment** with the respective value of *production* or *test*. -| **Virtual Network** | **Tag** | -| - | - | -| myVNet01-EastUS | Production | -| myVNet01-WestUS | Production | -| myVNet02-WestUS | Test | -| myVNet03-WestUS | Test | +| **Virtual Network** | **Tag Name** | **Tag Value** | +| - | - | - | +| myVNet01-EastUS | environment | production | +| myVNet01-WestUS | environment | production | +| myVNet02-WestUS | environment | test | +| myVNet03-WestUS | environment | test | -You only want to select virtual networks that contain **WestUS** in the name. To begin using the basic editor to create your conditional statement, you need to create a new network group. +You only want to select virtual networks whose tag has a key-value pair of **environment** equal to **production**. To begin using the basic editor to create your conditional statement, you need to create a new network group. 1. Go to your Azure Virtual Network Manager instance and select **Network Groups** under **Settings**. Then select **+ Create** to create a new network group. 1. Enter a **Name** and an optional **Description** for the network group, and select **Add**. 1. Select the network group from the list and select **Create Azure Policy**. 1. Enter a **Policy name** and leave the **Scope** selections unless changes are needed.-1. 
Under **Criteria**, select **Name** from the drop-down under **Parameter** and then select **Contains** from the drop-down under *Operator*. -1. Enter **WestUS** under **Condition** and select **Preview Resources**. You should see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list. +1. Under **Criteria**, select **Tags** from the drop-down under **Parameter** and then select **Key value pair** from the drop-down under **Operator**. +1. Enter **environment** and **production** under **Condition** and select **Preview Resources**. You should see myVNet01-EastUS and myVNet01-WestUS show up in the list. ++ :::image type="content" source="media/how-to-define-network-group-membership-azure-policy/add-key-value-pair-tag.png" alt-text="Screenshot of Create Azure Policy window setting tag with key value pair."::: + 1. Select **Close** and **Save**. -1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-WestUS, myVNet02-WestUS, and myVNet03-WestUS show up in the list. +1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-EastUS and myVNet01-WestUS. > [!IMPORTANT] > The **basic editor** is only available during the creation of an Azure Policy. Once a policy is created, all edits will be done using JSON in the **Policies** section of virtual network manager or via Azure Policy.-> -> When using the basic editor, your condition options are limited through the portal experience. For complex conditions like creating a network group for VNets based on a [customer-defined tag](#example-3-using-custom-tag-values-with-advanced-editor), you must use the advanced editor. Learn more about [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md). ## Advanced editor |
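The "Tags / Key value pair" condition in the basic editor can be mimicked with a small sketch. This is illustrative Python, not the Azure Policy engine: it filters the example virtual networks from the table above by the same key-value condition and yields the same preview result.

```python
# Illustrative sketch (not an Azure API): apply the basic editor's
# "environment equals production" tag condition to the example VNets.
vnets = {
    "myVNet01-EastUS": {"environment": "production"},
    "myVNet01-WestUS": {"environment": "production"},
    "myVNet02-WestUS": {"environment": "test"},
    "myVNet03-WestUS": {"environment": "test"},
}

# Condition: tag key "environment" has the value "production".
members = [name for name, tags in vnets.items()
           if tags.get("environment") == "production"]
print(members)  # ['myVNet01-EastUS', 'myVNet01-WestUS']
```

The real evaluation happens in Azure Policy against the `tags` alias of each virtual network resource; this sketch only shows which resources the condition selects.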
virtual-network | Accelerated Networking Mana Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-mana-overview.md | Several [Azure Marketplace](https://learn.microsoft.com/marketplace/azure-market We recommend using an operating system with support for MANA to maximize performance. In instances where the operating system doesn't or can't support MANA, network connectivity is provided through the hypervisor's virtual switch. The virtual switch is also used during some infrastructure servicing events where the Virtual Function (VF) is revoked. ### Using DPDK-Utilizing DPDK on MANA hardware requires the Linux kernel 6.2 or later or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers. --DPDK requires the following set of drivers: -1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later) -1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later) -1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later) -1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later) --DPDK only functions on Linux VMs. +For information about DPDK on MANA hardware, see [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md). ## Evaluating performance Differences in VM SKUs, operating systems, applications, and tuning parameters can all affect network performance on Azure. For this reason, we recommend that you benchmark and test your workloads to ensure you achieve the expected network performance. |
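The removed driver list in the change above still carries the useful version boundaries (MANA Ethernet driver in kernel 5.15+, InfiniBand driver in 6.2+). A quick sketch can check a `uname -r`-style release string against those minimums; `kernel_at_least` is a hypothetical helper for illustration, not part of any Azure or DPDK tooling.

```python
# Illustrative sketch: compare a kernel release string against the
# minimum kernel versions mentioned for MANA drivers
# (Ethernet driver: 5.15+, InfiniBand driver for DPDK: 6.2+).
def kernel_at_least(release: str, minimum: tuple) -> bool:
    # "6.2.0-39-generic" -> (6, 2)
    major, minor = release.split("-")[0].split(".")[:2]
    return (int(major), int(minor)) >= minimum

print(kernel_at_least("5.15.0-91-generic", (5, 15)))  # True: Ethernet driver present
print(kernel_at_least("5.15.0-91-generic", (6, 2)))   # False: InfiniBand driver missing
print(kernel_at_least("6.2.0-39-generic", (6, 2)))    # True: DPDK-capable kernel
```

On a real VM you would take the release string from `uname -r`; a backported driver can satisfy the requirement even on an older release, so treat this check as a first approximation.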
virtual-network | Create Peering Different Subscriptions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md | -Depending on whether the virtual networks are in the same, or different subscriptions the steps to create a virtual network peering are different. Steps to peer networks created with the classic deployment model are different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json). +Depending on whether the virtual networks are in the same or different subscriptions, the steps to create a virtual network peering are different. Steps to peer networks created with the classic deployment model are different. For more information about deployment models, see [Azure deployment model](../azure-resource-manager/management/deployment-models.md?toc=%2fazure%2fvirtual-network%2ftoc.json). Learn how to create a virtual network peering in other scenarios by selecting the scenario from the following table: This tutorial peers virtual networks in the same region. You can also peer virtu - Each user must accept the guest user invitation from the opposite Azure Active Directory tenant. +- Sign in to the [Azure portal](https://portal.azure.com). + # [**PowerShell**](#tab/create-peering-powershell) - An Azure account(s) with two active subscriptions. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). 
The following resources and account examples are used in the steps in this artic | User account | Resource group | Subscription | Virtual network | | | -- | | |-| **UserA** | **myResourceGroupA** | **SubscriptionA** | **myVNetA** | -| **UserB** | **myResourceGroupB** | **SubscriptionB** | **myVNetB** | +| **user-1** | **test-rg** | **subscription-1** | **vnet-1** | +| **user-2** | **test-rg-2** | **subscription-2** | **vnet-2** | -## Create virtual network - myVNetA +## Create virtual network - vnet-1 > [!NOTE] > If you are using a single account to complete the steps, you can skip the steps for logging out of the portal and assigning another user permissions to the virtual networks. # [**Portal**](#tab/create-peering-portal) -1. Sign-in to the [Azure portal](https://portal.azure.com) as **UserA**. --2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. --3. Select **+ Create**. +<a name="create-virtual-network"></a> -4. In the **Basics** tab of **Create virtual network**, enter or select the following information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your **SubscriptionA**. | - | Resource group | Select **Create new**. </br> Enter **myResourceGroupA** in **Name**. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNetA**. | - | Region | Select a region. | --5. Select **Next: IP Addresses**. --6. In **IPv4 address space**, enter **10.1.0.0/16**. --7. Select **+ Add subnet**. --8. Enter or select the following information: -- | Setting | Value | - | - | -- | - | Subnet name | Enter **mySubnet**. | - | Subnet address range | Enter **10.1.0.0/24**. | --9. Select **Add**. --10. Select **Review + create**. --11. Select **Create**. 
# [**PowerShell**](#tab/create-peering-powershell) -### Sign in to SubscriptionA +### Sign in to subscription-1 -Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**. +Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-1**. ```azurepowershell-interactive Connect-AzAccount ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). ```azurepowershell-interactive-Set-AzContext -Subscription SubscriptionA +Set-AzContext -Subscription subscription-1 ``` -### Create a resource group - myResourceGroupA +### Create a resource group - test-rg An Azure resource group is a logical container where Azure resources are deployed and managed. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resourc ```azurepowershell-interactive $rsg = @{- Name = 'myResourceGroupA' - Location = 'westus3' + Name = 'test-rg' + Location = 'eastus2' } New-AzResourceGroup @rsg ``` ### Create the virtual network -Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVNetA** in the **West US 3** location: +Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). 
This example creates a virtual network named **vnet-1** in the **East US 2** location: ```azurepowershell-interactive $vnet = @{- Name = 'myVNetA' - ResourceGroupName = 'myResourceGroupA' - Location = 'westus3' - AddressPrefix = '10.1.0.0/16' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' + Location = 'eastus2' + AddressPrefix = '10.0.0.0/16' } $virtualNetwork = New-AzVirtualNetwork @vnet ``` ### Add a subnet -Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig): +Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **subnet-1** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig): ```azurepowershell-interactive $subnet = @{- Name = 'default' + Name = 'subnet-1' VirtualNetwork = $virtualNetwork- AddressPrefix = '10.1.0.0/24' + AddressPrefix = '10.0.0.0/24' } $subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet ``` $virtualNetwork | Set-AzVirtualNetwork # [**Azure CLI**](#tab/create-peering-cli) -### Sign in to SubscriptionA +### Sign in to subscription-1 -Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-1**. ```azurecli-interactive az login ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [az account set](/cli/azure/account#az-account-set). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [az account set](/cli/azure/account#az-account-set). 
```azurecli-interactive-az account set --subscription "SubscriptionA" +az account set --subscription "subscription-1" ``` -### Create a resource group - myResourceGroupA +### Create a resource group - test-rg An Azure resource group is a logical container where Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) ```azurecli-interactive az group create \- --name myResourceGroupA \ - --location westus3 + --name test-rg \ + --location eastus2 ``` ### Create the virtual network -Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a default virtual network named **myVNetA** in the **West US 3** location. +Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a virtual network named **vnet-1** with a subnet named **subnet-1** in the **East US 2** location. ```azurecli-interactive az network vnet create \- --resource-group myResourceGroupA\ - --location westus3 \ - --name myVNetA \ - --address-prefixes 10.1.0.0/16 \ - --subnet-name default \ - --subnet-prefixes 10.1.0.0/24 + --resource-group test-rg\ + --location eastus2 \ + --name vnet-1 \ + --address-prefixes 10.0.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefixes 10.0.0.0/24 ``` -## Assign permissions for UserB +## Assign permissions for user-2 A user account in the other subscription that you want to peer with must be added to the network you previously created. If you're using a single account for both subscriptions, you can skip this section. # [**Portal**](#tab/create-peering-portal) -1. Remain signed in to the portal as **UserA**. +1. Remain signed in to the portal as **user-1**. -2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter **Virtual network**. 
Select **Virtual networks** in the search results. -3. Select **myVNetA**. +1. Select **vnet-1**. -4. Select **Access control (IAM)**. +1. Select **Access control (IAM)**. -5. Select **+ Add** -> **Add role assignment**. +1. Select **+ Add** -> **Add role assignment**. -6. In **Add role assignment** in the **Role** tab, select **Network Contributor**. +1. In **Add role assignment** in the **Role** tab, select **Network Contributor**. -7. Select **Next**. +1. Select **Next**. -8. In the **Members** tab, select **+ Select members**. +1. In the **Members** tab, select **+ Select members**. -9. In **Select members** in the search box, enter **UserB**. +1. In **Select members** in the search box, enter **user-2**. -10. Select **Select**. +1. Select **Select**. -11. Select **Review + assign**. +1. Select **Review + assign**. -12. Select **Review + assign**. +1. Select **Review + assign**. # [**PowerShell**](#tab/create-peering-powershell) -Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**. Assign **UserB** from **SubscriptionB** to **myVNetA** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment). +Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. Assign **user-2** from **subscription-2** to **vnet-1** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment). -Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **UserB**. +Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **user-2**. -**UserB** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionB** that you wish to assign permissions to **myVNetA**. You can skip this step if you're using the same account for both subscriptions. 
+**user-2** is used in this example for the user account. Replace this value with the display name for the user from **subscription-2** that you wish to assign permissions to **vnet-1**. You can skip this step if you're using the same account for both subscriptions. ```azurepowershell-interactive $id = @{- Name = 'myVNetA' - ResourceGroupName = 'myResourceGroupA' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } $vnet = Get-AzVirtualNetwork @id -$obj = Get-AzADUser -DisplayName 'UserB' +$obj = Get-AzADUser -DisplayName 'user-2' $role = @{ ObjectId = $obj.id New-AzRoleAssignment @role # [**Azure CLI**](#tab/create-peering-cli) -Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetA**. Assign **UserB** from **SubscriptionB** to **myVNetA** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create). +Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-1**. Assign **user-2** from **subscription-2** to **vnet-1** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create). -Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **UserB**. +Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **user-2**. -**UserB** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionB** that you wish to assign permissions to **myVNetA**. You can skip this step if you're using the same account for both subscriptions. +**user-2** is used in this example for the user account. Replace this value with the display name for the user from **subscription-2** that you wish to assign permissions to **vnet-1**. You can skip this step if you're using the same account for both subscriptions. 
```azurecli-interactive-az ad user list --display-name UserB +az ad user list --display-name user-2 ``` ```output [ { "businessPhones": [],- "displayName": "UserB", + "displayName": "user-2", "givenName": null, "id": "16d51293-ec4b-43b1-b54b-3422c108321a", "jobTitle": null,- "mail": "userB@fabrikam.com", + "mail": "user-2@fabrikam.com", "mobilePhone": null, "officeLocation": null, "preferredLanguage": null, "surname": null,- "userPrincipalName": "userb_fabrikam.com#EXT#@contoso.onmicrosoft.com" + "userPrincipalName": "user-2_fabrikam.com#EXT#@contoso.onmicrosoft.com" } ] ``` -Make note of the object ID of **UserB** in field **id**. In this example, its **16d51293-ec4b-43b1-b54b-3422c108321a**. +Make note of the object ID of **user-2** in field **id**. In this example, it's **16d51293-ec4b-43b1-b54b-3422c108321a**. ```azurecli-interactive vnetid=$(az network vnet show \- --name myVNetA \ - --resource-group myResourceGroupA \ + --name vnet-1 \ + --resource-group test-rg \ --query id \ --output tsv) az role assignment create \ --scope $vnetid ``` -Replace the example guid in **`--assignee`** with the real object ID for **UserB**. +Replace the example GUID in **`--assignee`** with the real object ID for **user-2**. -## Obtain resource ID of myVNetA +## Obtain resource ID of vnet-1 # [**Portal**](#tab/create-peering-portal) -1. Remain signed in to the portal as **UserA**. +1. Remain signed in to the portal as **user-1**. -2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -3. Select **myVNetA**. +1. Select **vnet-1**. -4. In **Settings**, select **Properties**. +1. In **Settings**, select **Properties**. -5. 
The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVnetA`**. +1. Copy the information in the **Resource ID** field and save for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1`**. -6. Sign out of the portal as **UserA**. +1. Sign out of the portal as **user-1**. # [**PowerShell**](#tab/create-peering-powershell) -The resource ID of **myVNetA** is required to set up the peering connection from **myVNetB** to **myVNetA**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**. +The resource ID of **vnet-1** is required to set up the peering connection from **vnet-2** to **vnet-1**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-1**. ```azurepowershell-interactive $id = @{- Name = 'myVNetA' - ResourceGroupName = 'myResourceGroupA' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } $vnetA = Get-AzVirtualNetwork @id $vnetA.id # [**Azure CLI**](#tab/create-peering-cli) -The resource ID of **myVNetA** is required to set up the peering connection from **myVNetB** to **myVNetA**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetA**. +The resource ID of **vnet-1** is required to set up the peering connection from **vnet-2** to **vnet-1**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-1**. 
```azurecli-interactive vnetidA=$(az network vnet show \- --name myVNetA \ - --resource-group myResourceGroupA \ + --name vnet-1 \ + --resource-group test-rg \ --query id \ --output tsv) echo $vnetidA -## Create virtual network - myVNetB +## Create virtual network - vnet-2 -In this section, you sign in as **UserB** and create a virtual network for the peering connection to **myVNetA**. +In this section, you sign in as **user-2** and create a virtual network for the peering connection to **vnet-1**. # [**Portal**](#tab/create-peering-portal) -1. Sign in to the portal as **UserB**. If you're using one account for both subscriptions, change to **SubscriptionB** in the portal. --2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +Repeat the steps in the [previous section](#create-virtual-network) to create a second virtual network with the following values: -3. Select **+ Create**. --4. In the **Basics** tab of **Create virtual network**, enter or select the following information: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your **SubscriptionB**. | - | Resource group | Select **Create new**. </br> Enter **myResourceGroupB** in **Name**. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNetB**. | - | Region | Select a region. | --5. Select **Next: IP Addresses**. --6. In **IPv4 address space**, enter **10.2.0.0/16**. --7. Select **+ Add subnet**. --8. Enter or select the following information: -- | Setting | Value | - | - | -- | - | Subnet name | Enter **mySubnet**. | - | Subnet address range | Enter **10.2.0.0/24**. | --9. Select **Add**. --10. Select **Review + create**. --11. Select **Create**. 
+| Setting | Value | +| | | +| Subscription | **subscription-2** | +| Resource group | **test-rg-2** | +| Name | **vnet-2** | +| Address space | **10.1.0.0/16** | +| Subnet name | **subnet-1** | +| Subnet address range | **10.1.0.0/24** | # [**PowerShell**](#tab/create-peering-powershell) -### Sign in to SubscriptionB +### Sign in to subscription-2 -Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**. +Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-2**. ```azurepowershell-interactive Connect-AzAccount ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). ```azurepowershell-interactive-Set-AzContext -Subscription SubscriptionB +Set-AzContext -Subscription subscription-2 ``` -### Create a resource group - myResourceGroupB +### Create a resource group - test-rg-2 An Azure resource group is a logical container where Azure resources are deployed and managed. Create a resource group with [New-AzResourceGroup](/powershell/module/az.resourc ```azurepowershell-interactive $rsg = @{- Name = 'myResourceGroupB' - Location = 'westus3' + Name = 'test-rg-2' + Location = 'eastus2' } New-AzResourceGroup @rsg ``` ### Create the virtual network -Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). This example creates a default virtual network named **myVNetB** in the **West US 3** location: +Create a virtual network with [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork). 
This example creates a virtual network named **vnet-2** in the **East US 2** location: ```azurepowershell-interactive $vnet = @{- Name = 'myVNetB' - ResourceGroupName = 'myResourceGroupB' - Location = 'westus3' - AddressPrefix = '10.2.0.0/16' + Name = 'vnet-2' + ResourceGroupName = 'test-rg-2' + Location = 'eastus2' + AddressPrefix = '10.1.0.0/16' } $virtualNetwork = New-AzVirtualNetwork @vnet ``` ### Add a subnet -Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **default** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig): +Azure deploys resources to a subnet within a virtual network, so you need to create a subnet. Create a subnet configuration named **subnet-1** with [Add-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/add-azvirtualnetworksubnetconfig): ```azurepowershell-interactive $subnet = @{- Name = 'default' + Name = 'subnet-1' VirtualNetwork = $virtualNetwork- AddressPrefix = '10.2.0.0/24' + AddressPrefix = '10.1.0.0/24' } $subnetConfig = Add-AzVirtualNetworkSubnetConfig @subnet ``` $virtualNetwork | Set-AzVirtualNetwork # [**Azure CLI**](#tab/create-peering-cli) -### Sign in to SubscriptionB +### Sign in to subscription-2 -Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-2**. ```azurecli-interactive az login ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [az account set](/cli/azure/account#az-account-set). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [az account set](/cli/azure/account#az-account-set). 
```azurecli-interactive-az account set --subscription "SubscriptionB" +az account set --subscription "subscription-2" ``` -### Create a resource group - myResourceGroupB +### Create a resource group - test-rg-2 An Azure resource group is a logical container where Azure resources are deployed and managed. Create a resource group with [az group create](/cli/azure/group#az-group-create) ```azurecli-interactive az group create \- --name myResourceGroupB \ - --location westus3 + --name test-rg-2 \ + --location eastus2 ``` ### Create the virtual network -Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a default virtual network named **myVNetB** in the **West US 3** location. +Create a virtual network and subnet with [az network vnet create](/cli/azure/network/vnet#az-network-vnet-create). This example creates a virtual network named **vnet-2** with a subnet named **subnet-1** in the **East US 2** location. ```azurecli-interactive az network vnet create \- --resource-group myResourceGroupB\ - --location westus3 \ - --name myVNetB \ - --address-prefixes 10.2.0.0/16 \ - --subnet-name default \ - --subnet-prefixes 10.2.0.0/24 + --resource-group test-rg-2\ + --location eastus2 \ + --name vnet-2 \ + --address-prefixes 10.1.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefixes 10.1.0.0/24 ``` -## Assign permissions for UserA +## Assign permissions for user-1 A user account in the other subscription that you want to peer with must be added to the network you previously created. If you're using a single account for both subscriptions, you can skip this section. # [**Portal**](#tab/create-peering-portal) -1. Remain signed in to the portal as **UserB**. +1. Remain signed in to the portal as **user-2**. -2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter **Virtual network**. 
Select **Virtual networks** in the search results. -3. Select **myVNetB**. +1. Select **vnet-2**. -4. Select **Access control (IAM)**. +1. Select **Access control (IAM)**. -5. Select **+ Add** -> **Add role assignment**. +1. Select **+ Add** -> **Add role assignment**. -6. In **Add role assignment** in the **Role** tab, select **Network Contributor**. +1. In **Add role assignment** in the **Role** tab, select **Network Contributor**. -7. Select **Next**. +1. Select **Next**. -8. In the **Members** tab, select **+ Select members**. +1. In the **Members** tab, select **+ Select members**. -9. In **Select members** in the search box, enter **UserA**. +1. In **Select members** in the search box, enter **user-1**. -10. Select **Select**. +1. Select **Select**. -11. Select **Review + assign**. +1. Select **Review + assign**. -12. Select **Review + assign**. +1. Select **Review + assign**. # [**PowerShell**](#tab/create-peering-powershell) -Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetA**. Assign **UserA** from **SubscriptionA** to **myVNetB** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment). +Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-2**. Assign **user-1** from **subscription-1** to **vnet-2** with [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment). -Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **UserA**. +Use [Get-AzADUser](/powershell/module/az.resources/get-azaduser) to obtain the object ID for **user-1**. -**UserA** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionA** that you wish to assign permissions to **myVNetB**. You can skip this step if you're using the same account for both subscriptions. 
+**user-1** is used in this example for the user account. Replace this value with the display name for the user from **subscription-1** that you wish to assign permissions to **vnet-2**. You can skip this step if you're using the same account for both subscriptions. ```azurepowershell-interactive $id = @{- Name = 'myVNetB' - ResourceGroupName = 'myResourceGroupB' + Name = 'vnet-2' + ResourceGroupName = 'test-rg-2' } $vnet = Get-AzVirtualNetwork @id -$obj = Get-AzADUser -DisplayName 'UserA' +$obj = Get-AzADUser -DisplayName 'user-1' $role = @{ ObjectId = $obj.id New-AzRoleAssignment @role # [**Azure CLI**](#tab/create-peering-cli) -Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetB**. Assign **UserA** from **SubscriptionA** to **myVNetB** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create). +Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-2**. Assign **user-1** from **subscription-1** to **vnet-2** with [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create). -Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **UserA**. +Use [az ad user list](/cli/azure/ad/user#az-ad-user-list) to obtain the object ID for **user-1**. -**UserA** is used in this example for the user account. Replace this value with the display name for the user from **SubscriptionA** that you wish to assign permissions to **myVNetB**. You can skip this step if you're using the same account for both subscriptions. +**user-1** is used in this example for the user account. Replace this value with the display name for the user from **subscription-1** that you wish to assign permissions to **vnet-2**. You can skip this step if you're using the same account for both subscriptions. 
```azurecli-interactive-az ad user list --display-name UserA +az ad user list --display-name user-1 ``` ```output [ { "businessPhones": [],- "displayName": "UserA", + "displayName": "user-1", "givenName": null, "id": "ee0645cc-e439-4ffc-b956-79577e473969", "jobTitle": null,- "mail": "userA@contoso.com", + "mail": "user-1@contoso.com", "mobilePhone": null, "officeLocation": null, "preferredLanguage": null, "surname": null,- "userPrincipalName": "usera_contoso.com#EXT#@fabrikam.onmicrosoft.com" + "userPrincipalName": "user-1_contoso.com#EXT#@fabrikam.onmicrosoft.com" } ] ``` -Make note of the object ID of **UserA** in field **id**. In this example, it's **ee0645cc-e439-4ffc-b956-79577e473969**. +Make note of the object ID of **user-1** in the **id** field. In this example, it's **ee0645cc-e439-4ffc-b956-79577e473969**. ```azurecli-interactive vnetid=$(az network vnet show \- --name myVNetB \ - --resource-group myResourceGroupB \ + --name vnet-2 \ + --resource-group test-rg-2 \ --query id \ --output tsv) az role assignment create \ -## Obtain resource ID of myVNetB +## Obtain resource ID of vnet-2 -The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use the following steps to obtain the resource ID of **myVNetB**. +The resource ID of **vnet-2** is required to set up the peering connection from **vnet-1** to **vnet-2**. Use the following steps to obtain the resource ID of **vnet-2**. # [**Portal**](#tab/create-peering-portal) -1. Remain signed in to the portal as **UserB**. +1. Remain signed in to the portal as **user-2**. -2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -3. Select **myVNetB**. +1. Select **vnet-2**. -4. In **Settings**, select **Properties**. +1. In **Settings**, select **Properties**. -5. 
Copy the information in the **Resource ID** field and save for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB`**. +1. Copy the information in the **Resource ID** field and save it for the later steps. The resource ID is similar to the following example: **`/subscriptions/<Subscription Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2`**. -6. Sign out of the portal as **UserB**. +1. Sign out of the portal as **user-2**. # [**PowerShell**](#tab/create-peering-powershell) -The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **myVNetB**. +The resource ID of **vnet-2** is required to set up the peering connection from **vnet-1** to **vnet-2**. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) to obtain the resource ID for **vnet-2**. ```azurepowershell-interactive $id = @{- Name = 'myVNetB' - ResourceGroupName = 'myResourceGroupB' + Name = 'vnet-2' + ResourceGroupName = 'test-rg-2' } $vnetB = Get-AzVirtualNetwork @id $vnetB.id # [**Azure CLI**](#tab/create-peering-cli) -The resource ID of **myVNetB** is required to set up the peering connection from **myVNetA** to **myVNetB**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **myVNetB**. +The resource ID of **vnet-2** is required to set up the peering connection from **vnet-1** to **vnet-2**. Use [az network vnet show](/cli/azure/network/vnet#az-network-vnet-show) to obtain the resource ID for **vnet-2**. 
```azurecli-interactive vnetidB=$(az network vnet show \- --name myVNetB \ - --resource-group myResourceGroupB \ + --name vnet-2 \ + --resource-group test-rg-2 \ --query id \ --output tsv) echo $vnetidB -## Create peering connection - myVNetA to myVNetB +## Create peering connection - vnet-1 to vnet-2 -You need the **Resource ID** for **myVNetB** from the previous steps to set up the peering connection. +You need the **Resource ID** for **vnet-2** from the previous steps to set up the peering connection. # [**Portal**](#tab/create-peering-portal) -1. Sign in to the [Azure portal](https://portal.azure.com) as **UserA**. If you're using one account for both subscriptions, change to **SubscriptionA** in the portal. +1. Sign in to the [Azure portal](https://portal.azure.com) as **user-1**. If you're using one account for both subscriptions, change to **subscription-1** in the portal. -2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -3. Select **myVNetA**. +1. Select **vnet-1**. -4. Select **Peerings**. +1. Select **Peerings**. -5. Select **+ Add**. +1. Select **+ Add**. -6. Enter or select the following information in **Add peering**: +1. Enter or select the following information in **Add peering**: | Setting | Value | | - | -- | | **This virtual network** | |- | Peering link name | Enter **myVNetAToMyVNetB**. | - | Traffic to remote virtual network | Leave the default of **Allow (default)**. | - | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | - | Virtual network gateway or Route Server | Leave the default of **None (default)**. | + | Peering link name | Enter **vnet-1-to-vnet-2**. | + | Allow access to remote virtual network | Leave the default of selected. | + | Allow traffic to remote virtual network | Select the checkbox. 
| + | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Leave the default of cleared. | + | Use remote virtual network gateway or route server | Leave the default of cleared. | | **Remote virtual network** | | | Peering link name | Leave blank. | | Virtual network deployment model | Select **Resource manager**. | | Select the box for **I know my resource ID**. | |- | Resource ID | Enter or paste the **Resource ID** for **myVNetB**. | + | Resource ID | Enter or paste the **Resource ID** for **vnet-2**. | -7. In the pull-down box, select the **Directory** that corresponds with **myVNetB** and **UserB**. +1. In the pull-down box, select the **Directory** that corresponds with **vnet-2** and **user-2**. -8. Select **Authenticate**. +1. Select **Authenticate**. -9. Select **Add**. + :::image type="content" source="./media/create-peering-different-subscriptions/vnet-1-to-vnet-2-peering.png" alt-text="Screenshot of peering from vnet-1 to vnet-2."::: -10. Sign out of the portal as **UserA**. +1. Select **Add**. ++1. Sign out of the portal as **user-1**. # [**PowerShell**](#tab/create-peering-powershell) -### Sign in to SubscriptionA +### Sign in to subscription-1 -Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**. +Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-1**. ```azurepowershell-interactive Connect-AzAccount ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). 
```azurepowershell-interactive-Set-AzContext -Subscription SubscriptionA +Set-AzContext -Subscription subscription-1 ``` -### Sign in to SubscriptionB +### Sign in to subscription-2 -Authenticate to **SubscriptionB** so that the peering can be set up. +Authenticate to **subscription-2** so that the peering can be set up. -Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**. +Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-2**. ```azurepowershell-interactive Connect-AzAccount ``` -### Change to SubscriptionA (optional) +### Change to subscription-1 (optional) -You may have to switch back to **SubscriptionA** to continue with the actions in **SubscriptionA**. +You may have to switch back to **subscription-1** to continue with the actions in **subscription-1**. -Change context to **SubscriptionA**. +Change context to **subscription-1**. ```azurepowershell-interactive-Set-AzContext -Subscription SubscriptionA +Set-AzContext -Subscription subscription-1 ``` ### Create peering connection -Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetA** and **myVNetB**. +Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **vnet-1** and **vnet-2**. 
```azurepowershell-interactive $netA = @{- Name = 'myVNetA' - ResourceGroupName = 'myResourceGroupA' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } $vnetA = Get-AzVirtualNetwork @netA $peer = @{- Name = 'myVNetAToMyVNetB' + Name = 'vnet-1-to-vnet-2' VirtualNetwork = $vnetA- RemoteVirtualNetworkId = '/subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/virtualNetworks/myVnetB' + RemoteVirtualNetworkId = '/subscriptions/<subscription-2-Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2' } Add-AzVirtualNetworkPeering @peer ``` -Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **myVNetA** to **myVNetB**. +Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **vnet-1** to **vnet-2**. ```azurepowershell-interactive $status = @{- ResourceGroupName = 'myResourceGroupA' - VirtualNetworkName = 'myVNetA' + ResourceGroupName = 'test-rg' + VirtualNetworkName = 'vnet-1' } Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState ``` PS /home/azureuser> Get-AzVirtualNetworkPeering @status | Format-Table VirtualNe VirtualNetworkName PeeringState -myVNetA Initiated +vnet-1 Initiated ``` # [**Azure CLI**](#tab/create-peering-cli) -### Sign in to SubscriptionA +### Sign in to subscription-1 -Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-1**. ```azurecli-interactive az login ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionA** with [az account set](/cli/azure/account#az-account-set). 
+If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-1** with [az account set](/cli/azure/account#az-account-set). ```azurecli-interactive-az account set --subscription "SubscriptionA" +az account set --subscription "subscription-1" ``` -### Sign in to SubscriptionB +### Sign in to subscription-2 -Authenticate to **SubscriptionB** so that the peering can be set up. +Authenticate to **subscription-2** so that the peering can be set up. -Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-2**. ```azurecli-interactive az login ``` -### Change to SubscriptionA (optional) +### Change to subscription-1 (optional) -You may have to switch back to **SubscriptionA** to continue with the actions in **SubscriptionA**. +You may have to switch back to **subscription-1** to continue with the actions in **subscription-1**. -Change context to **SubscriptionA**. +Change context to **subscription-1**. ```azurecli-interactive-az account set --subscription "SubscriptionA" +az account set --subscription "subscription-1" ``` ### Create peering connection -Use [az network vnet peering create](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetA** and **myVNetB**. +Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create a peering connection between **vnet-1** and **vnet-2**. 
```azurecli-interactive az network vnet peering create \- --name myVNetAToMyVNetB \ - --resource-group myResourceGroupA \ - --vnet-name myVNetA \ - --remote-vnet /subscriptions/<SubscriptionB-Id>/resourceGroups/myResourceGroupB/providers/Microsoft.Network/VirtualNetworks/myVNetB \ + --name vnet-1-to-vnet-2 \ + --resource-group test-rg \ + --vnet-name vnet-1 \ + --remote-vnet /subscriptions/<subscription-2-Id>/resourceGroups/test-rg-2/providers/Microsoft.Network/VirtualNetworks/vnet-2 \ --allow-vnet-access ``` -Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **myVNetA** to **myVNetB**. +Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **vnet-1** to **vnet-2**. ```azurecli-interactive az network vnet peering list \- --resource-group myResourceGroupA \ - --vnet-name myVNetA \ + --resource-group test-rg \ + --vnet-name vnet-1 \ --output table ``` -The peering connection shows in **Peerings** in a **Initiated** state. To complete the peer, a corresponding connection must be set up in **myVNetB**. +The peering connection shows in **Peerings** in an **Initiated** state. To complete the peering, a corresponding connection must be set up in **vnet-2**. -## Create peering connection - myVNetB to myVNetA +## Create peering connection - vnet-2 to vnet-1 -You need the **Resource IDs** for **myVNetA** from the previous steps to set up the peering connection. +You need the **Resource ID** for **vnet-1** from the previous steps to set up the peering connection. # [**Portal**](#tab/create-peering-portal) -1. Sign in to the [Azure portal](https://portal.azure.com) as **UserB**. If you're using one account for both subscriptions, change to **SubscriptionB** in the portal. +1. Sign in to the [Azure portal](https://portal.azure.com) as **user-2**. 
If you're using one account for both subscriptions, change to **subscription-2** in the portal. -2. In the search box a the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. +1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -3. Select **myVNetB**. +1. Select **vnet-2**. -4. Select **Peerings**. +1. Select **Peerings**. -5. Select **+ Add**. +1. Select **+ Add**. -6. Enter or select the following information in **Add peering**: +1. Enter or select the following information in **Add peering**: | Setting | Value | | - | -- | | **This virtual network** | |- | Peering link name | Enter **myVNetBToMyVNetA**. | - | Traffic to remote virtual network | Leave the default of **Allow (default)**. | - | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. | - | Virtual network gateway or Route Server | Leave the default of **None (default)**. | + | Peering link name | Enter **vnet-2-to-vnet-1**. | + | Allow access to remote virtual network | Leave the default of selected. | + | Allow traffic to remote virtual network | Select the checkbox. | + | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Leave the default of cleared. | + | Use remote virtual network gateway or route server | Leave the default of cleared. | | **Remote virtual network** | | | Peering link name | Leave blank. | | Virtual network deployment model | Select **Resource manager**. | | Select the box for **I know my resource ID**. | |- | Resource ID | Enter or paste the **Resource ID** for **myVNetA**. | + | Resource ID | Enter or paste the **Resource ID** for **vnet-1**. | ++1. In the pull-down box, select the **Directory** that corresponds with **vnet-1** and **user-1**. -7. In the pull-down box, select the **Directory** that corresponds with **myVNetA** and **UserA**. +1. Select **Authenticate**. -8. 
Select **Authenticate**. + :::image type="content" source="./media/create-peering-different-subscriptions/vnet-2-to-vnet-1-peering.png" alt-text="Screenshot of peering from vnet-2 to vnet-1."::: -9. Select **Add**. +1. Select **Add**. # [**PowerShell**](#tab/create-peering-powershell) -### Sign in to SubscriptionB +### Sign in to subscription-2 -Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionB**. +Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-2**. ```azurepowershell-interactive Connect-AzAccount ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [Set-AzContext](/powershell/module/az.accounts/set-azcontext). ```azurepowershell-interactive-Set-AzContext -Subscription SubscriptionB +Set-AzContext -Subscription subscription-2 ``` -## Sign in to SubscriptionA +### Sign in to subscription-1 -Authenticate to **SubscriptionA** so that the peering can be set up. +Authenticate to **subscription-1** so that the peering can be set up. -Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **SubscriptionA**. +Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) to sign in to **subscription-1**. ```azurepowershell-interactive Connect-AzAccount ``` -### Change to SubscriptionB (optional) +### Change to subscription-2 (optional) -You may have to switch back to **SubscriptionB** to continue with the actions in **SubscriptionB**. +You may have to switch back to **subscription-2** to continue with the actions in **subscription-2**. -Change context to **SubscriptionB**. 
```azurepowershell-interactive-Set-AzContext -Subscription SubscriptionB +Set-AzContext -Subscription subscription-2 ``` ### Create peering connection -Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetB** and **myVNetA**. +Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **vnet-2** and **vnet-1**. ```azurepowershell-interactive $netB = @{- Name = 'myVNetB' - ResourceGroupName = 'myResourceGroupB' + Name = 'vnet-2' + ResourceGroupName = 'test-rg-2' } $vnetB = Get-AzVirtualNetwork @netB $peer = @{- Name = 'myVNetBToMyVNetA' + Name = 'vnet-2-to-vnet-1' VirtualNetwork = $vnetB- RemoteVirtualNetworkId = '/subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/virtualNetworks/myVNetA' + RemoteVirtualNetworkId = '/subscriptions/<subscription-1-Id>/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1' } Add-AzVirtualNetworkPeering @peer ``` -User [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **myVNetB** to **myVNetA**. +Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to obtain the status of the peering connections from **vnet-2** to **vnet-1**. 
```azurepowershell-interactive $status = @{- ResourceGroupName = 'myResourceGroupB' - VirtualNetworkName = 'myVNetB' + ResourceGroupName = 'test-rg-2' + VirtualNetworkName = 'vnet-2' } Get-AzVirtualNetworkPeering @status | Format-Table VirtualNetworkName, PeeringState ``` PS /home/azureuser> Get-AzVirtualNetworkPeering @status | Format-Table VirtualNe VirtualNetworkName PeeringState -myVNetB Connected +vnet-2 Connected ``` # [**Azure CLI**](#tab/create-peering-cli) -### Sign in to SubscriptionB +### Sign in to subscription-2 -Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionB**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-2**. ```azurecli-interactive az login ``` -If you're using one account for both subscriptions, sign in to that account and change the subscription context to **SubscriptionB** with [az account set](/cli/azure/account#az-account-set). +If you're using one account for both subscriptions, sign in to that account and change the subscription context to **subscription-2** with [az account set](/cli/azure/account#az-account-set). ```azurecli-interactive-az account set --subscription "SubscriptionB" +az account set --subscription "subscription-2" ``` -### Sign in to SubscriptionA +### Sign in to subscription-1 -Authenticate to **SubscriptionA** so that the peering can be set up. +Authenticate to **subscription-1** so that the peering can be set up. -Use [az login](/cli/azure/reference-index#az-login) to sign in to **SubscriptionA**. +Use [az login](/cli/azure/reference-index#az-login) to sign in to **subscription-1**. ```azurecli-interactive az login ``` -### Change to SubscriptionB (optional) +### Change to subscription-2 (optional) -You may have to switch back to **SubscriptionB** to continue with the actions in **SubscriptionB**. +You may have to switch back to **subscription-2** to continue with the actions in **subscription-2**. -Change context to **SubscriptionB**. 
+Change context to **subscription-2**. ```azurecli-interactive-az account set --subscription "SubscriptionB" +az account set --subscription "subscription-2" ``` ### Create peering connection -Use [az network vnet peering create](/powershell/module/az.network/add-azvirtualnetworkpeering) to create a peering connection between **myVNetB** and **myVNetA**. +Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create a peering connection between **vnet-2** and **vnet-1**. ```azurecli-interactive az network vnet peering create \- --name myVNetBToMyVNetA \ - --resource-group myResourceGroupB \ - --vnet-name myVNetB \ - --remote-vnet /subscriptions/<SubscriptionA-Id>/resourceGroups/myResourceGroupA/providers/Microsoft.Network/VirtualNetworks/myVNetA \ + --name vnet-2-to-vnet-1 \ + --resource-group test-rg-2 \ + --vnet-name vnet-2 \ + --remote-vnet /subscriptions/<subscription-1-Id>/resourceGroups/test-rg/providers/Microsoft.Network/VirtualNetworks/vnet-1 \ --allow-vnet-access ``` -Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **myVNetB** to **myVNetA**. +Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to obtain the status of the peering connections from **vnet-2** to **vnet-1**. ```azurecli-interactive az network vnet peering list \- --resource-group myResourceGroupB \ - --vnet-name myVNetB \ + --resource-group test-rg-2 \ + --vnet-name vnet-2 \ --output table ``` -The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. 
If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server or use Azure DNS. +The peering is successfully established after you see **Connected** in the **Peering status** column for both virtual networks in the peering. Any Azure resources you create in either virtual network are now able to communicate with each other through their IP addresses. If you're using default Azure name resolution for the virtual networks, the resources in the virtual networks aren't able to resolve names across the virtual networks. If you want to resolve names across virtual networks in a peering, you must create your own DNS server or use Azure DNS. For more information about using your own DNS for name resolution, see [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). |
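Both directions of the cross-subscription peering above identify the remote network by its full Azure resource ID. The ID follows a fixed pattern, so as a quick sanity check you can assemble the expected value from its parts and compare it with what the portal or `az network vnet show --query id` returns. The following is a minimal bash sketch; the subscription GUID is a placeholder, and the group and network names are the ones used in this article:

```bash
# Assemble the resource ID of a virtual network from its parts.
# The subscription GUID below is a placeholder, not a real subscription.
build_vnet_id() {
  local subscription_id="$1" resource_group="$2" vnet_name="$3"
  printf '/subscriptions/%s/resourceGroups/%s/providers/Microsoft.Network/virtualNetworks/%s\n' \
    "$subscription_id" "$resource_group" "$vnet_name"
}

build_vnet_id "00000000-0000-0000-0000-000000000000" "test-rg-2" "vnet-2"
# Prints:
# /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/test-rg-2/providers/Microsoft.Network/virtualNetworks/vnet-2
```

Comparing the assembled string with the ID the CLI returns is a cheap way to catch a wrong resource group or subscription segment before you run the peering commands, since a mismatch in any segment causes the peering to fail.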
virtual-network | Deploy Container Networking Docker Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-linux.md | -The Azure CNI plugin enables per container/pod networking for stand-alone docker hosts and Kubernetes clusters. In this article, you'll learn how to install and configure the CNI plugin for a standalone Linux Docker host. +The Azure CNI plugin enables per container/pod networking for stand-alone docker hosts and Kubernetes clusters. In this article, you learn how to install and configure the CNI plugin for a standalone Linux Docker host. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -## Create virtual network --A virtual network contains the virtual machine used in this article. In this section, you'll create a virtual network and subnet. You'll enable Azure Bastion during the virtual network deployment. The Azure Bastion host is used to securely connect to the virtual machine to complete the steps in this article. --1. Sign in to the [Azure portal](https://portal.azure.com). --2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. --3. Select **+ Create**. --4. Enter or select the following information in the **Basics** tab of **Create virtual network**: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNet**. | - | Region | Select a region. | --5. Select **Next: IP Addresses**. --6. In **IPv4 address space**, enter **10.1.0.0/16**. --7. Select **+ Add subnet**. --8. Enter or select the following information: -- | Setting | Value | - | - | -- | - | Subnet name | Enter **mySubnet**. 
| - | Subnet address range | Enter **10.1.0.0/24**. | --9. Select **Add**. --10. Select **Next: Security**. --11. Select **Enable** in **BastionHost**. -- >[!NOTE] - >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)] --12. Enter or select the following information: -- | Setting | Value | - | - | -- | - | Bastion name | Enter **myBastion**. | - | AzureBastionSubnet address space | Enter **10.1.1.0/26**. | - | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. </br> Select **OK**. | --13. Select **Review + create**. --14. Select **Create**. It can take a few minutes for the Bastion host to deploy. You can continue with the steps while the Bastion host is deploying. -## Create virtual machine --In this section, you'll create an Ubuntu virtual machine for the stand-alone Docker host. Ubuntu is used for the example in this article. The CNI plug-in supports Windows and other Linux distributions. --1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. --2. Select **+ Create** > **Azure virtual machine**. --3. Enter or select the following information in the **Basics** tab of **Create a virtual machine**: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. | - | **Instance details** | | - | Virtual machine name | Enter **myVM**. | - | Region | Select a region. | - | Availability options | Select **No infrastructure required**. | - | Security type | Select **Standard**. | - | Image | Select **Ubuntu Server 20.04 LTS -x64 Gen2**. | - | VM architecture | Leave the default of **x64**. | - | Run with Azure Spot discount | Leave the default of unchecked. | - | Size | Select a size. | - | **Administrator account** | | - | Authentication type | Select **Password**. | - | Username | Enter a username. | - | Password | Enter a password. 
| - | Confirm password | Reenter password. | - | **Inbound port rules** | | - | Public inbound ports | Select **None**. | --4. Select **Next: Disks**, then **Next: Networking**. --5. Enter or select the following information in the **Networking** tab: -- | Setting | Value | - | - | -- | - | **Network interface** | | - | Virtual network | Select **myVNet**. | - | Subnet | Select **mySubnet (10.1.0.0/24)**. | - | Public IP | Select **None**. | --6. Select **Review + create**. --7. Select **Create** ## Add IP configuration -The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server outnumber the IP configurations on the virtual network interface, the container will start but won't have an IP address. +The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server outnumbers the IP configurations on the virtual network interface, the container starts but doesn't have an IP address. -In this section, you'll add an IP configuration to the virtual network interface of the virtual machine you created previously. +In this section, you add an IP configuration to the virtual network interface of the virtual machine you created previously. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3. In **Settings**, select **Networking**. +1. In **Settings**, select **Networking**. -4. Select the name of the network interface next to **Network Interface:**. 
The network interface is named **myvm** with a random number. In this example, it's **myvm27**. +1. Select the name of the network interface next to **Network Interface:**. The network interface is named **vm-1** with a random number. -5. In **Settings** of the network interface, select **IP configurations**. +1. In **Settings** of the network interface, select **IP configurations**. -6. in **IP configurations**, select **ipconfig1** in **Name**. +1. In **IP configurations**, select **ipconfig1** in **Name**. -7. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**. +1. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**. -8. Select **Save**. +1. Select **Save**. -9. Return to **IP configurations**. +1. Return to **IP configurations**. -10. Select **+ Add**. +1. Select **+ Add**. -11. Enter or select the following information for **Add IP configuration**: +1. Enter or select the following information for **Add IP configuration**: | Setting | Value | | - | -- |- | Name | Enter **ipconfig2**. | + | Name | Enter **ipconfig-2**. | | **Private IP address settings** | | | Allocation | Select **Static**. |- | IP address | Enter **10.1.0.5**. | + | IP address | Enter **10.0.0.5**. | -12. Select **OK**. +1. Select **OK**. -13. Verify **ipconfig2** has been added as a secondary IP configuration. +1. Verify **ipconfig-2** has been added as a secondary IP configuration. -Repeat steps 1 through 13 to add as many configurations as containers you wish to deploy on the container host. +Repeat the previous steps to add as many configurations as containers you wish to deploy on the container host. ## Install Docker Sign-in to the virtual machine you created previously with the Azure Bastion hos 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3.
In the **Overview** of **myVM**, select **Connect** then **Bastion**. +1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**. -4. Enter the username and password you created when you deployed the virtual machine in the previous steps. +1. Enter the username and password you created when you deployed the virtual machine in the previous steps. -5. Select **Connect**. +1. Select **Connect**. For install instructions for Docker on an Ubuntu container host, see [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/). After Docker is installed on your virtual machine, continue with the steps in th ## Install CNI plugin and create a test container -The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you'll use **`git`** within the virtual machine to clone the repository for the plugin and then install and configure the plugin. +The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you use **`git`** within the virtual machine to clone the repository for the plugin and then install and configure the plugin. For more information about the Azure CNI plugin, see [Microsoft Azure Container Networking](https://github.com/Azure/azure-container-networking). 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3. In the **Overview** of **myVM**, select **Connect** then **Bastion**. +1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**. -4. Enter the username and password you created when you deployed the virtual machine in the previous steps. +1. Enter the username and password you created when you deployed the virtual machine in the previous steps. -5. Select **Connect**. +1. Select **Connect**. -6. 
The application **jq** is required for the install script for the CNI plugin, use the following example to install the application: +1. The application **jq** is required for the install script for the CNI plugin. Use the following example to install the application: ```bash sudo apt-get update sudo apt-get install jq ```-7. Next, you'll clone the repository for the CNI plugin. Use the following example to clone the repository: +1. Next, you clone the repository for the CNI plugin. Use the following example to clone the repository: ```bash git clone https://github.com/Azure/azure-container-networking.git ``` -8. Configure permissions and install the CNI plugin. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases). +1. Configure permissions and install the CNI plugin. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases). ```bash cd ./azure-container-networking/scripts For more information about the Azure CNI plugin, see [Microsoft Azure Container chmod u+x docker-run.sh ``` -9. To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example will create an Alpine container with the CNI plugin script: +1. To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example creates an Alpine container with the CNI plugin script: ```bash sudo ./docker-run.sh vnetdocker1 default alpine ``` -10.
To verify that the container received the IP address you previously configured, connect to the container and view the IP: +1. To verify that the container received the IP address you previously configured, connect to the container and view the IP: ```bash sudo docker exec -it vnetdocker1 /bin/sh ``` -11. Use the **`ifconfig`** command in the following example to verify the IP address was assigned to the container: +1. Use the **`ifconfig`** command in the following example to verify the IP address was assigned to the container: ```bash ifconfig ``` :::image type="content" source="./media/deploy-container-networking-docker-linux/ifconfig-output.png" alt-text="Screenshot of ifconfig output in Bash prompt of test container."::: -## Clean up resources --If you're not going to continue to use this application, delete the virtual network and virtual machine with the following steps: --1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results. --2. Select **myResourceGroup**. --3. In the **Overview** of **myResourceGroup**, select **Delete resource group**. --4. In **TYPE THE RESOURCE GROUP NAME:**, enter **myResourceGroup**. --5. Select **Delete**. ## Next steps |
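The Linux article above adds one secondary IP configuration per planned container on a **10.1.0.0/24** subnet. As a rough sanity check on that sizing (this sketch is not from the article; it assumes Azure's documented behavior of reserving five addresses per subnet, and treats **ipconfig1** as the VM's own primary address):

```bash
# Sketch: rough capacity check for a /24 container subnet.
# Assumes Azure reserves 5 addresses per subnet (network address, default
# gateway, two DNS mappings, broadcast) and that ipconfig1 is the VM's
# primary IP, leaving the rest for per-container secondary configurations.
prefix=24
total=$(( 1 << (32 - prefix) ))   # 256 addresses in a /24
reserved=5                        # reserved by the Azure platform
primary=1                         # ipconfig1, the VM's primary IP
containers=$(( total - reserved - primary ))
echo "$containers secondary IP configurations available for containers"
```

Each container started with `docker-run.sh` consumes one of these configurations, which is why the article has you repeat the "Add IP configuration" steps once per container.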
virtual-network | Deploy Container Networking Docker Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/deploy-container-networking-docker-windows.md | -The Azure CNI plugin enables per container/pod networking for stand-alone docker hosts and Kubernetes clusters. In this article, you'll learn how to install and configure the CNI plugin for a standalone Windows Docker host. +The Azure CNI plugin enables per container/pod networking for stand-alone docker hosts and Kubernetes clusters. In this article, you learn how to install and configure the CNI plugin for a standalone Windows Docker host. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -## Create virtual network --A virtual network contains the virtual machine used in this article. In this section, you'll create a virtual network and subnet. You'll enable Azure Bastion during the virtual network deployment. The Azure Bastion host is used to securely connect to the virtual machine to complete the steps in this article. --1. Sign in to the [Azure portal](https://portal.azure.com). --2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. --3. Select **+ Create**. --4. Enter or select the following information in the **Basics** tab of **Create virtual network**: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNet**. | - | Region | Select a region. | --5. Select **Next: IP Addresses**. --6. In **IPv4 address space**, enter **10.1.0.0/16**. --7. Select **+ Add subnet**. --8. 
Enter or select the following information: -- | Setting | Value | - | - | -- | - | Subnet name | Enter **mySubnet**. | - | Subnet address range | Enter **10.1.0.0/24**. | --9. Select **Add**. --10. Select **Next: Security**. --11. Select **Enable** in **BastionHost**. -- >[!NOTE] - >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)] --12. Enter or select the following information: -- | Setting | Value | - | - | -- | - | Bastion name | Enter **myBastion**. | - | AzureBastionSubnet address space | Enter **10.1.1.0/26**. | - | Public IP address | Select **Create new**. </br> Enter **myBastionIP** in **Name**. </br> Select **OK**. | --13. Select **Review + create**. --14. Select **Create**. It can take a few minutes for the network and Bastion host to deploy. Continue with the next steps when the deployment is complete or the virtual network creation is complete. -## Create virtual machine --In this section, you'll create a Windows Server 2022 virtual machine for the stand-alone Docker host. The CNI plug-in supports Windows and Linux. --1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. --2. Select **+ Create** > **Azure virtual machine**. --3. Enter or select the following information in the **Basics** tab of **Create a virtual machine**: -- | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **myResourceGroup**. | - | **Instance details** | | - | Virtual machine name | Enter **myVM**. | - | Region | Select a region. | - | Availability options | Select **No infrastructure required**. | - | Security type | Select **Standard**. | - | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. | - | VM architecture | Leave the default of **x64**. | - | Run with Azure Spot discount | Leave the default of unchecked. | - | Size | Select a size. 
| - | **Administrator account** | | - | Authentication type | Select **Password**. | - | Username | Enter a username. | - | Password | Enter a password. | - | Confirm password | Reenter password. | - | **Inbound port rules** | | - | Public inbound ports | Select **None**. | --4. Select **Next: Disks**, then **Next: Networking**. --5. Enter or select the following information in the **Networking** tab: -- | Setting | Value | - | - | -- | - | **Network interface** | | - | Virtual network | Select **myVNet**. | - | Subnet | Select **mySubnet (10.1.0.0/24)**. | - | Public IP | Select **None**. | --6. Select **Review + create**. --7. Select **Create** ## Add IP configuration -The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server outnumber the IP configurations on the virtual network interface, the container will start but won't have an IP address. +The Azure CNI plugin allocates IP addresses to containers based on a pool of IP addresses you create on the virtual network interface of the virtual machine. For every container on the host, an IP configuration must exist on the virtual network interface. If the number of containers on the server outnumbers the IP configurations on the virtual network interface, the container starts but doesn't have an IP address. -In this section, you'll add an IP configuration to the virtual network interface of the virtual machine you created previously. +In this section, you add an IP configuration to the virtual network interface of the virtual machine you created previously. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3. In **Settings**, select **Networking**. +1.
In **Settings**, select **Networking**. -4. Select the name of the network interface next to **Network Interface:**. The network interface is named **myvm** with a random number. In this example, it's **myvm418**. +1. Select the name of the network interface next to **Network Interface:**. The network interface is named **vm-1** with a random number. - :::image type="content" source="./media/deploy-container-networking-docker-windows/select-nic-portal.png" alt-text="Screenshot of the network interface in settings for the virtual machine in the Azure portal."::: +1. In **Settings** of the network interface, select **IP configurations**. -5. In **Settings** of the network interface, select **IP configurations**. +1. In **IP configurations**, select **ipconfig1** in **Name**. -6. in **IP configurations**, select **ipconfig1** in **Name**. +1. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**. - :::image type="content" source="./media/deploy-container-networking-docker-windows/nic-ip-configuration.png" alt-text="Screenshot of IP configuration of the virtual machine network interface."::: +1. Select **Save**. -7. In the **ipconfig1** settings, change the assignment of the private IP address from **Dynamic** to **Static**. +1. Return to **IP configurations**. -8. Select **Save**. +1. Select **+ Add**. -9. Return to **IP configurations**. --10. Select **+ Add**. --11. Enter or select the following information for **Add IP configuration**: +1. Enter or select the following information for **Add IP configuration**: | Setting | Value | | - | -- |- | Name | Enter **ipconfig2**. | + | Name | Enter **ipconfig-2**. | | **Private IP address settings** | | | Allocation | Select **Static**. |- | IP address | Enter **10.1.0.5**. | --12. Select **OK**. + | IP address | Enter **10.0.0.5**. | -13. Verify **ipconfig2** has been added as a secondary IP configuration. +1. Select **OK**.
- :::image type="content" source="./media/deploy-container-networking-docker-windows/verify-ip-configuration.png" alt-text="Screenshot of IP configuration of the virtual machine network interface with the secondary configuration."::: +1. Verify **ipconfig-2** has been added as a secondary IP configuration. Repeat the previous steps to add as many configurations as containers you wish to deploy on the container host. To assign multiple IP addresses to a Windows virtual machine, the IP addressees 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3. In the **Overview** of **myVM**, select **Connect** then **Bastion**. +1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**. -4. Enter the username and password you created when you deployed the virtual machine in the previous steps. +1. Enter the username and password you created when you deployed the virtual machine in the previous steps. -5. Select **Connect**. +1. Select **Connect**. -6. Open the network connections configuration on the virtual machine. Select **Start** -> **Run** and enter **`ncpa.cpl`**. +1. Open the network connections configuration on the virtual machine. Select **Start** -> **Run** and enter **`ncpa.cpl`**. -7. Select **OK**. +1. Select **OK**. -8. Select the network interface of the virtual machine, then **Properties**: +1. Select the network interface of the virtual machine, then **Properties**: :::image type="content" source="./media/deploy-container-networking-docker-windows/select-network-interface.png" alt-text="Screenshot of select network interface in Windows OS."::: -9. In **Ethernet Properties**, select **Internet Protocol Version 4 (TCP/IPv4)**, then **Properties**. +1. In **Ethernet Properties**, select **Internet Protocol Version 4 (TCP/IPv4)**, then **Properties**. -10. Enter or select the following information in the **General** tab: +1.
Enter or select the following information in the **General** tab: | Setting | Value | | - | -- | | Select **Use the following IP address:** | |- | IP address: | Enter **10.1.0.4** | + | IP address: | Enter **10.0.0.4** | | Subnet mask: | Enter **255.255.255.0** |- | Default gateway | Enter **10.1.0.1** | + | Default gateway | Enter **10.0.0.1** | | Select **Use the following DNS server addresses:** | | | Preferred DNS server: | Enter **168.63.129.16** *This IP is the DHCP assigned IP address for the default Azure DNS* | - :::image type="content" source="./media/deploy-container-networking-docker-windows/ip-address-configuration.png" alt-text="Screenshot of the primary IP configuration in Windows."::: +1. Select **Advanced...**. -11. Select **Advanced...**. +1. In **IP addresses**, select **Add...**. -12. in **IP addresses**, select **Add...**. -- :::image type="content" source="./media/deploy-container-networking-docker-windows/advanced-ip-configuration.png" alt-text="Screenshot of the advanced IP configuration in Windows."::: --13. Enter or select the following information: +1. Enter or select the following information: | Setting | Value | | - | -- | | **TCP/IP Address** | |- | IP address: | Enter **10.1.0.5** | + | IP address: | Enter **10.0.0.5** | | Subnet mask: | Enter **255.255.255.0** | - :::image type="content" source="./media/deploy-container-networking-docker-windows/secondary-ip-address.png" alt-text="Screenshot of the secondary IP configuration addition."::: --14. Select **Add**. +1. Select **Add**. -15. To add more IP addresses that correspond with any extra IP configurations created previously, select **Add**. +1. To add more IP addresses that correspond with any extra IP configurations created previously, select **Add**. -16. Select **OK**. +1. Select **OK**. -17. Select **OK**. +1. Select **OK**. -18. Select **OK**. +1. Select **OK**. -The Bastion connection will drop for a few seconds as the network configuration is applied.
Wait a few seconds then attempt to reconnect. Continue when a reconnection is successful. +The Bastion connection drops for a few seconds as the network configuration is applied. Wait a few seconds then attempt to reconnect. Continue when a reconnection is successful. ## Install Docker Sign-in to the virtual machine you created previously with the Azure Bastion hos 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3. In the **Overview** of **myVM**, select **Connect** then **Bastion**. +1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**. -4. Enter the username and password you created when you deployed the virtual machine in the previous steps. +1. Enter the username and password you created when you deployed the virtual machine in the previous steps. -5. Select **Connect**. +1. Select **Connect**. -6. Open **Windows PowerShell** on **myVM**. +1. Open **Windows PowerShell** on **vm-1**. -7. The following example installs **Docker CE/Moby**: +1. The following example installs **Docker CE/Moby**: ```powershell Invoke-WebRequest -UseBasicParsing "https://raw.githubusercontent.com/microsoft/Windows-Containers/Main/helpful_tools/Install-DockerCE/install-docker-ce.ps1" -o install-docker-ce.ps1 Sign-in to the virtual machine you created previously with the Azure Bastion hos .\install-docker-ce.ps1 ``` -The virtual machine will reboot to install the container support in Windows. Reconnect to the virtual machine and the Docker install will continue. +The virtual machine reboots to install the container support in Windows. Reconnect to the virtual machine and the Docker install continues. For more information about Windows containers, see, [Get started: Prep Windows for containers](/virtualization/windowscontainers/quick-start/set-up-environment?tabs=dockerce#windows-server-1). 
After Docker is installed on your virtual machine, continue with the steps in th ## Install CNI plugin and jq -The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you'll download the CNI plugin repository within the virtual machine and then install and configure the plugin. +The Azure CNI plugin is maintained as a GitHub project and is available for download from the project's GitHub page. For this article, you download the CNI plugin repository within the virtual machine and then install and configure the plugin. For more information about the Azure CNI plugin, see [Microsoft Azure Container Networking](https://github.com/Azure/azure-container-networking). 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -2. Select **myVM**. +1. Select **vm-1**. -3. In the **Overview** of **myVM**, select **Connect** then **Bastion**. +1. In the **Overview** of **vm-1**, select **Connect** then **Bastion**. -4. Enter the username and password you created when you deployed the virtual machine in the previous steps. +1. Enter the username and password you created when you deployed the virtual machine in the previous steps. -5. Select **Connect**. +1. Select **Connect**. -6. Use the following example to download and extract the CNI plugin to a temporary folder in the virtual machine: +1. Use the following example to download and extract the CNI plugin to a temporary folder in the virtual machine: ```powershell Invoke-WebRequest -Uri https://github.com/Azure/azure-container-networking/archive/refs/heads/master.zip -OutFile azure-container-networking.zip For more information about the Azure CNI plugin, see [Microsoft Azure Container Expand-Archive azure-container-networking.zip -DestinationPath azure-container-networking ``` -7. 
To install the CNI plugin, change to the scripts directory of the CNI plugin folder you downloaded in the previous step. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases). +1. To install the CNI plugin, change to the scripts directory of the CNI plugin folder you downloaded in the previous step. The install script command requires a version number for the CNI plugin. At the time of the writing of this article, the newest version is **`v1.4.39`**. To obtain the latest version number of the plugin or previous versions, see [Releases](https://github.com/Azure/azure-container-networking/releases). ```powershell cd .\azure-container-networking\azure-container-networking-master\scripts\ For more information about the Azure CNI plugin, see [Microsoft Azure Container .\Install-CniPlugin.ps1 v1.4.39 ``` -8. The CNI plugin comes with a built-in network configuration file for the plugin. Use the following example to copy the file to the network configuration directory: +1. The CNI plugin comes with a built-in network configuration file for the plugin. Use the following example to copy the file to the network configuration directory: ```powershell Copy-Item -Path "c:\k\azurecni\bin\10-azure.conflist" -Destination "c:\k\azurecni\netconf" The script that creates the containers with the Azure CNI plugin requires the ap 1. Open a web browser in the virtual machine and download the **jq** application. -2. The download is a self-contained executable for the application. Copy the executable **`jq-win64.exe`** to the **`C:\Windows`** directory. +1. The download is a self-contained executable for the application. Copy the executable **`jq-win64.exe`** to the **`C:\Windows`** directory. ## Create test container -1. 
To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example will create a Windows Server container with the CNI plugin script: +1. To start a container with the CNI plugin, you must use a special script that comes with the plugin to create and start the container. The following example creates a Windows Server container with the CNI plugin script: ```powershell cd .\azure-container-networking\azure-container-networking-master\scripts\ .\docker-exec.ps1 vnetdocker1 default mcr.microsoft.com/windows/servercore/iis add ``` - It can take a few minutes for the image for the container to download for the first time. When the container starts and initializes the network, the Bastion connection will disconnect. Wait a few seconds and the connection will reestablish. + It can take a few minutes for the image for the container to download for the first time. When the container starts and initializes the network, the Bastion connection disconnects. Wait a few seconds and the connection reestablishes. -2. To verify that the container received the IP address you previously configured, connect to the container and view the IP: +1. To verify that the container received the IP address you previously configured, connect to the container and view the IP: ```powershell docker exec -it vnetdocker1 powershell ``` -3. Use the **`ipconfig`** command in the following example to verify the IP address was assigned to the container: +1. Use the **`ipconfig`** command in the following example to verify the IP address was assigned to the container: ```powershell ipconfig ``` :::image type="content" source="./media/deploy-container-networking-docker-windows/ipconfig-output.png" alt-text="Screenshot of ipconfig output in PowerShell prompt of test container."::: -4. Exit the container and close the Bastion connection to **myVM**.
--## Clean up resources --If you're not going to continue to use this application, delete the virtual network and virtual machine with the following steps: --1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results. --2. Select **myResourceGroup**. --3. In the **Overview** of **myResourceGroup**, select **Delete resource group**. --4. In **TYPE THE RESOURCE GROUP NAME:**, enter **myResourceGroup**. +1. Exit the container and close the Bastion connection to **vm-1**. -5. Select **Delete**. ## Next steps |
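The Windows article above asks for both a **10.0.0.0/24** subnet and a dotted-decimal subnet mask of **255.255.255.0** in the TCP/IPv4 dialog. As a cross-check (a sketch added here, not part of the article), the mask is just the /24 prefix length written out octet by octet:

```bash
# Sketch: convert a /24 prefix length to the dotted-decimal mask
# (255.255.255.0) that the Windows TCP/IPv4 properties dialog expects.
prefix=24
mask=""
for octet in 0 1 2 3; do
  bits=$(( prefix - 8 * octet ))          # prefix bits falling in this octet
  if [ "$bits" -gt 8 ]; then bits=8; fi   # octet fully covered by the prefix
  if [ "$bits" -lt 0 ]; then bits=0; fi   # octet fully outside the prefix
  mask="${mask}$(( 256 - (1 << (8 - bits)) ))."
done
mask="${mask%.}"
echo "$mask"
```

The same conversion explains why the secondary address **10.0.0.5** shares the subnet mask of the primary **10.0.0.4**: both sit in the one /24 subnet of the virtual network.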
virtual-network | How To Create Encryption Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-create-encryption-portal.md | Azure Virtual Network encryption is a feature of Azure Virtual Network. Virtual - An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -## Create a virtual network --In this section, you create a virtual network and enable virtual network encryption. --1. Sign in to the [Azure portal](https://portal.azure.com/). --1. In the search box at the top of the portal, begin typing **Virtual networks**. When **Virtual networks** appears in the search results, select it. --1. In **Virtual networks**, select **+ Create**. --1. Enter or select the following information in the **Basics** tab of **Create virtual network**: -- | Setting | Value | - | - | -- | - | **Project details** | | - | **Subscription** | Select your subscription. | - | **Resource group** | Select **Create new**, then enter **test-rg** in **Name**. Select **OK**. | - | **Instance details** | | - | Virtual network name | Enter **vnet-1**. | - | Region | Select **(US) East US 2**. | --1. Select **Review + create**. --1. Select **Create**. > [!IMPORTANT] > Azure Virtual Network encryption requires supported virtual machine SKUs in the virtual network for traffic to be encrypted. The setting **dropUnencrypted** will drop traffic between unsupported virtual machine SKUs if they are deployed in the virtual network. For more information, see [Azure Virtual Network encryption requirements](virtual-network-encryption-overview.md#requirements). |
virtual-network | Add Dual Stack Ipv6 Vm Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-cli.md | Title: Add a dual-stack network to an existing virtual machine - Azure CLI description: Learn how to add a dual-stack network to an existing virtual machine using the Azure CLI.--++ Previously updated : 08/24/2022 Last updated : 08/24/2023 ms.devlang: azurecli # Add a dual-stack network to an existing virtual machine using the Azure CLI -In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address. +In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address. ## Prerequisites In this article, you'll add IPv6 support to an existing virtual network. You'll ## Add IPv6 to virtual network -In this section, you'll add an IPv6 address space and subnet to your existing virtual network. +In this section, you add an IPv6 address space and subnet to your existing virtual network. Use [az network vnet update](/cli/azure/network/vnet#az-network-vnet-update) to update the virtual network. az network vnet subnet update \ ## Create IPv6 public IP address -In this section, you'll create a IPv6 public IP address for the virtual machine. +In this section, you create an IPv6 public IP address for the virtual machine. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create the public IP address. |
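The hunk above names the commands (`az network vnet update`, `az network vnet subnet update`, `az network public-ip create`) without showing their arguments. As an illustrative sketch only — the resource names and the IPv6 prefix below are placeholders, not values from the article, and the commands require an authenticated Azure CLI session — the three steps look roughly like:

```bash
# Placeholder names throughout; adjust to your own resources.
# 1. Add an IPv6 address space to the existing virtual network:
az network vnet update \
    --resource-group myResourceGroup \
    --name myVNet \
    --address-prefixes 10.0.0.0/16 2404:f800:8000:122::/63

# 2. Make the subnet dual stack by adding an IPv6 prefix alongside IPv4:
az network vnet subnet update \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --name myBackendSubnet \
    --address-prefixes 10.0.0.0/24 2404:f800:8000:122::/64

# 3. Create the IPv6 public IP address for the virtual machine:
az network public-ip create \
    --resource-group myResourceGroup \
    --name myPublicIP-IPv6 \
    --sku Standard \
    --version IPv6
```

Note that `--address-prefixes` replaces the full prefix list, so the existing IPv4 ranges are repeated alongside the new IPv6 ones.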
virtual-network | Add Dual Stack Ipv6 Vm Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-portal.md | Title: Add a dual-stack network to an existing virtual machine - Azure portal description: Learn how to add a dual stack network to an existing virtual machine using the Azure portal.--++ Previously updated : 08/19/2022 Last updated : 08/24/2023 # Add a dual-stack network to an existing virtual machine using the Azure portal -In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address. +In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address. ## Prerequisites In this article, you'll add IPv6 support to an existing virtual network. You'll ## Add IPv6 to virtual network -In this section, you'll add an IPv6 address space and subnet to your existing virtual network. +In this section, you add an IPv6 address space and subnet to your existing virtual network. 1. Sign in to the [Azure portal](https://portal.azure.com). In this section, you'll add an IPv6 address space and subnet to your existing vi ## Create IPv6 public IP address -In this section, you'll create a IPv6 public IP address for the virtual machine. +In this section, you create an IPv6 public IP address for the virtual machine. 1. In the search box at the top of the portal, enter **Public IP address**. Select **Public IP addresses** in the search results.
In this section, you'll create a IPv6 public IP address for the virtual machine. ## Add IPv6 configuration to virtual machine -The virtual machine must be stopped to add the IPv6 configuration to the existing virtual machine. You'll stop the virtual machine and add the IPv6 configuration to the existing virtual machine's network interface. +The virtual machine must be stopped to add the IPv6 configuration to the existing virtual machine. You stop the virtual machine and add the IPv6 configuration to the existing virtual machine's network interface. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. |
virtual-network | Add Dual Stack Ipv6 Vm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-powershell.md | Title: Add a dual-stack network to an existing virtual machine - Azure PowerShell description: Learn how to add a dual-stack network to an existing virtual machine using Azure PowerShell.--++ Previously updated : 08/24/2022 Last updated : 08/24/2023 # Add a dual-stack network to an existing virtual machine using Azure PowerShell -In this article, you'll add IPv6 support to an existing virtual network. You'll configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network will support private IPv6 addresses. The existing virtual machine network configuration will contain a public and private IPv4 and IPv6 address. +In this article, you add IPv6 support to an existing virtual network. You configure an existing virtual machine with both IPv4 and IPv6 addresses. When completed, the existing virtual network supports private IPv6 addresses. The existing virtual machine network configuration contains a public and private IPv4 and IPv6 address. ## Prerequisites If you choose to install and use PowerShell locally, this article requires the A ## Add IPv6 to virtual network -In this section, you'll add an IPv6 address space and subnet to your existing virtual network. +In this section, you add an IPv6 address space and subnet to your existing virtual network. Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the virtual network. Set-AzVirtualNetwork -VirtualNetwork $vnet ## Create IPv6 public IP address -In this section, you'll create a IPv6 public IP address for the virtual machine. +In this section, you create an IPv6 public IP address for the virtual machine. Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create the public IP address. |
virtual-network | Associate Public Ip Address Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/associate-public-ip-address-vm.md | Title: Associate a public IP address to a virtual machine description: Learn how to associate a public IP address to a virtual machine (VM) by using the Azure portal, the Azure CLI, or Azure PowerShell. -+ Previously updated : 03/17/2023- Last updated : 08/24/2023+ |
virtual-network | Configure Public Ip Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-application-gateway.md | Title: Manage a public IP address with an Azure Application Gateway description: Learn about the ways a public IP address is used with an Azure Application Gateway and how to change and manage the configuration.--++ Previously updated : 06/28/2021 Last updated : 08/24/2023 Azure Application Gateway is a web traffic load balancer that manages traffic to An Application Gateway frontend can be a private IP address, public IP address, or both. The V1 SKU of Application Gateway supports basic dynamic public IPs. The V2 SKU supports standard SKU public IPs that are static only. Application Gateway V2 SKU doesn't support an internal IP address as its only frontend. For more information, see [Application Gateway frontend IP address configuration](../../application-gateway/configuration-frontend-ip.md). -In this article, you'll learn how to create an Application Gateway using an existing public IP in your subscription. +In this article, you learn how to create an Application Gateway using an existing public IP in your subscription. ## Prerequisites In this article, you'll learn how to create an Application Gateway using an exis ## Create Application Gateway existing public IP -In this section, you'll create an Application Gateway resource. You'll select the IP address you created in the prerequisites as the public IP for the Application Gateway. +In this section, you create an Application Gateway resource. You select the IP address you created in the prerequisites as the public IP for the Application Gateway. 1. Sign in to the [Azure portal](https://portal.azure.com). |
virtual-network | Configure Public Ip Bastion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-bastion.md | Title: Manage a public IP address with Azure Bastion description: Learn about the ways a public IP address is used with Azure Bastion and how to change the configuration.--++ Previously updated : 06/28/2021 Last updated : 08/24/2023 Azure Bastion is deployed to provide secure management connectivity to virtual m An Azure Bastion host requires a public IP address for its configuration. -In this article, you'll learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support the change of the public IP address after creation. Azure Bastion doesn't support public IP prefixes. +In this article, you learn how to create an Azure Bastion host using an existing public IP in your subscription. Azure Bastion doesn't support the change of the public IP address after creation. Azure Bastion doesn't support public IP prefixes. >[!NOTE] >[!INCLUDE [Pricing](../../../includes/bastion-pricing.md)] In this article, you'll learn how to create an Azure Bastion host using an exist ## Create Azure Bastion using existing IP -In this section, you'll create an Azure Bastion host. You'll select the IP address you created in the prerequisites as the public IP for bastion host. +In this section, you create an Azure Bastion host. You select the IP address you created in the prerequisites as the public IP for the bastion host. 1. Sign in to the [Azure portal](https://portal.azure.com). |
virtual-network | Configure Public Ip Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-firewall.md | Title: Manage a public IP address by using Azure Firewall description: Learn about the ways a public IP address is used with Azure Firewall and how to change the configuration.--++ Previously updated : 03/28/2023 Last updated : 08/24/2023 |
virtual-network | Configure Public Ip Load Balancer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-load-balancer.md | Title: Manage a public IP address with a load balancer description: Learn about the ways a public IP address is used with an Azure Load Balancer and how to change the configuration.--++ Previously updated : 12/15/2022 Last updated : 08/24/2023 Finally, the article reviews unique aspects of using public IPs and public IP pr ## Create load balancer using existing public IP -In this section, you'll create a standard SKU load balancer. You'll select the IP address you created in the prerequisites as the frontend IP of the load balancer. +In this section, you create a standard SKU load balancer. You select the IP address you created in the prerequisites as the frontend IP of the load balancer. 1. Sign in to the [Azure portal](https://portal.azure.com). In this section, you'll create a standard SKU load balancer. You'll select the I ## Change or remove public IP address -In this section, you'll change the frontend IP address of the load balancer. +In this section, you change the frontend IP address of the load balancer. An Azure Load Balancer must have an IP address associated with a frontend. A separate public IP address can be utilized as a frontend for ingress and egress traffic. -To change the IP, you'll associate a new public IP address previously created with the load balancer frontend. +To change the IP, you associate a new public IP address previously created with the load balancer frontend. 1. Sign in to the [Azure portal](https://portal.azure.com). Standard load balancer supports outbound rules for Source Network Address Transl Multiple IPs avoid SNAT port exhaustion. Each Frontend IP provides 64,000 ephemeral ports that the load balancer can use. For more information, see [Outbound Rules](../../load-balancer/outbound-rules.md). 
-In this section, you'll change the frontend configuration used for outbound connections to use a public IP prefix. +In this section, you change the frontend configuration used for outbound connections to use a public IP prefix. 1. Sign in to the [Azure portal](https://portal.azure.com). In this section, you'll change the frontend configuration used for outbound conn * Cross-region load balancers are a special type of standard public load balancer that can span multiple regions. The frontend of a cross-region load balancer can only be used with the global tier option of standard SKU public IPs. Traffic sent to the frontend IP of a cross-region load balancer is distributed across the regional public load balancers. The regional frontend IPs are contained in the backend pool of the cross-region load balancer. For more information, see [Cross-region load balancer](../../load-balancer/cross-region-overview.md). -* By default, a public load balancer won't allow you to use multiple load-balancing rules with the same backend port. If a multiple rule configuration to the same backend port is required, then enable the floating IP option for a load-balancing rule. This setting overwrites the destination IP address of the traffic sent to the backend pool. Without floating IP enabled, the destination will be the backend pool private IP. With floating IP enabled, the destination IP will be the load balancer frontend public IP. The backend instance must have this public IP configured in its network configuration to correctly receive this traffic. A loopback interface with the frontend IP address must be configured in the instance. For more information, see [Azure Load Balancer Floating IP configuration](../../load-balancer/load-balancer-floating-ip.md). +* By default, a public load balancer can't use multiple load-balancing rules with the same backend port. 
If a multiple rule configuration to the same backend port is required, then enable the floating IP option for a load-balancing rule. This setting overwrites the destination IP address of the traffic sent to the backend pool. Without floating IP enabled, the destination is the backend pool private IP. With floating IP enabled, the destination IP is the load balancer frontend public IP. The backend instance must have this public IP configured in its network configuration to correctly receive this traffic. A loopback interface with the frontend IP address must be configured in the instance. For more information, see [Azure Load Balancer Floating IP configuration](../../load-balancer/load-balancer-floating-ip.md). * With a load balancer setup, members of backend pool can often also be assigned instance-level public IPs. With this architecture, sending traffic directly to these IPs bypasses the load balancer. |
virtual-network | Configure Public Ip Nat Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-nat-gateway.md | Title: Manage a public IP address with a NAT gateway description: Learn about the ways a public IP address is used with an Azure Virtual Network NAT gateway and how to change the configuration.--++ Previously updated : 12/15/2022 Last updated : 08/24/2023 In this article, you learn how to: ## Create NAT gateway using existing public IP -In this section, you'll create a NAT gateway resource. You'll select the IP address you created in the prerequisites as the public IP for the NAT gateway. +In this section, you create a NAT gateway resource. You select the IP address you created in the prerequisites as the public IP for the NAT gateway. 1. Sign in to the [Azure portal](https://portal.azure.com). In this section, you'll create a NAT gateway resource. You'll select the IP addr ## Change or remove public IP address -In this section, you'll change the IP address of the NAT gateway. +In this section, you change the IP address of the NAT gateway. -To change the IP, you'll associate a new public IP address created previously with the NAT gateway. A NAT gateway must have at least one IP address assigned. +To change the IP, you associate a new public IP address created previously with the NAT gateway. A NAT gateway must have at least one IP address assigned. 1. Sign in to the [Azure portal](https://portal.azure.com). Public IP prefixes extend the extensibility of SNAT for outbound connections fro > [!NOTE] > When assigning a public IP prefix to a NAT gateway, the entire range will be used. -In this section, you'll change the outbound IP configuration to use a public IP prefix you created previously. +In this section, you change the outbound IP configuration to use a public IP prefix you created previously. 
> [!NOTE] > You can choose to remove the single IP address associated with the NAT gateway and reuse, or leave it associated to the NAT gateway to increase the outbound SNAT ports. NAT gateway supports a combination of public IPs and prefixes in the outbound IP configuration. If you created a public IP prefix with 16 addresses, remove the single public IP. The number of allocated IPs can't exceed 16. |
virtual-network | Configure Public Ip Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-vm.md | Title: Manage a public IP address with an Azure Virtual Machine description: Learn about the ways a public IP address is used with Azure Virtual Machines and how to change the configuration.--++ Previously updated : 06/28/2021 Last updated : 08/24/2023 |
virtual-network | Configure Public Ip Vpn Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-public-ip-vpn-gateway.md | Title: Manage a public IP address with a VPN gateway description: Learn about the ways a public IP address is used with a VPN gateway and how to change the configuration.--++ |
virtual-network | Configure Routing Preference Virtual Machine Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-routing-preference-virtual-machine-cli.md | Title: 'Tutorial: Configure routing preference for a VM - Azure CLI' -description: In this tutorial, learn how to create a VM with a public IP address with routing preference choice using the Azure CLI. --+description: In this tutorial, learn how to configure routing preference for a VM using a public IP address with the Azure CLI. ++ Previously updated : 10/01/2021 Last updated : 08/24/2023 ms.devlang: azurecli |
virtual-network | Configure Routing Preference Virtual Machine Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-routing-preference-virtual-machine-powershell.md | Title: 'Tutorial: Configure routing preference for a VM - Azure PowerShell' -description: In this tutorial, learn how to create a VM with a public IP address with routing preference choice using Azure PowerShell. --+description: In this tutorial, learn how to configure routing preference for a VM using a public IP address with Azure PowerShell. ++ Previously updated : 10/01/2021 Last updated : 08/24/2023 |
virtual-network | Create Custom Ip Address Prefix Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md | Title: Create a custom IPv4 address prefix - Azure CLI -description: Learn about how to create a custom IP address prefix using the Azure CLI -+description: Learn how to create a custom IP address prefix using the Azure CLI ++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv4 address prefix using the Azure CLI |
virtual-network | Create Custom Ip Address Prefix Ipv6 Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-cli.md | Title: Create a custom IPv6 address prefix - Azure CLI -description: Learn about how to create a custom IPv6 address prefix using Azure CLI -+description: Learn how to create a custom IPv6 address prefix using Azure CLI ++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv6 address prefix using Azure CLI -A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. +A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. You continue to own the range, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. The steps in this article detail the process to: The steps in this article detail the process to: * Provision the range for IP allocation -* Enable the range to be advertised by Microsoft +* Commission the IPv6 prefixes to advertise the range to the Internet ## Differences between using BYOIPv4 and BYOIPv6 > [!IMPORTANT] > Onboarded custom IPv6 address prefixes have several unique attributes which make them different than custom IPv4 address prefixes. -* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). 
Global ranges must be /48 in size, while regional ranges must always be /64 size. +* Custom IPv6 prefixes use a *parent*/*child* model. In this model, the Microsoft Wide Area Network (WAN) advertises the global (parent) range, and the respective Azure regions advertise the regional (child) ranges. Global ranges must be /48 in size, while regional ranges must always be /64 in size. You can have multiple /64 ranges per region. * Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes. -* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error. +* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes beyond this space results in an error. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.37 or later of the Azure CLI (you can run az version to determine which you have). If using Azure Cloud Shell, the latest version is already installed. - Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`.-- A customer owned IPv6 range to provision in Azure. 
A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.- - A sample customer range (2a05:f500:2::/48) is used for this example. This range won't be validated by Azure. Replace the example range with yours. +- A customer owned IPv6 range to provision in Azure. + - In this example, a sample customer range (2a05:f500:2::/48) is used. This range won't be validated by Azure. Replace the example range with yours. > [!NOTE] > For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs). ## Pre-provisioning steps -To utilize the Azure BYOIP feature, you must perform and number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range. +To utilize the Azure BYOIP feature, you must perform preparation steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-cli.md#pre-provisioning-steps) for details. All these steps should be completed for the IPv6 global (parent) range. ## Provisioning for IPv6 The following command creates a custom IP prefix in the specified region and res ### Provision a regional custom IPv6 address prefix -After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they are created in. 
Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.) +After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The *children* custom IP prefixes are advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges are advertised from a specific region, zones can be utilized.) ```azurecli-interactive az network custom-ip prefix create \ After the global custom IP prefix is in a **Provisioned** state, regional custom --zone 1 2 3 ``` -Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised. +Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised. > [!IMPORTANT] > Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range. 
-### Commission the custom IPv6 address prefixes +## Commission the custom IPv6 address prefixes When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix. az network custom-ip prefix update \ > [!NOTE] > The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes. -It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned). +It's possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes. Doing this advertises the global range to the Internet before the regional prefixes are ready, so it's not recommended for migrations of active ranges. You can decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned). > [!IMPORTANT] > As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. 
Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact. |
virtual-network | Create Custom Ip Address Prefix Ipv6 Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-portal.md | Title: Create a custom IPv6 address prefix - Azure portal -description: Learn about how to onboard a custom IPv6 address prefix using the Azure portal -+description: Learn how to onboard a custom IPv6 address prefix using the Azure portal ++ Previously updated : 05/03/2022- Last updated : 08/24/2023 # Create a custom IPv6 address prefix using the Azure portal The steps in this article detail the process to: > [!IMPORTANT] > Onboarded custom IPv6 address prefixes have several unique attributes which make them different than custom IPv4 address prefixes. -* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 size. +* Custom IPv6 prefixes use a *parent*/*child* model. In this model, the Microsoft Wide Area Network (WAN) advertises the global (parent) range, and the respective Azure regions advertise the regional (child) ranges. Global ranges must be /48 in size, while regional ranges must always be /64 in size. You can have multiple /64 ranges per region. * Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes. The steps in this article detail the process to: ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).- A customer owned IPv6 range to provision in Azure. 
A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.+- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but won't be validated by Azure; you need to replace the example range with yours. > [!NOTE] > For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs). ## Pre-provisioning steps -To utilize the Azure BYOIP feature, you must perform and number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-portal.md#pre-provisioning-steps) for details. Note that all these steps should be completed for the IPv6 global (parent) range. +To utilize the Azure BYOIP feature, you must perform a number of steps prior to the provisioning of your IPv6 address range. Refer to the [IPv4 instructions](create-custom-ip-address-prefix-portal.md#pre-provisioning-steps) for details. All these steps should be completed for the IPv6 global (parent) range. ## Provisioning for IPv6 Sign in to the [Azure portal](https://portal.azure.com). 6. Select **Create**. -The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. +The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. ### Provision a regional custom IPv6 address prefix -After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. 
The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they are created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.) +After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.) In the same **Create a custom IP prefix** page as before, enter or select the following information: In the same **Create a custom IP prefix** page as before, enter or select the fo | Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. | | Availability Zones | Select **Zone-redundant**. | -Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised. 
+Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised. > [!IMPORTANT] > Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range. ### Commission the custom IPv6 address prefixes -When commissioning custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix. +When you commission custom IPv6 prefixes, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IPv6 prefix isn't connected to commissioning the global custom IPv6 prefix. :::image type="content" source="./media/create-custom-ip-address-prefix-ipv6/any-region-prefix.png" alt-text="Diagram of custom IPv6 prefix showing parent prefix and child prefixes across multiple regions."::: To commission a custom IPv6 prefix (regional or global) using the portal: 3. In **Custom IP Prefixes**, select the desired custom IPv6 prefix. -4. In **Overview** page of the custom IPv6 prefix, select the **Commission** button near the top of the screen. If the range is global it will begin advertising from the Microsoft WAN. If the range is regional it will advertise only from the specific region. +4. On the **Overview** page of the custom IPv6 prefix, select the **Commission** button near the top of the screen. If the range is global, it begins advertising from the Microsoft WAN. If the range is regional, it advertises only from the specific region. Using the example ranges above, the sequence would be to first commission myCustomIPv6RegionalPrefix, followed by a commission of myCustomIPv6GlobalPrefix.
> [!NOTE] > The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes. -It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned). +It's possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes. Doing this advertises the global range to the Internet before the regional prefixes are ready, so it's not recommended for migrations of active ranges. You can decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned). > [!IMPORTANT] > As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss, for example, from a customer's on-premises network. Plan any migration of an active range during a maintenance period to avoid impact. |
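The portal-based commissioning sequence described in this article can also be scripted. A minimal Azure PowerShell sketch of the regional-then-global order, assuming the example prefix names from this article and a hypothetical resource group named `myResourceGroup`:

```azurepowershell-interactive
# Commission the regional (child) prefix first so the region can serve traffic.
$regionalPrefix = Get-AzCustomIpPrefix -Name myCustomIPv6RegionalPrefix -ResourceGroupName myResourceGroup
Update-AzCustomIpPrefix -ResourceId $regionalPrefix.Id -Commission

# Then commission the global (parent) prefix to start the WAN advertisement.
$globalPrefix = Get-AzCustomIpPrefix -Name myCustomIPv6GlobalPrefix -ResourceGroupName myResourceGroup
Update-AzCustomIpPrefix -ResourceId $globalPrefix.Id -Commission
```

This ordering keeps the regional advertisement in place before the global range becomes reachable from the Internet.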
virtual-network | Create Custom Ip Address Prefix Ipv6 Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-powershell.md | Title: Create a custom IPv6 address prefix - Azure PowerShell -description: Learn about how to create a custom IPv6 address prefix using Azure PowerShell -+description: Learn how to create a custom IPv6 address prefix using Azure PowerShell ++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv6 address prefix using Azure PowerShell -A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. +A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate them with your Azure subscription. You maintain ownership of the range while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. The steps in this article detail the process to: The steps in this article detail the process to: > [!IMPORTANT] > Onboarded custom IPv6 address prefixes have several unique attributes which make them different than custom IPv4 address prefixes. -* Custom IPv6 prefixes use a "parent"/"child" model, where the global (parent) range is advertised by the Microsoft Wide Area Network (WAN) and the regional (child) range(s) are advertised by their respective region(s). Global ranges must be /48 in size, while regional ranges must always be /64 size. +* Custom IPv6 prefixes use a *parent*/*child* model.
In this model, the Microsoft Wide Area Network (WAN) advertises the global (parent) range, and the respective Azure regions advertise the regional (child) ranges. Global ranges must be /48 in size, while regional ranges must always be /64 size. You can have multiple /64 ranges per region. * Only the global range needs to be validated using the steps detailed in the [Create Custom IP Address Prefix](create-custom-ip-address-prefix-portal.md) articles. The regional ranges are derived from the global range in a similar manner to the way public IP prefixes are derived from custom IP prefixes. -* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes that span beyond this will result in an error. +* Public IPv6 prefixes must be derived from the regional ranges. Only the first 2048 IPv6 addresses of each regional /64 custom IP prefix can be utilized as valid IPv6 space. Attempting to create public IPv6 prefixes beyond this space results in an error. ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell. - Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).-- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network". If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.-- A customer owned IPv6 range to provision in Azure. 
A sample customer range (2a05:f500:2::/48) is used for this example, but would not be validated by Azure; you will need to replace the example range with yours.+- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`. +- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but wouldn't be validated by Azure; you need to replace the example range with yours. If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure. $myCustomIPv6GlobalPrefix = New-AzCustomIPPrefix @prefix ### Provision a regional custom IPv6 address prefix -After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid.
The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.) ```azurepowershell-interactive $prefix =@{ $prefix =@{ } $myCustomIPv6RegionalPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3 ```-Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they are not yet being advertised. +Similar to IPv4 custom IP prefixes, after the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised. > [!IMPORTANT] > Public IPv6 prefixes derived from regional custom IPv6 prefixes can only utilize the first 2048 IPs of the /64 range. Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Commission > [!NOTE] > The estimated time to fully complete the commissioning process for a custom IPv6 global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IPv6 regional prefix is 30 minutes. 
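Because the commission operation runs in the background, it can help to poll the **Commissioned state** field from a script rather than watch it manually. A hedged sketch, reusing the `$myCustomIPv6GlobalPrefix` variable from the earlier examples and assuming the state surfaces as the `CommissionedState` property:

```azurepowershell-interactive
# Poll every five minutes until the prefix reports Commissioned.
# A global prefix can take 3-4 hours; a regional prefix about 30 minutes.
do {
    $state = (Get-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id).CommissionedState
    Write-Output "Commissioned state: $state"
    Start-Sleep -Seconds 300
} while ($state -ne 'Commissioned')
```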
-It is possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes; however, this will mean the global range is being advertised to the Internet before the regional prefixes are ready, so this is not recommended for migrations of active ranges. Additionally, it is possible to decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes or to decommission a regional custom IP prefix while the global prefix is still active (commissioned). +It's possible to commission the global custom IPv6 prefix prior to the regional custom IPv6 prefixes. Doing this advertises the global range to the Internet before the regional prefixes are ready, so it's not recommended for migrations of active ranges. You can decommission a global custom IPv6 prefix while there are still active (commissioned) regional custom IPv6 prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned). > [!IMPORTANT] > As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss, for example, from a customer's on-premises network. Plan any migration of an active range during a maintenance period to avoid impact. |
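To reverse an advertisement later, the same cmdlet supports a decommission switch, and the global and regional prefixes can be decommissioned independently of one another. A sketch, assuming the variable names from the earlier examples:

```azurepowershell-interactive
# Withdraw the global (WAN) advertisement; the regional prefix can stay commissioned.
Update-AzCustomIpPrefix -ResourceId $myCustomIPv6GlobalPrefix.Id -Decommission
```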
virtual-network | Create Custom Ip Address Prefix Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md | Title: Create a custom IPv4 address prefix - Azure portal -description: Learn about how to onboard a custom IP address prefix using the Azure portal -+description: Learn how to onboard a custom IP address prefix using the Azure portal ++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv4 address prefix using the Azure portal -A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. +A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate them with your Azure subscription. You maintain ownership of the range while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
The steps in this article detail the process to: To utilize the Azure BYOIP feature, you must perform the following steps prior t ### Requirements and prefix readiness -* The address range must be owned by you and registered under your name with the one of the 5 major Regional Internet Registries: +* The address range must be owned by you and registered under your name with one of the five major Regional Internet Registries: * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/) To utilize the Azure BYOIP feature, you must perform the following steps prior t * The address range must be no smaller than a /24 so it will be accepted by Internet Service Providers. -* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR will require the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR. +* A Route Origin Authorization (ROA) document that authorizes Microsoft to advertise the address range must be filled out by the customer on the appropriate Routing Internet Registry (RIR) website or via their API. The RIR requires the ROA to be digitally signed with the Resource Public Key Infrastructure (RPKI) of your RIR. For this ROA: Sign in to the [Azure portal](https://portal.azure.com). 6. Select **Create**. -The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. +The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous.
You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. > [!NOTE] > The estimated time to complete the provisioning process is 30 minutes. The range will be pushed to the Azure IP Deployment Pipeline. The deployment pro ## Create a public IP prefix from custom IP prefix -When you create a prefix, you must create static IP addresses from the prefix. In this section, you'll create a static IP address from the prefix you created earlier. +When you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier. 1. In the search box at the top of the portal, enter **Custom IP**. When you create a prefix, you must create static IP addresses from the prefix. I 6. Select **Review + create**, and then **Create** on the following page. -10. Repeat steps 1-5 to return to the **Overview** page for **myCustomIPPrefix**. You'll see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix). +10. Repeat steps 1-5 to return to the **Overview** page for **myCustomIPPrefix**. You see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix). ## Commission the custom IP address prefix The operation is asynchronous. You can check the status by reviewing the **Commi > The estimated time to fully complete the commissioning process is 3-4 hours. 
> [!IMPORTANT]-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact. To prevent these issues during initial deployment, you can choose the regional only commissioning option where your custom IP prefix will only be advertised within the Azure region it is deployed in. See [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md) for more information. +> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss, for example, from a customer's on-premises network. Plan any migration of an active range during a maintenance period to avoid impact. To prevent these issues during initial deployment, you can choose the regional only commissioning option where your custom IP prefix will only be advertised within the Azure region it is deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md). ## Next steps |
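The portal flow in this article for deriving a public IP prefix from **myCustomIPPrefix** also has an Azure PowerShell equivalent. A sketch, assuming the article's example names, a hypothetical resource group `myResourceGroup`, and an Az.Network version that supports the `-CustomIpPrefix` parameter:

```azurepowershell-interactive
# Derive a /29 public IP prefix from the provisioned custom IP prefix.
$customPrefix = Get-AzCustomIpPrefix -Name myCustomIPPrefix -ResourceGroupName myResourceGroup
New-AzPublicIpPrefix -Name myPublicIPPrefix -ResourceGroupName myResourceGroup `
    -Location $customPrefix.Location -PrefixLength 29 -CustomIpPrefix $customPrefix
```

Standard SKU public IP addresses can then be allocated from **myPublicIPPrefix** as described above.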
virtual-network | Create Custom Ip Address Prefix Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-powershell.md | Title: Create a custom IP address prefix - Azure PowerShell -description: Learn about how to create a custom IPv4 address prefix using Azure PowerShell -+description: Learn how to create a custom IPv4 address prefix using Azure PowerShell ++ Previously updated : 03/31/2022- Last updated : 08/24/2023 # Create a custom IPv4 address prefix using Azure PowerShell -A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. +A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate them with your Azure subscription. You maintain ownership of the range while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses. The steps in this article detail the process to: The steps in this article detail the process to: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure PowerShell installed locally or Azure Cloud Shell. - Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).-- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command Get-InstalledModule -Name "Az.Network".
If the module requires an update, use the command Update-Module -Name "Az.Network" if necessary.+- Ensure your `Az.Network` module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`. - A customer owned IPv4 range to provision in Azure. - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours. To utilize the Azure BYOIP feature, you must perform the following steps prior t ### Requirements and prefix readiness -* The address range must be owned by you and registered under your name with the one of the 5 major Regional Internet Registries: +* The address range must be owned by you and registered under your name with one of the five major Regional Internet Registries: * [American Registry for Internet Numbers (ARIN)](https://www.arin.net/) * [Réseaux IP Européens Network Coordination Centre (RIPE NCC)](https://www.ripe.net/) * [Asia Pacific Network Information Centre Regional Internet Registries (APNIC)](https://www.apnic.net/) $prefix =@{ $myCustomIpPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3 ``` -The range will be pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. To determine the status, execute the following command: +The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. To determine the status, execute the following command: ```azurepowershell-interactive Get-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id
> [!IMPORTANT]-> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss. For example, a customer on-premises building. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you could take advantage of the regional commissioning feature to put a custom IP prefix into a state where it is only advertised within the Azure region it is deployed in-- see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md) for more information. +> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time could potentially create BGP routing instability or traffic loss, for example, from a customer's on-premises network. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you could take advantage of the regional commissioning feature to put a custom IP prefix into a state where it is only advertised within the Azure region it is deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md). ## Next steps |
virtual-network | Create Public Ip Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-cli.md | Title: 'Quickstart: Create a public IP - Azure CLI' -description: Learn how to create a public IP using the Azure CLI +description: Learn how to create a public IP address using the Azure CLI -++ Previously updated : 10/01/2021- Last updated : 08/24/2023 ms.devlang: azurecli # Quickstart: Create a public IP address using the Azure CLI -In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic, and standard. Two tiers of public IP addresses are available: regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices. +In this quickstart, you learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices. [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] To create an IPv6 address, modify the **`--version`** parameter to **IPv6**. # [**Basic SKU**](#tab/create-public-ip-basic) -In this section, you'll create a basic IP. Basic public IPs don't support availability zones. +In this section, you create a basic IP. Basic public IPs don't support availability zones. Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a basic static public IPv4 address named **myBasicPublicIP** in **QuickStartCreateIP-rg**.
If it's acceptable for the IP address to change over time, **Dynamic** IP assign ## Create a zonal or no-zone IP address -In this section, you'll learn how to create a zonal or no-zone public IP address. +In this section, you learn how to create a zonal or no-zone public IP address. # [**Zonal**](#tab/create-public-ip-zonal) To create an IPv6 address, modify the **`--version`** parameter to **IPv6**. # [**Non-zonal**](#tab/create-public-ip-non-zonal) -In this section, you'll create a non-zonal IP address. +In this section, you create a non-zonal IP address. >[!NOTE] >The following command works for API version 2020-08-01 or later. For more information about the API version currently being used, please refer to [Resource Providers and Types](../../azure-resource-manager/management/resource-providers-and-types.md). Standard SKU static public IPv4 addresses support Routing Preference or the Glob # [**Routing Preference**](#tab/routing-preference) -By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user. +By default, the routing preference for public IP addresses is set to **Microsoft network**, which delivers traffic over Microsoft's global wide area network to the user. The selection of **Internet** minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate. |
virtual-network | Create Public Ip Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-portal.md | Title: 'Quickstart: Create a public IP address - Azure portal' description: In this quickstart, you learn how to create a public IP address for a Standard SKU and a Basic SKU. You also learn about routing preferences and tiers.--++ Previously updated : 03/24/2023 Last updated : 08/24/2023 |
virtual-network | Create Public Ip Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-powershell.md | Title: 'Quickstart: Create a public IP - PowerShell' description: In this quickstart, learn how to create a public IP using Azure PowerShell-++ Previously updated : 10/01/2021- Last updated : 08/24/2023 # Quickstart: Create a public IP address using PowerShell -In this quickstart, you'll learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic, and standard. Two tiers of public IP addresses are available: regional, and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices. +In this quickstart, you learn how to create an Azure public IP address. Public IP addresses in Azure are used for public connections to Azure resources. Public IP addresses are available in two SKUs: basic and standard. Two tiers of public IP addresses are available: regional and global. The routing preference of a public IP address is set when created. Internet routing and Microsoft Network routing are the available choices. ## Prerequisites New-AzResourceGroup @rg > >The following command works for Az.Network module version 4.5.0 or later. For more information about the PowerShell modules currently being used, please refer to the [PowerShellGet documentation](/powershell/module/powershellget/). -In this section, you'll create a public IP with zones. Public IP addresses can be zone-redundant or zonal. +In this section, you create a public IP with zones. Public IP addresses can be zone-redundant or zonal. Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a standard zone-redundant public IPv4 address named **myStandardPublicIP** in **QuickStartCreateIP-rg**.
New-AzPublicIpAddress @ip >[!NOTE] >Standard SKU public IP is recommended for production workloads. For more information about SKUs, see **[Public IP addresses](public-ip-addresses.md)**. -In this section, you'll create a basic IP. Basic public IPs don't support availability zones. +In this section, you create a basic IP. Basic public IPs don't support availability zones. Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a basic static public IPv4 address named **myBasicPublicIP** in **QuickStartCreateIP-rg**. If it's acceptable for the IP address to change over time, **Dynamic** IP assign ## Create a zonal or no-zone public IP address -In this section, you'll learn how to create a zonal or no-zone public IP address. +In this section, you learn how to create a zonal or no-zone public IP address. # [**Zonal**](#tab/create-public-ip-zonal) New-AzPublicIpAddress @ip # [**Non-zonal**](#tab/create-public-ip-non-zonal) -In this section, you'll create a non-zonal IP address. +In this section, you create a non-zonal IP address. >[!NOTE] >The following command works for Az.Network module version 4.5.0 or later. For more information about the PowerShell modules currently being used, please refer to the [PowerShellGet documentation](/powershell/module/powershellget/). Standard SKU static public IPv4 addresses support Routing Preference or the Glob # [**Routing Preference**](#tab/routing-preference) -By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user. +By default, the routing preference for public IP addresses is set to **Microsoft network**, which delivers traffic over Microsoft's global wide area network to the user. The selection of **Internet** minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate. |
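The zone variants discussed in this quickstart differ only in the `-Zone` argument to `New-AzPublicIpAddress`. A sketch of all three, using the article's resource group and hypothetical IP names (the location is an assumption):

```azurepowershell-interactive
$common = @{
    ResourceGroupName = 'QuickStartCreateIP-rg'
    Location          = 'eastus2'
    Sku               = 'Standard'
    AllocationMethod  = 'Static'
}
# Zone-redundant: advertised from all three availability zones.
New-AzPublicIpAddress @common -Name 'myZoneRedundantIP' -Zone 1,2,3
# Zonal: pinned to a single zone.
New-AzPublicIpAddress @common -Name 'myZonalIP' -Zone 2
# No-zone: omit -Zone for a non-zonal address.
New-AzPublicIpAddress @common -Name 'myNoZoneIP'
```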
virtual-network | Create Public Ip Prefix Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-cli.md | Title: 'Quickstart: Create a public IP address prefix - Azure CLI' description: Learn how to create a public IP address prefix using the Azure CLI. -++ Previously updated : 10/01/2021- Last updated : 08/24/2023 ms.devlang: azurecli Create a resource group with [az group create](/cli/azure/group#az-group-create) ## Create a public IP address prefix -In this section, you'll create a zone redundant, zonal, and non-zonal public IP prefix using Azure PowerShell. +In this section, you create a zone redundant, zonal, and non-zonal public IP prefix using the Azure CLI. The prefixes in the examples are: The removal of the **`--zone`** parameter is the default selection for standard -# [**Routing Preference Interent IPv4 prefix**](#tab/ipv4-routing-pref) +# [**Routing Preference Internet IPv4 prefix**](#tab/ipv4-routing-pref) To create an IPv4 public IP prefix with routing preference Internet, enter **RoutingPreference=Internet** in the **`--ip-tags`** parameter. The removal of the **`--zone`** parameter is the default selection for standard ## Create a static public IP address from a prefix -Once you create a prefix, you must create static IP addresses from the prefix. In this section, you'll create a static IP address from the prefix you created earlier. +Once you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier. Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) in the **myPublicIpPrefix** prefix. To create an IPv6 public IP prefix, enter **IPv6** in the **`--version`** paramet ## Delete a prefix -In this section, you'll learn how to delete a prefix. +In this section, you learn how to delete a prefix. 
To delete a public IP prefix, use [az network public-ip prefix delete](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-delete). |
virtual-network | Create Public Ip Prefix Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-portal.md | Title: 'Quickstart: Create a public IP address prefix - Azure portal' description: Learn how to create a public IP address prefix using the Azure portal. -++ Previously updated : 06/05/2023- Last updated : 08/24/2023 |
virtual-network | Create Public Ip Prefix Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-powershell.md | Title: 'Quickstart: Create a public IP address prefix - PowerShell' description: Learn how to create a public IP address prefix using PowerShell. -++ Previously updated : 10/01/2021- Last updated : 08/24/2023 New-AzResourceGroup @rg ## Create a public IP address prefix -In this section, you'll create a zone redundant, zonal, and non-zonal public IP prefix using Azure PowerShell. +In this section, you create a zone redundant, zonal, and non-zonal public IP prefix using Azure PowerShell. The prefixes in the examples are: The removal of the **`-Zone`** parameter is the default selection for standard p ## Create a static public IP address from a prefix -Once you create a prefix, you must create static IP addresses from the prefix. In this section, you'll create a static IP address from the prefix you created earlier. +Once you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier. Create a public IP address with [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) in the **myPublicIpPrefix** prefix. New-AzPublicIpAddress @ipv6 ## Delete a prefix -In this section, you'll learn how to delete a prefix. +In this section, you learn how to delete a prefix. To delete a public IP prefix, use [Remove-AzPublicIpPrefix](/powershell/module/az.network/remove-azpublicipprefix). |
virtual-network | Create Public Ip Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-template.md | Title: 'Quickstart: Create a public IP using a Resource Manager template' description: Learn how to create a public IP using a Resource Manager template -++ Previously updated : 10/01/2021- Last updated : 08/24/2023 For more information on resources this public IP can be associated to and the di ## Create standard SKU public IP with zones -In this section, you'll create a public IP with zones. Public IP addresses can be zone-redundant or zonal. +In this section, you create a public IP with zones. Public IP addresses can be zone-redundant or zonal. ### Zone redundant Template section to add: ## Create standard public IP without zones -In this section, you'll create a non-zonal IP address. +In this section, you create a non-zonal IP address. The code in this section creates a standard no-zone public IPv4 address named **myStandardPublicIP**. The code section is valid for all regions with or without [Availability Zones](../../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones). Template section to add: ## Create a basic public IP -In this section, you'll create a basic IP. Basic public IPs don't support availability zones. +In this section, you create a basic IP. Basic public IPs don't support availability zones. The code in this section creates a basic public IPv4 address named **myBasicPublicIP**. Standard SKU static public IPv4 addresses support Routing Preference or the Glob ### Routing preference -By default, the routing preference for public IP addresses is set to "Microsoft network", which delivers traffic over Microsoft's global wide area network to the user. +By default, the routing preference for public IP addresses is set to **Microsoft network**, which delivers traffic over Microsoft's global wide area network to the user. 
The selection of **Internet** minimizes travel on Microsoft's network, instead using the transit ISP network to deliver traffic at a cost-optimized rate. |
virtual-network | Create Vm Dual Stack Ipv6 Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md | Title: Create an Azure virtual machine with a dual-stack network - Azure CLI description: In this article, learn how to use the Azure CLI to create a virtual machine with a dual-stack virtual network in Azure.--++ Previously updated : 04/19/2023 Last updated : 08/24/2023 ms.devlang: azurecli |
virtual-network | Create Vm Dual Stack Ipv6 Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md | Title: Create an Azure virtual machine with a dual-stack network - Azure portal description: In this article, learn how to use the Azure portal to create a virtual machine with a dual-stack virtual network in Azure.--++ Previously updated : 08/17/2022 Last updated : 08/24/2023 |
virtual-network | Create Vm Dual Stack Ipv6 Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-powershell.md | Title: Create an Azure virtual machine with a dual-stack network - PowerShell description: In this article, learn how to use PowerShell to create a virtual machine with a dual-stack virtual network in Azure.--++ Previously updated : 08/15/2022 Last updated : 08/24/2023 |
virtual-network | Custom Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md | Title: Custom IP address prefix (BYOIP) description: Learn about what an Azure custom IP address prefix is and how it enables customers to utilize their own ranges in Azure. -++ Previously updated : 05/27/2023- Last updated : 08/24/2023 # Custom IP address prefix (BYOIP) |
virtual-network | Default Outbound Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md | Title: Default outbound access in Azure description: Learn about default outbound access in Azure. -++ Previously updated : 05/28/2023- Last updated : 08/24/2023 |
virtual-network | Ip Services Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ip-services-overview.md | Title: What is Azure Virtual Network IP Services? description: Overview of Azure Virtual Network IP Services. Learn how IP services work and how to use IP resources in Azure.--++ Last updated : 08/24/2023 Previously updated : 04/19/2023 |
virtual-network | Ipv6 Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-overview.md | Title: Overview of IPv6 for Azure Virtual Network description: IPv6 description of IPv6 endpoints and data paths in an Azure virtual network. -++ Last updated : 08/24/2023 Previously updated : 05/03/2023- # What is IPv6 for Azure Virtual Network? The current IPv6 for Azure Virtual Network release has the following limitations - While it's possible to create NSG rules for IPv4 and IPv6 within the same NSG, it isn't currently possible to combine an IPv4 subnet with an IPv6 subnet in the same rule when specifying IP prefixes. -- When using a dual stack configuration with a load balancer, health probes will not function for IPv6 if a Network Security Group is not active.+- When using a dual stack configuration with a load balancer, health probes won't function for IPv6 if a Network Security Group isn't active. - ICMPv6 isn't currently supported in Network Security Groups. - Azure Virtual WAN currently supports IPv4 traffic only. -- Azure Firewall doesn't currently support IPv6. It can operate in a dual stack VNet using only IPv4, but the firewall subnet must be IPv4-only.+- Azure Firewall doesn't currently support IPv6. It can operate in a dual stack virtual network using only IPv4, but the firewall subnet must be IPv4-only. ## Pricing |
virtual-network | Ipv6 Virtual Machine Scale Set | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/ipv6-virtual-machine-scale-set.md | Title: Deploy virtual machine scale sets with IPv6 in Azure description: This article shows how to deploy virtual machine scale sets with IPv6 in an Azure virtual network. -++ Last updated : 08/24/2023 Previously updated : 03/31/2020- # Deploy virtual machine scale sets with IPv6 in Azure -This article shows you how to deploy a dual stack (IPv4 + IPv6) Virtual Machine Scale Set with a dual stack external load balancer in an Azure virtual network. The process to create an IPv6-capable virtual machine scale set is nearly identical to the process for creating individual VMs described [here](../../load-balancer/ipv6-configure-standard-load-balancer-template-json.md). You'll start with the steps that are similar to ones described for individual VMs: +This article shows you how to deploy a dual stack (IPv4 + IPv6) Virtual Machine Scale Set with a dual stack external load balancer in an Azure virtual network. The process to create an IPv6-capable virtual machine scale set is nearly identical to the process for creating individual VMs described [here](../../load-balancer/ipv6-configure-standard-load-balancer-template-json.md). You start with the steps that are similar to ones described for individual VMs: 1. Create IPv4 and IPv6 Public IPs. 2. Create a dual stack load balancer. 3. Create network security group (NSG) rules. |
virtual-network | Manage Custom Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md | Title: Manage a custom IP address prefix description: Learn about custom IP address prefixes and how to manage and delete them. -++ Last updated : 08/24/2023 Previously updated : 05/27/2023-+ # Manage a custom IP address prefix |
virtual-network | Manage Public Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-public-ip-address-prefix.md | Title: Create, change, or delete an Azure public IP address prefix description: Learn about public IP address prefixes and how to create, change, or delete them.-++ Last updated : 08/24/2023 Previously updated : 03/30/2023- # Manage a public IP address prefix |
virtual-network | Monitor Public Ip Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip-reference.md | Title: Monitoring Public IP addresses data reference description: Important reference material needed when you monitor Public IP addresses -++ Last updated : 08/24/2023 - Previously updated : 06/29/2022 # Monitoring Public IP addresses data reference |
virtual-network | Monitor Public Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip.md | Title: Monitoring Public IP addresses description: Start here to learn how to monitor Public IP addresses--++ Last updated : 08/24/2023 Previously updated : 06/29/2022 # Monitoring Public IP addresses |
virtual-network | Private Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/private-ip-addresses.md | Title: Private IP addresses in Azure description: Learn about private IP addresses in Azure.-++ Last updated : 08/24/2023 Previously updated : 05/03/2023- # Private IP addresses |
virtual-network | Public Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md | Title: Azure Public IP address prefix description: Learn about what an Azure public IP address prefix is and how it can help you assign public IP addresses to your resources. -++ Last updated : 08/24/2023 Previously updated : 04/19/2023- # Public IP address prefix |
virtual-network | Public Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md | Title: Public IP addresses in Azure description: Learn about public IP addresses in Azure. - Previously updated : 05/28/2023-++ Last updated : 08/24/2023 # Public IP addresses |
virtual-network | Public Ip Basic Upgrade Guidance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md | Title: Upgrading a basic public IP address to standard SKU - Guidance description: Overview of upgrade options and guidance for migrating basic public IP to standard public IP for future basic public IP address retirement- - Previously updated : 05/28/2023++ Last updated : 08/24/2023 #customer-intent: As a cloud engineer with Basic public IP services, I need guidance and direction on migrating my workloads from Basic to Standard SKUs |
virtual-network | Public Ip Upgrade Classic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-classic.md | Title: Migrate a classic reserved IP address to a public IP address description: In this article, learn how to upgrade a classic deployment model reserved IP to an Azure Resource Manager public IP address.--++ Last updated : 08/24/2023 Previously updated : 05/20/2021 |
virtual-network | Public Ip Upgrade Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-cli.md | Title: 'Upgrade a public IP address - Azure CLI' description: In this article, learn how to upgrade a basic SKU public IP address using the Azure CLI.--++ Last updated : 08/24/2023 Previously updated : 10/28/2022 ms.devlang: azurecli |
virtual-network | Public Ip Upgrade Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-portal.md | Title: 'Upgrade a public IP address - Azure portal' description: In this article, you learn how to upgrade a basic SKU public IP address using the Azure portal.--++ Last updated : 08/24/2023 Previously updated : 10/28/2022 |
virtual-network | Public Ip Upgrade Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-powershell.md | Title: 'Upgrade a public IP address - Azure PowerShell' description: In this article, you learn how to upgrade a basic SKU public IP address using Azure PowerShell.--++ Last updated : 08/24/2023 Previously updated : 10/28/2022 |
virtual-network | Public Ip Upgrade Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-vm.md | Title: 'Upgrade public IP addresses attached to a VM from Basic to Standard' description: This article shows you how to upgrade a public IP address attached to a VM to a standard public IP address + Last updated : 08/24/2023 Previously updated : 06/01/2023- # Upgrade public IP addresses attached to VM from Basic to Standard There is no way to evaluate upgrading a Public IP without completing the action. Yes, the process of upgrading a Zonal Basic SKU Public IP to a Zonal Standard SKU Public IP is identical and works in the script. +## Use Resource Graph to list VMs with Public IPs requiring upgrade ++### Query to list virtual machines with Basic SKU public IP addresses ++This query returns a list of virtual machine IDs with Basic SKU public IP addresses attached. ++```kusto +Resources +| where type =~ 'microsoft.compute/virtualmachines' +| project vmId = tolower(id), vmNics = properties.networkProfile.networkInterfaces +| join ( + Resources | + where type =~ 'microsoft.network/networkinterfaces' | + project nicVMId = tolower(tostring(properties.virtualMachine.id)), allVMNicID = tolower(id), nicIPConfigs = properties.ipConfigurations) + on $left.vmId == $right.nicVMId +| join ( + Resources + | where type =~ 'microsoft.network/publicipaddresses' and isnotnull(properties.ipConfiguration.id) + | where sku.name == 'Basic' // exclude to find all VMs with Public IPs + | project pipId = id, pipSku = sku.name, pipAssociatedNicId = tolower(tostring(split(properties.ipConfiguration.id, '/ipConfigurations/')[0]))) + on $left.allVMNicID == $right.pipAssociatedNicId +| project vmId, pipId, pipSku +``` ++### [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az graph query -q "Resources | where type =~ 'microsoft.compute/virtualmachines' | project vmId = tolower(id), vmNics = 
properties.networkProfile.networkInterfaces | join (Resources | where type =~ 'microsoft.network/networkinterfaces' | project nicVMId = tolower(tostring(properties.virtualMachine.id)), allVMNicID = tolower(id), nicIPConfigs = properties.ipConfigurations) on \$left.vmId == \$right.nicVMId | join ( Resources | where type =~ 'microsoft.network/publicipaddresses' and isnotnull(properties.ipConfiguration.id) | where sku.name == 'Basic' | project pipId = id, pipSku = sku.name, pipAssociatedNicId = tolower(tostring(split(properties.ipConfiguration.id, '/ipConfigurations/')[0]))) on \$left.allVMNicID == \$right.pipAssociatedNicId | project vmId, pipId, pipSku" +``` ++### [Azure PowerShell](#tab/azure-powershell) ++```azurepowershell-interactive +Search-AzGraph -Query "Resources | where type =~ 'microsoft.compute/virtualmachines' | project vmId = tolower(id), vmNics = properties.networkProfile.networkInterfaces | join (Resources | where type =~ 'microsoft.network/networkinterfaces' | project nicVMId = tolower(tostring(properties.virtualMachine.id)), allVMNicID = tolower(id), nicIPConfigs = properties.ipConfigurations) on `$left.vmId == `$right.nicVMId | join ( Resources | where type =~ 'microsoft.network/publicipaddresses' and isnotnull(properties.ipConfiguration.id) | where sku.name == 'Basic' | project pipId = id, pipSku = sku.name, pipAssociatedNicId = tolower(tostring(split(properties.ipConfiguration.id, '/ipConfigurations/')[0]))) on `$left.allVMNicID == `$right.pipAssociatedNicId | project vmId, pipId, pipSku" +``` ++### [Portal](#tab/azure-portal) ++Try this query in Azure Resource Graph Explorer: ++- Azure portal: <a 
href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0A%7C%20project%20vmId%20%3D%20tolower%28id%29%2C%20vmNics%20%3D%20properties.networkProfile.networkInterfaces%0A%7C%20join%20%28%0A%20%20Resources%20%7C%0A%20%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%20%7C%0A%20%20project%20nicVMId%20%3D%20tolower%28tostring%28properties.virtualMachine.id%29%29%2C%20allVMNicID%20%3D%20tolower%28id%29%2C%20nicIPConfigs%20%3D%20properties.ipConfigurations%29%0A%20%20on%20%24left.vmId%20%3D%3D%20%24right.nicVMId%0A%7C%20join%20%28%0A%20%20Resources%0A%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%20and%20isnotnull%28properties.ipConfiguration.id%29%0A%20%20%7C%20where%20sku.name%20%3D%3D%20%27Basic%27%0A%20%20%7C%20project%20pipId%20%3D%20id%2C%20pipSku%20%3D%20sku.name%2C%20pipAssociatedNicId%20%3D%20tolower%28tostring%28split%28properties.ipConfiguration.id%2C%20%27%2FipConfigurations%2F%27%29%5B0%5D%29%29%29%0A%20%20on%20%24left.allVMNicID%20%3D%3D%20%24right.pipAssociatedNicId%0A%7C%20project%20vmId%2C%20pipId%2C%20pipSku" target="_blank">portal.azure.com</a> +- Azure Government portal: <a 
href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0A%7C%20project%20vmId%20%3D%20tolower%28id%29%2C%20vmNics%20%3D%20properties.networkProfile.networkInterfaces%0A%7C%20join%20%28%0A%20%20Resources%20%7C%0A%20%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%20%7C%0A%20%20project%20nicVMId%20%3D%20tolower%28tostring%28properties.virtualMachine.id%29%29%2C%20allVMNicID%20%3D%20tolower%28id%29%2C%20nicIPConfigs%20%3D%20properties.ipConfigurations%29%0A%20%20on%20%24left.vmId%20%3D%3D%20%24right.nicVMId%0A%7C%20join%20%28%0A%20%20Resources%0A%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%20and%20isnotnull%28properties.ipConfiguration.id%29%0A%20%20%7C%20where%20sku.name%20%3D%3D%20%27Basic%27%0A%20%20%7C%20project%20pipId%20%3D%20id%2C%20pipSku%20%3D%20sku.name%2C%20pipAssociatedNicId%20%3D%20tolower%28tostring%28split%28properties.ipConfiguration.id%2C%20%27%2FipConfigurations%2F%27%29%5B0%5D%29%29%29%0A%20%20on%20%24left.allVMNicID%20%3D%3D%20%24right.pipAssociatedNicId%0A%7C%20project%20vmId%2C%20pipId%2C%20pipSku" target="_blank">portal.azure.us</a> +- Microsoft Azure operated by 21Vianet portal: <a 
href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0A%7C%20where%20type%20%3D~%20%27microsoft.compute%2Fvirtualmachines%27%0A%7C%20project%20vmId%20%3D%20tolower%28id%29%2C%20vmNics%20%3D%20properties.networkProfile.networkInterfaces%0A%7C%20join%20%28%0A%20%20Resources%20%7C%0A%20%20where%20type%20%3D~%20%27microsoft.network%2Fnetworkinterfaces%27%20%7C%0A%20%20project%20nicVMId%20%3D%20tolower%28tostring%28properties.virtualMachine.id%29%29%2C%20allVMNicID%20%3D%20tolower%28id%29%2C%20nicIPConfigs%20%3D%20properties.ipConfigurations%29%0A%20%20on%20%24left.vmId%20%3D%3D%20%24right.nicVMId%0A%7C%20join%20%28%0A%20%20Resources%0A%20%20%7C%20where%20type%20%3D~%20%27microsoft.network%2Fpublicipaddresses%27%20and%20isnotnull%28properties.ipConfiguration.id%29%0A%20%20%7C%20where%20sku.name%20%3D%3D%20%27Basic%27%0A%20%20%7C%20project%20pipId%20%3D%20id%2C%20pipSku%20%3D%20sku.name%2C%20pipAssociatedNicId%20%3D%20tolower%28tostring%28split%28properties.ipConfiguration.id%2C%20%27%2FipConfigurations%2F%27%29%5B0%5D%29%29%29%0A%20%20on%20%24left.allVMNicID%20%3D%3D%20%24right.pipAssociatedNicId%0A%7C%20project%20vmId%2C%20pipId%2C%20pipSku" target="_blank">portal.azure.cn</a> ++ ## Next steps * [Upgrading a Basic public IP address to Standard SKU - Guidance](public-ip-basic-upgrade-guidance.md) |
virtual-network | Remove Public Ip Address Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/remove-public-ip-address-vm.md | Title: Dissociate a public IP address from an Azure VM description: Learn how to dissociate a public IP address from an Azure virtual machine (VM) using the Azure portal, Azure CLI or Azure PowerShell. -++ Last updated : 08/24/2023 Previously updated : 12/16/2022- |
virtual-network | Routing Preference Azure Kubernetes Service Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-azure-kubernetes-service-cli.md | Title: 'Tutorial: Configure routing preference for an Azure Kubernetes Service - Azure CLI' description: Use this tutorial to learn how to configure routing preference for an Azure Kubernetes Service.--++ Last updated : 08/24/2023 Previously updated : 10/01/2021 ms.devlang: azurecli |
virtual-network | Routing Preference Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-cli.md | Title: Configure routing preference for a public IP address using Azure CLI description: Learn how to create a public IP with an Internet traffic routing preference by using the Azure CLI. - Last updated : 08/24/2023++ Previously updated : 02/22/2021- # Configure routing preference for a public IP address using Azure CLI |
virtual-network | Routing Preference Mixed Network Adapter Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-mixed-network-adapter-portal.md | Title: 'Tutorial: Configure both routing preference options for a virtual machine - Azure portal' description: Use this tutorial to learn how to configure both routing preference options for a virtual machine using the Azure portal.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021 |
virtual-network | Routing Preference Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-overview.md | Title: Routing preference in Azure description: Learn about how you can choose how your traffic routes between Azure and the Internet with routing preference.- Last updated : 08/24/2023++ # Customer intent: As an Azure customer, I want to learn more about routing choices for my internet egress traffic. Previously updated : 05/08/2023- |
virtual-network | Routing Preference Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-portal.md | Title: Configure routing preference for a public IP address - Azure portal description: Learn how to create a public IP with an Internet traffic routing preference - Last updated : 08/24/2023++ Previously updated : 02/22/2021- # Configure routing preference for a public IP address using the Azure portal |
virtual-network | Routing Preference Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-powershell.md | Title: Configure routing preference for a public IP address - Azure PowerShell description: Learn how to Configure routing preference for a public IP address using Azure PowerShell. - Last updated : 08/24/2023++ Previously updated : 02/22/2021- |
virtual-network | Routing Preference Unmetered | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-unmetered.md | Title: What is routing preference unmetered? description: Learn about how you can configure routing preference for your resources egressing data to CDN provider.- Last updated : 08/24/2023++ # Customer intent: As an Azure customer, I want to learn more about enabling routing preference for my CDN origin resources. Previously updated : 05/08/2023- # What is routing preference unmetered? |
virtual-network | Tutorial Routing Preference Virtual Machine Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/tutorial-routing-preference-virtual-machine-portal.md | Title: 'Tutorial: Configure routing preference for a VM - Azure portal' description: In this tutorial, learn how to create a VM with a public IP address with routing preference choice using the Azure portal.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021 |
virtual-network | Virtual Network Deploy Static Pip Arm Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-cli.md | Title: Create a VM with a static public IP address - Azure CLI description: Create a virtual machine (VM) with a static public IP address using the Azure CLI. Static public IP addresses are addresses that never change.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021 ms.devlang: azurecli |
virtual-network | Virtual Network Deploy Static Pip Arm Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-portal.md | Title: Create a VM with a static public IP address - Azure portal description: Learn how to create a VM with a static public IP address using the Azure portal. - Last updated : 08/24/2023++ Previously updated : 12/16/2022- # Create a virtual machine with a static public IP address using the Azure portal |
virtual-network | Virtual Network Deploy Static Pip Arm Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-ps.md | Title: Create a VM with a static public IP address - Azure PowerShell description: Create a virtual machine (VM) with a static public IP address using Azure PowerShell. Static public IP addresses are addresses that never change.-- Last updated : 08/24/2023++ Previously updated : 10/01/2021 |
virtual-network | Virtual Network Multiple Ip Addresses Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md | Title: Assign multiple IP addresses to VMs - Azure CLI description: Learn how to create a virtual machine with multiple IP addresses using the Azure CLI. - Last updated : 08/24/2023++ Previously updated : 04/19/2023- # Assign multiple IP addresses to virtual machines using the Azure CLI |
virtual-network | Virtual Network Multiple Ip Addresses Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-portal.md | Title: Assign multiple IP addresses to VMs - Azure portal description: Learn how to assign multiple IP addresses to a virtual machine using the Azure portal. - Last updated : 08/24/2023++ Previously updated : 12/08/2022- # Assign multiple IP addresses to virtual machines using the Azure portal |
virtual-network | Virtual Network Multiple Ip Addresses Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-powershell.md | Title: Assign multiple IP addresses to VMs - Azure PowerShell description: Learn how to create a virtual machine with multiple IP addresses using Azure PowerShell. - Last updated : 08/24/2023++ Previously updated : 12/12/2022- |
virtual-network | Virtual Network Network Interface Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-network-interface-addresses.md | Title: Configure IP addresses for an Azure network interface description: Learn how to add, change, and remove private and public IP addresses for a network interface. - Last updated : 08/24/2023++ Previously updated : 12/06/2022- |
virtual-network | Virtual Network Public Ip Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-public-ip-address.md | Title: Create, change, or delete an Azure public IP address description: Manage public IP addresses. Learn how a public IP address is a resource with configurable settings. - Last updated : 08/24/2023++ Previously updated : 05/28/2023- |
virtual-network | Virtual Networks Static Private Ip Arm Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-cli.md | Title: 'Create a VM with a static private IP address - Azure CLI' description: Learn how to create a virtual machine with a static private IP address using the Azure CLI.-- Last updated : 08/24/2023++ Previously updated : 10/28/2022 ms.devlang: azurecli |
virtual-network | Virtual Networks Static Private Ip Arm Pportal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-pportal.md | Title: 'Create a VM with a static private IP address - Azure portal' description: Learn how to create a virtual machine with a static private IP address using the Azure portal.-- Last updated : 08/24/2023++ Previously updated : 03/17/2023 |
virtual-network | Virtual Networks Static Private Ip Arm Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-ps.md | Title: 'Create a VM with a static private IP address - Azure PowerShell' description: Learn how to create a virtual machine with a static private IP address using Azure PowerShell.-- Last updated : 08/24/2023++ Previously updated : 10/28/2022 |
virtual-network | Virtual Networks Static Private Ip Classic Pportal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-pportal.md | Title: Configure private IP addresses for VMs (Classic) - Azure portal description: Learn how to configure private IP addresses for virtual machines (Classic) using the Azure portal. - Last updated : 08/24/2023++ Previously updated : 03/22/2023- |
virtual-network | Virtual Networks Static Private Ip Classic Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-classic-ps.md | Title: Configure private IP addresses for VMs (Classic) - Azure PowerShell description: Learn how to configure private IP addresses for virtual machines (Classic) using PowerShell. - Last updated : 08/24/2023++ Previously updated : 03/22/2023- |
virtual-network | Manage Subnet Delegation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-subnet-delegation.md | Subnet delegation gives explicit permissions to the service to create service-sp ## Prerequisites +# [**Portal**](#tab/manage-subnet-delegation-portal) + - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions. +# [**PowerShell**](#tab/manage-subnet-delegation-powershell) -- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ++- If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions. - Azure PowerShell installed locally or Azure Cloud Shell. Subnet delegation gives explicit permissions to the service to create service-sp - Ensure your `Az.Network` module is 4.3.0 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name Az.Network` if necessary. 
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.+# [**Azure CLI**](#tab/manage-subnet-delegation-cli) -## Create the virtual network --In this section, you create a virtual network and the subnet that you'll later delegate to an Azure service. -# [**Portal**](#tab/manage-subnet-delegation-portal) --1. Sign-in to the [Azure portal](https://portal.azure.com). --1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. --1. Select **+ Create**. --1. Enter or select the following information in the **Basics** tab of **Create virtual network**: +- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - | Setting | Value | - | - | -- | - | **Project details** | | - | Subscription | Select your subscription. | - | Resource group | Select **Create new**. </br> Enter **myResourceGroup** in **Name**. </br> Select **OK**. | - | **Instance details** | | - | Name | Enter **myVNet**. | - | Region | Select **East US 2** | +- If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions. -1. Select **Next: Security**, then **Next: IP Addresses**. -1. Select **Add an IP address space**, in the **Add an IP address space** pane, enter or select the following information, then select **Add**. 
+- This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. - | Setting | Value | - | - | -- | - | Address space type | Leave as default **IPV6**. | - | Starting address | Enter **10.1.0.0**. | - | Address space size | Select **/16**. | + -1. Select **+ Add subnet** in the new IP address space. +## Create the virtual network -1. Enter or select the following information in **Add a subnet**. Then select **Add**. +In this section, you create a virtual network and the subnet that you delegate to an Azure service. - | Setting | Value | - | - | -- | - | Name | Enter **mySubnet**. | - | Starting address | Enter **10.1.0.0**. | - | Subnet size | Select **/16**. | +# [**Portal**](#tab/manage-subnet-delegation-portal) -1. Select **Review + create**, then select **Create**. # [**PowerShell**](#tab/manage-subnet-delegation-powershell) ### Create a resource group-Create a resource group with [New-AzResourceGroup](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed. -The following example creates a resource group named **myResourceGroup** in the **eastus2** location: +Create a resource group with [`New-AzResourceGroup`](/powershell/module/az.resources/new-azresourcegroup). An Azure resource group is a logical container into which Azure resources are deployed and managed. ++The following example creates a resource group named **test-rg** in the **eastus2** location: ```azurepowershell-interactive $rg = @{- Name = 'myResourceGroup' + Name = 'test-rg' Location = 'eastus2' } New-AzResourceGroup @rg ``` ### Create virtual network -Create a virtual network named **myVnet** with a subnet named **mySubnet** using [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) in the **myResourceGroup** using [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork).
+Create a virtual network named **vnet-1** with a subnet named **subnet-1** using [`New-AzVirtualNetworkSubnetConfig`](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) in the **test-rg** resource group using [`New-AzVirtualNetwork`](/powershell/module/az.network/new-azvirtualnetwork). -The IP address space for the virtual network is **10.1.0.0/16**. The subnet within the virtual network is **10.1.0.0/24**. +The IP address space for the virtual network is **10.0.0.0/16**. The subnet within the virtual network is **10.0.0.0/24**. ```azurepowershell-interactive $sub = @{- Name = 'mySubnet' - AddressPrefix = '10.1.0.0/24' + Name = 'subnet-1' + AddressPrefix = '10.0.0.0/24' } $subnet = New-AzVirtualNetworkSubnetConfig @sub $net = @{- Name = 'myVNet' - ResourceGroupName = 'myResourceGroup' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' Location = 'eastus2'- AddressPrefix = '10.1.0.0/16' + AddressPrefix = '10.0.0.0/16' Subnet = $subnet } New-AzVirtualNetwork @net New-AzVirtualNetwork @net ### Create a resource group -Create a resource group with [az group create](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed. +Create a resource group with [`az group create`](/cli/azure/group). An Azure resource group is a logical container into which Azure resources are deployed and managed. -The following example creates a resource group named **myResourceGroup** in the **eastu2** location: +The following example creates a resource group named **test-rg** in the **eastus2** location: ```azurecli-interactive az group create \- --name myResourceGroup \ + --name test-rg \ --location eastus2 ``` ### Create a virtual network-Create a virtual network named **myVnet** with a subnet named **mySubnet** in the **myResourceGroup** using [az network vnet create](/cli/azure/network/vnet).
++Create a virtual network named **vnet-1** with a subnet named **subnet-1** in the **test-rg** resource group using [`az network vnet create`](/cli/azure/network/vnet). ```azurecli-interactive az network vnet create \- --resource-group myResourceGroup \ + --resource-group test-rg \ --location eastus2 \- --name myVNet \ - --address-prefix 10.1.0.0/16 \ - --subnet-name mySubnet \ - --subnet-prefix 10.1.0.0/24 + --name vnet-1 \ + --address-prefix 10.0.0.0/16 \ + --subnet-name subnet-1 \ + --subnet-prefix 10.0.0.0/24 ``` In this section, you delegate the subnet that you created in the preceding secti 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -1. Select **myVNet**. +1. Select **vnet-1**. 1. Select **Subnets** in **Settings**. -1. Select **mySubnet**. +1. Select **subnet-1**. 1. Enter or select the following information: In this section, you delegate the subnet that you created in the preceding secti # [**PowerShell**](#tab/manage-subnet-delegation-powershell) -Use [Add-AzDelegation](/powershell/module/az.network/add-azdelegation) to update the subnet named **mySubnet** with a delegation named **myDelegation** to an Azure service. In this example **Microsoft.Sql/managedInstances** is used for the example delegation: +Use [`Add-AzDelegation`](/powershell/module/az.network/add-azdelegation) to update the subnet named **subnet-1** with a delegation named **myDelegation** to an Azure service.
In this example **Microsoft.Sql/managedInstances** is used for the example delegation: ```azurepowershell-interactive $net = @{- Name = 'myVNet' - ResourceGroupName = 'myResourceGroup' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } $vnet = Get-AzVirtualNetwork @net $sub = @{- Name = 'mySubnet' + Name = 'subnet-1' VirtualNetwork = $vnet } $subnet = Get-AzVirtualNetworkSubnetConfig @sub $subnet = Add-AzDelegation @del Set-AzVirtualNetwork -VirtualNetwork $vnet ```-Use [Get-AzDelegation](/powershell/module/az.network/get-azdelegation) to verify the delegation: +Use [`Get-AzDelegation`](/powershell/module/az.network/get-azdelegation) to verify the delegation: ```azurepowershell-interactive $sub = @{- Name = 'myVNet' - ResourceGroupName = 'myResourceGroup' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } -$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'mySubnet' +$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'subnet-1' $dg = @{ Name ='myDelegation' Get-AzDelegation @dg Actions : {Microsoft.Network/virtualNetworks/subnets/join/action} Name : myDelegation Etag : W/"9cba4b0e-2ceb-444b-b553-454f8da07d8a"- Id : /subscriptions/3bf09329-ca61-4fee-88cb-7e30b9ee305b/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/mySubnet/delegations/myDelegation + Id : /subscriptions/3bf09329-ca61-4fee-88cb-7e30b9ee305b/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1/subnets/subnet-1/delegations/myDelegation ``` # [**Azure CLI**](#tab/manage-subnet-delegation-cli) -Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to update the subnet named **mySubnet** with a delegation to an Azure service. 
In this example **Microsoft.Sql/managedInstances** is used for the example delegation: +Use [`az network vnet subnet update`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to update the subnet named **subnet-1** with a delegation to an Azure service. In this example **Microsoft.Sql/managedInstances** is used for the example delegation: ```azurecli-interactive az network vnet subnet update \- --resource-group myResourceGroup \ - --name mySubnet \ - --vnet-name myVNet \ + --resource-group test-rg \ + --name subnet-1 \ + --vnet-name vnet-1 \ --delegations Microsoft.Sql/managedInstances ``` -To verify the delegation was applied, use [az network vnet subnet show](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is delegated to the subnet in the property **serviceName**: +To verify the delegation was applied, use [`az network vnet subnet show`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is delegated to the subnet in the property **serviceName**: ```azurecli-interactive az network vnet subnet show \- --resource-group myResourceGroup \ - --name mySubnet \ - --vnet-name myVNet \ + --resource-group test-rg \ + --name subnet-1 \ + --vnet-name vnet-1 \ --query delegations ``` az network vnet subnet show \ "Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action" ], "etag": "W/\"30184721-8945-4e4f-9cc3-aa16b26589ac\"",- "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVNet/subnets/mySubnet/delegations/0", + "id": "/subscriptions/23250d6d-28f0-41dd-9776-61fc80805b6e/resourceGroups/test-rg/providers/Microsoft.Network/virtualNetworks/vnet-1/subnets/subnet-1/delegations/0", "name": "0", "provisioningState": "Succeeded",- "resourceGroup": "myResourceGroup", + "resourceGroup": "test-rg", "serviceName": "Microsoft.Sql/managedInstances", "type":
"Microsoft.Network/virtualNetworks/subnets/delegations" } az network vnet subnet show \ ## Remove subnet delegation from an Azure service -In this section, you'll remove a subnet delegation for an Azure service. +In this section, you remove a subnet delegation for an Azure service. # [**Portal**](#tab/manage-subnet-delegation-portal) In this section, you'll remove a subnet delegation for an Azure service. 1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -1. Select **myVNet**. +1. Select **vnet-1**. 1. Select **Subnets** in **Settings**. -1. Select **mySubnet**. +1. Select **subnet-1**. 1. Enter or select the following information: In this section, you'll remove a subnet delegation for an Azure service. # [**PowerShell**](#tab/manage-subnet-delegation-powershell) -Use [Remove-AzDelegation](/powershell/module/az.network/remove-azdelegation) to remove the delegation from the subnet named **mySubnet**: +Use [`Remove-AzDelegation`](/powershell/module/az.network/remove-azdelegation) to remove the delegation from the subnet named **subnet-1**: ```azurepowershell-interactive $net = @{- Name = 'myVNet' - ResourceGroupName = 'myResourceGroup' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } $vnet = Get-AzVirtualNetwork @net $sub = @{- Name = 'mySubnet' + Name = 'subnet-1' VirtualNetwork = $vnet } $subnet = Get-AzVirtualNetworkSubnetConfig @sub $subnet = Remove-AzDelegation @del Set-AzVirtualNetwork -VirtualNetwork $vnet ```-Use [Get-AzDelegation](/powershell/module/az.network/get-azdelegation) to verify the delegation was removed: +Use [`Get-AzDelegation`](/powershell/module/az.network/get-azdelegation) to verify the delegation was removed: ```azurepowershell-interactive $sub = @{- Name = 'myVNet' - ResourceGroupName = 'myResourceGroup' + Name = 'vnet-1' + ResourceGroupName = 'test-rg' } -$subnet = Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'mySubnet' +$subnet = 
Get-AzVirtualNetwork @sub | Get-AzVirtualNetworkSubnetConfig -Name 'subnet-1' $dg = @{ Name ='myDelegation' Get-AzDelegation: Sequence contains no matching element # [**Azure CLI**](#tab/manage-subnet-delegation-cli) -Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to remove the delegation from the subnet named **mySubnet**: +Use [`az network vnet subnet update`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to remove the delegation from the subnet named **subnet-1**: ```azurecli-interactive az network vnet subnet update \- --resource-group myResourceGroup \ - --name mySubnet \ - --vnet-name myVNet \ + --resource-group test-rg \ + --name subnet-1 \ + --vnet-name vnet-1 \ --remove delegations ```-To verify the delegation was removed, use [az network vnet subnet show](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is removed from the subnet in the property **serviceName**: +To verify the delegation was removed, use [`az network vnet subnet show`](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-show). Verify the service is removed from the subnet in the property **serviceName**: ```azurecli-interactive az network vnet subnet show \- --resource-group myResourceGroup \ - --name mySubnet \ - --vnet-name myVNet \ + --resource-group test-rg \ + --name subnet-1 \ + --vnet-name vnet-1 \ --query delegations ``` Output from command is a null bracket: Output from command is a null bracket: -## Clean up resources --When no longer needed, delete the resource group and all resources it contains: --1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it. --1. Select **Delete resource group**. --1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. ## Next steps - Learn how to [manage subnets in Azure](virtual-network-manage-subnet.md). |
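The subnet-delegation entry above verifies delegations by querying the subnet's `delegations` property as JSON, where an empty array ("null bracket") means no delegation remains. As an illustrative aside (not part of the committed article), a minimal Python sketch that extracts the delegated service names from output shaped like the article's `az network vnet subnet show --query delegations` example:

```python
import json

def delegated_services(delegations_json: str) -> list[str]:
    """Return the serviceName of each entry in the delegations JSON array."""
    return [d["serviceName"] for d in json.loads(delegations_json)]

# Sample shaped like the article's example output (fields trimmed for brevity).
sample = """[
  {
    "name": "0",
    "provisioningState": "Succeeded",
    "serviceName": "Microsoft.Sql/managedInstances"
  }
]"""

print(delegated_services(sample))  # ['Microsoft.Sql/managedInstances']
print(delegated_services("[]"))    # []
```

An empty result list corresponds to the empty-bracket output the article shows after the delegation is removed.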
virtual-network | Manage Virtual Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md | Title: Create, change, or delete an Azure virtual network description: Create and delete a virtual network and change settings, like DNS servers and IP address spaces, for an existing virtual network.- - Previously updated : 11/16/2022 Last updated : 08/23/2023 The account you log into, or connect to Azure with, must be assigned to the [net | Name | Enter a name for the virtual network you're creating. | The name must be unique in the resource group that you select to create the virtual network in. <br> You can't change the name after the virtual network is created. <br> For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks. | | Region | Select an Azure [region](https://azure.microsoft.com/regions/). | A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region using [virtual network peering](virtual-network-peering-overview.md). <br> Any Azure resource that you connect to the virtual network must be in the same region as the virtual network. | -1. Select **IP Addresses** tab or **Next: IP Addresses >**, and enter the following IP address information: +1. Select the **IP Addresses** tab, or select **Next: Security >** and then **Next: IP Addresses >**, and enter the following IP address information: + - **IPv4 Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918).
Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network. You can't add the following address ranges: |
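The entry above describes the IPv4 address-space rule: one or more non-overlapping CIDR ranges, with subnets carved out of them. As an illustrative aside (not part of the committed article), the rule can be checked with Python's standard `ipaddress` module:

```python
import ipaddress

def ranges_overlap(cidrs):
    """Return True if any pair of CIDR ranges in the list overlaps."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])

# Address spaces of a single virtual network must not overlap:
print(ranges_overlap(["10.0.0.0/16", "10.1.0.0/16"]))  # False (allowed)
print(ranges_overlap(["10.0.0.0/16", "10.0.1.0/24"]))  # True (rejected)

# A subnet must fall inside the virtual network's address space:
vnet = ipaddress.ip_network("10.0.0.0/16")
print(ipaddress.ip_network("10.0.0.0/24").subnet_of(vnet))  # True
```

The same module can validate subnet sizes before submitting them through the portal or CLI.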
virtual-network | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 08/08/2023 Last updated : 08/25/2023 |
virtual-network | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 08/03/2023 Last updated : 08/25/2023 |
virtual-network | Setup Dpdk Mana | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk-mana.md | + + Title: Microsoft Azure Network Adapter (MANA) and DPDK on Linux +description: Learn about MANA and DPDK for Linux Azure VMs. +++ Last updated : 07/10/2023++++# Microsoft Azure Network Adapter (MANA) and DPDK on Linux ++The Microsoft Azure Network Adapter (MANA) is new hardware for Azure virtual machines that enables higher throughput and reliability. +To make use of MANA, users must modify their DPDK initialization routines. MANA requires two changes compared to legacy hardware: +- [MANA EAL arguments](#mana-dpdk-eal-arguments) for the poll-mode driver (PMD) differ from previous hardware. +- The Linux kernel must release control of the MANA network interfaces before DPDK initialization begins. ++The setup procedure for MANA DPDK is outlined in the [example code](#example-testpmd-setup-and-netvsc-test). ++## Introduction ++Legacy Azure Linux VMs rely on the mlx4 or mlx5 drivers and the accompanying hardware for accelerated networking. Azure DPDK users would select specific interfaces to include or exclude by passing bus addresses to the DPDK EAL. The setup procedure for MANA DPDK differs slightly, since the assumption of one bus address per Accelerated Networking interface no longer holds true. Rather than using a PCI bus address, the MANA PMD uses the MAC address to determine which interface it should bind to. ++## MANA DPDK EAL Arguments +The MANA PMD probes all devices and ports on the system when no `--vdev` argument is present; the `--vdev` argument is not mandatory. In testing environments it's often desirable to leave one (primary) interface available for servicing the SSH connection to the VM. To use DPDK with a subset of the available VFs, users should pass both the bus address of the MANA device and the MAC addresses of the interfaces in the `--vdev` argument.
For more detail, example code is available to demonstrate [DPDK EAL initialization on MANA](#example-testpmd-setup-and-netvsc-test). ++For general information about the DPDK Environment Abstraction Layer (EAL): +- [DPDK EAL Arguments for Linux](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html#eal-in-a-linux-userland-execution-environment) +- [DPDK EAL Overview](https://doc.dpdk.org/guides/prog_guide/env_abstraction_layer.html) ++## DPDK requirements for MANA ++Utilizing DPDK on MANA hardware requires Linux kernel 6.2 or later, or a backport of the Ethernet and InfiniBand drivers from the latest Linux kernel. It also requires specific versions of DPDK and user-space drivers. ++MANA DPDK requires the following set of drivers: +1. [Linux kernel Ethernet driver](https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/microsoft/mana) (5.15 kernel and later) +1. [Linux kernel InfiniBand driver](https://github.com/torvalds/linux/tree/master/drivers/infiniband/hw/mana) (6.2 kernel and later) +1. [DPDK MANA poll-mode driver](https://github.com/DPDK/dpdk/tree/main/drivers/net/mana) (DPDK 22.11 and later) +1. [Libmana user-space drivers](https://github.com/linux-rdma/rdma-core/tree/master/providers/mana) (rdma-core v44 and later) ++>[!NOTE] +>MANA DPDK is not available for Windows; it will only work on Linux VMs. ++## Example: Check for MANA ++>[!NOTE] +>This article assumes the pciutils package containing the lspci command is installed on the system. ++```bash +# check for pci devices with ID: +# vendor: Microsoft Corporation (1414) +# class: Ethernet Controller (0200) +# device: Microsoft Azure Network Adapter VF (00ba) +if [[ -n `lspci -d 1414:00ba:0200` ]]; then + echo "MANA device is available." +else + echo "MANA was not detected." +fi ++``` ++## Example: DPDK installation (Ubuntu 22.04) ++>[!NOTE] +>This article assumes a compatible kernel and rdma-core are installed on the system.
++```bash +DEBIAN_FRONTEND=noninteractive sudo apt-get install -q -y build-essential libudev-dev libnl-3-dev libnl-route-3-dev ninja-build libssl-dev libelf-dev python3-pip meson libnuma-dev ++pip3 install pyelftools ++# Try latest LTS DPDK, example uses DPDK tag v23.07-rc3 +git clone https://github.com/DPDK/dpdk.git -b v23.07-rc3 --depth 1 +pushd dpdk +meson build +cd build +ninja +sudo ninja install +popd +``` ++## Example: Testpmd setup and netvsc test ++Note the following example code for running DPDK with MANA. The direct-to-vf 'netvsc' configuration on Azure is recommended for maximum performance with MANA. ++>[!NOTE] +>DPDK requires either 2MB or 1GB hugepages to be enabled ++```bash +# Enable 2MB hugepages. +echo 1024 | tee /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages ++# Assuming use of eth1 for DPDK in this demo +PRIMARY="eth1" ++# $ ip -br link show master eth1 +# > enP30832p0s0 UP f0:0d:3a:ec:b4:0a <... # truncated +# grab interface name for device bound to primary +SECONDARY="`ip -br link show master $PRIMARY | awk '{ print $1 }'`" +# Get mac address for MANA interface (should match primary) +MANA_MAC="`ip -br link show master $PRIMARY | awk '{ print $3 }'`" +++# $ ethtool -i enP30832p0s0 | grep bus-info +# > bus-info: 7870:00:00.0 +# get MANA device bus info to pass to DPDK +BUS_INFO="`ethtool -i $SECONDARY | grep bus-info | awk '{ print $2 }'`" ++# Set MANA interfaces DOWN before starting DPDK +ip link set $PRIMARY down +ip link set $SECONDARY down +++## Move synthetic channel to user mode and allow it to be used by NETVSC PMD in DPDK +DEV_UUID=$(basename $(readlink /sys/class/net/$PRIMARY/device)) +NET_UUID="f8615163-df3e-46c5-913f-f2d2f965ed0e" +modprobe uio_hv_generic +echo $NET_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/new_id +echo $DEV_UUID > /sys/bus/vmbus/drivers/hv_netvsc/unbind +echo $DEV_UUID > /sys/bus/vmbus/drivers/uio_hv_generic/bind ++# MANA single queue test +dpdk-testpmd -l 1-3 
--vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --txd=128 --rxd=128 --stats 2 ++# MANA multiple queue test (example assumes > 9 cores) +dpdk-testpmd -l 1-9 --vdev="$BUS_INFO,mac=$MANA_MAC" -- --forward-mode=txonly --auto-start --nb-cores=8 --txd=128 --rxd=128 --txq=8 --rxq=8 --stats 2 ++``` ++## Troubleshooting ++### Fail to set interface down. +Failure to set the MANA bound device to DOWN can result in low or zero packet throughput. +The failure to release the device can result the EAL error message related to transmit queues. +``` +mana_start_tx_queues(): Failed to create qp queue index 0 +mana_dev_start(): failed to start tx queues -19 +``` ++### Failure to enable huge pages. ++Try enabling huge pages and ensuring the information is visible in meminfo. +``` +EAL: No free 2048 kB hugepages reported on node 0 +EAL: FATAL: Cannot get hugepage information. +EAL: Cannot get hugepage information. +EAL: Error - exiting with code: 1 +Cause: Cannot init EAL: Permission denied +``` ++### Low throughput with use of --vdev="net_vdev_netvsc0,iface=eth1" ++Failover configuration of either the `net_failsafe` or `net_vdev_netvsc` poll-mode-drivers isn't recommended for high performance on Azure. The netvsc configuration with DPDK version 20.11 or higher may give better results. For optimal performance, ensure your Linux kernel, rdma-core, and DPDK packages meet the listed requirements for DPDK and MANA. |
virtual-network | Setup Dpdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/setup-dpdk.md | DPDK consists of sets of user-space libraries that provide access to lower-level DPDK can run on Azure virtual machines that are supporting multiple operating system distributions. DPDK provides key performance differentiation in driving network function virtualization implementations. These implementations can take the form of network virtual appliances (NVAs), such as virtual routers, firewalls, VPNs, load balancers, evolved packet cores, and denial-of-service (DDoS) applications. +A list of setup instructions for DPDK on MANA VMs is available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md) + ## Benefit **Higher packets per second (PPS)**: Bypassing the kernel and taking control of packets in the user space reduces the cycle count by eliminating context switches. It also improves the rate of packets that are processed per second in Azure Linux virtual machines. The following distributions from the Azure Marketplace are supported: The noted versions are the minimum requirements. Newer versions are supported too. +A list of requirements for DPDK on MANA VMs is available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md) + **Custom kernel support** For any Linux kernel version that's not listed, see [Patches for building an Azure-tuned Linux kernel](https://github.com/microsoft/azure-linux-kernel). For more information, you can also contact [aznetdpdk@microsoft.com](mailto:aznetdpdk@microsoft.com). In addition, DPDK uses RDMA verbs to create data queues on the Network Adapter. ## Install DPDK manually (recommended) +DPDK installation instructions for MANA VMs are available here: [Microsoft Azure Network Adapter (MANA) and DPDK on Linux](setup-dpdk-mana.md) + ### Install build dependencies # [RHEL, CentOS](#tab/redhat) |
virtual-network | Tutorial Connect Virtual Networks Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-portal.md | Title: 'Tutorial: Connect virtual networks with VNet peering - Azure portal' description: In this tutorial, you learn how to connect virtual networks with virtual network peering using the Azure portal.- - Previously updated : 06/24/2022 Last updated : 08/22/2023 # Customer intent: I want to connect two virtual networks so that virtual machines in one virtual network can communicate with virtual machines in the other virtual network. In this tutorial, you learn how to: > * Deploy a virtual machine (VM) into each virtual network > * Communicate between VMs -This tutorial uses the Azure portal. You can also complete it using [Azure CLI](tutorial-connect-virtual-networks-cli.md) or [PowerShell](tutorial-connect-virtual-networks-powershell.md). --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - ## Prerequisites -* An Azure subscription +- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com). -## Create virtual networks --1. On the Azure portal, select **+ Create a resource**. --1. Search for **Virtual Network**, and then select **Create**. -- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vnet.png" alt-text="Screenshot of create a resource for virtual network."::: --1. 
On the **Basics** tab, enter or select the following information and accept the defaults for the remaining settings: -- |Setting|Value| - ||| - |Subscription| Select your subscription.| - |Resource group| Select **Create new** and enter *myResourceGroup*.| - |Name| Enter *myVirtualNetwork1*.| - |Region| Select **East US**.| --- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-basic-tab.png" alt-text="Screenshot of create virtual network basics tab."::: --1. On the **IP Addresses** tab, enter *10.0.0.0/16* for the **IPv4 address Space** field. Select the **+ Add subnet** button below and enter *Subnet1* for **Subnet Name** and *10.0.0.0/24* for the **Subnet Address range**. -- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/ip-addresses-tab.png" alt-text="Screenshot of create a virtual network IP addresses tab."::: --1. Select **Review + create** and then select **Create**. -1. Repeat steps 1-5 again to create a second virtual network with the following settings: -- | Setting | Value | - | | | - | Name | myVirtualNetwork2 | - | Address space | 10.1.0.0/16 | - | Resource group | myResourceGroup | - | Subnet name | Subnet2 | - | Subnet address range | 10.1.0.0/24 | --## Peer virtual networks --1. In the search box at the top of the Azure portal, look for *myVirtualNetwork1*. When **myVirtualNetwork1** appears in the search results, select it. -- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/search-vnet.png" alt-text="Screenshot of searching for myVirtualNetwork1."::: +Repeat the previous steps to create a second virtual network with the following values: -1. Under **Settings**, select **Peerings**, and then select **+ Add**, as shown in the following picture: +>[!NOTE] +>The second virtual network can be in the same region as the first virtual network or in a different region. 
You can skip the **Security** tab and the Bastion deployment for the second virtual network. After the networks are peered, you can connect to both virtual machines with the same Bastion deployment. - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-peering.png" alt-text="Screenshot of creating peerings for myVirtualNetwork1."::: - -1. Enter or select the following information, accept the defaults for the remaining settings, and then select **Add**. -- | Setting | Value | - | | | - | **This virtual network** | | - | Peering link name | Enter *myVirtualNetwork1-myVirtualNetwork2* for the name of the peering from **myVirtualNetwork1** to the remote virtual network. | - | **Remote virtual network** | | - | Peering link name | Enter *myVirtualNetwork2-myVirtualNetwork1* for the name of the peering from the remote virtual network to **myVirtualNetwork1**. | - | Subscription | Select your subscription of the remote virtual network. | - | Virtual network | Select **myVirtualNetwork2** for the name of the remote virtual network. The remote virtual network can be in the same region of **myVirtualNetwork1** or in a different region. | -- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-inline.png" alt-text="Screenshot of virtual network peering configuration." 
lightbox="./media/tutorial-connect-virtual-networks-portal/peering-settings-bidirectional-expanded.png"::: -- In the **Peerings** page, the **Peering status** is **Connected**, as shown in the following picture: +| Setting | Value | +| | | +| Name | **vnet-2** | +| Address space | **10.1.0.0/16** | +| Resource group | **test-rg** | +| Subnet name | **subnet-1** | +| Subnet address range | **10.1.0.0/24** | - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/peering-status-connected.png" alt-text="Screenshot of virtual network peering connection status."::: +<a name="peer-virtual-networks"></a> - If you don't see a **Connected** status, select the **Refresh** button. ## Create virtual machines -Create a VM in each virtual network so that you can test the communication between them. +Create a virtual machine in each virtual network to test the communication between them. -### Create the first VM -1. On the Azure portal, select **+ Create a resource**. --1. Select **Compute**, and then **Create** under **Virtual machine**. -- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm.png" alt-text="Screenshot of create a resource for virtual machines."::: --1. Enter or select the following information on the **Basics** tab. Accept the defaults for the remaining settings, and then select **Create**: -- | Setting | Value | - | | | - | Resource group| Select **myResourceGroup**. | - | Name | Enter *myVm1*. | - | Location | Select **(US) East US**. | - | Image | Select an OS image. For this tutorial, *Windows Server 2019 Datacenter - Gen2* is selected. | - | Size | Select a VM size. For this tutorial, *Standard_D2s_v3* is selected. | - | Username | Enter a username. For this tutorial, the username *azure* is used. | - | Password | Enter a password of your choosing. 
The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-). | - - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-inline.png" alt-text="Screenshot of virtual machine basic tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-basic-tab-expanded.png"::: --1. On the **Networking** tab, select the following values: -- | Setting | Value | - | | | - | Virtual network | Select **myVirtualNetwork1**. | - | Subnet | Select **Subnet1**. | - | NIC network security group | Select **Basic**. | - | Public inbound ports | Select **Allow selected ports**. | - | Select inbound ports | Select **RDP (3389)**. | -- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-inline.png" alt-text="Screenshot of virtual machine networking tab configuration." lightbox="./media/tutorial-connect-virtual-networks-portal/create-vm-networking-tab-expanded.png"::: --1. Select the **Review + Create** and then **Create** to start the VM deployment. --### Create the second VM --Repeat steps 1-5 again to create a second virtual machine with the following changes: +Repeat the previous steps to create a second virtual machine in the second virtual network with the following values: | Setting | Value | | | |-| Name | myVm2 | -| Virtual network | myVirtualNetwork2 | +| Virtual machine name | **vm-2** | +| Region | **East US 2** or same region as **vnet-2**. | +| Virtual network | Select **vnet-2**. | +| Subnet | Select **subnet-1 (10.1.0.0/24)**. | +| Public IP | **None** | +| Network security group name | **nsg-2** | -The VMs take a few minutes to create. Don't continue with the remaining steps until both VMs are created. 
+Wait for the virtual machines to be created before continuing with the next steps. +## Connect to a virtual machine -## Communicate between VMs --Test the communication between the two virtual machines over the virtual network peering by pinging from **myVm2** to **myVm1**. +Use `ping` to test the communication between the virtual machines. -1. In the search box at the top of the portal, look for *myVm1*. When **myVm1** appears in the search results, select it. - - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/search-vm.png" alt-text="Screenshot of searching for myVm1."::: +1. In the portal, search for and select **Virtual machines**. -1. To connect to the virtual machine, select **Connect** and then select **RDP** from the drop-down. Select **Download RDP file** to download the remote desktop file. +1. On the **Virtual machines** page, select **vm-1**. - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/connect-to-virtual-machine.png" alt-text="Screenshot of connect to virtual machine button."::: +1. In the **Overview** of **vm-1**, select **Connect**. -1. To connect to the VM, open the downloaded RDP file. If prompted, select **Connect**. +1. In the **Connect to virtual machine** page, select the **Bastion** tab. - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-connect.png" alt-text="Screenshot of connection screen for remote desktop."::: +1. Select **Use Bastion**. -1. Enter the username and password you specified when creating **myVm1** (you may need to select **More choices**, then **Use a different account**, to specify the credentials you entered when you created the VM), then select **OK**. +1. Enter the username and password you created when you created the VM, and then select **Connect**. 
- :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials.png" alt-text="Screenshot of R D P credential screen."::: +## Communicate between VMs -1. You may receive a certificate warning during the sign-in process. Select **Yes** to continue with the connection. +1. At the bash prompt for **vm-1**, enter `ping -c 4 vm-2`. -1. In a later step, ping is used to communicate with **myVm1** from **myVm2**. Ping uses the Internet Control Message Protocol (ICMP), which is denied through the Windows Firewall, by default. On **myVm1**, enable ICMP through the Windows firewall, so that you can ping this VM from **myVm2** in a later step, using PowerShell: + You get a reply similar to the following message: - ```powershell - New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 + ```output + azureuser@vm-1:~$ ping -c 4 vm-2 + PING vm-2.3bnkevn3313ujpr5l1kqop4n4d.cx.internal.cloudapp.net (10.1.0.4) 56(84) bytes of data. + 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=1 ttl=64 time=1.83 ms + 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=2 ttl=64 time=0.987 ms + 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=3 ttl=64 time=0.864 ms + 64 bytes from vm-2.internal.cloudapp.net (10.1.0.4): icmp_seq=4 ttl=64 time=0.890 ms ``` - Though ping is used to communicate between VMs in this tutorial, allowing ICMP through the Windows Firewall for production deployments isn't recommended. --1. To connect to **myVm2** from **myVm1**, enter the following command from a command prompt on **myVm1**: -- ``` - mstsc /v:10.1.0.4 - ``` -1. Enter the username and password you specified when creating **myVm2** and select **Yes** if you receive a certificate warning during the sign-in process. 
- - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/rdp-credentials-to-second-vm.png" alt-text="Screenshot of R D P credential screen for R D P session from first virtual machine to second virtual machine."::: - -1. Since you enabled ping on **myVm1**, you can now ping it from **myVm2**: -- ```powershell - ping 10.0.0.4 - ``` - - :::image type="content" source="./media/tutorial-connect-virtual-networks-portal/myvm2-ping-myvm1.png" alt-text="Screenshot of second virtual machine pinging first virtual machine."::: +1. Close the Bastion connection to **vm-1**. -1. Disconnect your RDP sessions to both *myVm1* and *myVm2*. +1. Repeat the steps in [Connect to a virtual machine](#connect-to-a-virtual-machine) to connect to **vm-2**. -## Clean up resources +1. At the bash prompt for **vm-2**, enter `ping -c 4 vm-1`. -When no longer needed, delete the resource group and all resources it contains: + You get a reply similar to the following message: -1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it. + ```output + azureuser@vm-2:~$ ping -c 4 vm-1 + PING vm-1.3bnkevn3313ujpr5l1kqop4n4d.cx.internal.cloudapp.net (10.0.0.4) 56(84) bytes of data. + 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=1 ttl=64 time=0.695 ms + 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=2 ttl=64 time=0.896 ms + 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=3 ttl=64 time=3.43 ms + 64 bytes from vm-1.internal.cloudapp.net (10.0.0.4): icmp_seq=4 ttl=64 time=0.780 ms + ``` -1. Select **Delete resource group**. +1. Close the Bastion connection to **vm-2**. -1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. 
## Next steps In this tutorial, you: * Created virtual network peering between two virtual networks.-* Tested the communication between two virtual machines over the virtual network peering using ping command. ++* Tested the communication between two virtual machines over the virtual network peering with `ping`. To learn more about a virtual network peering: |
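Peered virtual networks must have non-overlapping address spaces, which is why the tutorial pairs 10.0.0.0/16 with 10.1.0.0/16. Before creating a peering, a quick local sanity check (a sketch in plain shell arithmetic, not an Azure CLI command) can catch an overlap:

```shell
# Sketch: verify two VNet address spaces don't overlap before peering them.
# The tutorial's ranges (10.0.0.0/16 and 10.1.0.0/16) are disjoint; a subnet
# carved from the first range would not be.
ip_to_int() {
  oldIFS=$IFS; IFS=.; set -- $1; IFS=$oldIFS
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

overlap() {
  # Compute the first and last address of each CIDR block, then test
  # whether the two integer ranges intersect.
  n1=$(ip_to_int "${1%/*}"); p1=${1#*/}
  n2=$(ip_to_int "${2%/*}"); p2=${2#*/}
  s1=$(( n1 & ~((1 << (32 - p1)) - 1) )); e1=$(( s1 + (1 << (32 - p1)) - 1 ))
  s2=$(( n2 & ~((1 << (32 - p2)) - 1) )); e2=$(( s2 + (1 << (32 - p2)) - 1 ))
  if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then echo overlap; else echo ok; fi
}

overlap 10.0.0.0/16 10.1.0.0/16   # prints "ok" -> safe to peer
overlap 10.0.0.0/16 10.0.1.0/24   # prints "overlap" -> peering would be rejected
```

Azure performs the same validation when the peering is created; the check above only saves a failed round trip through the portal.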
virtual-network | Tutorial Create Route Table Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-portal.md | Title: 'Tutorial: Route network traffic with a route table - Azure portal' description: In this tutorial, learn how to route network traffic with a route table using the Azure portal.- -- Previously updated : 06/27/2022 Last updated : 08/21/2023 + # Customer intent: I want to route traffic from one subnet, to a different subnet, through a network virtual appliance. In this tutorial, you learn how to: > * Associate a route table to a subnet > * Route traffic from one subnet to another through an NVA -This tutorial uses the Azure portal. You can also complete it using the [Azure CLI](tutorial-create-route-table-cli.md) or [PowerShell](tutorial-create-route-table-powershell.md). --If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. --## Overview --This diagram shows the resources created in this tutorial along with the expected network routes. -- ## Prerequisites -* An Azure subscription +- An Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com). -## Create a virtual network --In this section, you'll create a virtual network, three subnets, and a bastion host. You'll use the bastion host to securely connect to the virtual machines. -1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Virtual network**, or search for *Virtual Network* in the portal search box. +## Create subnets -2. Select **Create**. +A **DMZ** and **Private** subnet are needed for this tutorial. The **DMZ** subnet is where you deploy the NVA, and the **Private** subnet is where you deploy the virtual machines that you want to route traffic to. 
The **subnet-1** is the subnet created in the previous steps. Use **subnet-1** for the public virtual machine. -2. On the **Basics** tab of **Create virtual network**, enter or select this information: +1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. - | Setting | Value | - | - | -- | - | Subscription | Select your subscription.| - | Resource group | Select **Create new**, enter *myResourceGroup*. </br> Select **OK**. | - | Name | Enter *myVirtualNetwork*. | - | Region | Select **East US**.| --3. Select the **IP Addresses** tab, or select the **Next: IP Addresses** button at the bottom of the page. --4. In **IPv4 address space**, select the existing address space and change it to *10.0.0.0/16*. +1. In **Virtual networks**, select **vnet-1**. -4. Select **+ Add subnet**, then enter *Public* for **Subnet name** and *10.0.0.0/24* for **Subnet address range**. +1. In **vnet-1**, select **Subnets** from the **Settings** section. -5. Select **Add**. +1. In the virtual network's subnet list, select **+ Subnet**. -6. Select **+ Add subnet**, then enter *Private* for **Subnet name** and *10.0.1.0/24* for **Subnet address range**. +1. In **Add subnet**, enter or select the following information: -7. Select **Add**. --8. Select **+ Add subnet**, then enter *DMZ* for **Subnet name** and *10.0.2.0/24* for **Subnet address range**. + | Setting | Value | + | - | -- | + | Name | Enter **subnet-private**. | + | Subnet address range | Enter **10.0.2.0/24**. | -9. Select **Add**. + :::image type="content" source="./media/tutorial-create-route-table-portal/create-private-subnet.png" alt-text="Screenshot of private subnet creation in virtual network."::: -10. Select the **Security** tab, or select the **Next: Security** button at the bottom of the page. +1. Select **Save**. -11. Under **BastionHost**, select **Enable**. Enter this information: +1. Select **+ Subnet**. 
- | Setting | Value | - |--|-| - | Bastion name | Enter *myBastionHost*. | - | AzureBastionSubnet address space | Enter *10.0.3.0/24*. | - | Public IP Address | Select **Create new**. </br> Enter *myBastionIP* for **Name**. </br> Select **OK**. | +1. In **Add subnet**, enter or select the following information: - >[!NOTE] - >[!INCLUDE [Pricing](../../includes/bastion-pricing.md)] + | Setting | Value | + | - | -- | + | Name | Enter **subnet-dmz**. | + | Subnet address range | Enter **10.0.3.0/24**. | -12. Select the **Review + create** tab or select the **Review + create** button. + :::image type="content" source="./media/tutorial-create-route-table-portal/create-dmz-subnet.png" alt-text="Screenshot of DMZ subnet creation in virtual network."::: -13. Select **Create**. +1. Select **Save**. ## Create an NVA virtual machine -Network virtual appliances (NVAs) are virtual machines that help with network functions, such as routing and firewall optimization. In this section, you'll create an NVA using a **Windows Server 2019 Datacenter** virtual machine. You can select a different operating system if you want. +Network virtual appliances (NVAs) are virtual machines that help with network functions, such as routing and firewall optimization. In this section, create an NVA using an **Ubuntu 22.04** virtual machine. ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -1. From the Azure portal menu, select **+ Create a resource** > **Compute** > **Virtual machine**, or search for *Virtual machine* in the portal search box. +1. Select **+ Create** then **Azure virtual machine**. -1. Select **Create**. - -2. On the **Basics** tab of **Create a virtual machine**, enter or select this information: +1. 
In **Create a virtual machine** enter or select the following information in the **Basics** tab: - | Setting | Value | - |--|-| - | **Project Details** | | + | Setting | Value | + | - | -- | + | **Project details** | | | Subscription | Select your subscription. |- | Resource Group | Select **myResourceGroup**. | - | **Instance details** | | - | Virtual machine name | Enter *myVMNVA*. | - | Region | Select **(US) East US**. | - | Availability Options | Select **No infrastructure redundancy required**. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Virtual machine name | Enter **vm-nva**. | + | Region | Select **(US) East US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. |- | Image | Select **Windows Server 2019 Datacenter - Gen2**. | - | Azure Spot instance | Select **No**. | - | Size | Choose VM size or take default setting. | - | **Administrator account** | | + | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. | + | VM architecture | Leave the default of **x64**. | + | Size | Select a size. | + | **Administrator account** | | + | Authentication type | Select **Password**. | | Username | Enter a username. |- | Password | Enter a password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| + | Password | Enter a password. | | Confirm password | Reenter password. |- | **Inbound port rules** | | + | **Inbound port rules** | | | Public inbound ports | Select **None**. | -3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. - -4. In the Networking tab, select or enter: +1. Select **Next: Disks** then **Next: Networking**. ++1. 
In the Networking tab, enter or select the following information: | Setting | Value |- |-|-| - | **Network interface** | | - | Virtual network | Select **myVirtualNetwork**. | - | Subnet | Select **DMZ** | - | Public IP | Select **None** | - | NIC network security group | Select **Basic**| - | Public inbound ports network | Select **None**. | - -5. Select the **Review + create** tab, or select **Review + create** button at the bottom of the page. - -6. Review the settings, and then select **Create**. + | - | -- | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-dmz (10.0.3.0/24)**. | + | Public IP | Select **None**. | + | NIC network security group | Select **Advanced**. | + | Configure network security group | Select **Create new**. </br> In **Name** enter **nsg-nva**. </br> Select **OK**. | -## Create public and private virtual machines +1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. -You'll create two virtual machines in **myVirtualNetwork** virtual network, then you'll allow Internet Control Message Protocol (ICMP) on them so you can use *tracert* tool to trace traffic. +## Create public and private virtual machines -> [!NOTE] -> For production environments, we don't recommend allowing ICMP through the Windows Firewall. +Create two virtual machines in the **vnet-1** virtual network. One virtual machine is in the **subnet-1** subnet, and the other virtual machine is in the **subnet-private** subnet. Use the same virtual machine image for both virtual machines. ### Create public virtual machine -1. From the Azure portal menu, select **Create a resource** > **Compute** > **Virtual machine**. - -2. In **Create a virtual machine**, enter or select this information in the **Basics** tab: +The public virtual machine is used to simulate a machine in the public internet. 
The public and private virtual machine are used to test the routing of network traffic through the NVA virtual machine. ++1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. - | Setting | Value | - |--|-| - | **Project Details** | | +1. Select **+ Create** then **Azure virtual machine**. ++1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: ++ | Setting | Value | + | - | -- | + | **Project details** | | | Subscription | Select your subscription. |- | Resource Group | Select **myResourceGroup**. | - | **Instance details** | | - | Virtual machine name | Enter *myVMPublic*. | - | Region | Select **(US) East US**. | - | Availability Options | Select **No infrastructure redundancy required**. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Virtual machine name | Enter **vm-public**. | + | Region | Select **(US) East US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. |- | Image | Select **Windows Server 2019 Datacenter - Gen2**. | - | Azure Spot instance | Select **No**. | - | Size | Choose VM size or take default setting. | - | **Administrator account** | | + | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. | + | VM architecture | Leave the default of **x64**. | + | Size | Select a size. | + | **Administrator account** | | + | Authentication type | Select **Password**. | | Username | Enter a username. |- | Password | Enter a password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| + | Password | Enter a password. | | Confirm password | Reenter password. |- | **Inbound port rules** | | + | **Inbound port rules** | | | Public inbound ports | Select **None**. 
| -3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. - -4. In the Networking tab, select or enter: +1. Select **Next: Disks** then **Next: Networking**. ++1. In the Networking tab, enter or select the following information: | Setting | Value |- |-|-| - | **Network interface** | | - | Virtual network | Select **myVirtualNetwork**. | - | Subnet | Select **Public**. | + | - | -- | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-1 (10.0.0.0/24)**. | | Public IP | Select **None**. |- | NIC network security group | Select **Basic**. | - | Public inbound ports network | Select **None**. | - -5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. - -6. Review the settings, and then select **Create**. + | NIC network security group | Select **None**. | ++1. Leave the rest of the options at the defaults and select **Review + create**. ++1. Select **Create**. ### Create private virtual machine -1. From the Azure portal menu, select **Create a resource** > **Compute** > **Virtual machine**. - -2. In **Create a virtual machine**, enter or select this information in the **Basics** tab: +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. ++1. Select **+ Create** then **Azure virtual machine**. - | Setting | Value | - |--|-| - | **Project Details** | | +1. In **Create a virtual machine** enter or select the following information in the **Basics** tab: ++ | Setting | Value | + | - | -- | + | **Project details** | | | Subscription | Select your subscription. |- | Resource Group | Select **myResourceGroup**. | - | **Instance details** | | - | Virtual machine name | Enter *myVMPrivate*. | - | Region | Select **(US) East US**. | - | Availability Options | Select **No infrastructure redundancy required**. | + | Resource group | Select **test-rg**. 
| + | **Instance details** | | + | Virtual machine name | Enter **vm-private**. | + | Region | Select **(US) East US 2**. | + | Availability options | Select **No infrastructure redundancy required**. | | Security type | Select **Standard**. |- | Image | Select **Windows Server 2019 Datacenter - Gen2**. | - | Azure Spot instance | Select **No**. | - | Size | Choose VM size or take default setting. | - | **Administrator account** | | + | Image | Select **Ubuntu Server 22.04 LTS - x64 Gen2**. | + | VM architecture | Leave the default of **x64**. | + | Size | Select a size. | + | **Administrator account** | | + | Authentication type | Select **Password**. | | Username | Enter a username. |- | Password | Enter a password. The password must be at least 12 characters long and meet the [defined complexity requirements](../virtual-machines/windows/faq.yml?toc=%2fazure%2fvirtual-network%2ftoc.json#what-are-the-password-requirements-when-creating-a-vm-).| + | Password | Enter a password. | | Confirm password | Reenter password. |- | **Inbound port rules** | | + | **Inbound port rules** | | | Public inbound ports | Select **None**. | -3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**. - -4. In the Networking tab, select or enter: +1. Select **Next: Disks** then **Next: Networking**. ++1. In the Networking tab, enter or select the following information: | Setting | Value |- |-|-| - | **Network interface** | | - | Virtual network | Select **myVirtualNetwork**. | - | Subnet | Select **Private**. | + | - | -- | + | **Network interface** | | + | Virtual network | Select **vnet-1**. | + | Subnet | Select **subnet-private (10.0.2.0/24)**. | | Public IP | Select **None**. |- | NIC network security group | Select **Basic**. | - | Public inbound ports network | Select **None**. | - -5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. - -6. Review the settings, and then select **Create**. 
+ | NIC network security group | Select **None**. | -### Allow ICMP in Windows firewall -1. Select **Go to resource** or Search for *myVMPrivate* in the portal search box. +## Enable IP forwarding -1. In the **Overview** page of **myVMPrivate**, select **Connect** then **Bastion**. +To route traffic through the NVA, turn on IP forwarding in Azure and in the operating system of **vm-nva**. When IP forwarding is enabled, any traffic received by **vm-nva** that's destined for a different IP address, isn't dropped and is forwarded to the correct destination. -1. Enter the username and password you created for **myVMPrivate** virtual machine previously. +### Enable IP forwarding in Azure -1. Select **Connect** button. +In this section, you turn on IP forwarding for the network interface of the **vm-nva** virtual machine. -1. Open Windows PowerShell after you connect. +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -1. Enter this command: - ```powershell - New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4 - ``` +1. In **Virtual machines**, select **vm-nva**. -1. From PowerShell, open a remote desktop connection to the **myVMPublic** virtual machine: +1. In **vm-nva**, select **Networking** from the **Settings** section. - ```powershell - mstsc /v:myvmpublic - ``` +1. Select the name of the interface next to **Network Interface:**. The name begins with **vm-nva** and has a random number assigned to the interface. The name of the interface in this example is **vm-nva124**. -1. After you connect to **myVMPublic** VM, open Windows PowerShell and enter the same command from step 6. + :::image type="content" source="./media/tutorial-create-route-table-portal/nva-network-interface.png" alt-text="Screenshot of network interface of NVA virtual machine."::: -1. 
Close the remote desktop connection to **myVMPublic** VM. +1. In the network interface overview page, select **IP configurations** from the **Settings** section. -## Turn on IP forwarding +1. In **IP configurations**, select the box next to **Enable IP forwarding**. -To route traffic through the NVA, turn on IP forwarding in Azure and in the operating system of **myVMNVA** virtual machine. Once IP forwarding is enabled, any traffic received by **myVMNVA** VM that's destined for a different IP address, won't be dropped and will be forwarded to the correct destination. + :::image type="content" source="./media/tutorial-create-route-table-portal/enable-ip-forwarding.png" alt-text="Screenshot of enablement of IP forwarding."::: -### Turn on IP forwarding in Azure +1. Select **Apply**. -In this section, you'll turn on IP forwarding for the network interface of **myVMNVA** virtual machine in Azure. +### Enable IP forwarding in the operating system -1. Search for *myVMNVA* in the portal search box. +In this section, turn on IP forwarding for the operating system of the **vm-nva** virtual machine to forward network traffic. Use the Azure Bastion service to connect to the **vm-nva** virtual machine. -3. In the **myVMNVA** overview page, select **Networking** from the **Settings** section. +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. -4. In the **Networking** page of **myVMNVA**, select the network interface next to **Network Interface:**. The name of the interface will begin with **myvmnva**. +1. In **Virtual machines**, select **vm-nva**. - :::image type="content" source="./media/tutorial-create-route-table-portal/virtual-machine-networking.png" alt-text="Screenshot showing Networking page of network virtual appliance virtual machine in Azure portal." border="true"::: +1. Select **Bastion** in the **Operations** section. -5. 
In the network interface overview page, select **IP configurations** from the **Settings** section. +1. Enter the username and password you entered when the virtual machine was created. -6. In the **IP configurations** page, set **IP forwarding** to **Enabled**, then select **Save**. +1. Select **Connect**. - :::image type="content" source="./media/tutorial-create-route-table-portal/enable-ip-forwarding.png" alt-text="Screenshot showing Enabled I P forwarding in Azure portal." border="true"::: +1. Enter the following information at the prompt of the virtual machine to enable IP forwarding: -### Turn on IP forwarding in the operating system + ```bash + sudo vim /etc/sysctl.conf + ``` -In this section, you'll turn on IP forwarding for the operating system of **myVMNVA** virtual machine to forward network traffic. You'll use the same bastion connection to **myVMPrivate** VM, that you started in the previous steps, to open a remote desktop connection to **myVMNVA** VM. +1. In the Vim editor, remove the **`#`** from the line **`net.ipv4.ip_forward=1`**: -1. From PowerShell on **myVMPrivate** VM, open a remote desktop connection to the **myVMNVA** VM: + Press the **Insert** key. - ```powershell - mstsc /v:myvmnva + ```bash + # Uncomment the next line to enable packet forwarding for IPv4 + net.ipv4.ip_forward=1 ``` -2. After you connect to **myVMNVA** VM, open Windows PowerShell and enter this command to turn on IP forwarding: + Press the **Esc** key. - ```powershell - Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters -Name IpEnableRouter -Value 1 - ``` + Enter **`:wq`** and press **Enter**. -3. Restart **myVMNVA** VM. +1. Close the Bastion session. - ```powershell - Restart-Computer - ``` +1. Restart the virtual machine. ## Create a route table -In this section, you'll create a route table. +In this section, create a route table to define the route of the traffic through the NVA virtual machine. 
The route table is associated to the **subnet-1** subnet where the **vm-public** virtual machine is deployed. -1. From the Azure portal menu, select **+ Create a resource** > **Networking** > **Route table**, or search for *Route table* in the portal search box. +1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -3. Select **Create**. +1. Select **+ Create**. -4. On the **Basics** tab of **Create route table**, enter or select this information: +1. In **Create Route table** enter or select the following information: | Setting | Value | | - | -- | | **Project details** | |- | Subscription | Select your subscription.| - | Resource group | Select **myResourceGroup**. | - | **Instance details** | | - | Region | Select **East US**. | - | Name | Enter *myRouteTablePublic*. | - | Propagate gateway routes | Select **Yes**. | + | Subscription | Select your subscription. | + | Resource group | Select **test-rg**. | + | **Instance details** | | + | Region | Select **East US 2**. | + | Name | Enter **route-table-public**. | + | Propagate gateway routes | Leave the default of **Yes**. | - :::image type="content" source="./media/tutorial-create-route-table-portal/create-route-table.png" alt-text="Screenshot showing Basics tab of Create route table in Azure portal." border="true"::: +1. Select **Review + create**. -5. Select the **Review + create** tab, or select the blue **Review + create** button at the bottom of the page. +1. Select **Create**. ## Create a route -In this section, you'll create a route in the route table that you created in the previous steps. +In this section, create a route in the route table that you created in the previous steps. ++1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results. -1. Select **Go to resource** or Search for *myRouteTablePublic* in the portal search box. +1. Select **route-table-public**. -3. 
In the **myRouteTablePublic** page, select **Routes** from the **Settings** section. +1. In **Settings** select **Routes**. -4. In the **Routes** page, select the **+ Add** button. +1. Select **+ Add** in **Routes**. -5. In **Add route**, enter or select this information: +1. Enter or select the following information in **Add route**: | Setting | Value | | - | -- |- | Route name | Enter *ToPrivateSubnet*. | - | Address prefix destination | Select **IP Addresses**. | - | Destination IP addresses/CIDR ranges| Enter *10.0.1.0/24* (The address range of the **Private** subnet created earlier). | + | Route name | Enter **to-private-subnet**. | + | Destination type | Select **IP Addresses**. | + | Destination IP addresses/CIDR ranges | Enter **10.0.2.0/24**. | | Next hop type | Select **Virtual appliance**. |- | Next hop address | Enter *10.0.2.4* (The address of **myVMNVA** VM created earlier in the **DMZ** subnet). | -- :::image type="content" source="./media/tutorial-create-route-table-portal/add-route-inline.png" alt-text="Screenshot showing Add route configuration in Azure portal." lightbox="./media/tutorial-create-route-table-portal/add-route-expanded.png"::: --6. Select **Add**. + | Next hop address | Enter **10.0.3.4**. </br> **_This is the IP address of vm-nva that you created in the earlier steps._** -## Associate a route table to a subnet + :::image type="content" source="./media/tutorial-create-route-table-portal/add-route.png" alt-text="Screenshot of route creation in route table."::: -In this section, you'll associate the route table that you created in the previous steps to a subnet. +1. Select **Add**. -1. Search for *myVirtualNetwork* in the portal search box. +1. Select **Subnets** in **Settings**. -3. In the **myVirtualNetwork** page, select **Subnets** from the **Settings** section. +1. Select **+ Associate**. -4. In the virtual network's subnet list, select **Public**. +1. Enter or select the following information in **Associate subnet**: -5. 
In **Route table**, select **myRouteTablePublic** that you created in the previous steps. --6. Select **Save** to associate your route table to the **Public** subnet. + | Setting | Value | + | - | -- | + | Virtual network | Select **vnet-1 (test-rg)**. | + | Subnet | Select **subnet-1**. | - :::image type="content" source="./media/tutorial-create-route-table-portal/associate-route-table-inline.png" alt-text="Screenshot showing Associate route table to the Public subnet in the virtual network in Azure portal." lightbox="./media/tutorial-create-route-table-portal/associate-route-table-expanded.png"::: +1. Select **OK**. ## Test the routing of network traffic -You'll test routing of network traffic using [tracert](/windows-server/administration/windows-commands/tracert) tool from **myVMPublic** VM to **myVMPrivate** VM, and then you'll test the routing in the opposite direction. +Test routing of network traffic from **vm-public** to **vm-private**. Test routing of network traffic from **vm-private** to **vm-public**. -### Test network traffic from myVMPublic VM to myVMPrivate VM +### Test network traffic from vm-public to vm-private -1. From PowerShell on **myVMPrivate** VM, open a remote desktop connection to the **myVMPublic** VM: +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. - ```powershell - mstsc /v:myvmpublic - ``` +1. In **Virtual machines**, select **vm-public**. -2. After you connect to **myVMPublic** VM, open Windows PowerShell and enter this *tracert* command to trace the routing of network traffic from **myVMPublic** VM to **myVMPrivate** VM: +1. Select **Bastion** in the **Operations** section. +1. Enter the username and password you entered when the virtual machine was created. - ```powershell - tracert myvmprivate - ``` +1. Select **Connect**. - The response is similar to this example: +1. 
In the prompt, enter the following command to trace the routing of network traffic from **vm-public** to **vm-private**: - ```powershell - Tracing route to myvmprivate.q04q2hv50taerlrtdyjz5nza1f.bx.internal.cloudapp.net [10.0.1.4] - over a maximum of 30 hops: + ```bash + tracepath vm-private + ``` - 1 1 ms * 2 ms myvmnva.internal.cloudapp.net [10.0.2.4] - 2 2 ms 1 ms 1 ms myvmprivate.internal.cloudapp.net [10.0.1.4] + The response is similar to the following example: - Trace complete. + ```output + azureuser@vm-public:~$ tracepath vm-private + 1?: [LOCALHOST] pmtu 1500 + 1: vm-nva.internal.cloudapp.net 1.766ms + 1: vm-nva.internal.cloudapp.net 1.259ms + 2: vm-private.internal.cloudapp.net 2.202ms reached + Resume: pmtu 1500 hops 2 back 1 ``` - You can see that there are two hops in the above response for *tracert* ICMP traffic from **myVMPublic** VM to **myVMPrivate** VM. The first hop is **myVMNVA** VM, and the second hop is the destination **myVMPrivate** VM. + You can see that there are two hops in the above response for **`tracepath`** ICMP traffic from **vm-public** to **vm-private**. The first hop is **vm-nva**. The second hop is the destination **vm-private**. - Azure sent the traffic from **Public** subnet through the NVA and not directly to **Private** subnet because you previously added **ToPrivateSubnet** route to **myRouteTablePublic** route table and associated it to **Public** subnet. + Azure sent the traffic from **subnet-1** through the NVA and not directly to **subnet-private** because you previously added the **to-private-subnet** route to **route-table-public** and associated it to **subnet-1**. -1. Close the remote desktop connection to **myVMPublic** VM. +1. Close the Bastion session. -### Test network traffic from myVMPrivate VM to myVMPublic VM +### Test network traffic from vm-private to vm-public -1. 
From PowerShell on **myVMPrivate** VM, and enter this *tracert* command to trace the routing of network traffic from **myVmPrivate** VM to **myVmPublic** VM. -- ```powershell - tracert myvmpublic - ``` +1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. - The response is similar to this example: +1. In **Virtual machines**, select **vm-private**. - ```powershell - Tracing route to myvmpublic.q04q2hv50taerlrtdyjz5nza1f.bx.internal.cloudapp.net [10.0.0.4] - over a maximum of 30 hops: +1. Select **Bastion** in the **Operations** section. - 1 1 ms 1 ms 1 ms myvmpublic.internal.cloudapp.net [10.0.0.4] +1. Enter the username and password you entered when the virtual machine was created. - Trace complete. - ``` +1. Select **Connect**. - You can see that there's one hop in the above response, which is the destination **myVMPublic** virtual machine. +1. In the prompt, enter the following command to trace the routing of network traffic from **vm-private** to **vm-public**: - Azure sent the traffic directly from **Private** subnet to **Public** subnet. By default, Azure routes traffic directly between subnets. + ```bash + tracepath vm-public + ``` -1. Close the bastion session. + The response is similar to the following example: -## Clean up resources + ```output + azureuser@vm-private:~$ tracepath vm-public + 1?: [LOCALHOST] pmtu 1500 + 1: vm-public.internal.cloudapp.net 2.584ms reached + 1: vm-public.internal.cloudapp.net 2.147ms reached + Resume: pmtu 1500 hops 1 back 2 + ``` -When the resource group is no longer needed, delete **myResourceGroup** and all the resources it contains: + You can see that there's one hop in the above response, which is the destination **vm-public**. -1. Enter *myResourceGroup* in the **Search** box at the top of the Azure portal. When you see **myResourceGroup** in the search results, select it. + Azure sent the traffic directly from **subnet-private** to **subnet-1**. 
By default, Azure routes traffic directly between subnets. -1. Select **Delete resource group**. +1. Close the Bastion session. -1. Enter *myResourceGroup* for **TYPE THE RESOURCE GROUP NAME:** and select **Delete**. ## Next steps In this tutorial, you: * Created a route table and associated it to a subnet.+ * Created a simple NVA that routed traffic from a public subnet to a private subnet. -You can deploy different pre-configured NVAs from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking), which provide many useful network functions. +You can deploy different preconfigured NVAs from the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/category/networking), which provide many useful network functions. To learn more about routing, see [Routing overview](virtual-networks-udr-overview.md) and [Manage a route table](manage-route-table.md). |
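The routing behavior this tutorial demonstrates can be sketched in a few lines of code. The snippet below is an illustration only, not part of the tutorial or any Azure SDK: it models the one rule that explains the two `tracepath` results above, namely that the most specific (longest-prefix) matching route wins, using the tutorial's own addresses (`10.0.2.0/24` via the NVA at `10.0.3.4`, inside a `10.0.0.0/16` virtual network).

```python
import ipaddress

# Simplified route table for the tutorial's VNet. Azure's real evaluation
# also weighs route origin (user-defined vs. system); this models prefix
# length only, which is enough to reproduce the tracepath results.
routes = [
    # (destination prefix, next hop type, next hop address)
    (ipaddress.ip_network("10.0.0.0/16"), "VnetLocal", None),                # system route for the VNet
    (ipaddress.ip_network("10.0.2.0/24"), "VirtualAppliance", "10.0.3.4"),   # to-private-subnet UDR
]

def select_route(destination: str):
    """Return the most specific route whose prefix contains the destination IP."""
    ip = ipaddress.ip_address(destination)
    matches = [r for r in routes if ip in r[0]]
    return max(matches, key=lambda r: r[0].prefixlen)

# Traffic to vm-private (10.0.2.4) matches both routes; the /24 UDR is more
# specific, so the next hop is the NVA, giving the two-hop tracepath.
to_private = select_route("10.0.2.4")

# Traffic to an address only covered by the system route stays intra-VNet,
# giving the one-hop tracepath in the reverse test.
to_public = select_route("10.0.1.4")
```

This also shows why deleting the user-defined route (or disassociating the table) restores direct subnet-to-subnet routing: with only the system route left, every destination falls back to `VnetLocal`.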
virtual-network | Virtual Network Encryption Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md | Virtual network encryption has the following requirements: - Global Peering is supported in regions where virtual network encryption is supported. -- Traffic to unsupported Virtual Machines is unencrypted. Use Virtual Network Flow Logs to confirm flow encryption between virtual machines. For more information about Virtual Network Flow Logs, see [Virtual Network Flow Logs](/azure/network-watcher/network-watcher-nsg-flow-logging-portal).+- Traffic to unsupported Virtual Machines is unencrypted. Use Virtual Network Flow Logs to confirm flow encryption between virtual machines. For more information, see [VNet flow logs](../network-watcher/vnet-flow-logs-overview.md). - The start/stop of existing virtual machines may be required after enabling encryption in a virtual network. ## Availability |
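The row above recommends using flow logs to confirm flow encryption between virtual machines. A per-flow check like that can be scripted; the sketch below is a simplified stand-in, not the real VNet flow log schema — the field layout of the sample tuples is assumed for illustration, and only the trailing encryption field (`X` for encrypted, `NX`-prefixed values for unencrypted) is modeled.

```python
def flow_is_encrypted(flow_tuple: str) -> bool:
    """Return True when the flow tuple's trailing encryption field is 'X'.

    Assumes the encryption status is the last comma-separated field, with
    'X' meaning encrypted and 'NX...' values meaning not encrypted.
    """
    return flow_tuple.rsplit(",", 1)[-1] == "X"

# Hypothetical, abbreviated tuples -- real flow log records carry more fields.
encrypted_sample = "1708978215,10.0.0.4,10.0.2.4,49152,443,6,O,B,X"
unencrypted_sample = "1708978215,10.0.0.4,10.0.2.4,49152,443,6,O,B,NX_NOT_SUPPORTED"
```

Consult the VNet flow logs schema documentation for the actual field order and the full set of `NX_*` reason codes before using anything like this against real logs.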
virtual-network | Virtual Network Manage Peering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md | Title: Create, change, or delete an Azure virtual network peering description: Learn how to create, change, or delete a virtual network peering. With virtual network peering, you connect virtual networks in the same region and across regions.- tags: azure-resource-manager - Previously updated : 11/14/2022 Last updated : 08/24/2023 # Create, change, or delete a virtual network peering -Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global VNet Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing the [virtual network peering tutorial](tutorial-connect-virtual-networks-portal.md). +Learn how to create, change, or delete a virtual network peering. Virtual network peering enables you to connect virtual networks in the same region and across regions (also known as Global Virtual Network Peering) through the Azure backbone network. Once peered, the virtual networks are still managed as separate resources. If you're new to virtual network peering, you can learn more about it in the [virtual network peering overview](virtual-network-peering-overview.md) or by completing the [virtual network peering tutorial](tutorial-connect-virtual-networks-portal.md). ## Prerequisites If you don't have an Azure account with an active subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). 
Complete one of these tasks before starting the remainder of this article: -- **Portal users**: Sign in to the [Azure portal](https://portal.azure.com) with an Azure account that has the [necessary permissions](#permissions) to work with peerings.+# [**Portal**](#tab/peering-portal) ++Sign in to the [Azure portal](https://portal.azure.com) with an Azure account that has the [necessary permissions](#permissions) to work with peerings. ++# [**PowerShell**](#tab/peering-powershell) -- **PowerShell users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected.+Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/powershell), or run PowerShell locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **PowerShell** if it isn't already selected. - If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings. +If you're running PowerShell locally, use Azure PowerShell module version 1.0.0 or later. 
Run `Get-Module -ListAvailable Az.Network` to find the installed version. If you need to install or upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). Run `Connect-AzAccount` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings. -- **Azure CLI users**: Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected.+# [**Azure CLI**](#tab/peering-cli) - If you're running Azure CLI locally, use Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings. +Either run the commands in the [Azure Cloud Shell](https://shell.azure.com/bash), or run Azure CLI locally from your computer. The Azure Cloud Shell is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account. In the Azure Cloud Shell browser tab, find the **Select environment** dropdown list, then pick **Bash** if it isn't already selected. +If you're running Azure CLI locally, use Azure CLI version 2.0.31 or later. Run `az --version` to find the installed version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). Run `az login` to sign in to Azure with an account that has the [necessary permissions](#permissions) to work with VNet peerings. 
-The account you log into, or connect to Azure with, must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md) that gets assigned the appropriate actions listed in [Permissions](#permissions). +The account you use to connect to Azure must be assigned to the [network contributor](../role-based-access-control/built-in-roles.md#network-contributor) role or to a [custom role](../role-based-access-control/custom-roles.md) that gets assigned the appropriate actions listed in [Permissions](#permissions). ++ ## Create a peering Before creating a peering, familiarize yourself with the [requirements and const # [**Portal**](#tab/peering-portal) -1. In the search box at the top of the Azure portal, enter *Virtual networks* in the search box. When **Virtual networks** appear in the search results, select it. Don't select **Virtual networks (classic)**, as you can't create a peering from a virtual network deployed through the classic deployment model. -- :::image type="content" source="./media/virtual-network-manage-peering/search-vnet.png" alt-text="Screenshot of searching for virtual networks."::: --1. Select the virtual network in the list that you want to create a peering for. - :::image type="content" source="./media/virtual-network-manage-peering/select-vnet.png" alt-text="Screenshot of selecting VNetA from the virtual networks page."::: +1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results. -1. Select **Peerings** under **Settings** and then select **+ Add**. +1. In **Virtual networks**, select the network you want to create a peering for. - :::image type="content" source="./media/virtual-network-manage-peering/vneta-peerings.png" alt-text="Screenshot of peerings page for VNetA."::: +1. Select **Peerings** in **Settings**. +1. Select **+ Add**. 1. 
<a name="add-peering"></a>Enter or select values for the following settings, and then select **Add**. Before creating a peering, familiarize yourself with the [requirements and const | -- | -- | | **This virtual network** | | | Peering link name | The name of the peering on this virtual network. The name must be unique within the virtual network. |- | Traffic to remote virtual network | - Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Allow**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). </br> - Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Selecting the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. 
It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* | - | Traffic forwarded from remote virtual network | Select **Allow (default)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* | - | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. 
Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br> - If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use this virtual network's gateway or Router Server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. 
Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network's gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. If you leave this setting as **None (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. | + | Allow access to remote virtual network | Option is selected by **default**. </br></br> - Select **Allow access to remote virtual network (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. 
Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Selected**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). | + | Allow traffic to remote virtual network | Option is deselected by **default**. </br></br> - Select **Allow traffic to remote virtual network** if you want traffic to flow to the peered virtual network. You can deselect this setting if you have a peering between virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is deselected, traffic doesn't flow between the peered virtual networks. Traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Deselecting the **Allow traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* | + | Allow traffic forwarded from the remote virtual network (allow gateway transit) | Option is deselected by **default**. </br></br> - Select **Allow traffic forwarded from the remote virtual network (allow gateway transit)** if you want traffic **forwarded** by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. </br> For example, consider three virtual networks named **Spoke1**, **Spoke2**, and **Hub**. 
A peering exists between each spoke virtual network and the **Hub** virtual network, but peerings don't exist between the spoke virtual networks. </br> A network virtual appliance is deployed in the **Hub** virtual network. User-defined routes are applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the **hub** virtual network, traffic doesn't flow between the spoke virtual networks because the **hub** isn't forwarding the traffic between the virtual networks. </br> While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* | + | Use remote virtual network gateway or route server | Option is deselected by **default**. </br></br> - Select **Use remote virtual network gateway or route server** </br></br> If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. </br> For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br></br> If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). 
</br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. If you leave this setting as **deselected (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*.| | **Remote virtual network** | | | Peering link name | The name of the peering on the remote virtual network. The name must be unique within the virtual network. | | Virtual network deployment model | Select which deployment model the virtual network you want to peer with was deployed through. |- | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, check this checkbox. Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appeared when you checked the checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. 
To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. | - | Resource ID | This field appears when you check **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). If the subscription is associated to a different Azure Active Directory tenant than the subscription with the virtual network you're creating the peering from, first add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. + | I know my resource ID | If you have read access to the virtual network you want to peer with, leave this checkbox unchecked. If you don't have read access to the virtual network or subscription you want to peer with, select this checkbox. </br> Enter the full resource ID of the virtual network you want to peer with in the **Resource ID** box that appeared when you checked the checkbox. 
</br> The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br></br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br></br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). User permissions must be assigned if the subscription is associated with a different Azure Active Directory tenant than the subscription with the virtual network you're peering. Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. | + | Resource ID | This field appears when you select the **I know my resource ID** checkbox. The resource ID you enter must be for a virtual network that exists in the same, or [supported different](#requirements-and-constraints) Azure [region](https://azure.microsoft.com/regions) as this virtual network. </br> The full resource ID looks similar to `/subscriptions/<Id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>`. </br> You can get the resource ID for a virtual network by viewing the properties for a virtual network. To learn how to view the properties for a virtual network, see [Manage virtual networks](manage-virtual-network.md#view-virtual-networks-and-settings). User permissions must be assigned if the subscription is associated with a different Azure Active Directory tenant than the subscription with the virtual network you're peering.
Add a user from each tenant as a [guest user](../active-directory/external-identities/add-users-administrator.md#add-guest-users-to-the-directory) in the opposite tenant. | Subscription | Select the [subscription](../azure-glossary-cloud-terminology.md#subscription) of the virtual network you want to peer with. One or more subscriptions are listed, depending on how many subscriptions your account has read access to. If you checked the **I know my resource ID** checkbox, this setting isn't available. | | Virtual network | Select the virtual network you want to peer with. You can select a virtual network created through either Azure deployment model. If you want to select a virtual network in a different region, you must select a virtual network in a [supported region](#cross-region). You must have read access to the virtual network for it to be visible in the list. If a virtual network is listed, but grayed out, it may be because the address space for the virtual network overlaps with the address space for this virtual network. If virtual network address spaces overlap, they can't be peered. If you checked the **I know my resource ID** checkbox, this setting isn't available. |- | Traffic to remote virtual network | - Select **Allow (default)** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Allow**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). </br> - Select **Block all traffic to the remote virtual network** if you don't want traffic to flow to the peered virtual network by default. 
You can select this setting if you have peering between two virtual networks but occasionally want to disable default traffic flow between the two. You may find enabling/disabling is more convenient than deleting and re-creating peerings. When this setting is selected, traffic doesn't flow between the peered virtual networks by default; however, traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Selecting the **Block all traffic to remote virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* | - | Traffic forwarded from remote virtual network | Select **Allow (default)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named Spoke1, Spoke2, and Hub. A peering exists between each spoke virtual network and the Hub virtual network, but peerings doesn't exist between the spoke virtual networks. A network virtual appliance gets deployed in the Hub virtual network, and user-defined routes gets applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the hub virtual network, traffic doesn't flow between the spoke virtual networks because the hub isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. 
User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). </br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* | - | Virtual network gateway or Route Server | Select **Use this virtual network's gateway or Route Server**: </br>- If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use this virtual network's gateway or Router Server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use the remote virtual network's gateway or Route Server** selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **None (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network. 
If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a hub (see the hub and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> Select **Use the remote virtual network's gateway or Route Server**: </br>- If you want to allow traffic from this virtual network to flow through a virtual network gateway deployed in the virtual network you're peering with. For example, the virtual network you're peering with has a VPN gateway that enables communication to an on-premises network. Selecting this setting allows traffic from this virtual network to flow through the VPN gateway in the peered virtual network. </br>- If you want this virtual network to use the remote Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). 
</br></br> If you select this setting, the peered virtual network must have a virtual network gateway deployed in it, and must have the **Use this virtual network's gateway or Route Server** setting selected. If you leave this setting as **None (default)**, traffic from this virtual network can still flow to the peered virtual network, but can't flow through a virtual network gateway in the peered virtual network. Only one peering for this virtual network can have this setting enabled. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. | + | Allow access to current virtual network | Option is selected by **default**. </br></br> - Select **Allow access to current virtual network** if you want to enable communication between the two virtual networks through the default `VirtualNetwork` flow. Enabling communication between virtual networks allows resources that are connected to either virtual network to communicate with each other over the Azure private network. The **VirtualNetwork** service tag for network security groups encompasses the virtual network and peered virtual network when this setting is set to **Selected**. To learn more about service tags, see [Azure service tags](./service-tags-overview.md). | + | Allow traffic to current virtual network | Option is selected by **default**. </br></br> - Select **Allow traffic to current virtual network** if you want traffic to flow to the peered virtual network by default. You can deselect this setting if you have a peering between two virtual networks but occasionally want to disable traffic flow between them. You may find enabling/disabling is more convenient than deleting and re-creating peerings. 
When this setting is deselected, traffic doesn't flow between the peered virtual networks. Traffic may still flow if explicitly allowed through a [network security group](./network-security-groups-overview.md) rule that includes the appropriate IP addresses or application security groups. </br></br> **NOTE:** *Deselecting the **Allow traffic to current virtual network** setting only changes the definition of the **VirtualNetwork** service tag. It *doesn't* fully prevent traffic flow across the peer connection, as explained in this setting description.* | + | Allow traffic forwarded from current virtual network (allow gateway transit) | Option is deselected by **default**. </br></br> - Select **Allow traffic forwarded from current virtual network (allow gateway transit)** if you want traffic *forwarded* by a network virtual appliance in the remote virtual network (that didn't originate from the remote virtual network) to flow to this virtual network through a peering. For example, consider three virtual networks named **Spoke1**, **Spoke2**, and **Hub**. A peering exists between each spoke virtual network and the **Hub** virtual network, but peerings don't exist between the spoke virtual networks. A network virtual appliance is deployed in the **Hub** virtual network, and user-defined routes are applied to each spoke virtual network that route traffic between the subnets through the network virtual appliance. If this setting isn't selected for the peering between each spoke virtual network and the **Hub** virtual network, traffic doesn't flow between the spoke virtual networks because the **Hub** isn't forwarding the traffic between the virtual networks. While enabling this capability allows the forwarded traffic through the peering, it doesn't create any user-defined routes or network virtual appliances. User-defined routes and network virtual appliances are created separately. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined).
</br></br> **NOTE:** *You don't need to select this setting if traffic is forwarded between virtual networks through an Azure VPN Gateway.* | + | Use current virtual network gateway or route server | Option is deselected by **default**. </br></br> Select **Use current virtual network gateway or route server**: </br> - If you have a virtual network gateway deployed in this virtual network and want to allow traffic from the peered virtual network to flow through the gateway. For example, this virtual network may be attached to an on-premises network through a virtual network gateway. The gateway can be an ExpressRoute or VPN gateway. Selecting this setting allows traffic from the peered virtual network to flow through the gateway deployed in this virtual network to the on-premises network. </br>- If you have a Route Server deployed in this virtual network and you want the peered virtual network to communicate with the Route Server to exchange routes. For more information, see [Azure Route Server](../route-server/overview.md). </br></br> If you select **Use current virtual network gateway or route server**, the peered virtual network can't have a gateway configured. The peered virtual network must have the **Use remote virtual network gateway or route server** setting selected when setting up the peering from the other virtual network to this virtual network. If you leave this setting as **Deselected (default)**, traffic from the peered virtual network still flows to this virtual network, but can't flow through a virtual network gateway deployed in this virtual network.
If the peering is between a virtual network (Resource Manager) and a virtual network (classic), the gateway must be in the virtual network (Resource Manager).</br></br> In addition to forwarding traffic to an on-premises network, a VPN gateway can forward network traffic between virtual networks that are peered with the virtual network the gateway is in, without the virtual networks needing to be peered with each other. Using a VPN gateway to forward traffic is useful when you want to use a VPN gateway in a **hub** (see the **hub** and spoke example described for **Allow forwarded traffic**) virtual network to route traffic between spoke virtual networks that aren't peered with each other. To learn more about allowing use of a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md). This scenario requires implementing user-defined routes that specify the virtual network gateway as the next hop type. Learn about [user-defined routes](virtual-networks-udr-overview.md#user-defined). You can only specify a VPN gateway as a next hop type in a user-defined route, you can't specify an ExpressRoute gateway as the next hop type in a user-defined route. </br></br> **NOTE:** *You can't use remote gateways if you already have a gateway configured in your virtual network. To learn more about using a gateway for transit, see [Configure a VPN gateway for transit in a virtual network peering](../vpn-gateway/vpn-gateway-peering-gateway-transit.md)*. | - :::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page." 
lightbox="./media/virtual-network-manage-peering/add-peering-expanded.png"::: + :::image type="content" source="./media/virtual-network-manage-peering/add-peering.png" alt-text="Screenshot of peering configuration page."::: > [!NOTE]- > If you use a Virtual Network Gateway to send on-premises traffic transitively to a peered VNet, the peered VNet IP range for the on-premises VPN device must be set to 'interesting' traffic. You may need to add all Azure VNet's CIDR addresses to the Site-2-Site IPSec VPN Tunnel configuration on the on-premises VPN device. CIDR addresses include resources like such as Hub, Spokes, and Point-2-Site IP address pools. Otherwise, your on-premises resources won't be able to communicate with resources in the peered VNet. - > Intersting traffic is communicated through Phase 2 security associations. + > If you use a Virtual Network Gateway to send on-premises traffic transitively to a peered virtual network, the peered virtual network IP range for the on-premises VPN device must be set to 'interesting' traffic. You may need to add all Azure virtual networks' CIDR addresses to the Site-2-Site IPSec VPN Tunnel configuration on the on-premises VPN device. CIDR addresses include resources such as **Hub**, Spokes, and Point-2-Site IP address pools. Otherwise, your on-premises resources won't be able to communicate with resources in the peered virtual network. + > Interesting traffic is communicated through Phase 2 security associations.
The security association creates a dedicated VPN tunnel for each specified subnet. The on-premises and Azure VPN Gateway tier have to support the same number of Site-2-Site VPN tunnels and Azure VNet subnets. Otherwise, your on-premises resources won't be able to communicate with resources in the peered VNet. Consult your on-premises VPN documentation for instructions to create Phase 2 security associations for each specified Azure VNet subnet. -1. Select the **Refresh** button after a few seconds, and the peering status will change from *Updating* to *Connected*. +1. Select the **Refresh** button after a few seconds, and the peering status will change from **Updating** to **Connected**. :::image type="content" source="./media/virtual-network-manage-peering/vnet-peering-connected.png" alt-text="Screenshot of virtual network peering status on peerings page."::: For step-by-step instructions for implementing peering between virtual networks Use [Add-AzVirtualNetworkPeering](/powershell/module/az.network/add-azvirtualnetworkpeering) to create virtual network peerings. ```azurepowershell-interactive-## Place the virtual network VNetA configuration into a variable. ## -$vnetA = Get-AzVirtualNetwork -Name VNetA -ResourceGroupName myResourceGroup -## Place the virtual network VNetB configuration into a variable. ## -$vnetB = Get-AzVirtualNetwork -Name VNetB -ResourceGroupName myResourceGroup -## Create peering from VNetA to VNetB. ## -Add-AzVirtualNetworkPeering -Name VNetAtoVNetB -VirtualNetwork $vnetA -RemoteVirtualNetworkId $vnetB.Id -## Create peering from VNetB to VNetA. ## -Add-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetwork $vnetB -RemoteVirtualNetworkId $vnetA.Id +## Place the virtual network vnet-1 configuration into a variable. ## +$net1 = @{ + Name = 'vnet-1' + ResourceGroupName = 'test-rg' +} +$vnet1 = Get-AzVirtualNetwork @net1 ++## Place the virtual network vnet-2 configuration into a variable.
## +$net2 = @{ + Name = 'vnet-2' + ResourceGroupName = 'test-rg-2' +} +$vnet2 = Get-AzVirtualNetwork @net2 ++## Create peering from vnet-1 to vnet-2. ## +$peer1 = @{ + Name = 'vnet-1-to-vnet-2' + VirtualNetwork = $vnet1 + RemoteVirtualNetworkId = $vnet2.Id +} +Add-AzVirtualNetworkPeering @peer1 ++## Create peering from vnet-2 to vnet-1. ## +$peer2 = @{ + Name = 'vnet-2-to-vnet-1' + VirtualNetwork = $vnet2 + RemoteVirtualNetworkId = $vnet1.Id +} +Add-AzVirtualNetworkPeering @peer2 ``` # [**Azure CLI**](#tab/peering-cli) Add-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetwork $vnetB -RemoteVir 1. Use [az network vnet peering create](/cli/azure/network/vnet/peering#az-network-vnet-peering-create) to create virtual network peerings. ```azurecli-interactive-## Create peering from VNetA to VNetB. ## -az network vnet peering create --name VNetAtoVNetB --vnet-name VNetA --remote-vnet VNetB --resource-group myResourceGroup --allow-vnet-access --allow-forwarded-traffic -## Create peering from VNetB to VNetA. ## -az network vnet peering create --name VNetBtoVNetA --vnet-name VNetB --remote-vnet VNetA --resource-group myResourceGroup --allow-vnet-access --allow-forwarded-traffic +## Create peering from vnet-1 to vnet-2. ## +az network vnet peering create \ + --name vnet-1-to-vnet-2 \ + --vnet-name vnet-1 \ + --remote-vnet vnet-2 \ + --resource-group test-rg \ + --allow-vnet-access \ + --allow-forwarded-traffic ++## Create peering from vnet-2 to vnet-1. ## +az network vnet peering create \ + --name vnet-2-to-vnet-1 \ + --vnet-name vnet-2 \ + --remote-vnet vnet-1 \ + --resource-group test-rg-2 \ + --allow-vnet-access \ + --allow-forwarded-traffic ``` Before changing a peering, familiarize yourself with the [requirements and const # [**Portal**](#tab/peering-portal) -1. Select the virtual network that you would like to view or change its peering settings. +1. In the search box at the top of the Azure portal, enter **Virtual network**.
Select **Virtual networks** in the search results. - :::image type="content" source="./media/virtual-network-manage-peering/vnet-list.png" alt-text="Screenshot of the list of virtual networks in the subscription."::: +1. In **Virtual networks**, select the virtual network whose peering settings you want to view or change. -1. Select **Peerings** under *Settings* and then select the peering you want to view or change settings for. +1. Select **Peerings** in **Settings** and then select the peering you want to view or change settings for. :::image type="content" source="./media/virtual-network-manage-peering/select-peering.png" alt-text="Screenshot of select a peering to change settings from the virtual network."::: Before changing a peering, familiarize yourself with the [requirements and const :::image type="content" source="./media/virtual-network-manage-peering/change-peering-settings.png" alt-text="Screenshot of changing virtual network peering settings."::: - # [**PowerShell**](#tab/peering-powershell) Use [Get-AzVirtualNetworkPeering](/powershell/module/az.network/get-azvirtualnetworkpeering) to list peerings of a virtual network and their settings. ```azurepowershell-interactive-Get-AzVirtualNetworkPeering -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup +$peer = @{ + VirtualNetworkName = 'vnet-1' + ResourceGroupName = 'test-rg' +} +Get-AzVirtualNetworkPeering @peer ``` Use [Set-AzVirtualNetworkPeering](/powershell/module/az.network/set-azvirtualnetworkpeering) to change peering settings. ```azurepowershell-interactive ## Place the virtual network peering configuration into a variable. ##-$peering = Get-AzVirtualNetworkPeering -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup -Name VNetAtoVNetB +$peer = @{ + Name = 'vnet-1-to-vnet-2' + VirtualNetworkName = 'vnet-1' + ResourceGroupName = 'test-rg' +} +$peering = Get-AzVirtualNetworkPeering @peer + # Allow traffic forwarded from remote virtual network.
## $peering.AllowForwardedTraffic = $True+ ## Update the peering with changes made. ## Set-AzVirtualNetworkPeering -VirtualNetworkPeering $peering ``` - # [**Azure CLI**](#tab/peering-cli) Use [az network vnet peering list](/cli/azure/network/vnet/peering#az-network-vnet-peering-list) to list peerings of a virtual network. ```azurecli-interactive-az network vnet peering list --resource-group myResourceGroup --vnet-name VNetA --out table +az network vnet peering list \ + --resource-group test-rg \ + --vnet-name vnet-1 \ + --out table ``` Use [az network vnet peering show](/cli/azure/network/vnet/peering#az-network-vnet-peering-show) to show settings for a specific peering. ```azurecli-interactive-az network vnet peering show --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA +az network vnet peering show \ + --resource-group test-rg \ + --name vnet-1-to-vnet-2 \ + --vnet-name vnet-1 ``` Use [az network vnet peering update](/cli/azure/network/vnet/peering#az-network-vnet-peering-update) to change peering settings. ```azurecli-interactive ## Block traffic forwarded from remote virtual network. ##-az network vnet peering update --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA --set allowForwardedTraffic=false +az network vnet peering update \ + --resource-group test-rg \ + --name vnet-1-to-vnet-2 \ + --vnet-name vnet-1 \ + --set allowForwardedTraffic=false ``` Before deleting a peering, familiarize yourself with the [requirements and const # [**Portal**](#tab/peering-portal) -When a peering between two virtual networks is deleted, traffic can no longer flow between the virtual networks. If you want virtual networks to communicate sometimes, but not always, rather than deleting a peering, you can set the **Traffic to remote virtual network** setting to **Block all traffic to the remote virtual network** instead. You may find disabling and enabling network access easier than deleting and recreating peerings. 
+When a peering between two virtual networks is deleted, traffic can no longer flow between the virtual networks. If you want the virtual networks to communicate sometimes, but not always, rather than deleting a peering, +deselect the **Allow traffic to remote virtual network** setting to block traffic to the remote virtual network. You may find disabling and enabling network access easier than deleting and recreating peerings. -1. Select the virtual network in the list that you want to delete a peering for. +1. In the search box at the top of the Azure portal, enter **Virtual network**. Select **Virtual networks** in the search results. - :::image type="content" source="./media/virtual-network-manage-peering/vnet-list.png" alt-text="Screenshot of selecting a virtual network in the subscription."::: +1. In **Virtual networks**, select the virtual network that you want to delete a peering from. -1. Select **Peerings** under *Settings*. +1. Select **Peerings** in **Settings**. :::image type="content" source="./media/virtual-network-manage-peering/select-peering.png" alt-text="Screenshot of select a peering to delete from the virtual network."::: When a peering between two virtual networks is deleted, traffic can no longer fl Use [Remove-AzVirtualNetworkPeering](/powershell/module/az.network/remove-azvirtualnetworkpeering) to delete virtual network peerings. ```azurepowershell-interactive-## Delete VNetA to VNetB peering. ## -Remove-AzVirtualNetworkPeering -Name VNetAtoVNetB -VirtualNetworkName VNetA -ResourceGroupName myResourceGroup -## Delete VNetB to VNetA peering. ## -Remove-AzVirtualNetworkPeering -Name VNetBtoVNetA -VirtualNetworkName VNetB -ResourceGroupName myResourceGroup +## Delete vnet-1 to vnet-2 peering. ## +$peer1 = @{ + Name = 'vnet-1-to-vnet-2' + VirtualNetworkName = 'vnet-1' + ResourceGroupName = 'test-rg' +} +Remove-AzVirtualNetworkPeering @peer1 ++## Delete vnet-2 to vnet-1 peering.
## +$peer2 = @{ + Name = 'vnet-2-to-vnet-1' + ResourceGroupName = 'test-rg-2' +} +Remove-AzVirtualNetworkPeering @peer2 ``` - # [**Azure CLI**](#tab/peering-cli) Use [az network vnet peering delete](/cli/azure/network/vnet/peering#az-network-vnet-peering-delete) to delete virtual network peerings. ```azurecli-interactive-## Delete VNetA to VNetB peering. ## -az network vnet peering delete --resource-group myResourceGroup --name VNetAtoVNetB --vnet-name VNetA -## Delete VNetB to VNetA peering. ## -az network vnet peering delete --resource-group myResourceGroup --name VNetBtoVNetA --vnet-name VNetB +## Delete vnet-1 to vnet-2 peering. ## +az network vnet peering delete \ + --resource-group test-rg \ + --name vnet-1-to-vnet-2 \ + --vnet-name vnet-1 ++## Delete vnet-2 to vnet-1 peering. ## +az network vnet peering delete \ + --resource-group test-rg-2 \ + --name vnet-2-to-vnet-1 \ + --vnet-name vnet-2 ``` ## Requirements and constraints -- <a name="cross-region"></a>You can peer virtual networks in the same region, or different regions. Peering virtual networks in different regions is also referred to as *Global VNet Peering*.+- <a name="cross-region"></a>You can peer virtual networks in the same region, or different regions. Peering virtual networks in different regions is also referred to as **Global Virtual Network Peering**. -- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region or China cloud regions or Government cloud regions. You can't peer across clouds. For example, a VNet in Azure public cloud can't be peered to a VNet in Microsoft Azure operated by 21Vianet cloud.+- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region or China cloud regions or Government cloud regions. You can't peer across clouds. For example, a virtual network in Azure public cloud can't be peered to a virtual network in Microsoft Azure operated by 21Vianet cloud. 
-- Resources in one virtual network can't communicate with the front-end IP address of a Basic Load Balancer (internal or public) in a globally peered virtual network. Support for Basic Load Balancer only exists within the same region. Support for Standard Load Balancer exists for both, VNet Peering and Global VNet Peering. Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global VNet Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).+- Resources in one virtual network can't communicate with the front-end IP address of a basic load balancer (internal or public) in a globally peered virtual network. Support for basic load balancer only exists within the same region. Support for standard load balancer exists for both Virtual Network Peering and Global Virtual Network Peering. Some services that use a basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global Virtual Network Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers). - You can use remote gateways or allow gateway transit in globally peered virtual networks and locally peered virtual networks. - The virtual networks can be in the same, or different [subscriptions](#next-steps). When you peer virtual networks in different subscriptions, both subscriptions can be associated to the same or different Azure Active Directory tenant. If you don't already have an AD tenant, you can [create one](../active-directory/develop/quickstart-create-new-tenant.md). -- The virtual networks you peer must have non-overlapping IP address spaces.+- The virtual networks you peer must have nonoverlapping IP address spaces.
- You can peer two virtual networks deployed through Resource Manager or a virtual network deployed through Resource Manager with a virtual network deployed through the classic deployment model. You can't peer two virtual networks created through the classic deployment model. If you're not familiar with Azure deployment models, read the [Understand Azure deployment models](../azure-resource-manager/management/deployment-models.md) article. You can use a [VPN Gateway](../vpn-gateway/design.md#V2V) to connect two virtual networks created through the classic deployment model. -- When peering two virtual networks created through Resource Manager, a peering must be configured for each virtual network in the peering. You see one of the following types for peering status:+- When you peer two virtual networks created through Resource Manager, a peering must be configured for each virtual network in the peering. You see one of the following types for peering status: - - *Initiated:* When you create the first peering, its status is *Initiated*. - - *Connected:* When you create the second peering, peering status becomes *Connected* for both peerings. The peering isn't successfully established until the peering status for both virtual network peerings is *Connected*. + - **Initiated:** When you create the first peering, its status is *Initiated*. + + - **Connected:** When you create the second peering, peering status becomes **Connected** for both peerings. The peering isn't successfully established until the peering status for both virtual network peerings is **Connected**. ++- When peering a virtual network created through Resource Manager with a virtual network created through the classic deployment model, you only configure a peering for the virtual network deployed through Resource Manager. You can't configure peering for a virtual network (classic), or between two virtual networks deployed through the classic deployment model. 
When you create the peering from the virtual network (Resource Manager) to the virtual network (Classic), the peering status is **Updating**, then shortly changes to **Connected**. -- When peering a virtual network created through Resource Manager with a virtual network created through the classic deployment model, you only configure a peering for the virtual network deployed through Resource Manager. You can't configure peering for a virtual network (classic), or between two virtual networks deployed through the classic deployment model. When you create the peering from the virtual network (Resource Manager) to the virtual network (Classic), the peering status is *Updating*, then shortly changes to *Connected*. - A peering is established between two virtual networks. Peerings by themselves aren't transitive. If you create peerings between: - VirtualNetwork1 and VirtualNetwork2 + - VirtualNetwork2 and VirtualNetwork3 - There's no connectivity between VirtualNetwork1 and VirtualNetwork3 through VirtualNetwork2. If you want VirtualNetwork1 and VirtualNetwork3 to directly communicate, you have to create an explicit peering between VirtualNetwork1 and VirtualNetwork3, or go through an NVA in the Hub network. To learn more, see [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). + There's no connectivity between VirtualNetwork1 and VirtualNetwork3 through VirtualNetwork2. If you want VirtualNetwork1 and VirtualNetwork3 to directly communicate, you have to create an explicit peering between VirtualNetwork1 and VirtualNetwork3, or go through an NVA in the hub network. To learn more, see [Hub-spoke network topology in Azure](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke). - You can't resolve names in peered virtual networks using default Azure name resolution.
To resolve names in other virtual networks, you must use [Azure Private DNS](../dns/private-dns-overview.md) or a custom DNS server. To learn how to set up your own DNS server, see [Name resolution using your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server). - Resources in peered virtual networks in the same region can communicate with each other with the same latency as if they were within the same virtual network. The network throughput is based on the bandwidth that's allowed for the virtual machine, proportionate to its size. There isn't any extra restriction on bandwidth within the peering. Each virtual machine size has its own maximum network bandwidth. To learn more about maximum network bandwidth for different virtual machine sizes, see [Sizes for virtual machines in Azure](../virtual-machines/sizes.md). - A virtual network can be peered to another virtual network, and also be connected to another virtual network with an Azure virtual network gateway. When virtual networks are connected through both peering and a gateway, traffic between the virtual networks flows through the peering configuration, rather than the gateway.+ - Point-to-Site VPN clients must be downloaded again after virtual network peering has been successfully configured to ensure the new routes are downloaded to the client.+ - There's a nominal charge for ingress and egress traffic that utilizes a virtual network peering. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/virtual-network). 
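The requirement above that both peerings reach the **Connected** state before traffic flows can be verified from the command line. A minimal sketch using the `az network vnet peering show` command already shown in this article (resource, network, and peering names follow the article's `test-rg`/`vnet-1` examples; running it requires a signed-in Azure CLI session):

```azurecli-interactive
## Query only the peeringState property of an existing peering.
## Names (test-rg, vnet-1, vnet-1-to-vnet-2) follow the examples above.
state=$(az network vnet peering show \
  --resource-group test-rg \
  --vnet-name vnet-1 \
  --name vnet-1-to-vnet-2 \
  --query peeringState \
  --output tsv)

## Traffic only flows once BOTH sides of the peering report Connected.
if [ "$state" != "Connected" ]; then
  echo "Peering is in state '$state'; check the reverse peering as well."
fi
```

The `--query`/`--output tsv` combination extracts a single property as plain text, which makes the state easy to test in a script.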
## Permissions az network vnet peering delete --resource-group myResourceGroup --name VNetBtoVN The accounts you use to work with virtual network peering must be assigned to the following roles: - [Network Contributor](../role-based-access-control/built-in-roles.md#network-contributor): For a virtual network deployed through Resource Manager.-- [Classic Network Contributor](../role-based-access-control/built-in-roles.md#classic-network-contributor): For a virtual network deployed through the classic deployment model.++- [Classic Network Contributor](../role-based-access-control/built-in-roles.md#classic-network-contributor): For a virtual network deployed through the classic deployment model. If your account isn't assigned to one of the previous roles, it must be assigned to a [custom role](../role-based-access-control/custom-roles.md) that is assigned the necessary actions from the following table: | Action | Name | | | |-| Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write | Required to create a peering from virtual network A to virtual network B. Virtual network A must be a virtual network (Resource Manager) | -| Microsoft.Network/virtualNetworks/peer/action | Required to create a peering from virtual network B (Resource Manager) to virtual network A | -| Microsoft.ClassicNetwork/virtualNetworks/peer/action | Required to create a peering from virtual network B (classic) to virtual network A | -| Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read | Read a virtual network peering | -| Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete | Delete a virtual network peering | +| **Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write** | Required to create a peering from virtual network A to virtual network B.
Virtual network A must be a virtual network (Resource Manager) | +| **Microsoft.Network/virtualNetworks/peer/action** | Required to create a peering from virtual network B (Resource Manager) to virtual network A | +| **Microsoft.ClassicNetwork/virtualNetworks/peer/action** | Required to create a peering from virtual network B (classic) to virtual network A | +| **Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read** | Read a virtual network peering | +| **Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete** | Delete a virtual network peering | ## Next steps If your account isn't assigned to one of the previous roles, it must be assigned |One Resource Manager, one classic |[Same](create-peering-different-deployment-models.md)| | |[Different](create-peering-different-deployment-models-subscriptions.md)| -- Learn how to create a [hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke)+- Learn how to create a [hub and spoke network topology](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke) + - Create a virtual network peering using [PowerShell](powershell-samples.md) or [Azure CLI](cli-samples.md) sample scripts, or using Azure [Resource Manager templates](template-samples.md)+ - Create and assign [Azure Policy definitions](./policy-reference.md) for virtual networks |
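The custom-role actions listed in the permissions table above can be bundled into a single role definition. A minimal sketch, assuming the standard `az role definition create --role-definition @file` pattern; the role name is illustrative and `<subscription-id>` is a placeholder you must supply:

```azurecli-interactive
## Hypothetical custom role covering the peering actions from the table.
## The role name is illustrative; replace <subscription-id> with your own.
cat > peering-role.json <<'EOF'
{
  "Name": "Virtual Network Peering Operator",
  "Description": "Create, read, and delete virtual network peerings.",
  "Actions": [
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/write",
    "Microsoft.Network/virtualNetworks/peer/action",
    "Microsoft.ClassicNetwork/virtualNetworks/peer/action",
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/read",
    "Microsoft.Network/virtualNetworks/virtualNetworkPeerings/delete"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
EOF

az role definition create --role-definition @peering-role.json
```

Scoping the role to a subscription (or narrower) keeps the peering permissions from granting anything beyond the actions in the table.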
virtual-wan | About Vpn Profile Download | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-vpn-profile-download.md | |
virtual-wan | Azure Vpn Client Optional Configurations Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/azure-vpn-client-optional-configurations-windows.md | description: Learn how to configure the Azure VPN Client optional configuration Previously updated : 07/20/2022 Last updated : 08/24/2023 |
virtual-wan | Certificates Point To Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/certificates-point-to-site.md | description: Learn how to create a self-signed root certificate, export a public Previously updated : 07/06/2022 Last updated : 08/23/2023 |
virtual-wan | Cross Tenant Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/cross-tenant-vnet.md | To use the steps in this article, you must have the following configuration alre * A virtual WAN and virtual hub in your parent subscription * A virtual network configured in a subscription in a different (remote) tenant -Make sure that the virtual network address space in the remote tenant does not overlap with any other address space within any other virtual networks already connected to the parent virtual hub. +Make sure that the virtual network address space in the remote tenant doesn't overlap with any other address space within any other virtual networks already connected to the parent virtual hub. ### Working with Azure PowerShell In the following steps, you'll use commands to add a static route to the virtual ``` - This update command will remove the previous manual configuration route in your routing table. + This update command removes the previous manual configuration route in your routing table. 1. Verify that the static route is established to a next-hop IP address. |
virtual-wan | How To Forced Tunnel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-forced-tunnel.md | description: Learn to configure forced tunneling for P2S VPN in Virtual WAN. Previously updated : 07/12/2022 Last updated : 08/24/2023 An example EAP XML file is the following. ### IKEv2 with RADIUS server authentication with user certificates (EAP-TLS) -To use certificate-based RADIUS authentication (EAP-TLS) to authenticate remote users, use the sample PowerShell script below. Note that in order to import the contents of the VpnSettings and EAP XML files into PowerShell, you will have to navigate to the appropriate directory before running the **Get-Content** PowerShell command. +To use certificate-based RADIUS authentication (EAP-TLS) to authenticate remote users, use the sample PowerShell script below. Note that in order to import the contents of the VpnSettings and EAP XML files into PowerShell, you'll have to navigate to the appropriate directory before running the **Get-Content** PowerShell command. ```azurepowershell-interactive # specify the name of the VPN Connection to be installed on the client |
virtual-wan | Howto Always On Device Tunnel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-always-on-device-tunnel.md | |
virtual-wan | Hub Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md | description: This article answers common questions about virtual hub settings an Previously updated : 07/12/2022 Last updated : 08/24/2023 |
virtual-wan | Install Client Certificates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/install-client-certificates.md | description: Learn how to install client certificates for User VPN P2S certifica Previously updated : 07/06/2022 Last updated : 08/24/2023 |
virtual-wan | Nat Rules Vpn Gateway Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/nat-rules-vpn-gateway-powershell.md | -This configuration uses a flow table to route traffic from an external (host) IP Address to an internal IP address associated with an endpoint inside a virtual network (virtual machine, computer, container, etc.). In order to use NAT, VPN devices need to use any-to-any (wildcard) traffic selectors. Policy Based (narrow) traffic selectors are not supported in conjunction with NAT configuration. +This configuration uses a flow table to route traffic from an external (host) IP Address to an internal IP address associated with an endpoint inside a virtual network (virtual machine, computer, container, etc.). In order to use NAT, VPN devices need to use any-to-any (wildcard) traffic selectors. Policy Based (narrow) traffic selectors aren't supported in conjunction with NAT configuration. ## Prerequisites |
virtual-wan | Openvpn Azure Ad Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-mfa.md | |
virtual-wan | Point To Site Ipsec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/point-to-site-ipsec.md | |
virtual-wan | Quickstart Any To Any Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/quickstart-any-to-any-template.md | description: Learn how to create an any-to-any configuration using an Azure Reso Previously updated : 06/14/2022 Last updated : 08/24/2023 |
virtual-wan | Routing Deep Dive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/routing-deep-dive.md | -[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows creating sophisticated networking topologies easily: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios, it is not required any deep knowledge of how Virtual WAN internal routing works, but in certain situations it can be useful to understand Virtual WAN routing concepts. +[Azure Virtual WAN][virtual-wan-overview] is a networking solution that allows creating sophisticated networking topologies easily: it encompasses routing across Azure regions between Azure VNets and on-premises locations via Point-to-Site VPN, Site-to-Site VPN, [ExpressRoute][er] and [integrated SDWAN appliances][virtual-wan-nva], including the option to [secure the traffic][virtual-wan-secured-hub]. In most scenarios, you don't need deep knowledge of how Virtual WAN internal routing works, but in certain situations it can be useful to understand Virtual WAN routing concepts. -This document explores sample Virtual WAN scenarios that explain some of the behaviors that organizations might encounter when interconnecting their VNets and branches in complex networks. The scenarios shown in this article are by no means design recommendations, they are just sample topologies designed to demonstrate certain Virtual WAN functionalities. +This document explores sample Virtual WAN scenarios that explain some of the behaviors that organizations might encounter when interconnecting their VNets and branches in complex networks.
The scenarios shown in this article are by no means design recommendations, they're just sample topologies designed to demonstrate certain Virtual WAN functionalities. ## Scenario 1: topology with default routing preference The first scenario in this article analyzes a topology with two Virtual WAN hubs In each hub, the VPN and SDWAN appliances serve a dual purpose: on one side they advertise their own individual prefixes (`10.4.1.0/24` over VPN in hub 1 and `10.5.3.0/24` over SDWAN in hub 2), and on the other they advertise the same prefixes as the ExpressRoute circuits in the same region (`10.4.2.0/24` in hub 1 and `10.5.2.0/24` in hub 2). This difference will be used to demonstrate how the [Virtual WAN hub routing preference][virtual-wan-hrp] works. -All VNet and branch connections are associated and propagating to the default route table. Although the hubs are secured (there is an Azure Firewall deployed in every hub), they are not configured to secure private or Internet traffic. Doing so would result in all connections propagating to the `None` route table, which would remove all non-static routes from the `Default` route table and defeat the purpose of this article since the effective route blade in the portal would be almost empty (except for the static routes to send traffic to the Azure Firewall). +All VNet and branch connections are associated and propagating to the default route table. Although the hubs are secured (there is an Azure Firewall deployed in every hub), they aren't configured to secure private or Internet traffic. Doing so would result in all connections propagating to the `None` route table, which would remove all non-static routes from the `Default` route table and defeat the purpose of this article since the effective route blade in the portal would be almost empty (except for the static routes to send traffic to the Azure Firewall). 
:::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1.png" alt-text="Diagram that shows a Virtual WAN design with two ExpressRoute circuits and two V P N branches." ::: The NVA in VNet 12 injects the route 10.1.20.0/22 over BGP, as the Next Hop Type In hub 2 there is an integrated SDWAN Network Virtual Appliance. For more details on supported NVAs for this integration please visit [About NVAs in a Virtual WAN hub][virtual-wan-nva]. Note that the route to the SDWAN branch `10.5.3.0/24` has a next hop of `VPN_S2S_Gateway`. This type of next hop can indicate today either routes coming from an Azure Virtual Network Gateway or from NVAs integrated in the hub. -In hub 2, the route for `10.2.20.0/22` to the indirect spokes VNet 221 (10.2.21.0/24) and VNet 222 (10.2.22.0/24) is installed as a static route, as indicated by the origin `defaultRouteTable`. If you check in the effective routes for hub 1, that route is not there. The reason is because static routes are not propagated via BGP, but need to be configured in every hub. Hence, a static route is required in hub 1 to provide connectivity between the VNets and branches in hub 1 to the indirect spokes in hub 2 (VNets 221 and 222): +In hub 2, the route for `10.2.20.0/22` to the indirect spokes VNet 221 (10.2.21.0/24) and VNet 222 (10.2.22.0/24) is installed as a static route, as indicated by the origin `defaultRouteTable`. If you check in the effective routes for hub 1, that route isn't there. The reason is because static routes aren't propagated via BGP, but need to be configured in every hub. Hence, a static route is required in hub 1 to provide connectivity between the VNets and branches in hub 1 to the indirect spokes in hub 2 (VNets 221 and 222): :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-add-route.png" alt-text="Screenshot that shows how to add a static route to a Virtual WAN hub." 
lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-1-add-route-expanded.png"::: After adding the static route, hub 1 will contain the `10.2.20.0/22` route as we ## Scenario 2: Global Reach and hub routing preference -Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and hub 2 knows the ExpressRoute prefix from circuit 1 (`10.4.2.0/24`), ExpressRoute routes from remote regions are not advertised back to on-premises ExpressRoute links. Consequently, [ExpressRoute Global Reach][er-gr] is required for the ExpressRoute locations to communicate to each other: +Even if hub 1 knows the ExpressRoute prefix from circuit 2 (`10.5.2.0/24`) and hub 2 knows the ExpressRoute prefix from circuit 1 (`10.4.2.0/24`), ExpressRoute routes from remote regions aren't advertised back to on-premises ExpressRoute links. Consequently, [ExpressRoute Global Reach][er-gr] is required for the ExpressRoute locations to communicate to each other: :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-2.png" alt-text="Diagram showing a Virtual WAN design with two ExpressRoute circuits with Global Reach and two V P N branches."::: Hub 2 will show a similar table for the effective routes, where the VNets and br ## Scenario 3: Cross-connecting the ExpressRoute circuits to both hubs -In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it is often desirable connecting a single ExpressRoute circuit to multiple Virtual WAN hubs in a topology some times described as "bow tie", as the following topology shows: +In order to add direct links between the Azure regions and the on-premises locations connected via ExpressRoute, it's often desirable connecting a single ExpressRoute circuit to multiple Virtual WAN hubs in a topology some times described as "bow tie", as the following topology shows: :::image type="content" 
source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3.png" alt-text="Diagram that shows a Virtual WAN design with two ExpressRoute circuits in bow tie with Global Reach and two V P N branches." ::: Virtual WAN shows that both circuits are connected to both hubs: :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-circuits.png" alt-text="Screenshot of Virtual WAN showing both ExpressRoute circuits connected to both virtual hubs." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-circuits-expanded.png"::: -Going back to the default hub routing preference of ExpressRoute, the routes to remote branches and VNets in hub 1 will show again ExpressRoute as next hop. Although this time the reason is not Global Reach, but the fact that the ExpressRoute circuits bounce back the route advertisements they get from one hub to the other. For example, the effective routes of hub 1 with hub routing preference of ExpressRoute are as follows: +Going back to the default hub routing preference of ExpressRoute, the routes to remote branches and VNets in hub 1 will again show ExpressRoute as the next hop. This time, however, the reason isn't Global Reach but the fact that the ExpressRoute circuits bounce back the route advertisements they get from one hub to the other. For example, the effective routes of hub 1 with hub routing preference of ExpressRoute are as follows: :::image type="content" source="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-er-hub-1.png" alt-text="Screenshot of effective routes in Virtual hub 1 in bow tie design with Global Reach and routing preference ExpressRoute." lightbox="./media/routing-deep-dive/virtual-wan-routing-deep-dive-scenario-3-er-hub-1-expanded.png"::: |
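The hub routing preference these scenarios toggle between can also be changed per hub from the command line. A sketch, assuming the `--hub-routing-preference` parameter of `az network vhub update` (available in recent CLI versions; verify with `az network vhub update --help`). The resource group and hub names are placeholders, not values from the scenarios:

```azurecli-interactive
## Change a virtual hub's routing preference (placeholder names).
## --hub-routing-preference typically accepts ExpressRoute, VpnGateway,
## or ASPath; confirm the supported values for your CLI version.
az network vhub update \
  --resource-group my-rg \
  --name hub1 \
  --hub-routing-preference ASPath
```

After changing the preference, re-inspect the hub's effective routes to confirm which next hops won the route selection.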
virtual-wan | Scenario 365 Expressroute Private | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-365-expressroute-private.md | |
virtual-wan | Scenario Any To Any | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-any-to-any.md | |
virtual-wan | Scenario Isolate Vnets Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets-custom.md | |
virtual-wan | Scenario Isolate Vnets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-isolate-vnets.md | When working with Virtual WAN virtual hub routing, there are quite a few availab ## <a name="design"></a>Design -In this scenario, the workload within a certain VNet remains isolated and is not able to communicate with other VNets. However, the VNets are required to reach all branches (VPN, ER, and User VPN). In order to figure out how many route tables will be needed, you can build a connectivity matrix. For this scenario it will look like the following table, where each cell represents whether a source (row) can communicate to a destination (column): +In this scenario, the workload within a certain VNet remains isolated and isn't able to communicate with other VNets. However, the VNets are required to reach all branches (VPN, ER, and User VPN). In order to figure out how many route tables will be needed, you can build a connectivity matrix. For this scenario it will look like the following table, where each cell represents whether a source (row) can communicate to a destination (column): | From | To | *VNets* | *Branches* | | -- | -- | - | | In this scenario, the workload within a certain VNet remains isolated and is not Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination prefix (the "To" side of the flow, the column headers in italics). In this scenario there are no firewalls or Network Virtual Appliances, so communications flows directly over Virtual WAN (hence the word "Direct" in the table). -This connectivity matrix gives us two different row patterns, which translate to two route tables. Virtual WAN already has a Default route table, so we will need another route table. For this example, we will name the route table **RT_VNET**. 
+This connectivity matrix gives us two different row patterns, which translate to two route tables. Virtual WAN already has a Default route table, so we'll need another route table. For this example, we'll name the route table **RT_VNET**. -VNets will be associated to this **RT_VNET** route table. Because they need connectivity to branches, branches will need to propagate to **RT_VNET** (otherwise the VNets would not learn the branch prefixes). Since the branches are always associated to the Default route table, VNets will need to propagate to the Default route table. As a result, this is the final design: +VNets will be associated to this **RT_VNET** route table. Because they need connectivity to branches, branches need to propagate to **RT_VNET** (otherwise the VNets wouldn't learn the branch prefixes). Since the branches are always associated to the Default route table, VNets need to propagate to the Default route table. As a result, this is the final design: * Virtual networks: * Associated route table: **RT_VNET** In order to configure this scenario, take the following steps into consideration 2. When you create the **RT_VNet** route table, configure the following settings: * **Association**: Select the VNets you want to isolate.- * **Propagation**: Select the option for branches, implying branch(VPN/ER/P2S) connections will propagate routes to this route table. + * **Propagation**: Select the option for branches, implying branch(VPN/ER/P2S) connections propagate routes to this route table. :::image type="content" source="./media/routing-scenarios/isolated/isolated-vnets.png" alt-text="Isolated VNets"::: |
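The route-table step above (creating **RT_VNET** alongside the Default route table) can be sketched with the Azure CLI. This assumes the `az network vhub route-table create` command; the resource group and hub names are placeholders, and the association/propagation settings are configured afterwards on each VNet connection, so check `az network vhub connection create --help` for the exact parameters in your CLI version:

```azurecli-interactive
## Create the custom route table from the scenario (RT_VNET is the
## article's name; my-rg and hub1 are placeholders).
az network vhub route-table create \
  --resource-group my-rg \
  --vhub-name hub1 \
  --name RT_VNET

## Association (isolated VNets -> RT_VNET) and propagation
## (branches -> RT_VNET, VNets -> Default) are then set per connection;
## parameter names vary by CLI version, so consult --help first.
```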
virtual-wan | Scenario Shared Services Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/scenario-shared-services-vnet.md | We can use a connectivity matrix to summarize the requirements of this scenario: Each of the cells in the previous table describes whether a Virtual WAN connection (the "From" side of the flow, the row headers) communicates with a destination (the "To" side of the flow, the column headers in italics). In this scenario there are no firewalls or Network Virtual Appliances, so communication flows directly over Virtual WAN (hence the word "Direct" in the table). -Similarly to the [Isolated VNet scenario](scenario-isolate-vnets.md), this connectivity matrix gives us two different row patterns, which translate to two route tables (the shared services VNets and the branches have the same connectivity requirements). Virtual WAN already has a Default route table, so we will need another custom route table, which we will call **RT_SHARED** in this example. +Similarly to the [Isolated VNet scenario](scenario-isolate-vnets.md), this connectivity matrix gives us two different row patterns, which translate to two route tables (the shared services VNets and the branches have the same connectivity requirements). Virtual WAN already has a Default route table, so we'll need another custom route table, which we will call **RT_SHARED** in this example. -VNets will be associated to the **RT_SHARED** route table. Because they need connectivity to branches and to the shared service VNets, the shared service VNet and branches will need to propagate to **RT_SHARED** (otherwise the VNets would not learn the branch and shared VNet prefixes). Because the branches are always associated to the Default route table, and the connectivity requirements are the same for shared services VNets, we will associate the shared service VNets to the Default route table too. +VNets will be associated to the **RT_SHARED** route table. 
Because they need connectivity to branches and to the shared service VNets, the shared service VNet and branches will need to propagate to **RT_SHARED** (otherwise the VNets wouldn't learn the branch and shared VNet prefixes). Because the branches are always associated to the Default route table, and the connectivity requirements are the same for shared services VNets, we'll associate the shared service VNets to the Default route table too. As a result, this is the final design: To configure the scenario, consider the following steps: 2. Create a custom route table. In the example, we refer to the route table as **RT_SHARED**. For steps to create a route table, see [How to configure virtual hub routing](how-to-virtual-hub-routing.md). Use the following values as a guideline: * **Association**- * For **VNets *except* the shared services VNet**, select the VNets to isolate. This will imply that all these VNets (except the shared services VNet) will be able to reach destination based on the routes of RT_SHARED route table. + * For **VNets *except* the shared services VNet**, select the VNets to isolate. This implies that all these VNets (except the shared services VNet) will be able to reach destinations based on the routes in the RT_SHARED route table. * **Propagation** * For **Branches**, propagate routes to this route table, in addition to any other route tables you may have already selected. Because of this step, the RT_SHARED route table will learn routes from all branch connections (VPN/ER/User VPN). * For **VNets**, select the **shared services VNet**. Because of this step, the RT_SHARED route table will learn routes from the shared services VNet connection. -This will result in the routing configuration shown in the following figure: +This results in the routing configuration shown in the following figure: :::image type="content" source="./media/routing-scenarios/shared-service-vnet/shared-services.png" alt-text="Diagram for shared services VNet." 
lightbox="./media/routing-scenarios/shared-service-vnet/shared-services.png"::: |
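The rule of thumb used in both routing scenarios — the number of route tables needed equals the number of distinct row patterns in the connectivity matrix — can be sketched as follows. The matrix values are paraphrased from the shared-services scenario (branches and the shared services VNet reach everything; isolated VNets reach everything except each other); this is an illustrative sketch, not tooling:

```python
# Illustrative: count distinct row patterns in a connectivity matrix.
# Each distinct pattern corresponds to one route table (Default + customs).
matrix = {
    "Branches":   {"Branches": "Direct", "SharedVNet": "Direct", "VNets": "Direct"},
    "SharedVNet": {"Branches": "Direct", "SharedVNet": "Direct", "VNets": "Direct"},
    "VNets":      {"Branches": "Direct", "SharedVNet": "Direct", "VNets": "-"},
}

# Deduplicate rows: identical connectivity requirements share a route table.
patterns = {tuple(sorted(row.items())) for row in matrix.values()}
print(len(patterns))  # 2 -> the Default table plus one custom table (RT_SHARED)
```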
virtual-wan | Sd Wan Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/sd-wan-connectivity-architecture.md | |
virtual-wan | Virtual Wan Expressroute About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-about.md | This article provides details on ExpressRoute connections in Azure Virtual WAN. A virtual hub can contain gateways for site-to-site, ExpressRoute, or point-to-site functionality. Users using private connectivity in Virtual WAN can connect their ExpressRoute circuits to an ExpressRoute gateway in a Virtual WAN hub. For a tutorial on connecting an ExpressRoute circuit to an Azure Virtual WAN hub, see [How to Connect an ExpressRoute Circuit to Virtual WAN](virtual-wan-expressroute-portal.md). ## ExpressRoute circuit SKUs supported in Virtual WAN-The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus). +The following ExpressRoute circuit SKUs can be connected to the hub gateway: Local, Standard, and Premium. To learn more about different SKUs, visit [ExpressRoute Circuit SKUs](../expressroute/expressroute-faqs.md#what-is-the-connectivity-scope-for-different-expressroute-circuit-skus). ExpressRoute Local circuits can only be connected to ExpressRoute gateways in the same region, but they can still access resources in spoke virtual networks located in other regions. ## ExpressRoute performance |
virtual-wan | Virtual Wan Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-faq.md | Virtual WAN comes in two flavors: Basic and Standard. In Basic Virtual WAN, hubs ### How are Availability Zones and resiliency handled in Virtual WAN? -Virtual WAN is a collection of hubs and services made available inside the hub. The user can have as many Virtual WAN per their need. In a Virtual WAN hub, there are multiple services like VPN, ExpressRoute etc. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region becomes an Availability Zone after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there is resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions. +Virtual WAN is a collection of hubs and services made available inside the hub. Users can have as many Virtual WANs as they need. In a Virtual WAN hub, there are multiple services, such as VPN and ExpressRoute. Each of these services is automatically deployed across Availability Zones (except Azure Firewall), if the region supports Availability Zones. If a region starts supporting Availability Zones after the initial deployment in the hub, the user can recreate the gateways, which will trigger an Availability Zone deployment. All gateways are provisioned in a hub as active-active, implying there's resiliency built in within a hub. Users can connect to multiple hubs if they want resiliency across regions. -Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. 
There is currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall. +Currently, Azure Firewall can be deployed to support Availability Zones using Azure Firewall Manager Portal, [PowerShell](/powershell/module/az.network/new-azfirewall#example-6--create-a-firewall-with-no-rules-and-with-availability-zones) or CLI. There's currently no way to configure an existing Firewall to be deployed across availability zones. You'll need to delete and redeploy your Azure Firewall. While the concept of Virtual WAN is global, the actual Virtual WAN resource is Resource Manager-based and deployed regionally. If the virtual WAN region itself were to have an issue, all hubs in that virtual WAN will continue to function as is, but the user won't be able to create new hubs until the virtual WAN region is available. A Network Virtual Appliance (NVA) can be deployed inside a virtual hub. For step No. The spoke VNet can't have a virtual network gateway if it's connected to the virtual hub. +### Can a spoke VNet have an Azure Route Server? ++No. The spoke VNet can't have a Route Server if it's connected to the virtual WAN hub. + ### Is there support for BGP in VPN connectivity? Yes, BGP is supported. When you create a VPN site, you can provide the BGP parameters in it. This will imply that any connections created in Azure for that site will be enabled for BGP. A simple configuration of one Virtual WAN with one hub and one vpnsite can be cr ### Can spoke VNets connected to a virtual hub communicate with each other (V2V Transit)? -Yes. Standard Virtual WAN supports VNet-to-VNet transitive connectivity via the Virtual WAN hub that the VNets are connected to. 
In Virtual WAN terminology, we refer to these paths as "local Virtual WAN VNet transit" for VNets connected to a Virtual Wan hub within a single region, and "global Virtual WAN VNet transit" for VNets connected through multiple Virtual WAN hubs across two or more regions. +Yes. Standard Virtual WAN supports VNet-to-VNet transitive connectivity via the Virtual WAN hub that the VNets are connected to. In Virtual WAN terminology, we refer to these paths as "local Virtual WAN VNet transit" for VNets connected to a Virtual WAN hub within a single region, and "global Virtual WAN VNet transit" for VNets connected through multiple Virtual WAN hubs across two or more regions. In some scenarios, spoke VNets can also be directly peered with each other using [virtual network peering](../virtual-network/virtual-network-peering-overview.md) in addition to local or global Virtual WAN VNet transit. In this case, VNet Peering takes precedence over the transitive connection via the Virtual WAN hub. If a virtual hub learns the same route from multiple remote hubs, the order in w * **AS Path** 1. Prefer routes with the shortest BGP AS-Path length irrespective of the source of the route advertisements. - Note: In vWANs with multiple remote virtual hubs, If there is a tie between remote routes and remote site-to-site VPN routes. Remote site-to-site VPN will be preferred. + Note: In vWANs with multiple remote virtual hubs, if there's a tie between remote routes and remote site-to-site VPN routes, remote site-to-site VPN routes are preferred. 2. Prefer routes from local virtual hub connections over routes learned from remote virtual hub. 3. If there are routes from both ExpressRoute and Site-to-site VPN connections: Transit between ER-to-ER is always via Global reach. 
Virtual hub gateways are de ### Is there a concept of weight in Azure Virtual WAN ExpressRoute circuits or VPN connections? -When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There is no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub. +When multiple ExpressRoute circuits are connected to a virtual hub, routing weight on the connection provides a mechanism for the ExpressRoute in the virtual hub to prefer one circuit over the other. There's no mechanism to set a weight on a VPN connection. Azure always prefers an ExpressRoute connection over a VPN connection within a single hub. ### Does Virtual WAN prefer ExpressRoute over VPN for traffic egressing Azure? The current behavior is to prefer the ExpressRoute circuit path over hub-to-hub ### When there's an ExpressRoute circuit connected as a bow-tie to a Virtual WAN hub and a non Virtual WAN VNet, what is the path for the non Virtual WAN VNet to reach the Virtual WAN hub? -The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It is recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE). +The current behavior is to prefer the ExpressRoute circuit path for non Virtual WAN VNet to Virtual WAN connectivity. It's recommended that the customer [create a Virtual Network connection](howto-connect-vnet-hub.md) to directly connect the non Virtual WAN VNet to the Virtual WAN hub. 
Afterwards, VNet to VNet traffic will traverse through the Virtual WAN router instead of the ExpressRoute path (which traverses through the Microsoft Enterprise Edge routers/MSEE). ### Can hubs be created in different resource groups in Virtual WAN? Yes. For a list of Managed Service Provider (MSP) solutions enabled via Azure Ma Both Azure Virtual WAN hub and Azure Route Server provide Border Gateway Protocol (BGP) peering capabilities that can be utilized by NVAs (Network Virtual Appliance) to advertise IP addresses from the NVA to the user's Azure virtual networks. The deployment options differ in the sense that Azure Route Server is typically deployed by a self-managed customer hub VNet whereas Azure Virtual WAN provides a zero-touch fully meshed hub service to which customers connect their various spokes end points (Azure VNet, on-premises branches with site-to-site VPN or SDWAN, remote users with point-to-site/Remote User VPN and Private connections with ExpressRoute) and enjoy BGP Peering for NVAs deployed in spoke VNet along with other vWAN capabilities such as transit connectivity for VNet-to-VNet, transit connectivity between VPN and ExpressRoute, custom/advanced routing, custom route association and propagation, routing intent/policies for no hassle inter-region security, Secure Hub/Azure firewall etc. For more details about Virtual WAN BGP Peering, please see [How to peer BGP with a virtual hub](scenario-bgp-peering-hub.md). -### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure Portal? +### If I'm using a third-party security provider (Zscaler, iBoss or Checkpoint) to secure my internet traffic, why don't I see the VPN site associated to the third-party security provider in the Azure portal? 
-When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure Portal. +When you choose to deploy a security partner provider to protect Internet access for your users, the third-party security provider creates a VPN site on your behalf. Because the third-party security provider is created automatically by the provider and isn't a user-created VPN site, this VPN site won't show up in the Azure portal. For more information regarding the available third-party security provider options and how to set this up, see [Deploy a security partner provider](../firewall-manager/deploy-trusted-security-partner.md). Yes, BGP communities generated by on-premises will be preserved in Virtual WAN. ### <a name="why-am-i-seeing-a-message-and-button-called-update-router-to-latest-software-version-in-portal."></a>Why am I seeing a message and button called "Update router to latest software version" in portal? -Azure-wide Cloud Services-based infrastructure is deprecating. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via Azure Portal. If the button is not visible, please open a support case. 
+Azure-wide Cloud Services-based infrastructure is being deprecated. As a result, the Virtual WAN team has been working on upgrading virtual routers from their current Cloud Services infrastructure to Virtual Machine Scale Sets based deployments. **All newly created Virtual Hubs will automatically be deployed on the latest Virtual Machine Scale Sets based infrastructure.** If you navigate to your Virtual WAN hub resource and see this message and button, then you can upgrade your router to the latest version by clicking on the button. If you would like to take advantage of new Virtual WAN features, such as [BGP peering with the hub](create-bgp-peering-hub-portal.md), you'll have to update your virtual hub router via the Azure portal. If the button isn't visible, please open a support case. -You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks are not deleted. Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update. +You'll only be able to update your virtual hub router if all the resources (gateways/route tables/VNet connections) in your hub are in a succeeded state. Please make sure all your spoke virtual networks are in active/enabled subscriptions and that your spoke virtual networks aren't deleted. 
Additionally, as this operation requires deployment of new virtual machine scale sets based virtual hub routers, you'll face an expected downtime of 1-2 minutes for VNet-to-VNet traffic through the same hub and 5-7 minutes for all other traffic flows through the hub. Within a single Virtual WAN resource, hubs should be updated one at a time instead of updating multiple at the same time. When the Router Version says "Latest", then the hub is done updating. There will be no routing behavior changes after this update. There are several limitations with the virtual hub router upgrade: -* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you will have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you will also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON. --* If your Virtual WAN hub is connected to a combination of spoke virtual networks in the same region as the hub and a separate region than the hub, then you may experience a lack of connectivity to these respective spoke virtual networks. To resolve this and restore connectivity to these virtual networks, you can modify any of the virtual network connection properties (For example, you can modify the connection to propagate to a dummy label). We are actively working on removing this requirement. +* If you have already configured BGP peering between your Virtual WAN hub and an NVA in a spoke VNet, then you'll have to [delete and then recreate the BGP peer](create-bgp-peering-hub-portal.md). Since the virtual hub router's IP addresses change after the upgrade, you'll also have to reconfigure your NVA to peer with the virtual hub router's new IP addresses. 
These IP addresses are represented as the "virtualRouterIps" field in the Virtual Hub's Resource JSON. -* Your Virtual WAN hub router can not currently be upgraded if you have a network virtual appliance in the virtual hub. We are actively working on removing this limitation. +* Your Virtual WAN hub router can't currently be upgraded if you have a network virtual appliance in the virtual hub. We're actively working on removing this limitation. * If your Virtual WAN hub is connected to more than 100 spoke virtual networks, then the upgrade may fail. -If the update fails for any reason, your hub will be auto recovered to the old version to ensure there is still a working setup. +If the update fails for any reason, your hub will be automatically recovered to the old version to ensure there's still a working setup. Additional things to note: * The user will need to have an **owner** or **contributor** role to see an accurate status of the hub router version. If a user is assigned a **reader** role to the Virtual WAN resource and subscription, then Azure portal will display to that user that the hub router needs to be upgraded to the latest version, even if the hub is already on the latest version. -* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you will need to update your virtual network connection after the virtual hub upgrade (Ex: you can configure the virtual network connection to propagate to a dummy label). +* If you change your spoke virtual network's subscription status from disabled to enabled and then upgrade the virtual hub, you'll need to update your virtual network connection after the virtual hub upgrade (for example, you can configure the virtual network connection to propagate to a dummy label). ### Is there a route limit for OpenVPN clients connecting to an Azure P2S VPN gateway? |
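The route-preference order described in the FAQ above can be approximated with a sort key. This is a much-simplified, illustrative model — not the actual Virtual WAN implementation, and the route fields below are hypothetical — covering three of the stated rules: shortest AS path wins, local-hub connections beat remote-hub routes, and ExpressRoute beats VPN within a hub:

```python
# Simplified tie-breaker for candidate routes to the same prefix
# (illustrative model of the FAQ's stated preference order).
def route_key(route):
    return (
        len(route["as_path"]),           # 1) shorter BGP AS path preferred
        0 if route["local_hub"] else 1,  # 2) local hub connection preferred
        0 if route["source"] == "ExpressRoute" else 1,  # 3) ER over VPN
    )

routes = [
    {"name": "remote-er", "as_path": ["65001", "65002"], "local_hub": False, "source": "ExpressRoute"},
    {"name": "local-vpn", "as_path": ["65001", "65002"], "local_hub": True,  "source": "VPN"},
    {"name": "local-er",  "as_path": ["65001", "65002"], "local_hub": True,  "source": "ExpressRoute"},
]

best = min(routes, key=route_key)
print(best["name"])  # local-er: AS paths tie, so local hub + ExpressRoute wins
```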
virtual-wan | Virtual Wan Point To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md | The instructions you follow depend on the authentication method you want to use. [!INCLUDE [Point to site page](../../includes/virtual-wan-p2s-gateway-include.md)] + ## <a name="download"></a>Generate client configuration files When you connect to VNet using User VPN (P2S), you can use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients. |
virtual-wan | Virtual Wan Point To Site Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-powershell.md | |
virtual-wan | Virtual Wan Route Table Nva Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva-portal.md | Verify that you have met the following criteria: * A private IP address must be assigned to the NVA network interface. - * The NVA is not deployed in the virtual hub. It must be deployed in a separate virtual network. + * The NVA isn't deployed in the virtual hub. It must be deployed in a separate virtual network. * The NVA virtual network may have one or many virtual networks connected to it. In this article, we refer to the NVA virtual network as an 'indirect spoke VNet'. These virtual networks can be connected to the NVA VNet by using VNet peering. The VNet Peering links are depicted by black arrows in the above figure between VNet 1, VNet 2, and NVA VNet. * You have created two virtual networks. They will be used as spoke VNets. Verify that you have met the following criteria: * Ensure there are no virtual network gateways in any of the VNets. - * The VNets do not require a gateway subnet. + * The VNets don't require a gateway subnet. ## <a name="signin"></a>1. Sign in Repeat the following procedure for each virtual network that you want to connect * **Connection name** - Name your connection. * **Hubs** - Select the hub you want to associate with this connection. * **Subscription** - Verify the subscription.- * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network cannot have an already existing virtual network gateway. + * **Virtual network** - Select the virtual network you want to connect to this hub. The virtual network can't have an already existing virtual network gateway. 4. Click **OK** to create the connection. ## Next steps |
virtual-wan | Virtual Wan Route Table Nva | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-route-table-nva.md | Verify that you have met the following criteria: * You have a Network Virtual Appliance (NVA). This is third-party software of your choice that is typically provisioned from Azure Marketplace in a virtual network. * You have a private IP assigned to the NVA network interface. -* The NVA cannot be deployed in the virtual hub. It must be deployed in a separate VNet. For this article, the NVA VNet is referred to as the 'DMZ VNet'. +* The NVA can't be deployed in the virtual hub. It must be deployed in a separate VNet. For this article, the NVA VNet is referred to as the 'DMZ VNet'. * The 'DMZ VNet' may have one or many virtual networks connected to it. In this article, this VNet is referred to as the 'Indirect spoke VNet'. These VNets can be connected to the DMZ VNet using VNet peering. * Verify that you have 2 VNets already created. These will be used as spoke VNets. For this article, the VNet spoke address spaces are 10.0.2.0/24 and 10.0.3.0/24. If you need information on how to create a VNet, see [Create a virtual network using PowerShell](../virtual-network/quick-create-powershell.md). * Ensure there are no virtual network gateways in any VNets. ## <a name="signin"></a>1. Sign in -Make sure you install the latest version of the Resource Manager PowerShell cmdlets. For more information about installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). This is important because earlier versions of the cmdlets do not contain the current values that you need for this exercise. +Make sure you install the latest version of the Resource Manager PowerShell cmdlets. For more information about installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/install-azure-powershell). 
This is important because earlier versions of the cmdlets don't contain the current values that you need for this exercise. -1. Open your PowerShell console with elevated privileges, and sign in to your Azure account. This cmdlet prompts you for the sign-in credentials. After signing in, it downloads your account settings so that they are available to Azure PowerShell. +1. Open your PowerShell console with elevated privileges, and sign in to your Azure account. This cmdlet prompts you for the sign-in credentials. After signing in, it downloads your account settings so that they're available to Azure PowerShell. ```powershell Connect-AzAccount |
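Before creating the spoke VNets in the NVA walkthrough above, it's worth sanity-checking that their address spaces (10.0.2.0/24 and 10.0.3.0/24 in this article) are disjoint. A minimal sketch using Python's standard `ipaddress` module — an offline check, not part of the documented procedure:

```python
# Verify the walkthrough's spoke VNet prefixes don't overlap (illustrative).
from ipaddress import ip_network
from itertools import combinations

spokes = [ip_network("10.0.2.0/24"), ip_network("10.0.3.0/24")]
overlaps = [(a, b) for a, b in combinations(spokes, 2) if a.overlaps(b)]
print(overlaps)  # [] -> the spoke address spaces are disjoint
```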
virtual-wan | Vpn Client Certificate Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-client-certificate-windows.md | description: Learn how to configure VPN clients on Windows computers for User VP Previously updated : 07/25/2022 Last updated : 08/24/2023 |
virtual-wan | Vpn Over Expressroute | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-over-expressroute.md | This article shows you how to use Azure Virtual WAN to establish an IPsec/IKE VP The following diagram shows an example of VPN connectivity over ExpressRoute private peering: The diagram shows a network within the on-premises network connected to the Azure hub VPN gateway over ExpressRoute private peering. The connectivity establishment is straightforward: In both of these examples, Azure will send traffic to 10.0.1.0/24 over the VPN c The following Azure resources and the corresponding on-premises configurations must be in place before you proceed: -- An Azure virtual WAN-- A virtual WAN hub with an [ExpressRoute gateway](virtual-wan-expressroute-portal.md) and a [VPN gateway](virtual-wan-site-to-site-portal.md)+- An Azure virtual WAN. +- A virtual WAN hub with an [ExpressRoute gateway](virtual-wan-expressroute-portal.md) and a [VPN gateway](virtual-wan-site-to-site-portal.md). For the steps to create an Azure virtual WAN and a hub with an ExpressRoute association, see [Create an ExpressRoute association using Azure Virtual WAN](virtual-wan-expressroute-portal.md). For the steps to create a VPN gateway in the virtual WAN, see [Create a site-to-site connection using Azure Virtual WAN](virtual-wan-site-to-site-portal.md). The site resource is the same as the non-ExpressRoute VPN sites for a virtual WA > The IP address for the on-premises VPN device *must* be part of the address prefixes advertised to the virtual WAN hub via Azure ExpressRoute private peering. > -1. Go to the Azure portal in your browser. -1. Select the hub that you created. On the virtual WAN hub page, under **Connectivity**, select **VPN sites**. -1. On the **VPN sites** page, select **+Create site**. -1. On the **Create site** page, fill in the following fields: - * **Subscription**: Verify the subscription. 
- * **Resource Group**: Select or create the resource group that you want to use. - * **Region**: Enter the Azure region for the VPN site resource. - * **Name**: Enter the name by which you want to refer to your on-premises site. - * **Device vendor**: Enter the vendor of the on-premises VPN device. +1. Go to **YourVirtualWAN > VPN sites** and create a site for your on-premises network. For basic steps, see [Create a site](virtual-wan-site-to-site-portal.md). Keep in mind the following settings values: + * **Border Gateway Protocol**: Select "Enable" if your on-premises network uses BGP. * **Private address space**: Enter the IP address space that's located on your on-premises site. Traffic destined for this address space is routed to the on-premises network via the VPN gateway.- * **Hubs**: Select one or more hubs to connect this VPN site. The selected hubs must have VPN gateways already created. -1. Select **Next: Links >** for the VPN link settings: - * **Link Name**: The name by which you want to refer to this connection. ++1. Select **Links** to add information about the physical links. Keep in mind the following settings information: + * **Provider Name**: The name of the internet service provider for this site. For an ExpressRoute on-premises network, it's the name of the ExpressRoute service provider. * **Speed**: The speed of the internet service link or ExpressRoute circuit. * **IP address**: The public IP address of the VPN device that resides on your on-premises site. Or, for ExpressRoute on-premises, it's the private IP address of the VPN device via ExpressRoute. - If BGP is enabled, it will apply to all connections created for this site in Azure. Configuring BGP on a virtual WAN is equivalent to configuring BGP on an Azure VPN gateway. - - Your on-premises BGP peer address *must not* be the same as the IP address of your VPN to the device or the virtual network address space of the VPN site. 
Use a different IP address on the VPN device for your BGP peer IP. It can be an address assigned to the loopback interface on the device. However, it *can't* be an APIPA (169.254.*x*.*x*) address. Specify this address in the corresponding VPN site that represents the location. For BGP prerequisites, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md). + * If BGP is enabled, it applies to all connections created for this site in Azure. Configuring BGP on a virtual WAN is equivalent to configuring BGP on an Azure VPN gateway. ++ * Your on-premises BGP peer address *must not* be the same as the IP address of your VPN to the device or the virtual network address space of the VPN site. Use a different IP address on the VPN device for your BGP peer IP. It can be an address assigned to the loopback interface on the device. However, it *can't* be an APIPA (169.254.*x*.*x*) address. Specify this address in the corresponding VPN site that represents the location. For BGP prerequisites, see [About BGP with Azure VPN Gateway](../vpn-gateway/vpn-gateway-bgp-overview.md). -1. Select **Next: Review + create >** to check the setting values and create the VPN site. If you selected **Hubs** to connect, the connection will be established between the on-premises network and the hub VPN gateway. +1. Select **Next: Review + create >** to check the setting values and create the VPN site, then **Create** the site. +1. Next, connect the site to the hub using these basic [Steps](virtual-wan-site-to-site-portal.md#connectsites) as a guideline. It can take up to 30 minutes to update the gateway. ## <a name="hub"></a>3. Update the VPN connection setting to use ExpressRoute After you create the VPN site and connect to the hub, use the following steps to configure the connection to use ExpressRoute private peering: -1. Go back to the virtual WAN resource page, and select the hub resource. Or navigate from the VPN site to the connected hub. +1. 
Go to the virtual hub. You can either do this by going to the Virtual WAN and selecting the hub to open the hub page, or you can go to the connected virtual hub from the VPN site. - :::image type="content" source="./media/vpn-over-expressroute/hub-selection.png" alt-text="Select a hub"::: 1. Under **Connectivity**, select **VPN (Site-to-Site)**. - :::image type="content" source="./media/vpn-over-expressroute/vpn-select.png" alt-text="Select VPN (Site-to-Site)"::: -1. Select the ellipsis (**...**) on the VPN site over ExpressRoute, and select **Edit VPN connection to this hub**. +1. Select the ellipsis (**...**) or right click the VPN site over ExpressRoute, and select **Edit VPN connection to this hub**. - :::image type="content" source="./media/vpn-over-expressroute/config-menu.png" alt-text="Enter configuration menu"::: -1. For **Use Azure Private IP Address**, select **Yes**. The setting configures the hub VPN gateway to use private IP addresses within the hub address range on the gateway for this connection, instead of the public IP addresses. This will ensure that the traffic from the on-premises network traverses the ExpressRoute private peering paths rather than using the public internet for this VPN connection. The following screenshot shows the setting: +1. On the **Basics** page, leave the defaults. - :::image type="content" source="./media/vpn-over-expressroute/vpn-link-configuration.png" alt-text="Setting for using a private IP address for the VPN connection" border="false"::: -1. Select **Save**. +1. On the **Link connection 1** page, configure the following settings: -After you save your changes, the hub VPN gateway will use the private IP addresses on the VPN gateway to establish the IPsec/IKE connections with the on-premises VPN device over ExpressRoute. + - For **Use Azure Private IP Address**, select **Yes**. 
The setting configures the hub VPN gateway to use private IP addresses within the hub address range on the gateway for this connection, instead of the public IP addresses. This ensures that the traffic from the on-premises network traverses the ExpressRoute private peering paths rather than using the public internet for this VPN connection. +1. Click **Create** to update the settings. After the settings have been created, the hub VPN gateway will use the private IP addresses on the VPN gateway to establish the IPsec/IKE connections with the on-premises VPN device over ExpressRoute. ## <a name="associate"></a>4. Get the private IP addresses for the hub VPN gateway The device configuration file contains the settings to use when you're configuri "Instance0":"10.51.230.4" "Instance1":"10.51.230.5" ```- * Configuration details for the VPN gateway connection, such as BGP and pre-shared key. The pre-shared key is automatically generated for you. You can always edit the connection on the **Overview** page for a custom pre-shared key. + * Configuration details for the VPN gateway connection, such as BGP and preshared key. The preshared key is automatically generated for you. You can always edit the connection on the **Overview** page for a custom preshared key. ### Example device configuration file The device configuration file contains the settings to use when you're configuri If you need instructions to configure your device, you can use the instructions on the [VPN device configuration scripts page](~/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md#configscripts) with the following caveats: -* The instructions on the VPN device page are not written for a virtual WAN. But you can use the virtual WAN values from the configuration file to manually configure your VPN device. +* The instructions on the VPN device page aren't written for a virtual WAN. But you can use the virtual WAN values from the configuration file to manually configure your VPN device. 
* The downloadable device configuration scripts that are for the VPN gateway don't work for the virtual WAN, because the configuration is different. * A new virtual WAN can support both IKEv1 and IKEv2. * A virtual WAN can use only route-based VPN devices and device instructions. |
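The BGP peer address constraints noted earlier in this section (not an APIPA address, not the VPN device's own IP, and not inside the VPN site's address space) can be sanity-checked with a short sketch. This is an illustrative Python check only, with hypothetical names and example addresses; Azure's own validation is more involved:

```python
import ipaddress

APIPA_NET = ipaddress.ip_network("169.254.0.0/16")

def valid_bgp_peer(peer_ip: str, vpn_device_ip: str, site_address_space: str) -> bool:
    """Check a candidate BGP peer IP against the constraints described above:
    it must not be an APIPA (169.254.x.x) address, must differ from the VPN
    device's IP address, and must fall outside the VPN site's address space."""
    peer = ipaddress.ip_address(peer_ip)
    if peer in APIPA_NET:
        return False
    if peer == ipaddress.ip_address(vpn_device_ip):
        return False
    if peer in ipaddress.ip_network(site_address_space):
        return False
    return True

# A loopback-style address outside the site range is acceptable:
print(valid_bgp_peer("10.254.254.1", "203.0.113.10", "10.51.0.0/16"))  # True
# An APIPA address is rejected:
print(valid_bgp_peer("169.254.21.5", "203.0.113.10", "10.51.0.0/16"))  # False
```

An address assigned to the device's loopback interface, as the article suggests, typically satisfies all three conditions.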
virtual-wan | Vpn Profile Intune | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/vpn-profile-intune.md | |
vpn-gateway | Ipsec Ike Policy Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ipsec-ike-policy-howto.md | This section walks you through the steps to create a Site-to-Site VPN connection :::image type="content" source="./media/ipsec-ike-policy-howto/site-to-site-diagram.png" alt-text="Site-to-Site policy" border="false" lightbox="./media/ipsec-ike-policy-howto/site-to-site-diagram.png"::: -### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet1 +### Step 1: Create the virtual network, VPN gateway, and local network gateway for TestVNet1 Create the following resources. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md). Create the following resources. For steps, see [Create a Site-to-Site VPN connect * **Enable active-active mode:** Disabled * **Configure BGP:** Disabled -### Step 2 - Configure the local network gateway and connection resources +### Step 2: Configure the local network gateway and connection resources 1. Create the local network gateway resource **Site6** using the following values. Create the following resources. For steps, see [Create a Site-to-Site VPN connect * **Shared key:** abc123 (example value - must match the on-premises device key used) * **IKE protocol:** IKEv2 -### Step 3 - Configure a custom IPsec/IKE policy on the S2S VPN connection +### Step 3: Configure a custom IPsec/IKE policy on the S2S VPN connection Configure a custom IPsec/IKE policy with the following algorithms and parameters: The steps to create a VNet-to-VNet connection with an IPsec/IKE policy are simil :::image type="content" source="./media/ipsec-ike-policy-howto/vnet-policy.png" alt-text="Screenshot shows VNet-to-VNet policy diagram." 
border="false" lightbox="./media/ipsec-ike-policy-howto/vnet-policy.png"::: -### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet2 +### Step 1: Create the virtual network, VPN gateway, and local network gateway for TestVNet2 Use the steps in the [Create a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) article to create TestVNet2 and create a VNet-to-VNet connection to TestVNet1. Example values: * **Enable active-active mode:** Disabled * **Configure BGP:** Disabled -### Step 2 - Configure the VNet-to-VNet connection +### Step 2: Configure the VNet-to-VNet connection 1. From the VNet1GW gateway, add a VNet-to-VNet connection to VNet2GW, **VNet1toVNet2**. Example values: :::image type="content" source="./media/ipsec-ike-policy-howto/vnet-connections.png" alt-text="Screenshot shows VNet-to-VNet connections." border="false" lightbox="./media/ipsec-ike-policy-howto/vnet-connections.png"::: -### Step 3 - Configure a custom IPsec/IKE policy on VNet1toVNet2 +### Step 3: Configure a custom IPsec/IKE policy on VNet1toVNet2 1. From the **VNet1toVNet2** connection resource, go to the **Configuration** page. Example values: 1. Select **Save** at the top of the page to apply the policy changes on the connection resource. -### Step 4 - Configure a custom IPsec/IKE policy on VNet2toVNet1 +### Step 4: Configure a custom IPsec/IKE policy on VNet2toVNet1 1. Apply the same policy to the VNet2toVNet1 connection, VNet2toVNet1. If you don't, the IPsec/IKE VPN tunnel won't connect due to policy mismatch. |
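The policy-mismatch caveat in the last step above (the tunnel won't connect unless both connections carry the same IPsec/IKE policy) can be illustrated with a minimal sketch. Field names here are illustrative, not the Az cmdlet parameter names, and the example values are placeholders:

```python
def policies_match(a: dict, b: dict) -> bool:
    """Return True only when every IPsec/IKE parameter agrees on both ends.
    A single differing parameter is enough to prevent the tunnel from connecting."""
    keys = {"ike_encryption", "ike_integrity", "dh_group",
            "ipsec_encryption", "ipsec_integrity", "pfs_group", "sa_lifetime_seconds"}
    return all(a.get(k) == b.get(k) for k in keys)

vnet1_to_vnet2 = {"ike_encryption": "AES256", "ike_integrity": "SHA384",
                  "dh_group": "DHGroup24", "ipsec_encryption": "AES256",
                  "ipsec_integrity": "SHA256", "pfs_group": "PFS24",
                  "sa_lifetime_seconds": 7200}
# Apply the same policy to the reverse connection, as Step 4 instructs:
vnet2_to_vnet1 = dict(vnet1_to_vnet2)

print(policies_match(vnet1_to_vnet2, vnet2_to_vnet1))  # True
```

Changing any one value on either side (for example, the DH group) makes the check fail, which mirrors why the VPN tunnel won't establish on a policy mismatch.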
vpn-gateway | Openvpn Azure Ad Tenant Multi App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant-multi-app.md | description: Learn how to set up an Azure AD tenant for P2S OpenVPN authenticati Previously updated : 10/25/2022 Last updated : 08/18/2023 Assign the users to your applications. 1. Go to your Azure Active Directory and select **Enterprise applications**. 1. From the list, locate the application you just registered and click to open it.-1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**, then **Save**. +1. Click **Properties**. On the **Properties** page, verify that **Enabled for users to sign in** is set to **Yes**. If not, change the value to **Yes**. +1. For **Assignment required**, change the value to **Yes**. For more information about this setting, see [Application properties](../active-directory/manage-apps/application-properties.md#enabled-for-users-to-sign-in). +1. If you've made changes, click **Save** to save your settings. 1. In the left pane, click **Users and groups**. On the **Users and groups** page, click **+ Add user/group** to open the **Add Assignment** page. 1. Click the link under **Users and groups** to open the **Users and groups** page. Select the users and groups that you want to assign, then click **Select**. 1. After you finish selecting users and groups, click **Assign**. In this step, you configure P2S Azure AD authentication for the virtual network 1. Go to the virtual network gateway. In the left pane, click **Point-to-site configuration**. - :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." 
lightbox="./media/openvpn-azure-ad-tenant-multi-app/client-id.png"::: + :::image type="content" source="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png" alt-text="Screenshot showing point-to-site configuration page." lightbox="./media/openvpn-azure-ad-tenant-multi-app/enable-authentication.png"::: Configure the following values: In this step, you configure P2S Azure AD authentication for the virtual network For **Azure Active Directory** values, use the following guidelines for **Tenant**, **Audience**, and **Issuer** values. * **Tenant**: `https://login.microsoftonline.com/{TenantID}`- * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Azure AD Enterprise App - use application ID that you created and registered. If you use the application ID for the ""Azure VPN" Azure AD Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered. + * **Audience ID**: Use the value that you created in the previous section that corresponds to **Application (client) ID**. Don't use the application ID for "Azure VPN" Azure AD Enterprise App - use application ID that you created and registered. If you use the application ID for the "Azure VPN" Azure AD Enterprise App instead, this will grant all users access to the VPN gateway (which would be the default way to set up access), instead of granting only the users that you assigned to the application that you created and registered. * **Issuer**: `https://sts.windows.net/{TenantID}` For the Issuer value, make sure to include a trailing **/** at the end. 1. Once you finish configuring settings, click **Save** at the top of the page. 
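The **Tenant**, **Audience**, and **Issuer** guidance above can be condensed into a small helper. This is a sketch using the URL patterns stated in the article; the GUIDs below are placeholders, and the key detail is the required trailing `/` on the Issuer value:

```python
def aad_p2s_values(tenant_id: str, app_client_id: str) -> dict:
    """Build the three Azure AD values for P2S configuration as described above.
    The Audience must be your registered app's Application (client) ID, not the
    ID of the built-in "Azure VPN" enterprise app; the Issuer must end in '/'."""
    return {
        "Tenant": f"https://login.microsoftonline.com/{tenant_id}",
        "Audience": app_client_id,
        "Issuer": f"https://sts.windows.net/{tenant_id}/",  # trailing slash required
    }

vals = aad_p2s_values("00000000-1111-2222-3333-444444444444",
                      "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")
print(vals["Issuer"].endswith("/"))  # True
```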
In this section, you generate and download the Azure VPN Client profile configur ## Next steps -* * To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md). +* To connect to your virtual network, you must configure the Azure VPN client on your client computers. See [Configure a VPN client for P2S VPN connections](openvpn-azure-ad-client.md). * For frequently asked questions, see the **Point-to-site** section of the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#P2S).-- |
vpn-gateway | Packet Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/packet-capture.md | description: Learn about packet capture functionality that you can use on VPN ga Previously updated : 01/31/2022 Last updated : 08/24/2023 Connectivity and performance-related problems are often complex. It can take sig There are some commonly available packet capture tools. Getting relevant packet captures with these tools can be cumbersome, especially in high-volume traffic scenarios. The filtering capabilities provided by Azure VPN Gateway packet capture are a major differentiator. You can use VPN Gateway packet capture together with commonly available packet capture tools. -## VPN Gateway packet capture filtering capabilities +## About packet capture for VPN Gateway -You can run VPN Gateway packet capture on the gateway or on a specific connection, depending on your needs. You can also run packet capture on multiple tunnels at the same time. You can capture one-way or bi-directional traffic, IKE and ESP traffic, and inner packets along with filtering on a VPN gateway. +You can run VPN Gateway packet capture on the gateway, or on a specific connection, depending on your needs. You can also run packet capture on multiple tunnels at the same time. You can capture one-way or bi-directional traffic, IKE and ESP traffic, and inner packets along with filtering on a VPN gateway. It's helpful to use a five-tuple filter (source subnet, destination subnet, source port, destination port, protocol) and TCP flags (SYN, ACK, FIN, URG, PSH, RST) when you're isolating problems in high-volume traffic. The following examples of JSON and a JSON schema provide explanations of each pr > [!NOTE] > Set the **CaptureSingleDirectionTrafficOnly** option to **false** if you want to capture both inner and outer packets. 
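A five-tuple filter like the one described above can be assembled programmatically before passing it to the packet-capture cmdlets. This Python sketch models its field names loosely on the article's example JSON; the exact schema is defined by the cmdlet, so treat the structure as an assumption for illustration:

```python
import json

def packet_capture_filter(source_subnets, dest_subnets, protocol="TCP",
                          tcp_flags=None, capture_single_direction=False):
    """Assemble an illustrative packet-capture filter payload.
    Field names are modeled on the article's example JSON, not a guaranteed schema."""
    flt = {
        "TracingFlags": 11,  # value used in the article's example
        "CaptureSingleDirectionTrafficOnly": capture_single_direction,
        "Filters": [{
            "SourceSubnets": source_subnets,
            "DestinationSubnets": dest_subnets,
            "Protocol": protocol,
        }],
    }
    if tcp_flags:
        # e.g. isolate connection setup/teardown in high-volume traffic
        flt["Filters"][0]["TcpFlags"] = tcp_flags
    return json.dumps(flt)

payload = packet_capture_filter(["10.1.0.0/24"], ["10.2.0.0/24"],
                                tcp_flags=["SYN", "RST"])
print(payload)
```

Note that, per the article's guidance, `CaptureSingleDirectionTrafficOnly` should be `false` when you want both inner and outer packets.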
-### Example JSON +**Example JSON** + ```JSON-interactive { "TracingFlags": 11, The following examples of JSON and a JSON schema provide explanations of each pr ] } ```-### JSON schema ++**JSON schema** + ```JSON-interactive { "type": "object", The following examples of JSON and a JSON schema provide explanations of each pr } ``` -## Start packet capture - portal +### Key considerations ++- Running packet capture can affect performance. Remember to stop the packet capture when you don't need it. +- Suggested minimum packet capture duration is 600 seconds. Because of sync issues among multiple components on the path, shorter packet captures might not provide complete data. +- Packet capture data files are generated in PCAP format. Use Wireshark or other commonly available applications to open PCAP files. +- Packet captures aren't supported on policy-based gateways. +- The maximum filesize of packet capture data files is 500 MB. +- If the `SASurl` parameter isn't configured correctly, the trace might fail with Storage errors. For examples of how to correctly generate an `SASurl` parameter, see [Stop-AzVirtualNetworkGatewayPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewaypacketcapture). +- If you're configuring a User Delegated SAS, make sure the user account is granted proper RBAC permissions on the storage account such as Storage Blob Data Owner. ++## Packet capture - portal -You can set up packet capture in the Azure portal by navigating to the VPN Gateway Packet Capture blade in the Azure portal and clicking the **Start Packet Capture button** +This section helps you start and stop a packet capture using the Azure portal. -> [!NOTE] -> Do not select the **Capture Single Direction Traffic Only** option if you want to capture both inner and outer packets. +### Start packet capture - portal ++You can set up packet capture in the Azure portal. +1. Go to your VPN gateway in the Azure portal. +1. 
On the left, select **VPN Gateway Packet Capture** to open the VPN Gateway Packet Capture page. +1. Select **Start Packet Capture**. -## Stop packet capture - portal + :::image type="content" source="./media/packet-capture/packet-capture-portal.png" alt-text="Screenshot of start packet capture in the portal." lightbox="./media/packet-capture/packet-capture-portal.png"::: -A valid SAS (or Shared Access Signature) Uri with read/write access is required to complete a packet capture. When a packet capture is stopped, the output of the packet capture is written to the container that is referenced by the SAS Uri. To get the SAS Uri, navigate to the required storage account and generate a SAS token and URL with the correct permissions. +1. On the **Start Packet Capture** page, make any necessary adjustments. Don't select the "Capture Single Direction Traffic Only" option if you want to capture both inner and outer packets. +1. Once you've configured the settings, click **Start Packet Capture**. +### Stop packet capture - portal -* Copy the Blob SAS URL as it will be needed in the next step. +To complete a packet capture, you need to provide a valid SAS (or Shared Access Signature) URL with read/write access. When a packet capture is stopped, the output of the packet capture is written to the container that is referenced by the SAS URL. -* Navigate to the VPN Gateway Packet Capture blade in the Azure portal and clicking the **Stop Packet Capture** button +1. To get the SAS URL, go to the storage account. +1. Go to the container you want to use and right-click to show the dropdown list. Select **Generate SAS** to open the Generate SAS page. +1. On the Generate SAS page, configure your settings. Make sure that you have granted read and write access. +1. Click **Generate SAS token and URL**. +1. The SAS token and SAS URL are generated and appear below the button immediately. Copy the Blob SAS URL. 
-* Paste the SAS URL (from the previous step) in the **Output Sas Uri** text box and click **Stop Packet Capture**. + :::image type="content" source="./media/packet-capture/generate-sas.png" alt-text="Screenshot of generate SAS token." lightbox="./media/packet-capture/generate-sas.png"::: +1. Go back to the VPN Gateway Packet Capture page in the Azure portal and click the **Stop Packet Capture** button. -* The packet capture (pcap) file will be stored in the specified account +1. Paste the SAS URL (from the previous step) in the **Output Sas Url** text box and click **Stop Packet Capture**. ++1. The packet capture (pcap) file will be stored in the specified account. ## Packet capture - PowerShell The following examples show PowerShell commands that start and stop packet captures. For more information on parameter options, see [Start-AzVirtualnetworkGatewayPacketCapture](/powershell/module/az.network/start-azvirtualnetworkgatewaypacketcapture). -> -### Prerequisite +**Prerequisites** ++* Packet capture data needs to be logged into a storage account on your subscription. See [create storage account](../storage/common/storage-account-create.md). -* Packet capture data will need to be logged into a storage account on your subscription. See [create storage account](../storage/common/storage-account-create.md). -* To stop the packet capture, you will need to generate the `SASUrl` for your storage account. See [create a user delegation SAS](../storage/blobs/storage-blob-user-delegation-sas-create-powershell.md). +* To stop the packet capture, you'll need to generate the `SASUrl` for your storage account. See [create a user delegation SAS](../storage/blobs/storage-blob-user-delegation-sas-create-powershell.md). 
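Because a stopped capture writes its output through the SAS URL, a quick pre-check that the signature grants read and write access can save a failed trace. In a standard storage SAS URL, the `sp` query parameter lists the granted permissions; the sketch below (with a placeholder URL and signature) checks for `r` and `w`:

```python
from urllib.parse import urlparse, parse_qs

def sas_has_read_write(sas_url: str) -> bool:
    """Return True if the SAS URL's 'sp' (signed permissions) query parameter
    includes both read ('r') and write ('w') access."""
    params = parse_qs(urlparse(sas_url).query)
    perms = params.get("sp", [""])[0]
    return "r" in perms and "w" in perms

url = ("https://mystorage.blob.core.windows.net/captures"
       "?sp=rwl&st=2023-08-24T00:00:00Z&se=2023-08-25T00:00:00Z&sig=REDACTED")
print(sas_has_read_write(url))  # True
```

This only inspects the declared permissions; it doesn't verify that the signature itself is valid or that the account has the RBAC roles (such as Storage Blob Data Owner) mentioned in the key considerations.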
### Start packet capture for a VPN gateway Stop-AzVirtualNetworkGatewayConnectionPacketCapture -ResourceGroupName "YourReso For more information on parameter options, see [Stop-AzVirtualNetworkGatewayConnectionPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewayconnectionpacketcapture). -## Key considerations --- Running packet capture can affect performance. Remember to stop the packet capture when you don't need it.-- Suggested minimum packet capture duration is 600 seconds. Because of sync issues among multiple components on the path, shorter packet captures might not provide complete data.-- Packet capture data files are generated in PCAP format. Use Wireshark or other commonly available applications to open PCAP files.-- Packet captures aren't supported on policy-based gateways.-- The maximum filesize of packet capture data files is 500MB.-- If the `SASurl` parameter isn't configured correctly, the trace might fail with Storage errors. For examples of how to correctly generate an `SASurl` parameter, see [Stop-AzVirtualNetworkGatewayPacketCapture](/powershell/module/az.network/stop-azvirtualnetworkgatewaypacketcapture).-- If you are configuring a User Delegated SAS, make sure the user account is granted proper RBAC permissions on the storage account such as Storage Blob Data Owner.--- ## Next steps -For more information about VPN Gateway, see [What is VPN Gateway?](vpn-gateway-about-vpngateways.md). +For more information about VPN Gateway, see [What is VPN Gateway?](vpn-gateway-about-vpngateways.md) |
vpn-gateway | Vpn Gateway Activeactive Rm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-activeactive-rm-powershell.md | The other properties are the same as the non-active-active gateways. * Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/). * You need to install the Azure Resource Manager PowerShell cmdlets if you don't want to use Cloud Shell in your browser. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets. -### Step 1 - Create and configure VNet1 +### Step 1: Create and configure VNet1 #### 1. Declare your variables $gwsub1 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName1 -AddressPrefix $GWS New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1 ``` -### Step 2 - Create the VPN gateway for TestVNet1 with active-active mode +### Step 2: Create the VPN gateway for TestVNet1 with active-active mode #### 1. Create the public IP addresses and gateway IP configurations To establish a cross-premises connection, you need to create a Local Network Gat Before proceeding, make sure you have completed [Part 1](#aagateway) of this exercise. -### Step 1 - Create and configure the local network gateway +### Step 1: Create and configure the local network gateway #### 1. 
Declare your variables New-AzResourceGroup -Name $RG5 -Location $Location5 New-AzLocalNetworkGateway -Name $LNGName51 -ResourceGroupName $RG5 -Location $Location5 -GatewayIpAddress $LNGIP51 -AddressPrefix $LNGPrefix51 -Asn $LNGASN5 -BgpPeeringAddress $BGPPeerIP51 ``` -### Step 2 - Connect the VNet gateway and local network gateway +### Step 2: Connect the VNet gateway and local network gateway #### 1. Get the two gateways The connection should be established after a few minutes, and the BGP peering se :::image type="content" source="./media/vpn-gateway-activeactive-rm-powershell/active-active.png" alt-text="Diagram showing active-active connection." lightbox="./media/vpn-gateway-activeactive-rm-powershell/active-active.png"::: -### Step 3 - Connect two on-premises VPN devices to the active-active VPN gateway +### Step 3: Connect two on-premises VPN devices to the active-active VPN gateway If you have two VPN devices at the same on-premises network, you can achieve dual redundancy by connecting the Azure VPN gateway to the second VPN device. Once the connection (tunnels) are established, you'll have dual redundant VPN de This section creates an active-active VNet-to-VNet connection with BGP. The following instructions continue from the previous steps. You must complete [Part 1](#aagateway) to create and configure TestVNet1 and the VPN Gateway with BGP. -### Step 1 - Create TestVNet2 and the VPN gateway +### Step 1: Create TestVNet2 and the VPN gateway It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges. 
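The no-overlap requirement above is easy to verify before creating the connection. A small sketch using Python's `ipaddress` module, with illustrative example prefixes:

```python
import ipaddress

def spaces_overlap(vnet1_prefixes, vnet2_prefixes) -> bool:
    """Return True if any address prefix of one VNet overlaps any prefix of the other."""
    nets1 = [ipaddress.ip_network(p) for p in vnet1_prefixes]
    nets2 = [ipaddress.ip_network(p) for p in vnet2_prefixes]
    return any(a.overlaps(b) for a in nets1 for b in nets2)

# Disjoint spaces are safe for a VNet-to-VNet connection:
print(spaces_overlap(["10.11.0.0/16", "10.12.0.0/16"], ["10.21.0.0/16"]))  # False
# A contained subnet counts as an overlap:
print(spaces_overlap(["10.11.0.0/16"], ["10.11.128.0/17"]))  # True
```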
Create the VPN gateway with the AS number and the "EnableActiveActiveFeature" fl New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gw2ipconf1,$gw2ipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet2ASN -EnableActiveActiveFeature ``` -### Step 2 - Connect the TestVNet1 and TestVNet2 gateways +### Step 2: Connect the TestVNet1 and TestVNet2 gateways In this example, both gateways are in the same subscription. You can complete this step in the same PowerShell session. |
vpn-gateway | Vpn Gateway Bgp Resource Manager Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md | To establish a cross-premises connection, you need to create a *local network ga Before proceeding, make sure you enabled BGP for the VPN gateway in the previous section. -### Step 1 - Create and configure the local network gateway +### Step 1: Create and configure the local network gateway #### 1. Declare your variables Create the local network gateway. Notice the two additional parameters for the l New-AzLocalNetworkGateway -Name $LNGName5 -ResourceGroupName $RG5 -Location $Location5 -GatewayIpAddress $LNGIP5 -AddressPrefix $LNGPrefix50 -Asn $LNGASN5 -BgpPeeringAddress $BGPPeerIP5 ``` -### Step 2 - Connect the VNet gateway and local network gateway +### Step 2: Connect the VNet gateway and local network gateway #### 1. Get the two gateways This section adds a VNet-to-VNet connection with BGP, as shown in the Diagram 4. The following instructions continue from the previous steps. You must first complete the steps in the [Enable BGP for the VPN gateway](#enablebgp) section to create and configure TestVNet1 and the VPN gateway with BGP. -### Step 1 - Create TestVNet2 and the VPN gateway +### Step 1: Create TestVNet2 and the VPN gateway It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges. Create the VPN gateway with the AS number. You must override the default ASN on New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gwipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet2ASN ``` -### Step 2 - Connect the TestVNet1 and TestVNet2 gateways +### Step 2: Connect the TestVNet1 and TestVNet2 gateways In this example, both gateways are in the same subscription. You can complete this step in the same PowerShell session. |
vpn-gateway | Vpn Gateway Classic Resource Manager Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-classic-resource-manager-migration.md | |
vpn-gateway | Vpn Gateway Connect Multiple Policybased Rm Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-connect-multiple-policybased-rm-ps.md | The workflow to enable this connectivity: This section shows you how to enable policy-based traffic selectors on a connection. Make sure you have completed [Part 3 of the Configure IPsec/IKE policy article](vpn-gateway-ipsecikepolicy-rm-powershell.md). The steps in this article use the same parameters. -### Step 1 - Create the virtual network, VPN gateway, and local network gateway +### Step 1: Create the virtual network, VPN gateway, and local network gateway #### Connect to your subscription and declare your variables This section shows you how to enable policy-based traffic selectors on a connect New-AzLocalNetworkGateway -Name $LNGName6 -ResourceGroupName $RG1 -Location $Location1 -GatewayIpAddress $LNGIP6 -AddressPrefix $LNGPrefix61,$LNGPrefix62 ``` -### Step 2 - Create an S2S VPN connection with an IPsec/IKE policy +### Step 2: Create an S2S VPN connection with an IPsec/IKE policy 1. Create an IPsec/IKE policy. |
vpn-gateway | Vpn Gateway Delete Vnet Gateway Classic Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-classic-powershell.md | -The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md). +The steps in this article apply to the classic deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-delete-vnet-gateway-powershell.md)**. ## <a name="connect"></a>Step 1: Connect to Azure |
vpn-gateway | Vpn Gateway Delete Vnet Gateway Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-portal.md | -> * [PowerShell (classic)](vpn-gateway-delete-vnet-gateway-classic-powershell.md) +> * [PowerShell (classic - legacy gateways)](vpn-gateway-delete-vnet-gateway-classic-powershell.md) This article helps you delete a virtual network gateway. There are a couple of different approaches you can take when you want to delete a gateway for a VPN gateway configuration. If you aren't concerned about keeping any of your resources in the resource grou 1. In **All resources**, locate the resource group and click to open the blade. 1. Click **Delete**. On the Delete blade, view the affected resources. Make sure that you want to delete all of these resources. If not, use the steps in Delete a VPN gateway at the top of this article. 1. To proceed, type the name of the resource group that you want to delete, then click **Delete**.++## Next steps ++For FAQ information, see the [Azure VPN Gateway FAQ](vpn-gateway-vpn-faq.md). |
vpn-gateway | Vpn Gateway Delete Vnet Gateway Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-delete-vnet-gateway-powershell.md | +* If you want to delete everything and start over, as in the case of a test environment, you can delete the resource group. When you delete a resource group, it deletes all the resources within the group. This method is only recommended if you don't want to keep any of the resources in the resource group. You can't selectively delete only a few resources using this approach. -- If you want to keep some of the resources in your resource group, deleting a virtual network gateway becomes slightly more complicated. Before you can delete the virtual network gateway, you must first delete any resources that are dependent on the gateway. The steps you follow depend on the type of connections that you created and the dependent resources for each connection.+* If you want to keep some of the resources in your resource group, deleting a virtual network gateway becomes slightly more complicated. Before you can delete the virtual network gateway, you must first delete any resources that are dependent on the gateway. The steps you follow depend on the type of connections that you created and the dependent resources for each connection. -## Before beginning +## <a name="S2S"></a>Delete a site-to-site VPN gateway +To delete a virtual network gateway for an S2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. In the following examples, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes: +* VNet name: VNet1 +* Resource Group name: TestRG1 +* Virtual network gateway name: VNet1GW -### 1. 
Get the virtual network gateway that you want to delete. -Download and install the latest version of the Azure Resource Manager PowerShell cmdlets. For more information about downloading and installing PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/). + ```azurepowershell-interactive + $GW=get-Azvirtualnetworkgateway -Name "VNet1GW" -ResourceGroupName "TestRG1" + ``` -### 2. Connect to your Azure account. +1. Check to see if the virtual network gateway has any connections. -Open your PowerShell console and connect to your account. Use the following example to help you connect: + ```azurepowershell-interactive + get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} + $Conns=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} + ``` -```powershell -Connect-AzAccount -``` +1. Delete all connections. You may be prompted to confirm the deletion of each of the connections. -Check the subscriptions for the account. + ```azurepowershell-interactive + $Conns | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName} + ``` -```powershell -Get-AzSubscription -``` +1. Delete the virtual network gateway. You may be prompted to confirm the deletion of the gateway. If you have a P2S configuration to this VNet in addition to your S2S configuration, deleting the virtual network gateway will automatically disconnect all P2S clients without warning. -If you have more than one subscription, specify the subscription that you want to use. + ```azurepowershell-interactive + Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" + ``` -```powershell -Select-AzSubscription -SubscriptionName "Replace_with_your_subscription_name" -``` + At this point, your virtual network gateway has been deleted. 
You can use the next steps to delete any resources that are no longer being used. -## <a name="S2S"></a>Delete a Site-to-Site VPN gateway +1. To delete the local network gateways, first get the list of the corresponding local network gateways. -To delete a virtual network gateway for a S2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When working with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes: + ```azurepowershell-interactive + $LNG=Get-AzLocalNetworkGateway -ResourceGroupName "TestRG1" | where-object {$_.Id -In $Conns.LocalNetworkGateway2.Id} + ``` -VNet name: VNet1<br> -Resource Group name: RG1<br> -Virtual network gateway name: GW1<br> + Next, delete the local network gateways. You may be prompted to confirm the deletion of each local network gateway. -The following steps apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). + ```azurepowershell-interactive + $LNG | ForEach-Object {Remove-AzLocalNetworkGateway -Name $_.Name -ResourceGroupName $_.ResourceGroupName} + ``` -### 1. Get the virtual network gateway that you want to delete. +1. To delete the Public IP address resources, first get the IP configurations of the virtual network gateway. -```powershell -$GW=get-Azvirtualnetworkgateway -Name "GW1" -ResourceGroupName "RG1" -``` + ```azurepowershell-interactive + $GWIpConfigs = $GW.IpConfigurations + ``` -### 2. Check to see if the virtual network gateway has any connections. + Next, get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses. 
-```powershell -get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} -$Conns=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} -``` + ```azurepowershell-interactive + $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id} + ``` -### 3. Delete all connections. + Delete the Public IP resources. -You may be prompted to confirm the deletion of each of the connections. + ```azurepowershell-interactive + $PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "TestRG1"} + ``` -```powershell -$Conns | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName} -``` +1. Delete the gateway subnet and set the configuration. -### 4. Delete the virtual network gateway. --You may be prompted to confirm the deletion of the gateway. If you have a P2S configuration to this VNet in addition to your S2S configuration, deleting the virtual network gateway will automatically disconnect all P2S clients without warning. ---```powershell -Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1" -``` --At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used. --### 5 Delete the local network gateways. --Get the list of the corresponding local network gateways. --```powershell -$LNG=Get-AzLocalNetworkGateway -ResourceGroupName "RG1" | where-object {$_.Id -In $Conns.LocalNetworkGateway2.Id} -``` --Delete the local network gateways. You may be prompted to confirm the deletion of each of the local network gateway. --```powershell -$LNG | ForEach-Object {Remove-AzLocalNetworkGateway -Name $_.Name -ResourceGroupName $_.ResourceGroupName} -``` --### 6. Delete the Public IP address resources. --Get the IP configurations of the virtual network gateway. 
--```powershell -$GWIpConfigs = $Gateway.IpConfigurations -``` --Get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses. --```powershell -$PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id} -``` --Delete the Public IP resources. --```powershell -$PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "RG1"} -``` --### 7. Delete the gateway subnet and set the configuration. --```powershell -$GWSub = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -Set-AzVirtualNetwork -VirtualNetwork $GWSub -``` + ```azurepowershell-interactive + $GWSub = Get-AzVirtualNetwork -ResourceGroupName "TestRG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" + Set-AzVirtualNetwork -VirtualNetwork $GWSub + ``` ## <a name="v2v"></a>Delete a VNet-to-VNet VPN gateway -To delete a virtual network gateway for a V2V configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When working with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes: --VNet name: VNet1<br> -Resource Group name: RG1<br> -Virtual network gateway name: GW1<br> --The following steps apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). --### 1. Get the virtual network gateway that you want to delete. +To delete a virtual network gateway for a V2V configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. 
In the following examples, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes: -```powershell -$GW=get-Azvirtualnetworkgateway -Name "GW1" -ResourceGroupName "RG1" -``` +* VNet name: VNet1 +* Resource Group name: TestRG1 +* Virtual network gateway name: VNet1GW -### 2. Check to see if the virtual network gateway has any connections. +1. Get the virtual network gateway that you want to delete. -```powershell -get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} -``` - -There may be other connections to the virtual network gateway that are part of a different resource group. Check for additional connections in each additional resource group. In this example, we are checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway. + ```azurepowershell-interactive + $GW=get-Azvirtualnetworkgateway -Name "VNet1GW" -ResourceGroupName "TestRG1" + ``` -```powershell -get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG2" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id} -``` +1. Check to see if the virtual network gateway has any connections. -### 3. Get the list of connections in both directions. + ```azurepowershell-interactive + get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} + ``` -Because this is a VNet-to-VNet configuration, you need the list of connections in both directions. +1. There may be other connections to the virtual network gateway that are part of a different resource group. Check for additional connections in each additional resource group. In this example, we're checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway. 
-```powershell -$ConnsL=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} -``` - -In this example, we are checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway. + ```azurepowershell-interactive + get-Azvirtualnetworkgatewayconnection -ResourceGroupName "RG2" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id} + ``` -```powershell - $ConnsR=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "<NameOfResourceGroup2>" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id} - ``` +1. Get the list of connections in both directions. Because this is a VNet-to-VNet configuration, you need the list of connections in both directions. -### 4. Delete all connections. + ```azurepowershell-interactive + $ConnsL=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "TestRG1" | where-object {$_.VirtualNetworkGateway1.Id -eq $GW.Id} + ``` -You may be prompted to confirm the deletion of each of the connections. +1. In this example, we're checking for connections from RG2. Run this for each resource group that you have which may have a connection to the virtual network gateway. -```powershell -$ConnsL | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName} -$ConnsR | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName} -``` + ```azurepowershell-interactive + $ConnsR=get-Azvirtualnetworkgatewayconnection -ResourceGroupName "<NameOfResourceGroup2>" | where-object {$_.VirtualNetworkGateway2.Id -eq $GW.Id} + ``` -### 5. Delete the virtual network gateway. +1. Delete all connections. You may be prompted to confirm the deletion of each of the connections. -You may be prompted to confirm the deletion of the virtual network gateway. 
If you have P2S configurations to your VNets in addition to your V2V configuration, deleting the virtual network gateways will automatically disconnect all P2S clients without warning. + ```azurepowershell-interactive + $ConnsL | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName} + $ConnsR | ForEach-Object {Remove-AzVirtualNetworkGatewayConnection -Name $_.name -ResourceGroupName $_.ResourceGroupName} + ``` -```powershell -Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1" -``` +1. Delete the virtual network gateway. You may be prompted to confirm the deletion of the virtual network gateway. If you have P2S configurations to your VNets in addition to your V2V configuration, deleting the virtual network gateways will automatically disconnect all P2S clients without warning. -At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used. + ```azurepowershell-interactive + Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" + ``` -### 6. Delete the Public IP address resources + At this point, your virtual network gateway has been deleted. You can use the next steps to delete any resources that are no longer being used. -Get the IP configurations of the virtual network gateway. +1. To delete the Public IP address resources, get the IP configurations of the virtual network gateway. -```powershell -$GWIpConfigs = $Gateway.IpConfigurations -``` + ```azurepowershell-interactive + $GWIpConfigs = $GW.IpConfigurations + ``` -Get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses. +1. Next, get the list of Public IP address resources used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses. 
-```powershell -$PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id} -``` + ```azurepowershell-interactive + $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id} + ``` -Delete the Public IP resources. You may be prompted to confirm the deletion of the Public IP. +1. Delete the Public IP resources. You may be prompted to confirm the deletion of the Public IP. -```powershell -$PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"} -``` + ```azurepowershell-interactive + $PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"} + ``` -### 7. Delete the gateway subnet and set the configuration. +1. Delete the gateway subnet and set the configuration. -```powershell -$GWSub = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -Set-AzVirtualNetwork -VirtualNetwork $GWSub -``` + ```azurepowershell-interactive + $GWSub = Get-AzVirtualNetwork -ResourceGroupName "TestRG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" + Set-AzVirtualNetwork -VirtualNetwork $GWSub + ``` -## <a name="deletep2s"></a>Delete a Point-to-Site VPN gateway +## <a name="deletep2s"></a>Delete a point-to-site VPN gateway -To delete a virtual network gateway for a P2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When working with the examples below, some of the values must be specified, while other values are an output result. 
We use the following specific values in the examples for demonstration purposes: --VNet name: VNet1<br> -Resource Group name: RG1<br> -Virtual network gateway name: GW1<br> --The following steps apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). +To delete a virtual network gateway for a P2S configuration, you must first delete each resource that pertains to the virtual network gateway. Resources must be deleted in a certain order due to dependencies. When you work with the examples below, some of the values must be specified, while other values are an output result. We use the following specific values in the examples for demonstration purposes: +* VNet name: VNet1 +* Resource Group name: TestRG1 +* Virtual network gateway name: VNet1GW >[!NOTE] > When you delete the VPN gateway, all connected clients will be disconnected from the VNet without warning.-> -> --### 1. Get the virtual network gateway that you want to delete. --```powershell -$GW=get-Azvirtualnetworkgateway -Name "GW1" -ResourceGroupName "RG1" -``` -### 2. Delete the virtual network gateway. +1. Get the virtual network gateway that you want to delete. -You may be prompted to confirm the deletion of the virtual network gateway. + ```azurepowershell-interactive + $GW=get-Azvirtualnetworkgateway -Name "VNet1GW" -ResourceGroupName "TestRG1" + ``` -```powershell -Remove-AzVirtualNetworkGateway -Name "GW1" -ResourceGroupName "RG1" -``` +1. Delete the virtual network gateway. You may be prompted to confirm the deletion of the virtual network gateway. + ```azurepowershell-interactive + Remove-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1" + ``` + At this point, your virtual network gateway has been deleted. 
You can use the next steps to delete any resources that are no longer being used. -Get the IP configurations of the virtual network gateway. +1. To delete the Public IP address resources, first get the IP configurations of the virtual network gateway. -```powershell -$GWIpConfigs = $Gateway.IpConfigurations -``` + ```azurepowershell-interactive + $GWIpConfigs = $GW.IpConfigurations + ``` -Get the list of Public IP addresses used for this virtual network gateway. If the virtual network gateway was active-active, you will see two Public IP addresses. + Next, get the list of Public IP addresses used for this virtual network gateway. If the virtual network gateway was active-active, you'll see two Public IP addresses. -```powershell -$PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id} -``` + ```azurepowershell-interactive + $PubIP=Get-AzPublicIpAddress | where-object {$_.Id -In $GWIpConfigs.PublicIpAddress.Id} + ``` -Delete the Public IPs. You may be prompted to confirm the deletion of the Public IP. +1. Delete the Public IPs. You may be prompted to confirm the deletion of the Public IP. -```powershell -$PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"} -``` + ```azurepowershell-interactive + $PubIP | foreach-object {remove-AzpublicIpAddress -Name $_.Name -ResourceGroupName "<NameOfResourceGroup1>"} + ``` -### 4. Delete the gateway subnet and set the configuration. +1. Delete the gateway subnet and set the configuration. 
-```powershell -$GWSub = Get-AzVirtualNetwork -ResourceGroupName "RG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -Set-AzVirtualNetwork -VirtualNetwork $GWSub -``` + ```azurepowershell-interactive + $GWSub = Get-AzVirtualNetwork -ResourceGroupName "TestRG1" -Name "VNet1" | Remove-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" + Set-AzVirtualNetwork -VirtualNetwork $GWSub + ``` ## <a name="delete"></a>Delete a VPN gateway by deleting the resource group -If you are not concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. The following steps apply only to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md). +If you aren't concerned about keeping any of your resources in the resource group and you just want to start over, you can delete an entire resource group. This is a quick way to remove everything. -### 1. Get a list of all the resource groups in your subscription. +1. Get a list of all the resource groups in your subscription. -```powershell -Get-AzResourceGroup -``` + ```azurepowershell-interactive + Get-AzResourceGroup + ``` -### 2. Locate the resource group that you want to delete. +1. Locate the resource group that you want to delete. -Locate the resource group that you want to delete and view the list of resources in that resource group. In the example, the name of the resource group is RG1. Modify the example to retrieve a list of all the resources. + Locate the resource group that you want to delete and view the list of resources in that resource group. In the example, the name of the resource group is TestRG1. Modify the example to retrieve a list of all the resources. -```powershell -Find-AzResource -ResourceGroupNameContains RG1 -``` + ```azurepowershell-interactive + Find-AzResource -ResourceGroupNameContains TestRG1 + ``` -### 3. 
Verify the resources in the list. +1. Verify the resources in the list. -When the list is returned, review it to verify that you want to delete all the resources in the resource group, as well as the resource group itself. If you want to keep some of the resources in the resource group, use the steps in the earlier sections of this article to delete your gateway. + When the list is returned, review it to verify that you want to delete all the resources in the resource group, and the resource group itself. If you want to keep some of the resources in the resource group, use the steps in the earlier sections of this article to delete your gateway. -### 4. Delete the resource group and resources. +1. Delete the resource group and resources. To delete the resource group and all the resources contained in the resource group, modify the example and run. -To delete the resource group and all the resource contained in the resource group, modify the example and run. + ```azurepowershell-interactive + Remove-AzResourceGroup -Name TestRG1 + ``` -```powershell -Remove-AzResourceGroup -Name RG1 -``` +1. Check the status. It takes some time for Azure to delete all the resources. You can check the status of your resource group by using this cmdlet. -### 5. Check the status. + ```azurepowershell-interactive + Get-AzResourceGroup -ResourceGroupName TestRG1 + ``` -It takes some time for Azure to delete all the resources. You can check the status of your resource group by using this cmdlet. + The result that is returned shows 'Succeeded'. -```powershell -Get-AzResourceGroup -ResourceGroupName RG1 -``` + ```azurepowershell-interactive + ResourceGroupName : TestRG1 + Location : eastus + ProvisioningState : Succeeded + ``` -The result that is returned shows 'Succeeded'. +## Next steps -``` -ResourceGroupName : RG1 -Location : eastus -ProvisioningState : Succeeded -``` +For FAQ information, see the [Azure VPN Gateway FAQ](vpn-gateway-vpn-faq.md). |
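The delete-gateway walkthroughs above all follow the same rule: a resource can only be removed once everything that depends on it is gone (connections before the gateway, the gateway before its public IPs and subnet). As an illustrative sketch only (not part of the published articles, and with hypothetical resource names), the ordering constraint can be modeled as a topological sort:

```python
# Illustrative sketch: models the deletion-order rule from the walkthroughs
# above as a topological ordering problem. Resource names are hypothetical.
from graphlib import TopologicalSorter

# Maps each resource to the resources it requires while it exists.
DEPENDS_ON = {
    "connection": ["virtual_network_gateway", "local_network_gateway"],
    "virtual_network_gateway": ["public_ip", "gateway_subnet"],
    "local_network_gateway": [],
    "public_ip": [],
    "gateway_subnet": [],
}

def deletion_order(depends_on):
    """Return a safe deletion order: dependents before their dependencies."""
    # static_order() yields dependencies before dependents (creation order),
    # so reversing it gives an order where nothing is deleted while a
    # dependent still exists.
    creation_order = list(TopologicalSorter(depends_on).static_order())
    return creation_order[::-1]

order = deletion_order(DEPENDS_ON)
# Connections come before the gateway, and the gateway before its
# public IPs and gateway subnet, matching the article's step sequence.
```

Any valid reverse-topological order works, which is why the articles can delete, say, local network gateways and public IPs in either order once the connections and gateway are gone.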
vpn-gateway | Vpn Gateway Howto Point To Site Classic Azure Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md | description: Learn how to create a classic Point-to-Site VPN Gateway connection Previously updated : 06/09/2023 Last updated : 08/21/2023 # Configure a Point-to-Site connection by using certificate authentication (classic) -This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md). +This article shows you how to create a VNet with a Point-to-Site connection using the classic (legacy) deployment model. This configuration uses certificates to authenticate the connecting client, either self-signed or CA issued. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-point-to-site-resource-manager-portal.md)**. You use a Point-to-Site (P2S) VPN gateway to create a secure connection to your virtual network from an individual client computer. Point-to-Site VPN connections are useful when you want to connect to your VNet from a remote location. When you have only a few clients that need to connect to a VNet, a P2S VPN is a useful solution to use instead of a Site-to-Site VPN. A P2S VPN connection is established by starting it from the client computer. |
vpn-gateway | Vpn Gateway Howto Site To Site Classic Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-classic-portal.md | -This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md). +This article shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](./tutorial-site-to-site-portal.md)**. A Site-to-Site VPN gateway connection is used to connect your on-premises network to an Azure virtual network over an IPsec/IKE (IKEv1 or IKEv2) VPN tunnel. This type of connection requires a VPN device located on-premises that has an externally facing public IP address assigned to it. For more information about VPN gateways, see [About VPN gateway](vpn-gateway-about-vpngateways.md). |
vpn-gateway | Vpn Gateway Howto Vnet Vnet Portal Classic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-vnet-vnet-portal-classic.md | -The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md). +The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-vnet-vnet-resource-manager-portal.md).** :::image type="content" source="./media/vpn-gateway-howto-vnet-vnet-portal-classic/classic-diagram.png" alt-text="Diagram showing classic VNet-to-VNet architecture."::: |
vpn-gateway | Vpn Gateway Ipsecikepolicy Rm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md | The steps of creating a VNet-to-VNet connection with an IPsec/IKE policy are sim See [Create a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) for more detailed steps for creating a VNet-to-VNet connection. -### Step 1 - Create the second virtual network and VPN gateway +### Step 1: Create the second virtual network and VPN gateway #### 1. Declare your variables New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Lo It can take about 45 minutes or more to create the VPN gateway. -### Step 2 - Create a VNet-toVNet connection with the IPsec/IKE policy +### Step 2: Create a VNet-to-VNet connection with the IPsec/IKE policy Similar to the S2S VPN connection, create an IPsec/IKE policy, then apply the policy to the new connection. If you used Azure Cloud Shell, your connection may have timed out. If so, re-connect and state the necessary variables again. |
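The IPsec/IKE policy steps above apply the same policy to the connection resource on each side of the VNet-to-VNet pair; if the two sides specify different cryptographic parameters, the tunnel won't negotiate. As an illustrative sketch only (a hypothetical helper, not Azure SDK code), the symmetry check can be expressed over plain dictionaries:

```python
# Illustrative sketch: checks that the IPsec/IKE policies applied to the two
# connection resources of a VNet-to-VNet pair agree. Hypothetical helper,
# not Azure SDK code; parameter names mirror the policy fields in the article.

def policy_mismatches(policy_a, policy_b):
    """Return the parameter names on which the two policies disagree."""
    keys = sorted(set(policy_a) | set(policy_b))
    return [k for k in keys if policy_a.get(k) != policy_b.get(k)]

vnet1_policy = {"IkeEncryption": "AES256", "IkeIntegrity": "SHA384",
                "DhGroup": "DHGroup24", "IpsecEncryption": "AES256",
                "IpsecIntegrity": "SHA256", "PfsGroup": "PFS24"}
# One side drifted on a single parameter:
vnet2_policy = dict(vnet1_policy, IpsecIntegrity="SHA1")

mismatches = policy_mismatches(vnet1_policy, vnet2_policy)
# mismatches names the disagreeing field, so the policies can be reconciled
# before Set-AzVirtualNetworkGatewayConnection is run on each side.
```

A check like this is useful when the two connections are managed by different scripts or teams, since a single mismatched field is enough to keep the tunnel down.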
vpn-gateway | Vpn Gateway Multi Site | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-multi-site.md | -The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md). +The steps in this article apply to the classic (legacy) deployment model and don't apply to the current deployment model, Resource Manager. **Unless you want to work in the classic deployment model specifically, we recommend that you use the [Resource Manager version of this article](vpn-gateway-howto-multi-site-to-site-resource-manager-portal.md)**. [!INCLUDE [deployment models](../../includes/vpn-gateway-classic-deployment-model-include.md)] |
vpn-gateway | Vpn Gateway Peering Gateway Transit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-peering-gateway-transit.md | This article helps you configure gateway transit for virtual network peering. [V :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png" alt-text="Diagram of Gateway transit." lightbox="./media/vpn-gateway-peering-gateway-transit/gatewaytransit.png"::: -In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks. The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the classic deployment model. +In the diagram, gateway transit allows the peered virtual networks to use the Azure VPN gateway in Hub-RM. Connectivity available on the VPN gateway, including S2S, P2S, and VNet-to-VNet connections, applies to all three virtual networks. ++The transit option is available for peering between the same, or different deployment models. If you're configuring transit between different deployment models, the hub virtual network and virtual network gateway must be in the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md), not the legacy classic deployment model. > -In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. 
Routes to the gateway-connected virtual networks or on-premises networks will propagate to the routing tables for the peered virtual networks using gateway transit. You can disable the automatic route propagation from the VPN gateway. Create a routing table with the "**Disable BGP route propagation**" option, and associate the routing table to the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md). +In hub-and-spoke network architecture, gateway transit allows spoke virtual networks to share the VPN gateway in the hub, instead of deploying VPN gateways in every spoke virtual network. Routes to the gateway-connected virtual networks or on-premises networks propagate to the routing tables for the peered virtual networks using gateway transit. ++You can disable the automatic route propagation from the VPN gateway. Create a routing table with the "**Disable BGP route propagation**" option, and associate the routing table to the subnets to prevent the route distribution to those subnets. For more information, see [Virtual network routing table](../virtual-network/manage-route-table.md). -There are two scenarios in this article: +There are two scenarios in this article. Select the scenario that applies to your environment. Most people use the **Same deployment model** scenario. If you aren't working with a classic deployment model VNet (legacy VNet) that already exists in your environment, you won't need to work with the **Different deployment models** scenario. * **Same deployment model**: Both virtual networks are created in the Resource Manager deployment model.-* **Different deployment models**: The spoke virtual network is created in the classic deployment model, and the hub virtual network and gateway are in the Resource Manager deployment model. 
+* **Different deployment models**: The spoke virtual network is created in the classic deployment model, and the hub virtual network and gateway are in the Resource Manager deployment model. This scenario is useful when you need to connect a legacy VNet that already exists in the classic deployment model. >[!NOTE] > If you make a change to the topology of your network and have Windows VPN clients, the VPN client package for Windows clients must be downloaded and installed again in order for the changes to be applied to the client. There are two scenarios in this article: ## Prerequisites -Before you begin, verify that you have the following virtual networks and permissions: +This article requires the following VNets and permissions. If you aren't working with the different deployment model scenario, you don't need to create the classic VNet. ### <a name="vnet"></a>Virtual networks -| VNet | Deployment model | Virtual network gateway | -||--|| +| VNet | Configuration steps| Virtual network gateway| +|||| | Hub-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | [Yes](tutorial-create-gateway-portal.md) | | Spoke-RM | [Resource Manager](./tutorial-site-to-site-portal.md) | No | | Spoke-Classic | [Classic](vpn-gateway-howto-site-to-site-classic-portal.md#CreatVNet) | No | Learn more about [built-in roles](../role-based-access-control/built-in-roles.md ## <a name="same"></a>Same deployment model -In this scenario, the virtual networks are both in the Resource Manager deployment model. Use the following steps to create or update the virtual network peerings to enable gateway transit. +This is the more common scenario. In this scenario, the virtual networks are both in the Resource Manager deployment model. Use the following steps to create or update the virtual network peerings to enable gateway transit. ### To add a peering and enable transit -1. In the [Azure portal](https://portal.azure.com), create or update the virtual network peering from the Hub-RM. 
Navigate to the **Hub-RM** virtual network. Select **Peerings**, then **+ Add** to open **Add peering**. +1. In the [Azure portal](https://portal.azure.com), create or update the virtual network peering from the Hub-RM. Go to the **Hub-RM** virtual network. Select **Peerings**, then **+ Add** to open **Add peering**. 1. On the **Add peering** page, configure the values for **This virtual network**. * Peering link name: Name the link. Example: **HubRMToSpokeRM** * Traffic to remote virtual network: **Allow** * Traffic forwarded from remote virtual network: **Allow**- * Virtual network gateway: **Use this virtual network's gateway** + * Virtual network gateway: **Use this virtual network's gateway or Route Server** :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png" alt-text="Screenshot shows add peering." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-vnet.png"::: 1. On the same page, continue on to configure the values for the **Remote virtual network**. * Peering link name: Name the link. Example: **SpokeRMtoHubRM**- * Deployment model: **Resource Manager** + * Virtual network deployment model: **Resource Manager** + * I know my resource ID: Leave blank. You only need to select this if you don't have read access to the virtual network or subscription you want to peer with. + * Subscription: Select the subscription. * Virtual Network: **Spoke-RM** * Traffic to remote virtual network: **Allow** * Traffic forwarded from remote virtual network: **Allow**- * Virtual network gateway: **Use the remote virtual network's gateway** + * Virtual network gateway: **Use the remote virtual network's gateway or Route Server** :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-remote.png" alt-text="Screenshot shows values for remote virtual network." 
lightbox="./media/vpn-gateway-peering-gateway-transit/peering-remote.png"::: In this scenario, the virtual networks are both in the Resource Manager deployme ### To modify an existing peering for transit -If the peering was already created, you can modify the peering for transit. +If you have an existing peering, you can modify it for transit. -1. Navigate to the virtual network. Select **Peerings** and select the peering that you want to modify. -- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-modify.png" alt-text="Screenshot shows select peerings." lightbox="./media/vpn-gateway-peering-gateway-transit/peering-modify.png"::: +1. Go to the virtual network. Select **Peerings** and select the peering that you want to modify. For example, on the Spoke-RM VNet, select the SpokeRMtoHubRM peering. 1. Update the VNet peering. * Traffic to remote virtual network: **Allow** * Traffic forwarded to virtual network: **Allow**- * Virtual network gateway: **Use remote virtual network's gateway** -- :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png" alt-text="Screenshot shows modify peering gateway." lightbox="./media/vpn-gateway-peering-gateway-transit/modify-peering-settings.png"::: + * Virtual network gateway or Route Server: **Use the remote virtual network's gateway or Route Server** 1. **Save** the peering settings. ### <a name="ps-same"></a>PowerShell sample -You can also use PowerShell to create or update the peering with the example above. Replace the variables with the names of your virtual networks and resource groups. +You can also use PowerShell to create or update the peering. Replace the variables with the names of your virtual networks and resource groups. 
```azurepowershell-interactive $SpokeRG = "SpokeRG1" In this configuration, the spoke VNet **Spoke-Classic** is in the classic deploy For this configuration, you only need to configure the **Hub-RM** virtual network. You don't need to configure anything on the **Spoke-Classic** VNet. -1. In the Azure portal, navigate to the **Hub-RM** virtual network, select **Peerings**, then select **+ Add**. +1. In the Azure portal, go to the **Hub-RM** virtual network, select **Peerings**, then select **+ Add**. 1. On the **Add peering** page, configure the following values: * Peering link name: Name the link. Example: **HubRMToClassic** * Traffic to remote virtual network: **Allow** * Traffic forwarded from remote virtual network: **Allow**- * Virtual network gateway: **Use this virtual network's gateway** - * Remote virtual network: **Classic** + * Virtual network gateway or Route Server: **Use this virtual network's gateway or Route Server** + * Peering link name: This value disappears when you select Classic for the virtual network deployment model. + * Virtual network deployment model: **Classic** + * I know my resource ID: Leave blank. You only need to select this if you don't have read access to the virtual network or subscription you want to peer with. :::image type="content" source="./media/vpn-gateway-peering-gateway-transit/peering-classic.png" alt-text="Add peering page for Spoke-Classic" lightbox="./media/vpn-gateway-peering-gateway-transit/peering-classic.png"::: 1. Verify the subscription is correct, then select the virtual network from the dropdown. 1. Select **Add** to add the peering.-1. Verify the peering status as **Connected** on the Hub-RM virtual network. +1. Verify the peering status as **Connected** on the Hub-RM virtual network. For this configuration, you don't need to configure anything on the **Spoke-Classic** virtual network. 
Once the status shows **Connected**, the spoke virtual network can use the connectivity through the VPN gateway in the hub virtual network. ### <a name="ps-different"></a>PowerShell sample -You can also use PowerShell to create or update the peering with the example above. Replace the variables and subscription ID with the values of your virtual network and resource groups, and subscription. You only need to create virtual network peering on the hub virtual network. +You can also use PowerShell to create or update the peering. Replace the variables and subscription ID with the values of your virtual network and resource groups, and subscription. You only need to create virtual network peering on the hub virtual network. ```azurepowershell-interactive $HubRG = "HubRG1" |
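# The PowerShell sample above is truncated after the first variable. The following
# continuation is an illustrative sketch, not the article's exact code: resource
# names and the subscription ID are placeholders. A classic VNet is referenced by
# its resource ID under the Microsoft.ClassicNetwork provider, and only the hub
# side of the peering needs to be created.
$HubVNetName   = "Hub-RM"
$hubvnet       = Get-AzVirtualNetwork -Name $HubVNetName -ResourceGroupName $HubRG
$classicVNetId = "/subscriptions/<subscription ID>/resourceGroups/ClassicRG/providers/Microsoft.ClassicNetwork/virtualNetworks/Spoke-Classic"
Add-AzVirtualNetworkPeering -Name "HubRMToClassic" -VirtualNetwork $hubvnet `
  -RemoteVirtualNetworkId $classicVNetId -AllowGatewayTransit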
vpn-gateway | Vpn Gateway Radius Mfa Nsp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-radius-mfa-nsp.md | To enable MFA, the users must be in Azure Active Directory (Azure AD), which mus -### Step 2 Configure the NPS for Azure AD MFA +### Step 2: Configure the NPS for Azure AD MFA 1. On the NPS server, [install the NPS extension for Azure AD MFA](../active-directory/authentication/howto-mfa-nps-extension.md#install-the-nps-extension). 2. Open the NPS console, right-click **RADIUS Clients**, and then select **New**. Create the RADIUS client by specifying the following settings: To enable MFA, the users must be in Azure Active Directory (Azure AD), which mus 4. Go to **Policies** > **Network Policies**, double-click **Connections to Microsoft Routing and Remote Access server** policy, select **Grant access**, and then click **OK**. -### Step 3 Configure the virtual network gateway +### Step 3: Configure the virtual network gateway 1. Log on to [Azure portal](https://portal.azure.com). 2. Open the virtual network gateway that you created. Make sure that the gateway type is set to **VPN** and that the VPN type is **route-based**. |
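The gateway-side RADIUS settings configured in the portal above can also be set with PowerShell. This is a hedged sketch, not the article's own sample: the gateway name, resource group, and NPS server address are hypothetical, and it assumes the route-based gateway already exists.

```azurepowershell-interactive
# Sketch with hypothetical names: point an existing route-based gateway at the
# RADIUS/NPS server for point-to-site authentication.
$gw = Get-AzVirtualNetworkGateway -Name "VNet1GW" -ResourceGroupName "TestRG1"

# The shared secret must match the one entered for the RADIUS client in the NPS console.
$secret = Read-Host -AsSecureString -Prompt "RADIUS shared secret"

Set-AzVirtualNetworkGateway -VirtualNetworkGateway $gw `
  -RadiusServerAddress "10.1.0.4" -RadiusServerSecret $secret
```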
vpn-gateway | Vpn Gateway Troubleshoot Site To Site Cannot Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md | Check the type of the Azure VPN gateway. 1. Go to the virtual network gateway for your VNet. On the **Overview** page, you see the Gateway type, VPN type, and gateway SKU. -### Step 1. Check whether the on-premises VPN device is validated +### Step 1: Check whether the on-premises VPN device is validated 1. Check whether you're using a [validated VPN device and operating system version](vpn-gateway-about-vpn-devices.md#devicetable). If the device isn't a validated VPN device, you might have to contact the device manufacturer to see if there's a compatibility issue. 2. Make sure that the VPN device is correctly configured. For more information, see [Edit device configuration samples](vpn-gateway-about-vpn-devices.md#editing). -### Step 2. Verify the shared key +### Step 2: Verify the shared key Compare the shared key for the on-premises VPN device to the Azure Virtual Network VPN to make sure that the keys match. For the classic deployment model: Get-AzureVNetGatewayKey -VNetName -LocalNetworkSiteName ``` -### Step 3. Verify the VPN peer IPs +### Step 3: Verify the VPN peer IPs - The IP definition in the **Local Network Gateway** object in Azure should match the on-premises device IP. - The Azure gateway IP definition that is set on the on-premises device should match the Azure gateway IP. -### Step 4. Check UDR and NSGs on the gateway subnet +### Step 4: Check UDR and NSGs on the gateway subnet Check for and remove user-defined routing (UDR) or Network Security Groups (NSGs) on the gateway subnet, and then test the result. If the problem is resolved, validate the settings that UDR or NSG applied. -### Step 5. 
Check the on-premises VPN device external interface address +### Step 5: Check the on-premises VPN device external interface address If the Internet-facing IP address of the VPN device is included in the **Local network** definition in Azure, you might experience sporadic disconnections. -### Step 6. Verify that the subnets match exactly (Azure policy-based gateways) +### Step 6: Verify that the subnets match exactly (Azure policy-based gateways) - Verify that the virtual network address space(s) match exactly between the Azure virtual network and on-premises definitions. - Verify that the subnets match exactly between the **Local Network Gateway** and on-premises definitions for the on-premises network. -### Step 7. Verify the Azure gateway health probe +### Step 7: Verify the Azure gateway health probe 1. Open the health probe by browsing to the following URL: If the Internet-facing IP address of the VPN device is included in the **Local n > Basic SKU VPN gateways do not reply to the health probe. > They are not recommended for [production workloads](vpn-gateway-about-vpn-gateway-settings.md#workloads). -### Step 8. Check whether the on-premises VPN device has the perfect forward secrecy feature enabled +### Step 8: Check whether the on-premises VPN device has the perfect forward secrecy feature enabled The perfect forward secrecy feature can cause disconnection problems. If the VPN device has perfect forward secrecy enabled, disable the feature. Then update the VPN gateway IPsec policy. |
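After disabling perfect forward secrecy on the device (Step 8), the Azure side can be aligned by applying a custom IPsec policy with PFS turned off. This sketch uses a hypothetical connection name, and the algorithm and lifetime choices are examples only, not prescribed values.

```azurepowershell-interactive
# Sketch: apply a custom IPsec/IKE policy with PFS disabled (PfsGroup None) to an
# existing connection. Connection name and algorithm choices are illustrative.
$conn = Get-AzVirtualNetworkGatewayConnection -Name "VNet1toSite1" -ResourceGroupName "TestRG1"

$policy = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA256 -DhGroup DHGroup14 `
  -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup None `
  -SALifeTimeSeconds 27000 -SADataSizeKilobytes 102400000

Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $conn -IpsecPolicies $policy
```

The policy must still match what the on-premises device proposes; `PfsGroup None` only removes the PFS requirement on the Azure side.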
vpn-gateway | Vpn Gateway Troubleshoot Site To Site Disconnected Intermittently | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently.md | Check the type of Azure virtual network gateway: ![The overview of the gateway](media/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently/gatewayoverview.png) -### Step 1 Check whether the on-premises VPN device is validated +### Step 1: Check whether the on-premises VPN device is validated 1. Check whether you are using a [validated VPN device and operating system version](vpn-gateway-about-vpn-devices.md#devicetable). If the VPN device is not validated, you may have to contact the device manufacturer to see if there is any compatibility issue. 2. Make sure that the VPN device is correctly configured. For more information, see [Editing device configuration samples](vpn-gateway-about-vpn-devices.md#editing). -### Step 2 Check the Security Association settings(for policy-based Azure virtual network gateways) +### Step 2: Check the Security Association settings (for policy-based Azure virtual network gateways) 1. Make sure that the virtual network, subnets, and ranges in the **Local network gateway** definition in Microsoft Azure are the same as the configuration on the on-premises VPN device. 2. Verify that the Security Association settings match. -### Step 3 Check for User-Defined Routes or Network Security Groups on Gateway Subnet +### Step 3: Check for User-Defined Routes or Network Security Groups on Gateway Subnet A user-defined route on the gateway subnet may be restricting some traffic and allowing other traffic. This makes it appear that the VPN connection is unreliable for some traffic and good for others. 
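The gateway subnet check in Step 3 can be done quickly from PowerShell by inspecting the subnet configuration. The VNet and resource group names in this sketch are hypothetical.

```azurepowershell-interactive
# Sketch: inspect the GatewaySubnet for an associated route table or NSG.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
$gwsubnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet

$gwsubnet.RouteTable            # non-null output means a user-defined route table is associated
$gwsubnet.NetworkSecurityGroup  # non-null output means an NSG is associated
```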
-### Step 4 Check the "one VPN Tunnel per Subnet Pair" setting (for policy-based virtual network gateways) +### Step 4: Check the "one VPN Tunnel per Subnet Pair" setting (for policy-based virtual network gateways) Make sure that the on-premises VPN device is set to have **one VPN tunnel per subnet pair** for policy-based virtual network gateways. -### Step 5 Check for Security Association Limitations +### Step 5: Check for Security Association Limitations The virtual network gateway has a limit of 200 subnet Security Association pairs. If the number of Azure virtual network subnets multiplied by the number of local subnets is greater than 200, you might see sporadic subnet disconnections. -### Step 6 Check on-premises VPN device external interface address +### Step 6: Check the on-premises VPN device external interface address If the Internet-facing IP address of the VPN device is included in the **Local network gateway address space** definition in Azure, you may experience sporadic disconnections. -### Step 7 Check whether the on-premises VPN device has Perfect Forward Secrecy enabled +### Step 7: Check whether the on-premises VPN device has Perfect Forward Secrecy enabled The **Perfect Forward Secrecy** feature can cause disconnection problems. If the VPN device has **Perfect Forward Secrecy** enabled, disable the feature. Then [update the virtual network gateway IPsec policy](vpn-gateway-ipsecikepolicy-rm-powershell.md#managepolicy). |
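The 200 subnet Security Association pair limit in Step 5 above can be estimated by multiplying the VNet's subnet count by the local network gateway's prefix count. The resource names in this sketch are hypothetical.

```azurepowershell-interactive
# Sketch: estimate subnet Security Association pairs against the 200-pair limit.
$vnet = Get-AzVirtualNetwork -Name "VNet1" -ResourceGroupName "TestRG1"
$lng  = Get-AzLocalNetworkGateway -Name "Site1" -ResourceGroupName "TestRG1"

$pairs = $vnet.Subnets.Count * $lng.LocalNetworkAddressSpace.AddressPrefixes.Count
if ($pairs -gt 200) { "Subnet pair count $pairs exceeds the 200 SA pair limit." }
```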
vpn-gateway | Vpn Gateway Vnet Vnet Rm Ps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md | -This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When connecting VNets from different subscriptions, the subscriptions do not need to be associated with the same Active Directory tenant. +This article helps you connect virtual networks by using the VNet-to-VNet connection type. The virtual networks can be in the same or different regions, and from the same or different subscriptions. When you connect virtual networks from different subscriptions, the subscriptions don't need to be associated with the same Active Directory tenant. The steps in this article apply to the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and use PowerShell. You can also create this configuration using a different deployment tool or deployment model by selecting a different option from the following list: The steps in this article apply to the [Resource Manager deployment model](../az > * [Connect different deployment models - Azure portal](vpn-gateway-connect-different-deployment-models-portal.md) > * [Connect different deployment models - PowerShell](vpn-gateway-connect-different-deployment-models-powershell.md) + ## <a name="about"></a>About connecting VNets -There are multiple ways to connect VNets. The sections below describe different ways to connect virtual networks. +There are multiple ways to connect VNets. The following sections describe different ways to connect virtual networks. ### VNet-to-VNet -Configuring a VNet-to-VNet connection is a good way to easily connect VNets. 
Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you do not see the local network gateway address space. It is automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets. +Configuring a VNet-to-VNet connection is a good way to easily connect VNets. Connecting a virtual network to another virtual network using the VNet-to-VNet connection type (VNet2VNet) is similar to creating a Site-to-Site IPsec connection to an on-premises location. Both connectivity types use a VPN gateway to provide a secure tunnel using IPsec/IKE, and both function the same way when communicating. The difference between the connection types is the way the local network gateway is configured. When you create a VNet-to-VNet connection, you don't see the local network gateway address space. It's automatically created and populated. If you update the address space for one VNet, the other VNet automatically knows to route to the updated address space. Creating a VNet-to-VNet connection is typically faster and easier than creating a Site-to-Site connection between VNets. ### Site-to-Site (IPsec) -If you are working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-create-site-to-site-rm-powershell.md) steps, instead the VNet-to-VNet steps. 
When you use the Site-to-Site steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to update the corresponding local network gateway to reflect the change. It does not automatically update. +If you're working with a complicated network configuration, you may prefer to connect your VNets using the [Site-to-Site](vpn-gateway-create-site-to-site-rm-powershell.md) steps, instead of the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually. The local network gateway for each VNet treats the other VNet as a local site. This lets you specify additional address space for the local network gateway in order to route traffic. If the address space for a VNet changes, you need to update the corresponding local network gateway to reflect the change. It doesn't automatically update. ### VNet peering -You may want to consider connecting your VNets using VNet Peering. VNet peering does not use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). +You may want to consider connecting your VNets using VNet Peering. VNet peering doesn't use a VPN gateway and has different constraints. Additionally, [VNet peering pricing](https://azure.microsoft.com/pricing/details/virtual-network) is calculated differently than [VNet-to-VNet VPN Gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway). 
For more information, see [VNet peering](../virtual-network/virtual-network-peering-overview.md). ## <a name="why"></a>Why create a VNet-to-VNet connection? VNet-to-VNet communication can be combined with multi-site configurations. This ## <a name="steps"></a>Which VNet-to-VNet steps should I use? In this article, you see two different sets of steps. One set of steps for [VNets that reside in the same subscription](#samesub) and one for [VNets that reside in different subscriptions](#difsub).-The key difference between the sets is that you must use separate PowerShell sessions when configuring the connections for VNets that reside in different subscriptions. +The key difference between the sets is that you must use separate PowerShell sessions when configuring the connections for VNets that reside in different subscriptions. -For this exercise, you can combine configurations, or just choose the one that you want to work with. All of the configurations use the VNet-to-VNet connection type. Network traffic flows between the VNets that are directly connected to each other. In this exercise, traffic from TestVNet4 does not route to TestVNet5. +For this exercise, you can combine configurations, or just choose the one that you want to work with. All of the configurations use the VNet-to-VNet connection type. Network traffic flows between the VNets that are directly connected to each other. In this exercise, traffic from TestVNet4 doesn't route to TestVNet5. * [VNets that reside in the same subscription](#samesub): The steps for this configuration use TestVNet1 and TestVNet4. - ![Diagram that shows V Net-to-V Net steps for V Nets that reside in the same subscription.](./media/vpn-gateway-vnet-vnet-rm-ps/v2vrmps.png) - * [VNets that reside in different subscriptions](#difsub): The steps for this configuration use TestVNet1 and TestVNet5. 
- ![v2v diagram](./media/vpn-gateway-vnet-vnet-rm-ps/v2vdiffsub.png) - ## <a name="samesub"></a>How to connect VNets that are in the same subscription -### Before you begin +You can complete the following steps using Azure Cloud Shell. If you would rather install the latest version of the Azure PowerShell module locally, see [How to install and configure Azure PowerShell](/powershell/azure/). --* Because it takes 45 minutes or more to create a gateway, Azure Cloud Shell will timeout periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal. --* If you would rather install latest version of the Azure PowerShell module locally, see [How to install and configure Azure PowerShell](/powershell/azure/). +Because it takes 45 minutes or more to create a gateway, Azure Cloud Shell times out periodically during this exercise. You can restart Cloud Shell by clicking in the upper left of the terminal. Be sure to redeclare any variables when you restart the terminal. ### <a name="Step1"></a>Step 1 - Plan your IP address ranges -In the following steps, you create two virtual networks along with their respective gateway subnets and configurations. You then create a VPN connection between the two VNets. It's important to plan the IP address ranges for your network configuration. Keep in mind that you must make sure that none of your VNet ranges or local network ranges overlap in any way. In these examples, we do not include a DNS server. If you want name resolution for your virtual networks, see [Name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). +In the following steps, you create two virtual networks along with their respective gateway subnets and configurations. You then create a VPN connection between the two VNets. It's important to plan the IP address ranges for your network configuration. 
Keep in mind that none of your VNet ranges or local network ranges can overlap in any way. In these examples, we don't include a DNS server. If you want name resolution for your virtual networks, see [Name resolution](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md). We use the following values in the examples: We use the following values in the examples: * VNet Name: TestVNet1 * Resource Group: TestRG1 * Location: East US-* TestVNet1: 10.11.0.0/16 & 10.12.0.0/16 -* FrontEnd: 10.11.0.0/24 -* BackEnd: 10.12.0.0/24 -* GatewaySubnet: 10.12.255.0/27 +* TestVNet1: 10.1.0.0/16 +* FrontEnd: 10.1.0.0/24 +* GatewaySubnet: 10.1.255.0/27 * GatewayName: VNet1GW * Public IP: VNet1GWIP * VPNType: RouteBased We use the following values in the examples: **Values for TestVNet4:** * VNet Name: TestVNet4-* TestVNet2: 10.41.0.0/16 & 10.42.0.0/16 +* TestVNet4: 10.41.0.0/16 * FrontEnd: 10.41.0.0/24-* BackEnd: 10.42.0.0/24 -* GatewaySubnet: 10.42.255.0/27 +* GatewaySubnet: 10.41.255.0/27 * Resource Group: TestRG4 * Location: West US * GatewayName: VNet4GW We use the following values in the examples: * Connection: VNet4toVNet1 * ConnectionType: VNet2VNet - ### <a name="Step2"></a>Step 2 - Create and configure TestVNet1 -1. Verify your subscription settings. -- Connect to your account if you are running PowerShell locally on your computer. If you are using Azure Cloud Shell, you are connected automatically. -- ```azurepowershell-interactive - Connect-AzAccount - ``` -- Check the subscriptions for the account. -- ```azurepowershell-interactive - Get-AzSubscription - ``` - If you have more than one subscription, specify the subscription that you want to use. 
+> [!NOTE] +> You may see warnings saying "The output object type of this cmdlet will be modified in a future release". This is expected behavior and you can safely ignore these warnings. - ```azurepowershell-interactive - Select-AzSubscription -SubscriptionName nameofsubscription - ``` -2. Declare your variables. This example declares the variables using the values for this exercise. In most cases, you should replace the values with your own. However, you can use these variables if you are running through the steps to become familiar with this type of configuration. Modify the variables if needed, then copy and paste them into your PowerShell console. +1. Declare your variables. This example declares the variables using the values for this exercise. In most cases, you should replace the values with your own. However, you can use these variables if you're running through the steps to become familiar with this type of configuration. Modify the variables if needed, then copy and paste them into your PowerShell console. ```azurepowershell-interactive $RG1 = "TestRG1" $Location1 = "East US" $VNetName1 = "TestVNet1" $FESubName1 = "FrontEnd"- $BESubName1 = "Backend" - $VNetPrefix11 = "10.11.0.0/16" - $VNetPrefix12 = "10.12.0.0/16" - $FESubPrefix1 = "10.11.0.0/24" - $BESubPrefix1 = "10.12.0.0/24" - $GWSubPrefix1 = "10.12.255.0/27" + $VNetPrefix1 = "10.1.0.0/16" + $FESubPrefix1 = "10.1.0.0/24" + $GWSubPrefix1 = "10.1.255.0/27" $GWName1 = "VNet1GW" $GWIPName1 = "VNet1GWIP" $GWIPconfName1 = "gwipconf1" $Connection14 = "VNet1toVNet4" $Connection15 = "VNet1toVNet5" ```-3. Create a resource group. ++1. Create a resource group. ```azurepowershell-interactive New-AzResourceGroup -Name $RG1 -Location $Location1 ```-4. Create the subnet configurations for TestVNet1. This example creates a virtual network named TestVNet1 and three subnets, one called GatewaySubnet, one called FrontEnd, and one called Backend. 
When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails. For this reason, it is not assigned via variable below. - The following example uses the variables that you set earlier. In this example, the gateway subnet is using a /27. While it is possible to create a gateway subnet as small as /29, we recommend that you create a larger subnet that includes more addresses by selecting at least /28 or /27. This will allow for enough addresses to accommodate possible additional configurations that you may want in the future. +1. Create the subnet configurations for TestVNet1. This example creates a virtual network named TestVNet1 and two subnets, one called GatewaySubnet, and one called FrontEnd. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails. For this reason, it isn't assigned via variable in the example. ++ The following example uses the variables that you set earlier. In this example, the gateway subnet is using a /27. While it's possible to create a gateway subnet using /28 for this configuration, we recommend that you create a larger subnet that includes more addresses by selecting at least /27. This will allow for enough addresses to accommodate possible additional configurations that you may want in the future. ```azurepowershell-interactive $fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubName1 -AddressPrefix $FESubPrefix1- $besub1 = New-AzVirtualNetworkSubnetConfig -Name $BESubName1 -AddressPrefix $BESubPrefix1 $gwsub1 = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix $GWSubPrefix1 ```-5. Create TestVNet1. ++1. Create TestVNet1. 
```azurepowershell-interactive New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 `- -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1 + -Location $Location1 -AddressPrefix $VNetPrefix1 -Subnet $fesub1,$gwsub1 ```-6. Request a public IP address to be allocated to the gateway you will create for your VNet. Notice that the AllocationMethod is Dynamic. You cannot specify the IP address that you want to use. It's dynamically allocated to your gateway. ++1. A VPN gateway must have an allocated public IP address. When you create a connection to a VPN gateway, this is the IP address that you specify. Use the following example to request a public IP address. ```azurepowershell-interactive $gwpip1 = New-AzPublicIpAddress -Name $GWIPName1 -ResourceGroupName $RG1 `- -Location $Location1 -AllocationMethod Dynamic + -Location $Location1 -AllocationMethod Static -Sku Standard ```-7. Create the gateway configuration. The gateway configuration defines the subnet and the public IP address to use. Use the example to create your gateway configuration. ++1. Create the gateway configuration. The gateway configuration defines the subnet and the public IP address to use. Use the example to create your gateway configuration. ```azurepowershell-interactive $vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 We use the following values in the examples: $gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName1 ` -Subnet $subnet1 -PublicIpAddress $gwpip1 ```-8. Create the gateway for TestVNet1. In this step, you create the virtual network gateway for your TestVNet1. VNet-to-VNet configurations require a RouteBased VpnType. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. ++1. Create the gateway for TestVNet1. In this step, you create the virtual network gateway for your TestVNet1. VNet-to-VNet configurations require a RouteBased VpnType. 
Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. ```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 ` -Location $Location1 -IpConfigurations $gwipconf1 -GatewayType Vpn `- -VpnType RouteBased -GatewaySku VpnGw1 + -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2" ``` -After you finish the commands, it will take 45 minutes or more to create this gateway. If you are using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes. +After you finish the commands, it will take 45 minutes or more to create this gateway. If you're using Azure Cloud Shell, you can restart your Cloud Shell session by clicking in the upper left of the Cloud Shell terminal, then configure TestVNet4. You don't need to wait until the TestVNet1 gateway completes. -### Step 3 - Create and configure TestVNet4 +### Step 3: Create and configure TestVNet4 -Once you've configured TestVNet1, create TestVNet4. Follow the steps below, replacing the values with your own when needed. +Create TestVNet4. Use the following steps, replacing the values with your own when needed. 1. Connect and declare your variables. Be sure to replace the values with the ones that you want to use for your configuration. Once you've configured TestVNet1, create TestVNet4. Follow the steps below, repl $Location4 = "West US" $VnetName4 = "TestVNet4" $FESubName4 = "FrontEnd"- $BESubName4 = "Backend" - $VnetPrefix41 = "10.41.0.0/16" - $VnetPrefix42 = "10.42.0.0/16" + $VnetPrefix4 = "10.41.0.0/16" $FESubPrefix4 = "10.41.0.0/24"- $BESubPrefix4 = "10.42.0.0/24" - $GWSubPrefix4 = "10.42.255.0/27" + $GWSubPrefix4 = "10.41.255.0/27" $GWName4 = "VNet4GW" $GWIPName4 = "VNet4GWIP" $GWIPconfName4 = "gwipconf4" $Connection41 = "VNet4toVNet1" ```-2. Create a resource group. ++1. 
Create a resource group. ```azurepowershell-interactive New-AzResourceGroup -Name $RG4 -Location $Location4 ```-3. Create the subnet configurations for TestVNet4. ++1. Create the subnet configurations for TestVNet4. ```azurepowershell-interactive $fesub4 = New-AzVirtualNetworkSubnetConfig -Name $FESubName4 -AddressPrefix $FESubPrefix4- $besub4 = New-AzVirtualNetworkSubnetConfig -Name $BESubName4 -AddressPrefix $BESubPrefix4 $gwsub4 = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix $GWSubPrefix4 ```-4. Create TestVNet4. ++1. Create TestVNet4. ```azurepowershell-interactive New-AzVirtualNetwork -Name $VnetName4 -ResourceGroupName $RG4 `- -Location $Location4 -AddressPrefix $VnetPrefix41,$VnetPrefix42 -Subnet $fesub4,$besub4,$gwsub4 + -Location $Location4 -AddressPrefix $VnetPrefix4 -Subnet $fesub4,$gwsub4 ```-5. Request a public IP address. ++1. Request a public IP address. ```azurepowershell-interactive $gwpip4 = New-AzPublicIpAddress -Name $GWIPName4 -ResourceGroupName $RG4 `- -Location $Location4 -AllocationMethod Dynamic + -Location $Location4 -AllocationMethod Static -Sku Standard ```-6. Create the gateway configuration. ++1. Create the gateway configuration. ```azurepowershell-interactive $vnet4 = Get-AzVirtualNetwork -Name $VnetName4 -ResourceGroupName $RG4 $subnet4 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet4 $gwipconf4 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName4 -Subnet $subnet4 -PublicIpAddress $gwpip4 ```-7. Create the TestVNet4 gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. ++1. Create the TestVNet4 gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. 
```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName4 -ResourceGroupName $RG4 ` -Location $Location4 -IpConfigurations $gwipconf4 -GatewayType Vpn `- -VpnType RouteBased -GatewaySku VpnGw1 + -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2" ``` -### Step 4 - Create the connections +### Step 4: Create the connections Wait until both gateways are completed. Restart your Azure Cloud Shell session and copy and paste the variables from the beginning of Step 2 and Step 3 into the console to redeclare values. Wait until both gateways are completed. Restart your Azure Cloud Shell session a $vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 $vnet4gw = Get-AzVirtualNetworkGateway -Name $GWName4 -ResourceGroupName $RG4 ```-2. Create the TestVNet1 to TestVNet4 connection. In this step, you create the connection from TestVNet1 to TestVNet4. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete. ++1. Create the TestVNet1 to TestVNet4 connection. In this step, you create the connection from TestVNet1 to TestVNet4. You'll see a shared key referenced in the examples. You can use your own values for the shared key. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete. ```azurepowershell-interactive New-AzVirtualNetworkGatewayConnection -Name $Connection14 -ResourceGroupName $RG1 ` -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet4gw -Location $Location1 ` -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' ```-3. Create the TestVNet4 to TestVNet1 connection. This step is similar to the one above, except you are creating the connection from TestVNet4 to TestVNet1. Make sure the shared keys match. 
The connection will be established after a few minutes. ++1. Create the TestVNet4 to TestVNet1 connection. This step is similar to the previous step, except you're creating the connection from TestVNet4 to TestVNet1. Make sure the shared keys match. The connection will be established after a few minutes. ```azurepowershell-interactive New-AzVirtualNetworkGatewayConnection -Name $Connection41 -ResourceGroupName $RG4 ` -VirtualNetworkGateway1 $vnet4gw -VirtualNetworkGateway2 $vnet1gw -Location $Location4 ` -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' ```-4. Verify your connection. See the section [How to verify your connection](#verify). ++1. Verify your connection. See the section [How to verify your connection](#verify). ## <a name="difsub"></a>How to connect VNets that are in different subscriptions -In this scenario, you connect TestVNet1 and TestVNet5. TestVNet1 and TestVNet5 reside in different subscriptions. The subscriptions do not need to be associated with the same Active Directory tenant. +In this scenario, you connect TestVNet1 and TestVNet5. TestVNet1 and TestVNet5 reside in different subscriptions. The subscriptions don't need to be associated with the same Active Directory tenant. The difference between these steps and the previous set is that some of the configuration steps need to be performed in a separate PowerShell session in the context of the second subscription, especially when the two subscriptions belong to different organizations. Due to changing subscription context in this exercise, you may find it easier to use PowerShell locally on your computer, rather than using the Azure Cloud Shell, when you get to Step 8. -### Step 5 - Create and configure TestVNet1 +### Step 5: Create and configure TestVNet1 -You must complete [Step 1](#Step1) and [Step 2](#Step2) from the previous section to create and configure TestVNet1 and the VPN Gateway for TestVNet1. 
For this configuration, you are not required to create TestVNet4 from the previous section, although if you do create it, it will not conflict with these steps. Once you complete Step 1 and Step 2, continue with Step 6 to create TestVNet5. +You must complete [Step 1](#Step1) and [Step 2](#Step2) from the previous section to create and configure TestVNet1 and the VPN Gateway for TestVNet1. For this configuration, you aren't required to create TestVNet4 from the previous section, although if you do create it, it won't conflict with these steps. Once you complete Step 1 and Step 2, continue with Step 6 to create TestVNet5. -### Step 6 - Verify the IP address ranges +### Step 6: Verify the IP address ranges -It is important to make sure that the IP address space of the new virtual network, TestVNet5, does not overlap with any of your VNet ranges or local network gateway ranges. In this example, the virtual networks may belong to different organizations. For this exercise, you can use the following values for the TestVNet5: +It's important to make sure that the IP address space of the new virtual network, TestVNet5, doesn't overlap with any of your VNet ranges or local network gateway ranges. In this example, the virtual networks may belong to different organizations. For this exercise, you can use the following values for TestVNet5: **Values for TestVNet5:** * VNet Name: TestVNet5 * Resource Group: TestRG5 * Location: Japan East-* TestVNet5: 10.51.0.0/16 & 10.52.0.0/16 +* TestVNet5: 10.51.0.0/16 * FrontEnd: 10.51.0.0/24-* BackEnd: 10.52.0.0/24 -* GatewaySubnet: 10.52.255.0.0/27 +* GatewaySubnet: 10.51.255.0/27 * GatewayName: VNet5GW * Public IP: VNet5GWIP * VPNType: RouteBased * Connection: VNet5toVNet1 * ConnectionType: VNet2VNet -### Step 7 - Create and configure TestVNet5 +### Step 7: Create and configure TestVNet5 This step must be done in the context of the new subscription. 
This part may be performed by the administrator in a different organization that owns the subscription. This step must be done in the context of the new subscription. This part may be $Location5 = "Japan East" $VnetName5 = "TestVNet5" $FESubName5 = "FrontEnd"- $BESubName5 = "Backend" $GWSubName5 = "GatewaySubnet"- $VnetPrefix51 = "10.51.0.0/16" - $VnetPrefix52 = "10.52.0.0/16" + $VnetPrefix5 = "10.51.0.0/16" $FESubPrefix5 = "10.51.0.0/24"- $BESubPrefix5 = "10.52.0.0/24" - $GWSubPrefix5 = "10.52.255.0/27" + $GWSubPrefix5 = "10.51.255.0/27" $GWName5 = "VNet5GW" $GWIPName5 = "VNet5GWIP" $GWIPconfName5 = "gwipconf5" $Connection51 = "VNet5toVNet1" ```-2. Connect to subscription 5. Open your PowerShell console and connect to your account. Use the following sample to help you connect: ++1. Connect to subscription 5. Open your PowerShell console and connect to your account. Use the following sample to help you connect: ```azurepowershell-interactive Connect-AzAccount This step must be done in the context of the new subscription. This part may be ```azurepowershell-interactive Select-AzSubscription -SubscriptionName $Sub5 ```-3. Create a new resource group. ++1. Create a new resource group. ```azurepowershell-interactive New-AzResourceGroup -Name $RG5 -Location $Location5 ```-4. Create the subnet configurations for TestVNet5. ++1. Create the subnet configurations for TestVNet5. ```azurepowershell-interactive $fesub5 = New-AzVirtualNetworkSubnetConfig -Name $FESubName5 -AddressPrefix $FESubPrefix5- $besub5 = New-AzVirtualNetworkSubnetConfig -Name $BESubName5 -AddressPrefix $BESubPrefix5 $gwsub5 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName5 -AddressPrefix $GWSubPrefix5 ```-5. Create TestVNet5. ++1. Create TestVNet5. ```azurepowershell-interactive New-AzVirtualNetwork -Name $VnetName5 -ResourceGroupName $RG5 -Location $Location5 `- -AddressPrefix $VnetPrefix51,$VnetPrefix52 -Subnet $fesub5,$besub5,$gwsub5 + -AddressPrefix $VnetPrefix5 -Subnet $fesub5,$gwsub5 ```-6. 
Request a public IP address. ++1. Request a public IP address. ```azurepowershell-interactive $gwpip5 = New-AzPublicIpAddress -Name $GWIPName5 -ResourceGroupName $RG5 `- -Location $Location5 -AllocationMethod Dynamic + -Location $Location5 -AllocationMethod Static -Sku Standard ```-7. Create the gateway configuration. ++1. Create the gateway configuration. ```azurepowershell-interactive $vnet5 = Get-AzVirtualNetwork -Name $VnetName5 -ResourceGroupName $RG5 $subnet5 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet5 $gwipconf5 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName5 -Subnet $subnet5 -PublicIpAddress $gwpip5 ```-8. Create the TestVNet5 gateway. ++1. Create the TestVNet5 gateway. ```azurepowershell-interactive New-AzVirtualNetworkGateway -Name $GWName5 -ResourceGroupName $RG5 -Location $Location5 `- -IpConfigurations $gwipconf5 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 + -IpConfigurations $gwipconf5 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2 -VpnGatewayGeneration "Generation2" ``` -### Step 8 - Create the connections +### Step 8: Create the connections In this example, because the gateways are in the different subscriptions, we've split this step into two PowerShell sessions marked as [Subscription 1] and [Subscription 5]. In this example, because the gateways are in the different subscriptions, we've These two elements will have values similar to the following example output: - ``` + ```azurepowershell-interactive PS D:\> $vnet1gw.Name VNet1GW PS D:\> $vnet1gw.Id /subscriptions/b636ca99-6f88-4df4-a7c3-2f8dc4545509/resourceGroupsTestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW ```-2. **[Subscription 5]** Get the virtual network gateway for Subscription 5. Sign in and connect to Subscription 5 before running the following example: ++1. **[Subscription 5]** Get the virtual network gateway for Subscription 5. 
Sign in and connect to Subscription 5 before running the following example: ```azurepowershell-interactive $vnet5gw = Get-AzVirtualNetworkGateway -Name $GWName5 -ResourceGroupName $RG5 In this example, because the gateways are in the different subscriptions, we've These two elements will have values similar to the following example output: - ``` + ```azurepowershell-interactive PS C:\> $vnet5gw.Name VNet5GW PS C:\> $vnet5gw.Id /subscriptions/66c8e4f1-ecd6-47ed-9de7-7e530de23994/resourceGroups/TestRG5/providers/Microsoft.Network/virtualNetworkGateways/VNet5GW ```-3. **[Subscription 1]** Create the TestVNet1 to TestVNet5 connection. In this step, you create the connection from TestVNet1 to TestVNet5. The difference here is that $vnet5gw cannot be obtained directly because it is in a different subscription. You will need to create a new PowerShell object with the values communicated from Subscription 1 in the steps above. Use the example below. Replace the Name, ID, and shared key with your own values. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete. ++1. **[Subscription 1]** Create the TestVNet1 to TestVNet5 connection. In this step, you create the connection from TestVNet1 to TestVNet5. The difference here is that $vnet5gw can't be obtained directly because it is in a different subscription. You'll need to create a new PowerShell object with the values communicated from Subscription 5 in the previous steps. Use the following example. Replace the Name, ID, and shared key with your own values. The important thing is that the shared key must match for both connections. Creating a connection can take a short while to complete. 
Connect to Subscription 1 before running the following example: In this example, because the gateways are in the different subscriptions, we've $Connection15 = "VNet1toVNet5" New-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet5gw -Location $Location1 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' ```-4. **[Subscription 5]** Create the TestVNet5 to TestVNet1 connection. This step is similar to the one above, except you are creating the connection from TestVNet5 to TestVNet1. The same process of creating a PowerShell object based on the values obtained from Subscription 1 applies here as well. In this step, be sure that the shared keys match. ++1. **[Subscription 5]** Create the TestVNet5 to TestVNet1 connection. This step is similar to the previous step, except you're creating the connection from TestVNet5 to TestVNet1. The same process of creating a PowerShell object based on the values obtained from Subscription 1 applies here as well. In this step, be sure that the shared keys match. Connect to Subscription 5 before running the following example: In this example, because the gateways are in the different subscriptions, we've ## <a name="faq"></a>VNet-to-VNet FAQ +For more information about VNet-to-VNet connections, see the [VPN Gateway FAQ](vpn-gateway-vpn-faq.md#V2VMulti). ## Next steps |
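The VNet-to-VNet walkthrough above repeatedly stresses that the address spaces of the connected virtual networks must not overlap (Step 6), and that the gateway subnet should be at least a /27. Both conditions can be sanity-checked offline before running any PowerShell. The following Python sketch is editorial illustration only, not part of the documented commit; the TestVNet1 prefix is an assumed placeholder, since only the TestVNet4 and TestVNet5 prefixes appear in the excerpt:

```python
import ipaddress

# Prefixes from the walkthrough. The TestVNet1 value is a hypothetical
# placeholder for illustration; TestVNet4/TestVNet5 come from the examples.
vnets = {
    "TestVNet1": "10.11.0.0/16",  # assumed, not given in the excerpt
    "TestVNet4": "10.41.0.0/16",  # $VnetPrefix4
    "TestVNet5": "10.51.0.0/16",  # TestVNet5 value from Step 6
}

def find_overlaps(prefixes):
    """Return pairs of VNet names whose address spaces overlap."""
    nets = {name: ipaddress.ip_network(p) for name, p in prefixes.items()}
    names = sorted(nets)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if nets[a].overlaps(nets[b])
    ]

print(find_overlaps(vnets))  # [] -> no overlaps, safe to connect

# The recommended /27 gateway subnet holds 32 addresses; a /28 holds only 16.
print(ipaddress.ip_network("10.41.255.0/27").num_addresses)  # 32
```

An empty result means the VNets can be connected; any returned pair identifies prefixes that must be changed first.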
web-application-firewall | Protect Azure Open Ai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/protect-azure-open-ai.md | + + Title: Protect Azure OpenAI using Azure Web Application Firewall on Azure Front Door +description: Learn how to protect Azure OpenAI using Azure Web Application Firewall on Azure Front Door ++++ Last updated : 08/28/2023+++# Protect Azure OpenAI using Azure Web Application Firewall on Azure Front Door ++There are a growing number of enterprises using Azure OpenAI APIs, and the number and complexity of security attacks against web applications are constantly evolving. A strong security strategy is necessary to protect Azure OpenAI APIs from various web application attacks. ++Azure Web Application Firewall (WAF) is an Azure Networking product that protects web applications and APIs from various OWASP top 10 web attacks, Common Vulnerabilities and Exposures (CVEs), and malicious bot attacks. ++This article describes how to use Azure Web Application Firewall (WAF) on Azure Front Door to protect Azure OpenAI endpoints. ++## Prerequisites ++If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +++## Create Azure OpenAI instance using the gpt-35-turbo model +First, create an OpenAI instance. +++1. Create an Azure OpenAI instance and deploy a gpt-35-turbo model using [Create and deploy an Azure OpenAI Service resource](../../ai-services/openai/how-to/create-resource.md). +1. Identify the Azure OpenAI endpoint and the API key. ++ Open the Azure OpenAI Studio and open the **Chat** option under **Playground**. + Use the **View code** option to display the endpoint and the API key. + :::image type="content" source="../media/protect-azure-open-ai/view-code.png" alt-text="Screenshot showing Azure AI Studio Chat playground." 
lightbox="../media/protect-azure-open-ai/view-code.png"::: + <br> ++ :::image type="content" source="../media/protect-azure-open-ai/sample-code.png" alt-text="Screenshot showing Azure OpenAI sample code with Endpoint and Key."::: ++1. Validate Azure OpenAI call using [Postman](https://www.postman.com/). + Use the Azure OpenAI endpoint and api-key values found in the earlier steps. + Use these lines of code in the POST body: ++ ```json + { + "model":"gpt-35-turbo", + "messages": [ + { + "role": "user", + "content": "What is Azure OpenAI?" + } + ] + } ++ ``` + :::image type="content" source="../media/protect-azure-open-ai/postman-body.png" alt-text="Screenshot showing the post body." lightbox="../media/protect-azure-open-ai/postman-body.png"::: +1. In response to the POST, you should receive a *200 OK*: + :::image type="content" source="../media/protect-azure-open-ai/post-200-ok.png" alt-text="Screenshot showing the POST 200 OK." lightbox="../media/protect-azure-open-ai/post-200-ok.png"::: ++ Azure OpenAI also generates a response using the GPT model. ++## Create an Azure Front Door instance with Azure WAF ++Now use the Azure portal to create an Azure Front Door instance with Azure WAF. ++1. Create an Azure Front Door premium optimized tier with an associated WAF security policy in the same resource group. Use the **Custom create** option. ++ 1. [Quickstart: Create an Azure Front Door profile - Azure portal](../../frontdoor/create-front-door-portal.md#create-a-front-door-for-your-application) +1. Add endpoints and routes. +1. Add the origin hostname: The origin hostname is `testazureopenai.openai.azure.com`. +1. Add the WAF policy. +++## Configure a WAF policy to protect against web application and API vulnerabilities ++Enable the WAF policy in prevention mode and ensure **Microsoft_DefaultRuleSet_2.1** and **Microsoft_BotManagerRuleSet_1.0** are enabled. 
+++## Verify access to Azure OpenAI via Azure Front Door endpoint ++Now verify your Azure Front Door endpoint. ++1. Retrieve the Azure Front Door endpoint from the Front Door Manager. ++ :::image type="content" source="../media/protect-azure-open-ai/front-door-endpoint.png" alt-text="Screenshot showing the Azure Front Door endpoint." lightbox="../media/protect-azure-open-ai/front-door-endpoint.png"::: +2. Use Postman to send a POST request to the Azure Front Door endpoint. + 1. Replace the Azure OpenAI endpoint with the AFD endpoint in the Postman POST request. + :::image type="content" source="../media/protect-azure-open-ai/test-final.png" alt-text="Screenshot showing the final POST." lightbox="../media/protect-azure-open-ai/test-final.png"::: ++ Azure OpenAI also generates a response using the GPT model. ++## Validate WAF blocks an OWASP attack ++Send a POST request simulating an OWASP attack on the Azure OpenAI endpoint. WAF blocks the call with a *403 Forbidden* response code. ++## Configure IP restriction rules using WAF ++To restrict access to the Azure OpenAI endpoint to the required IP addresses, see [Configure an IP restriction rule with a WAF for Azure Front Door](waf-front-door-configure-ip-restriction.md). ++## Common issues ++The following items are common issues you may encounter when using Azure OpenAI with Azure Front Door and Azure WAF. ++- You get a *401: Access Denied* message when you send a POST request to your Azure OpenAI endpoint. ++ If you attempt to send a POST request to your Azure OpenAI endpoint immediately after you create it, you may receive a *401: Access Denied* message even if you have the correct API key in your request. This issue will usually resolve itself after some time without any direct intervention. ++- You get a *415: Unsupported Media Type* message when you send a POST request to your Azure OpenAI endpoint. 
++ If you attempt to send a POST request to your Azure OpenAI endpoint with the Content-Type header `text/plain`, you get this message. Make sure to update your Content-Type header to `application/json` in the header section in Postman. |
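The Postman steps in the article above amount to a JSON POST with an `api-key` header. The same request can be scripted; the sketch below is an editorial illustration (not code from the commit), and the endpoint URL, deployment path, and API version are hypothetical placeholders you'd replace with your own values. Note that it sets `Content-Type: application/json`, which is exactly what avoids the *415: Unsupported Media Type* issue described under "Common issues":

```python
import json
import urllib.request

# Placeholder values -- substitute your own Front Door endpoint, deployment, and key.
ENDPOINT = "https://contoso-afd-endpoint.azurefd.net"  # hypothetical AFD endpoint
PATH = "/openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-05-15"
API_KEY = "<your-api-key>"

def build_request():
    """Build the chat-completions POST request described in the article."""
    body = {
        "model": "gpt-35-turbo",
        "messages": [{"role": "user", "content": "What is Azure OpenAI?"}],
    }
    return urllib.request.Request(
        ENDPOINT + PATH,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "api-key": API_KEY,
            # Sending text/plain instead triggers the 415 error noted above.
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request()
print(req.get_method(), req.get_full_url())
# Sending it with urllib.request.urlopen(req) should return 200 OK,
# or 403 Forbidden if the WAF policy blocks the request.
```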
web-application-firewall | Waf Front Door Drs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-drs.md | The following rule groups and rules are available when you use Azure Web Applica |941150|XSS Filter - Category 5: Disallowed HTML Attributes| |941160|NoScript XSS InjectionChecker: HTML Injection| |941170|NoScript XSS InjectionChecker: Attribute Injection|-|941180|Node-Validator Blacklist Keywords| +|941180|Node-Validator Blocklist Keywords| |941190|XSS using style sheets| |941200|XSS using VML frames| |941210|XSS using obfuscated JavaScript| The following rule groups and rules are available when you use Azure Web Applica |941370|JavaScript global variable found| |941380|AngularJS client side template injection detected| ->[!NOTE] -> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. - ### <a name="drs942-21"></a> SQLI: SQL injection |RuleId|Description| ||| The following rule groups and rules are available when you use Azure Web Applica |941150|XSS Filter - Category 5: Disallowed HTML Attributes.| |941160|NoScript XSS InjectionChecker: HTML Injection.| |941170|NoScript XSS InjectionChecker: Attribute Injection.|-|941180|Node-Validator Blacklist Keywords.| +|941180|Node-Validator Blocklist Keywords.| |941190|XSS Using style sheets.| |941200|XSS using VML frames.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).| The following rule groups and rules are available when you use Azure Web Applica |941370|JavaScript global variable found.| |941380|AngularJS client side template injection detected.| ->[!NOTE] -> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. 
- ### <a name="drs942-20"></a> SQLI: SQL injection |RuleId|Description| ||| The following rule groups and rules are available when you use Azure Web Applica |941150|XSS Filter - Category 5: Disallowed HTML Attributes.| |941160|NoScript XSS InjectionChecker: HTML Injection.| |941170|NoScript XSS InjectionChecker: Attribute Injection.|-|941180|Node-Validator Blacklist Keywords.| +|941180|Node-Validator Blocklist Keywords.| |941190|IE XSS Filters - Attack Detected.| |941200|IE XSS Filters - Attack Detected.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)) found.| The following rule groups and rules are available when you use Azure Web Applica |941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.| ->[!NOTE] -> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. - ### <a name="drs942-11"></a> SQLI: SQL injection |RuleId|Description| ||| The following rule groups and rules are available when you use Azure Web Applica |941150|XSS Filter - Category 5: Disallowed HTML Attributes.| |941160|NoScript XSS InjectionChecker: HTML Injection.| |941170|NoScript XSS InjectionChecker: Attribute Injection.|-|941180|Node-Validator Blacklist Keywords.| +|941180|Node-Validator Blocklist Keywords.| |941190|XSS Using style sheets.| |941200|XSS using VML frames.| |941210|IE XSS Filters - Attack Detected or Text4Shell ([CVE-2022-42889](https://nvd.nist.gov/vuln/detail/CVE-2022-42889)).| The following rule groups and rules are available when you use Azure Web Applica |941340|IE XSS Filters - Attack Detected.| |941350|UTF-7 Encoding IE XSS - Attack Detected.| ->[!NOTE] -> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. 
- ### <a name="drs942-10"></a> SQLI: SQL injection |RuleId|Description| ||| |
web-application-firewall | Waf Front Door Geo Filtering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/afds/waf-front-door-geo-filtering.md | You can configure a geo-filtering policy for your Azure Front Door instance by u | TM | Turkmenistan| | TN | Tunisia| | TO | Tonga|-| TR | Turkey| +| TR | Türkiye| | TT | Trinidad and Tobago| | TV | Tuvalu| | TW | Taiwan| |
web-application-firewall | Application Gateway Crs Rulegroups Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-crs-rulegroups-rules.md | The following rule groups and rules are available when using Web Application Fir |941150|XSS Filter - Category 5: Disallowed HTML Attributes| |941160|NoScript XSS InjectionChecker: HTML Injection| |941170|NoScript XSS InjectionChecker: Attribute Injection|-|941180|Node-Validator Blacklist Keywords| +|941180|Node-Validator Blocklist Keywords| |941190|XSS Using style sheets| |941200|XSS using VML frames| |941210|XSS using obfuscated JavaScript| The following rule groups and rules are available when using Web Application Fir |941380|AngularJS client side template injection detected| >[!NOTE]-> This article contains references to the term *blacklist*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. +> This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article. ### <a name="drs942-21"></a> SQLI - SQL Injection |RuleId|Description| |
web-application-firewall | Application Gateway Customize Waf Rules Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/application-gateway-customize-waf-rules-cli.md | az network application-gateway waf-config set --resource-group AdatumAppGatewayR ## Mandatory rules -The following list contains conditions that cause the WAF to block the request while in Prevention Mode (in Detection Mode they are logged as exceptions). These can't be configured or disabled: +The following list contains conditions that cause the WAF to block the request while in Prevention Mode (in Detection Mode they're logged as exceptions). These conditions can't be configured or disabled: * Failure to parse the request body results in the request being blocked, unless body inspection is turned off (XML, JSON, form data) * Request body (with no files) data length is larger than the configured limit The following list contains conditions that cause the WAF to block the request w CRS 3.x specific: -* Inbound anomaly score exceeded threshold +* Inbound `anomaly score` exceeded threshold ## Next steps |
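The mandatory CRS 3.x condition above, "inbound anomaly score exceeded threshold," refers to OWASP CRS anomaly scoring: each matched rule contributes a severity-based score, and the request is blocked only when the running total reaches the configured inbound threshold (5 by default in CRS 3.x). The following is a simplified, editorial model of that accounting, not WAF source code:

```python
# Severity weights used by OWASP CRS anomaly scoring.
SEVERITY_SCORES = {"CRITICAL": 5, "ERROR": 4, "WARNING": 3, "NOTICE": 2}
INBOUND_THRESHOLD = 5  # CRS 3.x default inbound anomaly score threshold

def evaluate(matched_rule_severities):
    """Sum per-rule scores and decide whether the request would be blocked."""
    score = sum(SEVERITY_SCORES[s] for s in matched_rule_severities)
    return {"anomaly_score": score, "blocked": score >= INBOUND_THRESHOLD}

# One CRITICAL match (for example, an SQLI rule) reaches the threshold on its own...
print(evaluate(["CRITICAL"]))  # {'anomaly_score': 5, 'blocked': True}
# ...while a single NOTICE-level match does not.
print(evaluate(["NOTICE"]))    # {'anomaly_score': 2, 'blocked': False}
```

This is why, in Prevention Mode, a single critical-severity rule match is enough to block a request, while lower-severity matches only block in combination.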
web-application-firewall | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/best-practices.md | Title: Best practices for Web Application Firewall on Azure Application Gateway -description: In this tutorial, you learn about the best practices for using the web application firewall with Application Gateway. + Title: Best practices for Azure Web Application Firewall (WAF) on Azure Application Gateway +description: In this article, you learn about the best practices for using the Azure Web Application Firewall (WAF) on Azure Application Gateway. - Previously updated : 09/06/2022+ Last updated : 08/28/2023 -# Best practices for Web Application Firewall on Application Gateway +# Best practices for Azure Web Application Firewall (WAF) on Azure Application Gateway -This article summarizes best practices for using the web application firewall (WAF) on Azure Application Gateway. +This article summarizes best practices for using Azure Web Application Firewall (WAF) on Azure Application Gateway. ## General best practices ### Enable the WAF -For internet-facing applications, we recommend you enable a web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks. +For Internet-facing applications, we recommend you enable a web application firewall (WAF) and configure it to use managed rules. When you use a WAF and Microsoft-managed rules, your application is protected from a range of attacks. ### Use WAF policies For more information, see [Troubleshoot Web Application Firewall (WAF) for Azure ### Use prevention mode -After you've tuned your WAF, you should configure it to [run in prevention mode](create-waf-policy-ag.md#configure-waf-rules-optional). By running in prevention mode, you ensure the WAF actually blocks requests that it detects are malicious. 
Running in detection mode is useful while you tune and configure your WAF, but provides no protection. +After you tune your WAF, you should configure it to [run in **prevention** mode](create-waf-policy-ag.md#configure-waf-rules-optional). By running in **prevention** mode, you ensure the WAF actually blocks requests that it detects as malicious. Running in **detection** mode is useful for testing purposes while you tune and configure your WAF but it provides no protection. It logs the traffic, but it doesn't take any actions such as *allow* or *deny*. ### Define your WAF configuration as code When you tune your WAF for your application workload, you typically create a set of rule exclusions to reduce false positive detections. If you manually configure these exclusions by using the Azure portal, then when you upgrade your WAF to use a newer ruleset version, you need to reconfigure the same exceptions against the new ruleset version. This process can be time-consuming and error-prone. -Instead, consider defining your WAF rule exclusions and other configuration as code, such as by using the Azure CLI, Azure PowerShell, Bicep or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions. +Instead, consider defining your WAF rule exclusions and other configurations as code, such as by using the Azure CLI, Azure PowerShell, Bicep or Terraform. Then, when you need to update your WAF ruleset version, you can easily reuse the same exclusions. ## Managed ruleset best practices ### Enable core rule sets -Microsoft's core rule sets are designed to protect your application by detecting and blocking common attacks. The rules are based on a various sources including the OWASP top 10 attack types and information from Microsoft Threat Intelligence. +Microsoft's core rule sets are designed to protect your application by detecting and blocking common attacks. 
The rules are based on various sources including the OWASP top 10 attack types and information from Microsoft Threat Intelligence. For more information, see [Web Application Firewall CRS rule groups and rules](application-gateway-crs-rulegroups-rules.md). For more information, see [Geomatch custom rules](geomatch-custom-rules.md). ### Add diagnostic settings to save your WAF's logs -Application Gateway's WAF integrates with Azure Monitor. It's important to save the WAF logs to a destination like Log Analytics. You should review the WAF logs regularly. Reviewing logs helps you to [tune your WAF policies to reduce false-positive detections](#tune-your-waf), and to understand whether your application has been the subject of attacks. +Application Gateway's WAF integrates with Azure Monitor. It's important to enable the diagnostic settings and save the WAF logs to a destination like Log Analytics. You should review the WAF logs regularly. Reviewing logs helps you to [tune your WAF policies to reduce false-positive detections](#tune-your-waf), and to understand whether your application has been the subject of attacks. For more information, see [Azure Web Application Firewall Monitoring and Logging](application-gateway-waf-metrics.md). |
web-application-firewall | Create Waf Policy Ag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/create-waf-policy-ag.md | Associating a WAF policy with listeners allows for multiple sites behind a singl You can make as many policies as you want. Once you create a policy, it must be associated to an Application Gateway to go into effect, but it can be associated with any combination of Application Gateways and listeners. -If your Application Gateway has an associated policy, and then you associated a different policy to a listener on that Application Gateway, the listener's policy will take effect, but just for the listener(s) that they're assigned to. The Application Gateway policy still applies to all other listeners that don't have a specific policy assigned to them. +If your Application Gateway has an associated policy, and then you associate a different policy to a listener on that Application Gateway, the listener's policy takes effect, but just for the listener(s) it's assigned to. The Application Gateway policy still applies to all other listeners that don't have a specific policy assigned to them. > [!NOTE] > Once a Firewall Policy is associated to a WAF, there must always be a policy associated to that WAF. You may overwrite that policy, but disassociating a policy from the WAF entirely isn't supported. If it also shows Policy Settings and Managed Rules, then it's a full Web Applica ## Upgrade to WAF Policy -If you have a Custom Rules only WAF Policy, then you may want to move to the new WAF Policy. Going forward, the firewall policy will support WAF policy settings, managed rulesets, exclusions, and disabled rule-groups. Essentially, all the WAF configurations that were previously done inside the Application Gateway are now done through the WAF Policy. +If you have a Custom Rules only WAF Policy, then you may want to move to the new WAF Policy.
Going forward, the firewall policy supports WAF policy settings, managed rulesets, exclusions, and disabled rule-groups. Essentially, all the WAF configurations that were previously done inside the Application Gateway are now done through the WAF Policy. Edits to the custom rule only WAF policy are disabled. To edit any WAF settings such as disabling rules, adding exclusions, etc. you have to upgrade to a new top-level firewall policy resource. Optionally, you can use a migration script to upgrade to a WAF policy. For more ## Force mode -If you don't want to copy everything into a policy that is exactly the same as your current config, you can set the WAF into "force" mode. Run the following Azure PowerShell code and your WAF will be in force mode. Then you can associate any WAF Policy to your WAF, even if it doesn't have the exact same settings as your config. +If you don't want to copy everything into a policy that is exactly the same as your current config, you can set the WAF into "force" mode. Run the following Azure PowerShell code to put your WAF in force mode. Then you can associate any WAF Policy to your WAF, even if it doesn't have the exact same settings as your config. ```azurepowershell-interactive $appgw = Get-AzApplicationGateway -Name <your Application Gateway name> -ResourceGroupName <your Resource Group name> |
web-application-firewall | Rate Limiting Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-configure.md | + + Title: Create rate limiting custom rules for Application Gateway WAF v2 (preview) ++description: Learn how to configure rate limit custom rules for Application Gateway WAF v2. +++ Last updated : 08/16/2023+++++# Create rate limiting custom rules for Application Gateway WAF v2 (preview) ++> [!IMPORTANT] +> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Rate limiting enables you to detect and block abnormally high levels of traffic destined for your application. Rate limiting works by counting all traffic that matches the configured rate limit rule and applying the configured action to traffic that matches the rule and exceeds the configured threshold. For more information, see [Rate limiting overview](rate-limiting-overview.md). ++## Configure Rate Limit Custom Rules ++Use the following information to configure Rate Limit Rules for Application Gateway WAF v2. ++**Scenario One** - Create a rule to rate limit traffic by client IP that exceeds the configured threshold, matching all traffic. ++#### [Portal](#tab/browser) ++1. Open an existing Application Gateway WAF Policy +1. Select Custom Rules +1. Add Custom Rule +1. Add Name for the Custom Rule +1. Select the Rate limit Rule Type radio button +1. Enter a Priority for the rule +1. Choose 1 minute for Rate limit duration +1. Enter 100 for Rate limit threshold (requests) +1. Select Client address for Group rate limit traffic by +1. Under Conditions, choose IP address for Match Type +1.
For Operation, select the Does not contain radio button +1. For match condition, under IP address or range, enter 255.255.255.255/32 +1. Leave action setting to Deny traffic +1. Select Add to add the custom rule to the policy +1. Select Save to save the configuration and make the custom rule active for the WAF policy. ++#### [PowerShell](#tab/powershell) ++```azurepowershell +$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr +$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator IPMatch -MatchValue 255.255.255.255/32 -NegationCondition $True +$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName ClientAddr +$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable +$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name ClientIPRateLimitRule -Priority 90 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled +``` +#### [CLI](#tab/cli) +```azurecli +az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name ClientIPRateLimitRule --priority 90 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"ClientAddr"'}]}]' +az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr --operator IPMatch --policy-name ExamplePolicy --name ClientIPRateLimitRule --resource-group ExampleRG --value 255.255.255.255/32 --negate true +``` +* * * ++**Scenario Two** - Create Rate Limit Custom Rule to match all traffic except for traffic originating from the United States. Traffic will be grouped, counted and rate limited based on the GeoLocation of the Client Source IP address ++#### [Portal](#tab/browser) ++1. 
Open an existing Application Gateway WAF Policy +1. Select Custom Rules +1. Add Custom Rule +1. Add Name for the Custom Rule +1. Select the Rate limit Rule Type radio button +1. Enter a Priority for the rule +1. Choose 1 minute for Rate limit duration +1. Enter 500 for Rate limit threshold (requests) +1. Select Geo location for Group rate limit traffic by +1. Under Conditions, choose Geo location for Match Type +1. In the Match variables section, select RemoteAddr for Match variable +1. Select the Is not radio button for operation +1. Select United States for Country/Region +1. Leave action setting to Deny traffic +1. Select Add to add the custom rule to the policy +1. Select Save to save the configuration and make the custom rule active for the WAF policy. ++#### [PowerShell](#tab/powershell) +```azurepowershell +$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr +$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator GeoMatch -MatchValue "US" -NegationCondition $True +$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName GeoLocation +$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable +$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name GeoRateLimitRule -Priority 95 -RateLimitDuration OneMin -RateLimitThreshold 500 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled +``` +#### [CLI](#tab/cli) +```azurecli +az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name GeoRateLimitRule --priority 95 --rule-type RateLimitRule --rate-limit-threshold 500 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"GeoLocation"'}]}]' +az network application-gateway waf-policy custom-rule match-condition add --match-variables RemoteAddr
--operator GeoMatch --policy-name ExamplePolicy --name GeoRateLimitRule --resource-group ExampleRG --value US --negate true +``` +* * * ++**Scenario Three** - Create Rate Limit Custom Rule matching all traffic for the login page, and using the GroupBy None variable. This groups and counts all traffic that matches the rule as one, and applies the action across all traffic matching the rule (/login). ++#### [Portal](#tab/browser) ++1. Open an existing Application Gateway WAF Policy +1. Select Custom Rules +1. Add Custom Rule +1. Add Name for the Custom Rule +1. Select the Rate limit Rule Type radio button +1. Enter a Priority for the rule +1. Choose 1 minute for Rate limit duration +1. Enter 100 for Rate limit threshold (requests) +1. Select None for Group rate limit traffic by +1. Under Conditions, choose String for Match Type +1. In the Match variables section, select RequestUri for Match variable +1. Select the Is radio button for operation +1. For Operator select contains +1. Enter the login page path for match Value. In this example, we use /login +1. Leave action setting to Deny traffic +1. Select Add to add the custom rule to the policy +1. Select Save to save the configuration and make the custom rule active for the WAF policy.
++#### [PowerShell](#tab/powershell) +```azurepowershell +$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestUri +$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator Contains -MatchValue "/login" -NegationCondition $False +$groupByVariable = New-AzApplicationGatewayFirewallCustomRuleGroupByVariable -VariableName None +$groupByUserSession = New-AzApplicationGatewayFirewallCustomRuleGroupByUserSession -GroupByVariable $groupByVariable +$ratelimitrule = New-AzApplicationGatewayFirewallCustomRule -Name LoginRateLimitRule -Priority 99 -RateLimitDuration OneMin -RateLimitThreshold 100 -RuleType RateLimitRule -MatchCondition $condition -GroupByUserSession $groupByUserSession -Action Block -State Enabled +``` +#### [CLI](#tab/cli) +```azurecli +az network application-gateway waf-policy custom-rule create --policy-name ExamplePolicy --resource-group ExampleRG --action Block --name LoginRateLimitRule --priority 99 --rule-type RateLimitRule --rate-limit-threshold 100 --group-by-user-session '[{'"groupByVariables"':[{'"variableName"':'"None"'}]}]' +az network application-gateway waf-policy custom-rule match-condition add --match-variables RequestUri --operator Contains --policy-name ExamplePolicy --name LoginRateLimitRule --resource-group ExampleRG --value '/login' +``` +* * * ++## Next steps ++[Customize web application firewall rules](application-gateway-customize-waf-rules-portal.md) |
web-application-firewall | Rate Limiting Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/rate-limiting-overview.md | + + Title: Azure Web Application Firewall (WAF) rate limiting (preview) +description: This article is an overview of Azure Web Application Firewall (WAF) on Application Gateway rate limiting. ++++ Last updated : 08/16/2023++++# What is rate limiting for Web Application Firewall on Application Gateway (preview)? ++> [!IMPORTANT] +> Rate limiting for Web Application Firewall on Application Gateway is currently in PREVIEW. +> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++Rate limiting for Web Application Firewall on Application Gateway (preview) allows you to detect and block abnormally high levels of traffic destined for your application. By using rate limiting on Application Gateway WAF_v2, you can mitigate many types of denial-of-service attacks, protect against clients that have accidentally been misconfigured to send large volumes of requests in a short time period, or control traffic rates to your site from specific geographies. ++## Rate limiting policies ++Rate limiting is configured using custom WAF rules in a policy. ++> [!NOTE] +> Rate limit rules are only supported on Web Application Firewalls running the [latest WAF engine](waf-engine.md). In order to ensure you are using the latest engine, select CRS 3.2 for the default rule set. ++When you configure a rate limit rule, you must specify the threshold: the number of requests allowed within the specified time period. Rate limiting on Application Gateway WAF_v2 uses a sliding window algorithm to determine when traffic has breached the threshold and needs to be dropped. 
During the first window where the threshold for the rule is breached, any more traffic matching the rate limit rule is dropped. From the second window onwards, traffic up to the threshold within the configured window is allowed, producing a throttling effect. ++You must also specify a match condition, which tells the WAF when to activate the rate limit. You can configure multiple rate limit rules that match different variables and paths within your policy. ++Application Gateway WAF_v2 also introduces a *GroupByUserSession*, which must be configured. The *GroupByUserSession* specifies how requests are grouped and counted for a matching rate limit rule. ++The following three *GroupByVariables* are currently available: +- *ClientAddr* - This is the default setting and it means that each rate limit threshold and mitigation applies independently to every unique source IP address. +- *GeoLocation* - Traffic is grouped by geography based on a geo-match on the client IP address. So for a rate limit rule, traffic from the same geography is grouped together. +- *None* - All traffic is grouped together and counted against the threshold of the Rate Limit rule. When the threshold is breached, the action triggers against all traffic matching the rule and doesn't maintain independent counters for each client IP address or geography. It's recommended to use *None* with specific match conditions such as a sign-in page or a list of suspicious User-Agents. ++## Rate limiting details ++The configured rate limit thresholds are counted and tracked independently for each endpoint the Web Application Firewall policy is attached to. For example, a single WAF policy attached to five different listeners maintains independent counters and threshold enforcement for each of the listeners. ++The rate limit thresholds aren't always enforced exactly as defined, so rate limiting shouldn't be used for fine-grained control of application traffic.
Instead, it's recommended for mitigating anomalous rates of traffic and for maintaining application availability. ++The sliding window algorithm blocks all matching traffic for the first window in which the threshold is exceeded, and then throttles traffic in future windows. Use caution when defining thresholds for configuring wide-matching rules with either *GeoLocation* or *None* as the *GroupByVariables*. Incorrectly configured thresholds could lead to frequent short outages for matching traffic. ++## Availability ++Currently, rate limiting isn't available in the Azure Government and Azure China sovereign regions. ++## Next step ++- [Create rate limiting custom rules for Application Gateway WAF v2 (preview)](rate-limiting-configure.md) |
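The sliding-window counting that the overview describes — block everything over the threshold in the first breached window, then throttle later windows using a weighted carry-over from the previous window — can be sketched in a few lines. This is a toy model for intuition only, not the WAF engine's actual implementation; the class name, the weighted-estimate formula, and the grouping key are all assumptions made for the example.

```python
import time
from collections import defaultdict

class SlidingWindowRateLimiter:
    """Toy sliding-window rate limiter (illustrative only, not the WAF engine)."""

    def __init__(self, threshold, window_seconds):
        self.threshold = threshold
        self.window = window_seconds
        # Independent counters per group key (e.g. client IP, geography, or one
        # shared key, mirroring the ClientAddr / GeoLocation / None groupings).
        self.buckets = defaultdict(dict)

    def allow(self, group_key, now=None):
        """Return True if the request is allowed, False if it should be dropped."""
        now = time.time() if now is None else now
        idx = int(now // self.window)              # index of the current window
        frac = (now % self.window) / self.window   # how far into the window we are
        counts = self.buckets[group_key]
        prev = counts.get(idx - 1, 0)
        cur = counts.get(idx, 0)
        # Sliding-window estimate: weight the previous window's count by the
        # portion of it still inside the sliding window, then add the current count.
        estimate = prev * (1 - frac) + cur
        if estimate >= self.threshold:
            return False                           # over threshold: drop
        counts[idx] = cur + 1
        return True
```

With a threshold of 3 per window, a burst of five requests from one client lets the first three through and drops the rest, while a different client (a different group key) keeps its own independent counter — the behavior the *ClientAddr* grouping describes. At the start of the next window the previous window's count still weighs heavily, so traffic is throttled rather than fully reset.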
web-application-firewall | Waf Engine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-engine.md | description: This article provides an overview of the Azure WAF engine. Previously updated : 05/03/2022 Last updated : 08/25/2023 The new WAF engine is a high-performance, scalable Microsoft proprietary engine The new engine, released with CRS 3.2, provides the following benefits: * **Improved performance:** Significant improvements in WAF latency, including P99 POST and GET latencies. We observed a significant reduction in P99 tail latencies with up to approximately 8x reduction in processing POST requests and approximately 4x reduction in processing GET requests. -* **Increased scale:** Higher requests per second (RPS), using the same compute power and with the ability to process larger request sizes. Our next-generation engine can scale up to 8 times more RPS using the same compute power, and has an ability to process 16 times larger request sizes (up to 2 MB request sizes), which was not possible with the previous engine. +* **Increased scale:** Higher requests per second (RPS), using the same compute power and with the ability to process larger request sizes. Our next-generation engine can scale up to eight times more RPS using the same compute power, and has an ability to process 16 times larger request sizes (up to 2-MB request sizes), which wasn't possible with the previous engine. * **Better protection:** New redesigned engine with efficient regex processing offers better protection against RegEx denial of service (DoS) attacks while maintaining a consistent latency experience. * **Richer feature set:** New features and future enhancements are available only through the new engine. There are many new features that are only supported in the Azure WAF engine. The
The * HTTP listeners limit * WAF IP address ranges per match condition * Exclusions limit+* [Rate-limit Custom Rules](rate-limiting-overview.md) -New WAF features will only be released with later versions of CRS on the new WAF engine. +New WAF features are only released with later versions of CRS on the new WAF engine. ## Request logging for custom rules |
web-application-firewall | Waf Sensitive Data Protection Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/waf-sensitive-data-protection-configure.md | $logScrubbingRuleConfig = New-AzApplicationGatewayFirewallPolicyLogScrubbingConf ``` #### [CLI](#tab/cli) -The Azure CLI commands to enable and configure Sensitive Data Protection are coming soon. +Use the following Azure CLI command to [create and configure](/cli/azure/network/application-gateway/waf-policy/policy-setting) Log Scrubbing rules for Sensitive Data Protection: +```azurecli +az network application-gateway waf-policy policy-setting update -g <MyResourceGroup> --policy-name <MyPolicySetting> --log-scrubbing-state <Enabled/Disabled> --scrubbing-rules "[{state:<Enabled/Disabled>,match-variable:<MatchVariable>,selector-match-operator:<Operator>,selector:<Selector>}]" +``` |
web-application-firewall | Web Application Firewall Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md | Activity logging is automatically enabled for every Resource Manager resource. Y ``` > [!TIP]->Activity logs do not require a separate storage account. The use of storage for access and performance logging incurs service charges. +>Activity logs do not *require* a separate storage account. The use of storage for access and performance logging incurs service charges. ### Enable logging through the Azure portal Activity logging is automatically enabled for every Resource Manager resource. Y * Performance log * Firewall log -2. To start collecting data, select **Turn on diagnostics**. +2. Select **Add diagnostic setting**. - ![Turning on diagnostics][1] -3. The **Diagnostics settings** page provides the settings for the resource logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the resource logs. +3. The **Diagnostic setting** page provides the settings for the resource logs. In this example, Log Analytics stores the logs. You can also use an event hub, a storage account, or a partner solution to save the resource logs. - ![Starting the configuration process][2] + :::image type="content" source="../media/web-application-firewall-logs/figure2.png" alt-text="Screenshot showing Diagnostic settings."::: 5. Type a name for the settings, confirm the settings, and select **Save**. The performance log is generated only if you have enabled it on each Application ## Firewall log -The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. 
The following data is logged: +The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the destination that you specified when you enabled the logging. The following data is logged: |Value |Description | We have published a Resource Manager template that installs and runs the popular * [Visualize your Azure activity log with Power BI](https://powerbi.microsoft.com/blog/monitor-azure-audit-logs-with-power-bi/) blog post. * [View and analyze Azure activity logs in Power BI and more](https://azure.microsoft.com/blog/analyze-azure-audit-logs-in-powerbi-more/) blog post. -[1]: ../media/web-application-firewall-logs/figure1.png -[2]: ../media/web-application-firewall-logs/figure2.png -[3]: ./media/application-gateway-diagnostics/figure3.png -[4]: ./media/application-gateway-diagnostics/figure4.png -[5]: ./media/application-gateway-diagnostics/figure5.png -[6]: ./media/application-gateway-diagnostics/figure6.png -[7]: ./media/application-gateway-diagnostics/figure7.png -[8]: ./media/application-gateway-diagnostics/figure8.png -[9]: ./media/application-gateway-diagnostics/figure9.png -[10]: ./media/application-gateway-diagnostics/figure10.png |
web-application-firewall | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/overview.md | description: This article provides an overview of Azure Web Application Firewall Previously updated : 06/10/2022 Last updated : 08/23/2023 |
web-application-firewall | Waf Custom Rules Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/scripts/waf-custom-rules-powershell.md | - Title: Azure PowerShell Script Sample that uses WAF custom rules -description: Azure PowerShell Script Sample - Create Web Application Firewall on Application Gateway custom rules ---- Previously updated : 09/30/2019-----# Create WAF custom rules with Azure PowerShell --This script creates an Application Gateway Web Application Firewall that uses custom rules. The custom rule blocks traffic if the request header contains User-Agent *evilbot*. --## Prerequisites --### Azure PowerShell module --If you choose to install and use Azure PowerShell locally, this script requires the Azure PowerShell module version 2.1.0 or later. --1. To find the version, run `Get-Module -ListAvailable Az`. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). -2. To create a connection with Azure, run `Connect-AzAccount`. ---## Sample script --[!code-powershell[main](../../../powershell_scripts/application-gateway/waf-rules/waf-custom-rules.ps1 "Custom WAF rules")] --## Clean up deployment --Run the following command to remove the resource group, application gateway, and all related resources. --```powershell -Remove-AzResourceGroup -Name CustomRulesTest -``` --## Script explanation --This script uses the following commands to create the deployment. Each item in the table links to command specific documentation. --| Command | Notes | -||| -| [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) | Creates a resource group in which all resources are stored. | -| [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig) | Creates the subnet configuration. 
| -| [New-AzVirtualNetwork](/powershell/module/az.network/new-azvirtualnetwork) | Creates the virtual network using with the subnet configurations. | -| [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) | Creates the public IP address for the application gateway. | -| [New-AzApplicationGatewayIPConfiguration](/powershell/module/az.network/new-azapplicationgatewayipconfiguration) | Creates the configuration that associates a subnet with the application gateway. | -| [New-AzApplicationGatewayFrontendIPConfig](/powershell/module/az.network/new-azapplicationgatewayfrontendipconfig) | Creates the configuration that assigns a public IP address to the application gateway. | -| [New-AzApplicationGatewayFrontendPort](/powershell/module/az.network/new-azapplicationgatewayfrontendport) | Assigns a port to be used to access the application gateway. | -| [New-AzApplicationGatewayBackendAddressPool](/powershell/module/az.network/new-azapplicationgatewaybackendaddresspool) | Creates a backend pool for an application gateway. | -| [New-AzApplicationGatewayBackendHttpSettings](/powershell/module/az.network/new-azapplicationgatewaybackendhttpsetting) | Configures settings for a backend pool. | -| [New-AzApplicationGatewayHttpListener](/powershell/module/az.network/new-azapplicationgatewayhttplistener) | Creates a listener. | -| [New-AzApplicationGatewayRequestRoutingRule](/powershell/module/az.network/new-azapplicationgatewayrequestroutingrule) | Creates a routing rule. | -| [New-AzApplicationGatewaySku](/powershell/module/az.network/new-azapplicationgatewaysku) | Specify the tier and capacity for an application gateway. | -| [New-AzApplicationGateway](/powershell/module/az.network/new-azapplicationgateway) | Create an application gateway. | -|[Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) | Removes a resource group and all resources contained within. 
| -|[New-AzApplicationGatewayAutoscaleConfiguration](/powershell/module/az.network/New-AzApplicationGatewayAutoscaleConfiguration)|Creates an autoscale configuration for the Application Gateway.| -|[New-AzApplicationGatewayFirewallMatchVariable](/powershell/module/az.network/New-AzApplicationGatewayFirewallMatchVariable)|Creates a match variable for firewall condition.| -|[New-AzApplicationGatewayFirewallCondition](/powershell/module/az.network/New-AzApplicationGatewayFirewallCondition)|Creates a match condition for custom rule.| -|[New-AzApplicationGatewayFirewallCustomRule](/powershell/module/az.network/New-AzApplicationGatewayFirewallCustomRule)|Creates a new custom rule for the application gateway firewall policy.| -|[New-AzApplicationGatewayFirewallPolicy](/powershell/module/az.network/New-AzApplicationGatewayFirewallPolicy)|Creates a application gateway firewall policy.| -|[New-AzApplicationGatewayWebApplicationFirewallConfiguration](/powershell/module/az.network/New-AzApplicationGatewayWebApplicationFirewallConfiguration)|Creates a WAF configuration for an application gateway.| --## Next steps --- For more information about WAF custom rules, see [Custom rules for Web Application Firewall](../ag/custom-waf-rules-overview.md)-- For more information on the Azure PowerShell module, see [Azure PowerShell documentation](/powershell/azure/). |